  1. 17 Mar, 2018 6 commits
  2. 16 Mar, 2018 9 commits
  3. 15 Mar, 2018 9 commits
    • Clean up duplicate table and function names in regression tests. · 2cf8c7aa
      Tom Lane authored
      Many of the objects we create during the regression tests are put in the
      public schema, so that using the same names in different regression tests
      creates a hazard of test failures if any two such scripts run concurrently.
      This patch cleans up a bunch of latent hazards of that sort, as well as two
      live hazards.
      
      The current situation in this regard is far worse than it was a year or two
      back, because practically all of the partitioning-related test cases have
      reused table names with enthusiasm.  I despaired of cleaning up that mess
      within the five most-affected tests (create_table, alter_table, insert,
      update, inherit); fortunately those don't run concurrently.
      
      Other than partitioning problems, most of the issues boil down to using
      names like "foo", "bar", "tmp", etc, without thought for the fact that
      other test scripts might use similar names concurrently.  I've made an
      effort to make all such names more specific.
      
      One of the live hazards was that commit 7421f4b8 caused with.sql to
      create a table named "test", conflicting with a similarly-named table
      in alter_table.sql; this was exposed in the buildfarm recently.
      The other one was that join.sql and transactions.sql both create tables
      named "foo" and "bar"; but join.sql's uses of those names date back
      only to December or so.
      
      Since commit 7421f4b8 was back-patched to v10, back-patch a minimal
      fix for that problem.  The rest of this is just future-proofing.
      
      Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
    • Split create_grouping_paths into degenerate and non-degenerate cases. · 1466bcfa
      Robert Haas authored
      There's no functional change here, or at least I hope there isn't,
      just code rearrangement.  The rearrangement is motivated by
      partition-wise aggregate, which doesn't need to consider the
      degenerate case but wants to reuse the logic for the ordinary case.
      
      Based loosely on a patch from Ashutosh Bapat and Jeevan Chalke, but I
      whacked it around pretty heavily. The larger patch series of which
      this patch is a part was also reviewed and tested by Antonin Houska,
      Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
      Pascal Legrand, Rafia Sabih, and me.
      
      Discussion: http://postgr.es/m/CAFjFpRewpqCmVkwvq6qrRjmbMDpN0CZvRRzjd8UvncczA3Oz1Q@mail.gmail.com
    • test_ddl_deparse: rename matview · a446a1c7
      Alvaro Herrera authored
      Should have done this in e69f5e0e ...
      Per note from Tom Lane.
    • Clean up duplicate role and schema names in regression tests. · fb7db40a
      Tom Lane authored
      Since these names are global, using the same ones in different regression
      tests creates a hazard of test failures if any two such scripts run
      concurrently.  Let's establish a policy of not doing that.  In the cases
      where a conflict existed, I chose to rename both sides: in principle one
      script or the other could've been left in possession of the common name,
      but that seems to just invite more trouble of the same sort.
      
      There are a number of places where scripts are using names that seem
      unduly generic, but in the absence of actual conflicts I left them alone.
      
      In addition, fix insert.sql's use of "someone_else" as a role name.
      That's a flat out violation of longstanding project policy, so back-patch
      that change to v10 where the usage appeared.  The rest of this is just
      future-proofing, as no two of these scripts are actually run concurrently
      in the existing parallel_schedule.
      
      Conflicts of schema-qualified names also exist, but will be dealt with
      separately.
      
      Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
    • Fix more format truncation issues · 3a4b8919
      Peter Eisentraut authored
      Fix the warnings created by the compiler warning options
      -Wformat-overflow=2 -Wformat-truncation=2, supported since GCC 7.  This
      is a more aggressive variant of the fixes in
      6275f5d2, which GCC 7 warned about by
      default.
      
      The issues are all harmless, but some dubious coding patterns are
      cleaned up.
      
      One issue that is of external interest is that BGW_MAXLEN is increased
      from 64 to 96.  Apparently, the old value would cause the bgw_name of
      logical replication workers to be truncated in some circumstances.
      
      But this doesn't actually add those warning options.  It appears that
      the warnings depend a bit on compilation and optimization options, so it
      would be annoying to have to keep up with that.  This is more of a
      once-in-a-while cleanup.
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
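      As a rough illustration of the hazard behind the BGW_MAXLEN bump (the format string and constants below are made up for the example, not the actual bgw_name code): snprintf reports how many characters it wanted to write, so a return value at or above the buffer size signals silent truncation.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the truncation issue; 64 and 96 match the old
 * and new BGW_MAXLEN values mentioned in the commit. */
#define OLD_BGW_MAXLEN 64
#define NEW_BGW_MAXLEN 96

/* snprintf returns the number of characters it *wanted* to write, so a
 * return value >= buflen means the name was silently cut short. */
int format_worker_name(char *buf, size_t buflen, const char *subname)
{
    return snprintf(buf, buflen,
                    "logical replication worker for subscription %s",
                    subname);
}
```

      A long subscription name overflows the 64-byte buffer but fits comfortably in 96 bytes, which is the circumstance the commit describes.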
    • Pass additional arguments to a couple of grouping-related functions. · 648a6c7b
      Robert Haas authored
      get_number_of_groups() and make_partial_grouping_target() currently
      fish information directly out of the PlannerInfo; in the former case,
      the target list, and in the latter case, the HAVING qual.  This works
      fine if there's only one grouping relation, but if the pending patch
      for partition-wise aggregate gets committed, we'll have multiple
      grouping relations and must therefore use appropriately translated
      versions of these values for each one.  To make that simpler, pass the
      values to be used as arguments.
      
      Jeevan Chalke.  The larger patch series of which this patch is a part
      was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi,
      David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia
      Sabih, and me.
      
      Discussion: http://postgr.es/m/CAM2+6=UqFnFUypOvLdm5TgC+2M=-E0Q7_LOh0VDFFzmk2BBPzQ@mail.gmail.com
      Discussion: http://postgr.es/m/CAM2+6=W+L=C4yBqMrgrfTfNtbtmr4T53-hZhwbA2kvbZ9VMrrw@mail.gmail.com
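      The shape of the change can be sketched generically (the struct and arithmetic here are toy stand-ins, not the real PlannerInfo or planner code): a callee that read shared state directly now takes that state as a parameter, so each caller can supply a per-relation translated copy.

```c
#include <assert.h>

/* Toy stand-in for PlannerInfo; target_width stands in for the target
 * list / HAVING qual that the real functions used to fetch directly. */
typedef struct
{
    int target_width;
} PlannerCtx;

/* before: fishes the value directly out of the shared context */
static int estimate_groups_old(const PlannerCtx *ctx)
{
    return ctx->target_width * 2;
}

/* after: the caller passes the value explicitly, so multiple grouping
 * relations can each pass their own translated version */
static int estimate_groups_new(int target_width)
{
    return target_width * 2;
}
```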
    • restrict -> pg_restrict · 8d1b805f
      Alvaro Herrera authored
      So that it works on MSVC, too.
      
      Author: Michaël Paquier
      Discussion: https://postgr.es/m/29889.1520968202@sss.pgh.pa.us
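      The shim is tiny; a sketch of the idea (the macro name matches the commit, the copy function is purely illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* MSVC spells C99's "restrict" as "__restrict", so a pg_restrict macro
 * lets the same source compile on both toolchains. */
#if defined(_MSC_VER)
#define pg_restrict __restrict
#else
#define pg_restrict restrict
#endif

/* The qualifier promises the compiler that dst and src never alias,
 * enabling more aggressive optimization of the loop. */
void copy_ints(int *pg_restrict dst, const int *pg_restrict src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}
```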
    • test_ddl_deparse: Don't use pg_class as source for a matview · e69f5e0e
      Alvaro Herrera authored
      Doing so causes a pg_upgrade of a database containing these objects to
      fail whenever pg_class changes.  And it's pointless anyway: we have more
      interesting tables anyhow.
      
      Discussion: https://postgr.es/m/CAD5tBc+s8pW9WvH2+_z=B4x95FD4QuzZKcaMpff_9H4rS0VU1A@mail.gmail.com
    • logical replication: fix OID type mapping mechanism · 24c0a6c6
      Alvaro Herrera authored
      The logical replication type map seems to have been misused by its only
      caller -- it would try to use the remote OID as input for local type
      routines, which unsurprisingly could result in bogus "cache lookup
      failed for type XYZ" errors, or random other type names being picked up
      if they happened to use the right OID.  Fix that, changing
      Oid logicalrep_typmap_getid(Oid remoteid) to
      char *logicalrep_typmap_gettypname(Oid remoteid)
      which is more useful.  If the remote type is not part of the typmap,
      this simply prints "unrecognized type" instead of choking trying to
      figure it out -- a pointless exercise (because the only input for that
      comes from replication messages, which are not under the local node's
      control) and dangerous to boot, when called from within an error context
      callback.
      
      Once that is done, it comes to light that the local OID in the typmap
      entry was not being used for anything; the type/schema names are what we
      need, so remove local type OID from that struct.
      
      Once you do that, it becomes pointless to attach a callback to regular
      syscache invalidation.  So remove that also.
      
      Reported-by: Dang Minh Huong
      Author: Masahiko Sawada
      Reviewed-by: Álvaro Herrera, Petr Jelínek, Dang Minh Huong, Atsushi Torikoshi
      Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A6BE964@BPXM05GP.gisp.nec.co.jp
      Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A6C4B0A@BPXM05GP.gisp.nec.co.jp
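      A minimal sketch of the reworked interface (the toy array stands in for the real typmap hash table, and the const qualifier is an illustration-only choice): the lookup returns a name rather than an OID, and falls back to a fixed string instead of raising an error.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef unsigned int Oid;

/* Toy stand-in for the remote-to-local type map; the entries and the
 * linear scan are illustrative, not the real LogicalRepTypMap cache. */
typedef struct
{
    Oid remoteid;
    const char *typname;
} TypMapEntry;

static const TypMapEntry typmap[] = {
    {23, "pg_catalog.int4"},
    {25, "pg_catalog.text"},
};

/* Mirrors the new shape described above: map a remote OID to a type
 * name, and fall back to a constant string when the OID is unknown,
 * which stays safe even inside an error context callback. */
const char *logicalrep_typmap_gettypname(Oid remoteid)
{
    for (size_t i = 0; i < sizeof(typmap) / sizeof(typmap[0]); i++)
        if (typmap[i].remoteid == remoteid)
            return typmap[i].typname;
    return "unrecognized type";
}
```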
  4. 14 Mar, 2018 9 commits
  5. 13 Mar, 2018 7 commits
    • Add Oracle-like handling of char arrays. · 3b7ab438
      Michael Meskes authored
      In some cases Oracle Pro*C handles char arrays differently than ECPG does.
      This patch adds an Oracle compatibility mode to make ECPG behave like Pro*C.
      
      Patch by David Rader <davidr@openscg.com>
    • Fix double frees in ecpg. · db2fc801
      Michael Meskes authored
      Patch by Patrick Krecker <patrick@judicata.com>
    • Expand AND/OR regression tests around NULL handling. · 1e22166e
      Andres Freund authored
      Previously there were no tests verifying that NULL handling in AND/OR
      was correct (i.e. that NULL rather than false is returned if an
      expression doesn't return true).
      
      Author: Andres Freund
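      The semantics being pinned down can be sketched as a toy three-valued evaluator (illustrative C, not PostgreSQL source): NULL AND false is false, NULL AND true is NULL; symmetrically, NULL OR true is true and NULL OR false is NULL.

```c
#include <assert.h>

/* SQL's three-valued boolean domain */
typedef enum
{
    TV_FALSE,
    TV_TRUE,
    TV_NULL
} tv_bool;

tv_bool tv_and(tv_bool a, tv_bool b)
{
    if (a == TV_FALSE || b == TV_FALSE)
        return TV_FALSE;            /* false dominates, even over NULL */
    if (a == TV_NULL || b == TV_NULL)
        return TV_NULL;
    return TV_TRUE;
}

tv_bool tv_or(tv_bool a, tv_bool b)
{
    if (a == TV_TRUE || b == TV_TRUE)
        return TV_TRUE;             /* true dominates, even over NULL */
    if (a == TV_NULL || b == TV_NULL)
        return TV_NULL;
    return TV_FALSE;
}
```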
    • Let Parallel Append over simple UNION ALL have partial subpaths. · 0927d2f4
      Robert Haas authored
      A simple UNION ALL gets flattened into an appendrel of subquery
      RTEs, but up until now it's been impossible for the appendrel to use
      the partial paths for the subqueries, so the appendrel could be
      implemented as a Parallel Append, but only one with non-partial paths
      as children.
      
      There are three separate obstacles to removing that limitation.
      First, when planning a subquery, propagate any partial paths to the
      final_rel so that they are potentially visible to outer query levels
      (but not if they have initPlans attached, because that wouldn't be
      safe).  Second, after planning a subquery, propagate any partial paths
      for the final_rel to the subquery RTE in the outer query level in the
      same way we do for non-partial paths.  Third, teach finalize_plan() to
      account for the possibility that the fake parameter we use for rescan
      signalling when the plan contains a Gather (Merge) node may be
      propagated from an outer query level.
      
      Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
      Raghuwanshi, and Ashutosh Bapat.  Test cases based on examples by
      Rajkumar Raghuwanshi.
      
      Discussion: http://postgr.es/m/CA+Tgmoa6L9A1nNCk3aTDVZLZ4KkHDn1+tm7mFyFvP+uQPS7bAg@mail.gmail.com
    • When updating reltuples after ANALYZE, just extrapolate from our sample. · d04900de
      Tom Lane authored
      The existing logic for updating pg_class.reltuples trusted the sampling
      results only for the pages ANALYZE actually visited, preferring to
      believe the previous tuple density estimate for all the unvisited pages.
      While there's some rationale for doing that for VACUUM (first that
      VACUUM is likely to visit a very nonrandom subset of pages, and second
      that we know for sure that the unvisited pages did not change), there's
      no such rationale for ANALYZE: by assumption, it's looked at an unbiased
      random sample of the table's pages.  Furthermore, in a very large table
      ANALYZE will have examined only a tiny fraction of the table's pages,
      meaning it cannot slew the overall density estimate very far at all.
      In a table that is physically growing, this causes reltuples to increase
      nearly proportionally to the change in relpages, regardless of what is
      actually happening in the table.  This has been observed to cause reltuples
      to become so much larger than reality that it effectively shuts off
      autovacuum, whose threshold for doing anything is a fraction of reltuples.
      (Getting to the point where that would happen seems to require some
      additional, not well understood, conditions.  But it's undeniable that if
      reltuples is seriously off in a large table, ANALYZE alone will not fix it
      in any reasonable number of iterations, especially not if the table is
      continuing to grow.)
      
      Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
      and in ANALYZE, just extrapolate from the sample pages on the assumption
      that they provide an accurate model of the whole table.  If, by very bad
      luck, they don't, at least another ANALYZE will fix it; in the old logic
      a single bad estimate could cause problems indefinitely.
      
      In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
      altogether; it was never used for anything and now it's totally pointless.
      But keep it in the back branches, in case any third-party code is calling
      this function.
      
      Per bug #15005.  Back-patch to all supported branches.
      
      David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me
      
      Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels
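      The new arithmetic is plain extrapolation; a sketch (the function name and signature are illustrative, not the vacuum.c code): scale the sample's observed tuple density up to the table's full page count, rather than blending it with the stale density of unvisited pages as the old logic did.

```c
#include <assert.h>

/* Sketch of the new ANALYZE estimate: density observed in the sampled
 * pages, multiplied by the table's total page count. */
double analyze_reltuples(double total_pages,
                         double scanned_pages,
                         double scanned_tuples)
{
    if (scanned_pages <= 0)
        return 0;
    return (scanned_tuples / scanned_pages) * total_pages;
}
```

      For example, a 1,000,000-page table sampled at 100 tuples per page is estimated at 100 million tuples, regardless of whatever stale density the previous estimate carried.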
    • Avoid holding AutovacuumScheduleLock while rechecking table statistics. · 38f7831d
      Tom Lane authored
      In databases with many tables, re-fetching the statistics takes some time,
      so that this behavior seriously decreases the available concurrency for
      multiple autovac workers.  There's discussion afoot about more complete
      fixes, but a simple and back-patchable amelioration is to claim the table
      and release the lock before rechecking stats.  If we find out there's no
      longer a reason to process the table, re-taking the lock to un-claim the
      table is cheap enough.
      
      (This patch is quite old, but got lost amongst a discussion of more
      aggressive fixes.  It's not clear when or if such a fix will be
      accepted, but in any case it'd be unlikely to get back-patched.
      Let's do this now so we have some improvement for the back branches.)
      
      In passing, make the normal un-claim step take AutovacuumScheduleLock
      not AutovacuumLock, since that is what is documented to protect the
      wi_tableoid field.  This wasn't an actual bug in view of the fact that
      readers of that field hold both locks, but it creates some concurrency
      penalty against operations that need only AutovacuumLock.
      
      Back-patch to all supported versions.
      
      Jeff Janes
      
      Discussion: https://postgr.es/m/26118.1520865816@sss.pgh.pa.us