1. 01 Dec, 2015 3 commits
    • Tom Lane's avatar
      Use "g" not "f" format in ecpg's PGTYPESnumeric_from_double(). · db4a5cfc
      Tom Lane authored
      The previous coding could overrun the provided buffer size for a very large
      input, or lose precision for a very small input.  Adopt the methodology
      that's been in use in the equivalent backend code for a long time.
      
      Per private report from Bas van Schaik.  Back-patch to all supported
      branches.
      db4a5cfc
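      A minimal standalone C sketch (illustrative only, not the ecpg code) of
      why "%f" is unsafe here: for a very large double it expands to hundreds
      of characters, while "%g" with DBL_DIG digits stays bounded and keeps
      significant digits for very small values as well.

        #include <float.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char buf[400];

            /* "%f" needs roughly 310 characters for 1e300 and prints 0.000000 for 1e-20 */
            snprintf(buf, sizeof(buf), "%f", 1e300);
            printf("%%f length for 1e300: %zu\n", strlen(buf));
            snprintf(buf, sizeof(buf), "%f", 1e-20);
            printf("%%f output for 1e-20: %s\n", buf);

            /* "%.*g" stays short and preserves precision at both extremes */
            snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, 1e300);
            printf("%%g output for 1e300: %s\n", buf);
            snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, 1e-20);
            printf("%%g output for 1e-20: %s\n", buf);
            return 0;
        }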
    • Tom Lane's avatar
      Further adjustment to psql's print_aligned_vertical() function. · 2287b874
      Tom Lane authored
      We should ignore output_columns unless it's greater than zero.
      A zero means we couldn't get any information from ioctl(TIOCGWINSZ);
      in that case the expected behavior is to print the data at native width,
      not to wrap it at the smallest possible value.  print_aligned_text()
      gets this consideration right, but print_aligned_vertical() lost track
      of this detail somewhere along the line.
      2287b874
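      A hedged sketch of the intended policy (names are illustrative, not
      psql's actual variables): a terminal width of zero means
      ioctl(TIOCGWINSZ) told us nothing, so the data should be printed at its
      native width rather than wrapped to the smallest possible value.

        /* Illustrative helper, not the psql code itself. */
        static int
        effective_wrap_width(int output_columns, int native_width)
        {
            if (output_columns > 0 && output_columns < native_width)
                return output_columns;      /* wrap to the known terminal width */
            return native_width;            /* width unknown (0) or already fits */
        }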
    • Teodor Sigaev's avatar
      Use pg_rewind when target timeline was switched · e50cda78
      Teodor Sigaev authored
      Allow pg_rewind to work when the target timeline has been switched, so
      that a promoted standby can be returned to the old master.
      
      The target timeline history becomes a global variable, and function
      interfaces take an index into that history instead of a TLI directly.
      Thus, SimpleXLogPageRead() can easily continue reading XLOG from the
      next timeline when the current timeline ends (see the sketch after
      this entry).
      
      Author: Alexander Korotkov
      Review: Michael Paquier
      e50cda78
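      A hedged sketch of the approach (types and names simplified, not the
      actual pg_rewind code): keep the target's timeline history in a global
      array and address entries by index, so a page read can step to the next
      timeline once the current one ends.

        #include <stdint.h>

        typedef struct
        {
            uint32_t tli;   /* timeline ID */
            uint64_t end;   /* WAL position where this timeline was switched away; 0 = current */
        } HistoryEntry;

        static HistoryEntry *targetHistory;     /* parsed from the timeline history file */
        static int           targetNentries;

        /* Return the index of the history entry covering a given WAL position. */
        static int
        timeline_index_for(uint64_t recptr)
        {
            for (int i = 0; i < targetNentries; i++)
            {
                if (targetHistory[i].end == 0 || recptr < targetHistory[i].end)
                    return i;
            }
            return targetNentries - 1;
        }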
  2. 30 Nov, 2015 2 commits
    • Tom Lane's avatar
      Rework wrap-width calculation in psql's print_aligned_vertical() function. · 0e0776bc
      Tom Lane authored
      This area was rather heavily whacked around in 6513633b and follow-on
      commits, and it was showing it, because the logic to calculate the
      allowable data width in wrapped expanded mode had only the vaguest
      relationship to the logic that was actually printing the data.  It was
      not very close to being right about the conditions requiring overhead
      columns to be added.  Aside from being wrong, it was pretty unreadable
      and under-commented.  Rewrite it so it corresponds to what the printing
      code actually does.
      
      In passing, remove a couple of dead tests in the printing logic, too.
      
      Per a complaint from Jeff Janes, though this doesn't look much like his
      patch because it fixes a number of other corner-case bogosities too.
      One such fix that's visible in the regression test results is that
      although the code was attempting to enforce a minimum data width of
      3 columns, it sometimes left less space than that available.
      0e0776bc
    • Robert Haas's avatar
      Fix obsolete comment. · 3690dc6b
      Robert Haas authored
      It's amazing how fast things become obsolete these days.
      
      Amit Langote
      3690dc6b
  3. 29 Nov, 2015 1 commit
    • Tom Lane's avatar
      Avoid caching expression state trees for domain constraints across queries. · ec7eef6b
      Tom Lane authored
      In commit 8abb3cda I attempted to cache
      the expression state trees constructed for domain CHECK constraints for
      the life of the backend (assuming the domain's constraints don't get
      redefined).  However, this turns out not to work very well, because
      execQual.c will run those state trees with ecxt_per_query_memory pointing
      to a query-lifespan context, and in some situations we'll end up with
      pointers into that context getting stored into the state trees.  This
      happens in particular with SQL-language functions, as reported by
      Emre Hasegeli, but there are many other cases.
      
      To fix, keep only the expression plan trees for domain CHECK constraints
      in the typcache's data structure, and revert to performing ExecInitExpr
      (at least) once per query to set up expression state trees in the query's
      context.
      
      Eventually it'd be nice to undo this, but that will require some careful
      thought about memory management for expression state trees, and it seems
      far too late for any such redesign in 9.5.  This way is still much more
      efficient than what happened before 8abb3cda.
      ec7eef6b
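      A hedged, heavily simplified sketch of the resulting split (the real
      code lives in typcache.c and the executor): only the plan tree stays in
      the backend-lifetime cache, while the expression state tree is rebuilt
      once per query, so any pointers it picks up live in query-lifespan
      memory.

        #include "postgres.h"
        #include "executor/executor.h"

        typedef struct CachedDomainCheck
        {
            Expr   *check_expr;     /* plan tree, cached for the backend's lifetime */
        } CachedDomainCheck;

        static ExprState *
        init_check_for_query(CachedDomainCheck *cached)
        {
            /* run (at least) once per query, in the query's memory context */
            return ExecInitExpr(cached->check_expr, NULL);
        }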
  4. 28 Nov, 2015 1 commit
    • Tom Lane's avatar
      Avoid doing encoding conversions by double-conversion via MULE_INTERNAL. · 8d32717b
      Tom Lane authored
      Previously, we did many conversions for Cyrillic and Central European
      single-byte encodings by converting to a related MULE_INTERNAL coding
      scheme before converting to the destination.  This seems unnecessarily
      inefficient.  Moreover, if the conversion encounters an untranslatable
      character, the error message will confusingly complain about failure
      to convert to or from MULE_INTERNAL, rather than the user-visible
      encodings.  Worse still, this approach results in some completely
      unnecessary conversion failures; there are cases where the chosen
      MULE subset lacks characters that exist in both of the user-visible
      encodings, causing a conversion failure that need not occur.
      
      This patch fixes the first two of those deficiencies by introducing
      a new local2local() conversion support subroutine for direct conversion
      between any two single-byte character sets, and adding new conversion
      tables where needed.  However, I generated the new conversion tables by
      testing PG 9.5's behavior, so that the actual conversion behavior is
      bug-compatible with previous releases; the only user-visible behavior
      change is that the error messages for conversion failures are saner.
      Changes in the conversion behavior will probably ensue after discussion.
      
      Interestingly, although this approach requires more tables, the .so files
      actually end up smaller (at least on my x86_64 machine); the tables are
      smaller than the management code needed for double conversion.
      
      Per a complaint from Albe Laurenz.
      8d32717b
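      A hedged sketch of a direct single-byte-to-single-byte converter of the
      kind this commit introduces (the signature is illustrative, not the
      backend's): ASCII bytes pass through, high bytes go through a 128-entry
      lookup table, and a zero entry marks an untranslatable character.

        /* Returns -1 on success, or the offset of the first untranslatable byte. */
        static int
        convert_local2local(const unsigned char *src, unsigned char *dest,
                            int len, const unsigned char map[128])
        {
            for (int i = 0; i < len; i++)
            {
                unsigned char c = src[i];

                if (c < 0x80)
                    dest[i] = c;                /* ASCII passes through unchanged */
                else if (map[c - 0x80] != 0)
                    dest[i] = map[c - 0x80];    /* table-driven translation */
                else
                    return i;                   /* no equivalent in the destination */
            }
            return -1;
        }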
  5. 27 Nov, 2015 4 commits
    • Tom Lane's avatar
      Auto-generate file header comments in Unicode mapping files. · e17dab53
      Tom Lane authored
      Some of the Unicode/*.map files had identification comments added to them,
      evidently by hand.  Others did not.  Modify the generating scripts to
      produce these comments automatically, and update the generated files that
      lacked them.
      
      This is just minor cleanup as a by-product of trying to verify that the
      *.map files can indeed be reproduced from authoritative data.  There are a
      depressingly large number that fail to reproduce from the claimed sources.
      I have not touched those in this commit, except for the JIS 2004-related
      files which required only a single comment update to match.
      
      Since this only affects comments, no need to consider a back-patch.
      e17dab53
    • Tom Lane's avatar
      Improve PQhost() to return useful data for default Unix-socket connections. · 40cb21f7
      Tom Lane authored
      Previously, if no host information had been specified at connection time,
      PQhost() would return NULL (unless you are on Windows, in which case you
      got "localhost").  This is an unhelpful definition for a couple of reasons:
      it can cause corner-case crashes in applications (cf commit c5ef8ce5),
      and there's no well-defined way for applications to find out the socket
      directory path that's actually in use.  As an example of the latter
      problem, psql substituted DEFAULT_PGSOCKET_DIR for NULL in a couple of
      places, but this is subtly wrong because it's conceivable that psql is
      using a libpq shared library that was built with a different setting.
      
      Hence, change PQhost() to return DEFAULT_PGSOCKET_DIR when appropriate,
      and strip out the now-dead substitutions in psql.  (There is still one
      remaining reference to DEFAULT_PGSOCKET_DIR in psql, in prompt.c, which
      I don't see a nice way to get rid of.  But it only controls a prompt
      abbreviation decision, so it seems noncritical.)
      
      Also update the docs for PQhost, which had never previously mentioned
      the possibility of a socket directory path being returned.  In passing
      fix the outright-incorrect code comment about PGconn.pgunixsocket.
      40cb21f7
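      A small libpq example of the improved behavior: with no host specified,
      PQhost() now reports the socket directory in use instead of NULL.  The
      null check is kept only as a guard for older libpq versions.

        #include <stdio.h>
        #include <libpq-fe.h>

        int main(void)
        {
            PGconn *conn = PQconnectdb("");         /* default Unix-socket connection */

            if (PQstatus(conn) == CONNECTION_OK)
            {
                const char *host = PQhost(conn);    /* e.g. the socket directory path */

                printf("connected via: %s\n", host ? host : "(unknown)");
            }
            PQfinish(conn);
            return 0;
        }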
    • Teodor Sigaev's avatar
      COPY (INSERT/UPDATE/DELETE .. RETURNING ..) · 92e38182
      Teodor Sigaev authored
      This allows COPY to take a DML statement (INSERT/UPDATE/DELETE with a
      RETURNING clause) as its query directly, without wrapping it in a CTE.
      A usage sketch follows this entry.
      
      Author: Marko Tiikkaja
      Review: Michael Paquier
      92e38182
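      A hypothetical libpq client exercising the new capability (the table
      "t" is assumed to exist): COPY is given a DML statement with RETURNING
      directly, and the returned rows come back over the COPY protocol.

        #include <stdio.h>
        #include <libpq-fe.h>

        int main(void)
        {
            PGconn   *conn = PQconnectdb("");
            PGresult *res = PQexec(conn,
                "COPY (INSERT INTO t VALUES (1), (2) RETURNING *) TO STDOUT");
            char     *line;

            if (PQresultStatus(res) == PGRES_COPY_OUT)
            {
                while (PQgetCopyData(conn, &line, 0) > 0)
                {
                    fputs(line, stdout);    /* one COPY data row per call */
                    PQfreemem(line);
                }
            }
            PQclear(res);
            PQfinish(conn);
            return 0;
        }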
  6. 26 Nov, 2015 1 commit
    • Tom Lane's avatar
      Fix failure to consider failure cases in GetComboCommandId(). · 0da3a9be
      Tom Lane authored
      Failure to initially palloc the comboCids array, or to realloc it bigger
      when needed, left combocid's data structures in an inconsistent state that
      would cause trouble if the top transaction continues to execute.  Noted
      while examining a user complaint about the amount of memory used for this.
      (There's not much we can do about that, but it does point up that repalloc
      failure has a non-negligible chance of occurring here.)
      
      In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer
      in SerializeComboCIDState; cf commit 13bba022.
      0da3a9be
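      A generic C sketch of the hazard being fixed (plain realloc here, not
      the combocid code): grow the array first and update the bookkeeping only
      once the allocation has definitely succeeded, so a failure leaves the
      old state intact and consistent.

        #include <stdlib.h>

        static int *items;
        static int  nitems;
        static int  maxitems;

        static int
        add_item(int value)
        {
            if (nitems >= maxitems)
            {
                int  newmax = (maxitems > 0) ? maxitems * 2 : 8;
                int *newitems = realloc(items, newmax * sizeof(int));

                if (newitems == NULL)
                    return -1;          /* old array and counters still usable */
                items = newitems;
                maxitems = newmax;      /* updated only after success */
            }
            items[nitems++] = value;
            return 0;
        }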
  7. 25 Nov, 2015 4 commits
    • Tom Lane's avatar
      Be more paranoid about null return values from libpq status functions. · c5ef8ce5
      Tom Lane authored
      PQhost() can return NULL in non-error situations, namely when a Unix-socket
      connection has been selected by default.  That behavior is a tad debatable
      perhaps, but for the moment we should make sure that psql copes with it.
      Unfortunately, do_connect() failed to: it could pass a NULL pointer to
      strcmp(), resulting in crashes on most platforms.  This was reported as a
      security issue by ChenQin of Topsec Security Team, but the consensus of
      the security list is that it's just a garden-variety bug with no security
      implications.
      
      For paranoia's sake, I made the keep_password test not trust PQuser or
      PQport either, even though I believe those will never return NULL given
      a valid PGconn.
      
      Back-patch to all supported branches.
      c5ef8ce5
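      An illustrative null-safe comparison of the sort the keep_password test
      needs (not psql's actual code): two NULLs compare equal, NULL versus
      non-NULL compares unequal, and strcmp() is never handed a null pointer.

        #include <stdbool.h>
        #include <string.h>

        static bool
        str_equal_or_both_null(const char *a, const char *b)
        {
            if (a == NULL || b == NULL)
                return a == b;          /* equal only if both are NULL */
            return strcmp(a, b) == 0;
        }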
    • Tom Lane's avatar
      Improve div_var_fast(), mostly by making comments better. · 46166197
      Tom Lane authored
      The integer overflow situation in div_var_fast() is a great deal more
      complicated than the pre-existing comments would suggest.  Moreover, the
      comments were also flat out incorrect as to the precise statement of the
      maxdiv loop invariant.  Upon clarifying that, it becomes apparent that the
      way in which we updated maxdiv after a carry propagation pass was overly
      slow, complex, and conservative: we can just reset it to one, which is much
      easier and also reduces the number of times carry propagation occurs.
      Fix that and improve the relevant comments.
      
      Since this is mostly a comment fix, with only a rather marginal performance
      boost, no need for back-patch.
      
      Tom Lane and Dean Rasheed
      46166197
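      A generic sketch of deferred carry propagation over base-10000 digits
      (illustrative only; the real div_var_fast() has more moving parts):
      after one pass every digit is back in [0, NBASE), which is what
      justifies resetting the maxdiv bound to one.

        #define NBASE 10000

        static void
        propagate_carries(int *digits, int ndigits)
        {
            int carry = 0;

            for (int i = ndigits - 1; i >= 0; i--)
            {
                int newdig = digits[i] + carry;

                if (newdig < 0)
                {
                    carry = -((-newdig - 1) / NBASE) - 1;
                    newdig -= carry * NBASE;
                }
                else if (newdig >= NBASE)
                {
                    carry = newdig / NBASE;
                    newdig -= carry * NBASE;
                }
                else
                    carry = 0;
                digits[i] = newdig;
            }
            /* any final carry would belong to a digit above digits[0] */
        }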
    • Teodor Sigaev's avatar
      Add forgotten file in commit d6061f83 · 0271e27c
      Teodor Sigaev authored
      0271e27c
    • Teodor Sigaev's avatar
      Improve pageinspect module · d6061f83
      Teodor Sigaev authored
      Now pageinspect can show the data stored in heap tuples.
      
      Nikolay Shaplov
      d6061f83
  8. 24 Nov, 2015 2 commits
  9. 23 Nov, 2015 2 commits
  10. 22 Nov, 2015 1 commit
    • Tom Lane's avatar
      Adopt the GNU convention for handling tar-archive members exceeding 8GB. · 00cdd835
      Tom Lane authored
      The POSIX standard for tar headers requires archive member sizes to be
      printed in octal with at most 11 digits, limiting the representable file
      size to 8GB.  However, GNU tar and apparently most other modern tars
      support a convention in which oversized values can be stored in base-256,
      allowing any practical file to be a tar member.  Adopt this convention
      to remove two limitations:
      * pg_dump with -Ft output format failed if the contents of any one table
      exceeded 8GB.
      * pg_basebackup failed if the data directory contained any file exceeding
      8GB.  (This would be a fatal problem for installations configured with a
      table segment size of 8GB or more, and it has also been seen to fail when
      large core dump files exist in the data directory.)
      
      File sizes under 8GB are still printed in octal, so that no compatibility
      issues are created except in cases that would have failed entirely before.
      
      In addition, this patch fixes several bugs in the same area:
      
      * In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
      size_t, which meant that on 32-bit machines it would write a corrupt tar
      header for file sizes between 4GB and 8GB, even though no error was raised.
      This broke both "pg_dump -Ft" and pg_basebackup for such cases.
      
      * pg_restore from a tar archive would fail on tables of size between 4GB
      and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
      This happened even with an archive file not affected by the previous bug.
      
      * pg_basebackup would fail if there were files of size between 4GB and 8GB,
      even on 64-bit machines.
      
      * In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
      on 64-bit big-endian machines.
      
      In view of these potential data-loss bugs, back-patch to all supported
      branches, even though removal of the documented 8GB limit might otherwise
      be considered a new feature rather than a bug fix.
      00cdd835
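      A hedged sketch of the size-field convention described above (not the
      actual pg_dump/pg_basebackup code): sizes that fit in 11 octal digits
      keep the POSIX octal form, while larger sizes set the high bit of the
      first byte and store the value big-endian ("base-256") in the rest of
      the 12-byte field.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static void
        tar_write_size(char field[12], uint64_t size)
        {
            if (size <= 077777777777ULL)    /* up to 8GB - 1: plain octal, as before */
                snprintf(field, 12, "%011llo", (unsigned long long) size);
            else
            {
                memset(field, 0, 12);
                field[0] = (char) 0x80;     /* flag byte: base-256 value follows */
                for (int i = 11; i > 0; i--)
                {
                    field[i] = (char) (size & 0xFF);
                    size >>= 8;
                }
            }
        }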
  11. 20 Nov, 2015 2 commits
    • Tom Lane's avatar
      Fix handling of inherited check constraints in ALTER COLUMN TYPE (again). · 074c5cfb
      Tom Lane authored
      The previous way of reconstructing check constraints was to do a separate
      "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance
      hierarchy.  However, that way has no hope of reconstructing the check
      constraints' own inheritance properties correctly, as pointed out in
      bug #13779 from Jan Dirk Zijlstra.  What we should do instead is to do
      a regular "ALTER TABLE", allowing recursion, at the topmost table that
      has a particular constraint, and then suppress the work queue entries
      for inherited instances of the constraint.
      
      Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546c,
      but we failed to notice that it wasn't reconstructing the pg_constraint
      field values correctly.
      
      As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to
      always schema-qualify the target table name; this seems like useful backup
      to the protections installed by commit 5f173040.
      
      In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused.
      (I could alternatively have modified it to also return conislocal, but that
      seemed like a pretty single-purpose API, so let's not pretend it has some
      other use.)  It's unused in the back branches as well, but I left it in
      place just in case some third-party code has decided to use it.
      
      In HEAD/9.5, also rename pg_get_constraintdef_string to
      pg_get_constraintdef_command, as the previous name did nothing to explain
      what that entry point did differently from others (and its comment was
      equally useless).  Again, that change doesn't seem like material for
      back-patching.
      
      I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.
      
      Otherwise, back-patch to all supported branches.
      074c5cfb
    • Robert Haas's avatar
      Avoid server crash when worker registration fails at execution time. · 6c878a75
      Robert Haas authored
      The previous coding attempted to destroy the DSM in this case, but
      child nodes might have stored data there and still be holding onto
      pointers into it.  So don't do that.
      
      Also, free the reader array instead of leaking it.
      
      Extracted from two different patch versions both by Amit Kapila.
      6c878a75
  12. 19 Nov, 2015 12 commits
  13. 18 Nov, 2015 4 commits
    • Tom Lane's avatar
      Accept flex > 2.5.x in configure. · 32f15d05
      Tom Lane authored
      Per buildfarm member anchovy, 2.6.0 exists in the wild now.
      Hopefully it works with Postgres; if not, we'll have to do something
      about that, but in any case claiming it's "too old" is pretty silly.
      32f15d05
    • Robert Haas's avatar
      Make a comment more precise. · e0734904
      Robert Haas authored
      Remote expressions now also matter to make_foreignscan().
      
      Noted by Etsuro Fujita.
      e0734904
    • Robert Haas's avatar
      Avoid aggregating worker instrumentation multiple times. · 166b61a8
      Robert Haas authored
      Amit Kapila, per design ideas from me.
      166b61a8
    • Robert Haas's avatar
      Fix dumb bug in tqueue.c · adeee974
      Robert Haas authored
      When I wrote this code originally, the intention was to recompute the
      remapinfo only when the tupledesc changes.  This presumably only
      happens once per query, but I copied the design pattern from other
      DestReceivers.  However, due to a silly oversight on my part,
      tqueue->tupledesc never got set, leading to recomputation for every
      tuple.
      
      This should improve the performance of parallel scans that return a
      significant number of tuples.
      
      Report by Amit Kapila; patch by me, reviewed by him.
      adeee974
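      A generic sketch of the intended memoization (not the tqueue.c code):
      rebuild the derived remap information only when the key changes, and
      remember the key afterwards; the forgotten assignment of the key is
      exactly what made every call recompute.

        typedef struct
        {
            const void *cached_desc;    /* key the remap info was last built for */
            void       *remapinfo;      /* derived data, rebuilt only on change */
        } RemapCache;

        static void
        maybe_rebuild(RemapCache *cache, const void *desc)
        {
            if (cache->cached_desc != desc)
            {
                /* ...recompute cache->remapinfo from desc here... */
                cache->cached_desc = desc;  /* the assignment that was missing */
            }
        }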
  14. 17 Nov, 2015 1 commit
    • Tom Lane's avatar
      Fix possible internal overflow in numeric division. · 5f10b7a6
      Tom Lane authored
      div_var_fast() postpones propagating carries in the same way as mul_var(),
      so it has the same corner-case overflow risk we fixed in 246693e5,
      namely that the size of the carries has to be accounted for when setting
      the threshold for executing a carry propagation step.  We've not devised
      a test case illustrating the brokenness, but the required fix seems clear
      enough.  Like the previous fix, back-patch to all active branches.
      
      Dean Rasheed
      5f10b7a6