1. 22 Feb, 2012 3 commits
  2. 21 Feb, 2012 5 commits
  3. 20 Feb, 2012 4 commits
    • Tom Lane
      Don't reject threaded Python on FreeBSD. · c0efc2c2
      Tom Lane authored
      According to Chris Rees, this has worked for a while, and the current
      FreeBSD port is removing the test anyway.
    • Andrew Dunstan
      Fix a couple of cases of JSON output. · 83fcaffe
      Andrew Dunstan authored
      First, as noted by Itagaki Takahiro, a datum of type JSON doesn't
      need to be escaped. Second, ensure that numeric output not in
      the form of a legal JSON number is quoted and escaped.
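      The quoting rule can be sketched as follows. This is an illustrative
      Python sketch of the rule described above, not the committed C code,
      and the JSON_NUMBER pattern is my own approximation of the JSON
      number grammar:

```python
import json
import re

# Sketch (in Python, not the committed C code) of the rule above: emit a
# numeric datum bare only when its text form is a legal JSON number;
# otherwise quote and escape it.  PostgreSQL numeric values such as
# 'NaN' are not legal JSON numbers, so they must come out quoted.
JSON_NUMBER = re.compile(r'^-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?$')

def numeric_to_json(text):
    return text if JSON_NUMBER.match(text) else json.dumps(text)
```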
    • Tom Lane
      Fix regex back-references that are directly quantified with *. · 5223f96d
      Tom Lane authored
      The syntax "\n*", that is, a backref with a * quantifier applied
      directly to it, has never worked correctly in Spencer's library.  This
      has been an open bug in the Tcl bug tracker since 2005:
      https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894
      
      The core of the problem is in parseqatom(), which first changes "\n*" to
      "\n+|" and then applies repeat() to the NFA representing the backref atom.
      repeat() thinks that any arc leading into its "rp" argument is part of the
      sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
      the arc that was intended to represent the empty bypass around "\n+", this
      arc gets moved too, so that it now leads into the state loop created by
      repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
      something that represents zero or more repetitions of the NFA representing
      the backref atom.  In the original example, in place of
      	^([bc])\1*$
      we now have something that acts like
      	^([bc])(\1+|[bc]*)$
      At runtime, the branch involving the actual backref fails, as it's supposed
      to, but then the other branch succeeds anyway.
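      The intended semantics can be illustrated with an engine that handles
      this case correctly (Python's re here, purely for illustration; it is
      unrelated to Spencer's library):

```python
import re

# Correct semantics of a directly-quantified backref: \1* repeats the
# exact text captured by group 1, not the bracket expression it came
# from.
pat = re.compile(r'^([bc])\1*$')

assert pat.match('bbbb')           # 'b' captured, then repeated
assert pat.match('c')              # zero repetitions (the m == 0 case)
assert pat.match('bcc') is None    # the buggy engine accepted this via [bc]*
```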
      
      We could no doubt fix this by some rearrangement of the operations in
      parseqatom(), but that code is plenty ugly already, and what's more the
      whole business of converting "x*" to "x+|" probably needs to go away to fix
      another problem I'll mention in a moment.  Instead, this patch suppresses
      the *-conversion when the target is a simple backref atom, leaving the case
      of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
      one-liner, at the cost of having to tweak cbrdissect() a little.  In the
      event I went a bit further than that and rewrote cbrdissect() to check all
      the string-length-related conditions before it starts comparing characters.
      It seems a bit stupid to possibly iterate through many copies of an
      n-character backreference, only to fail at the end because the target
      string's length isn't a multiple of n --- we could have found that out
      before starting.  The existing coding could only be a win if integer
      division is hugely expensive compared to character comparison, but I don't
      know of any modern machine where that might be true.
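      The order of checks can be sketched like this; the function name and
      signature are hypothetical, a Python sketch of the idea rather than
      the actual cbrdissect() code:

```python
def backref_matches(target, captured, m, n=None):
    """Sketch (not the actual cbrdissect()) of doing the cheap
    length-based checks before any character comparison.  n=None
    means no upper bound on the repetition count."""
    if len(captured) == 0:
        return target == ""          # an empty capture only matches emptiness
    if len(target) % len(captured) != 0:
        return False                 # fail before comparing any characters
    k = len(target) // len(captured)
    if k < m or (n is not None and k > n):
        return False                 # repetition count outside {m,n}
    return target == captured * k    # only now compare the characters
```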
      
      This does not fix all the problems with quantified back-references.  In
      particular, the code is still broken for back-references that appear within
      a larger expression that is quantified (so that direct insertion of the
      quantification limits into the BACKREF node doesn't apply).  I think fixing
      that will take some major surgery on the NFA code, specifically introducing
      an explicit iteration node type instead of trying to transform iteration
      into concatenation of modified regexps.
      
      Back-patch to all supported branches.  In HEAD, also add a regression test
      case for this.  (It may seem a bit silly to create a regression test file
      for just one test case; but I'm expecting that we will soon import a whole
      bunch of regex regression tests from Tcl, so might as well create the
      infrastructure now.)
    • Tom Lane
      Add caching of ctype.h/wctype.h results in regc_locale.c. · e00f68e4
      Tom Lane authored
      While this doesn't save a huge amount of runtime, it still seems worth
      doing, especially since I realized that the data copying I did in my first
      draft was quite unnecessary.  In this version, once we have the results
      cached, getting them back for re-use is really very cheap.
      
      Also, remove the hard-wired limitation to not consider wctype.h results for
      character codes above 255.  It turns out that we can't push the limit as
      far up as I'd originally hoped, because the regex colormap code is not
      efficient enough to cope very well with character classes containing many
      thousand letters, which a Unicode locale is entirely capable of producing.
      Still, we can push it up to U+7FF (which I chose as the limit of 2-byte
      UTF8 characters), which will at least make Eastern Europeans happy pending
      a better solution.  Thus, this commit resolves the specific complaint in
      bug #6457, but not the more general issue that letters of non-western
      alphabets are mostly not recognized as matching [[:alpha:]].
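      The boundary the commit refers to is easy to verify: U+07FF is the
      last code point that fits in two bytes of UTF-8, and everything from
      U+0800 up needs three.

```python
# Quick check of the 2-byte UTF-8 boundary mentioned above.
assert len('\u07ff'.encode('utf-8')) == 2   # last 2-byte code point
assert len('\u0800'.encode('utf-8')) == 3   # first 3-byte code point
```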
  4. 19 Feb, 2012 3 commits
  5. 18 Feb, 2012 5 commits
  6. 17 Feb, 2012 1 commit
    • Tom Lane
      Fix longstanding error in contrib/intarray's int[] & int[] operator. · 06d9afa6
      Tom Lane authored
      The array intersection code would give wrong results if the first
      entry of the correct output array is "1".  (I think only this value
      could be at risk, since the previous word would always be a
      lower-bound entry with that fixed value.)
      
      Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,
      cosmetic improvements by me.
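      For context, the int[] & int[] operator computes the intersection of
      two sorted integer arrays.  A merge-style loop of the general kind
      involved looks like this; this is a Python sketch of the technique,
      not the actual intarray C code that contained the bug:

```python
def intersect_sorted(a, b):
    """Merge-style intersection of two sorted int lists (a sketch of
    the general technique, not the actual intarray C code)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out
```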
  7. 16 Feb, 2012 5 commits
    • Tom Lane
      Improve statistics estimation to make some use of DISTINCT in sub-queries. · 4767bc8f
      Tom Lane authored
      Formerly, we just punted when trying to estimate stats for variables coming
      out of sub-queries using DISTINCT, on the grounds that whatever stats we
      might have for underlying table columns would be inapplicable.  But if the
      sub-query has only one DISTINCT column, we can consider its output variable
      as being unique, which is useful information all by itself.  The scope of
      this improvement is pretty narrow, but it costs nearly nothing, so we might
      as well do it.  Per discussion with Andres Freund.
      
      This patch differs from the draft I submitted yesterday in updating various
      comments about vardata.isunique (to reflect its extended meaning) and in
      tweaking the interaction with security_barrier views.  There does not seem
      to be a reason why we can't use this sort of knowledge even when the
      sub-query is such a view.
    • Robert Haas
      pg_dump: Miscellaneous tightening based on recent refactorings. · 1cc1b91d
      Robert Haas authored
      Use exit_horribly() and ExecuteSqlQueryForSingleRow() in various
      places where it's equivalent, or nearly equivalent, to the prior
      coding. Apart from being more compact, this also makes the error
      messages for the wrong-number-of-tuples case more consistent.
    • Robert Haas
      pg_dump: Remove global connection pointer. · 689d0eb7
      Robert Haas authored
      Parallel pg_dump wants to have multiple ArchiveHandle objects, and
      therefore multiple PGconns, in play at the same time.  This should
      be just about the end of the refactoring that we need in order to
      make that workable.
    • Robert Haas
      Refactor pg_dump.c to avoid duplicating returns-one-row check. · 549e93c9
      Robert Haas authored
      Any patches apt to get broken have probably already been broken by the
      error-handling cleanups I just did, so we might as well clean this up
      at the same time.
    • Robert Haas
      Invent on_exit_nicely for pg_dump. · e9a22259
      Robert Haas authored
      Per recent discussions on pgsql-hackers regarding parallel pg_dump.
  8. 15 Feb, 2012 10 commits
  9. 14 Feb, 2012 4 commits