1. 24 Feb, 2012 2 commits
    • Fix the general case of quantified regex back-references. · 173e29aa
      Tom Lane authored
      Cases where a back-reference is part of a larger subexpression that
      is quantified have never worked in Spencer's regex engine, because
      he used a compile-time transformation that neglected the need to
      check the back-reference match in iterations before the last one.
      (That was okay for capturing parens, and we still do it if the
      regex has *only* capturing parens ... but it's not okay for backrefs.)
      
      To make this work properly, we have to add an "iteration" node type
      to the regex engine's vocabulary of sub-regex nodes.  Since this is a
      moderately large change with a fair risk of introducing new bugs of its
      own, apply to HEAD only, even though it's a fix for a longstanding bug.
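
      To see the intended semantics from SQL once the iteration node is in place
      (the pattern and test strings here are illustrative, not taken from the
      commit), every iteration of the quantified group has to re-match the
      captured text:

          SELECT 'axaxa' ~ '^([ab])(x\1)*$';  -- true: each iteration repeats the captured "a"
          SELECT 'axbxa' ~ '^([ab])(x\1)*$';  -- false: "xb" does not repeat the backref, and
                                              -- matching zero iterations leaves text before $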
    • Correctly handle NULLs in JSON output. · 0c9e5d5e
      Andrew Dunstan authored
      Error reported by David Wheeler.
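
      The commit message doesn't show the failing case, but the user-visible
      contract is simply that a SQL NULL comes out as the JSON literal null;
      roughly, with the 9.2 row_to_json function:

          SELECT row_to_json(ROW(1, NULL::int, 'x'::text));
          -- expected output along the lines of: {"f1":1,"f2":null,"f3":"x"}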
  2. 23 Feb, 2012 10 commits
    • Last-minute release note updates. · b2ce6070
      Tom Lane authored
      Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868
    • Convert newlines to spaces in names written in pg_dump comments. · 89e0bac8
      Tom Lane authored
      pg_dump was incautious about sanitizing object names that are emitted
      within SQL comments in its output script.  A name containing a newline
      would at least render the script syntactically incorrect.  Maliciously
      crafted object names could present a SQL injection risk when the script
      is reloaded.
      
      Reported by Heikki Linnakangas, patch by Robert Haas
      
      Security: CVE-2012-0868
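
      As an illustration (hypothetical object name), a table created like this
      used to carry its raw newline into the script's "-- Name: ..." comment;
      after the fix the newline is written out as a space:

          CREATE TABLE "evil
          name" (id int);   -- the quoted identifier contains a newline (and this indentation)
          -- the dump comment now reads roughly:  -- Name: evil name; Type: TABLE; ...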
    • Remove arbitrary limitation on length of common name in SSL certificates. · 077711c2
      Tom Lane authored
      Both libpq and the backend would truncate a common name extracted from a
      certificate at 32 bytes.  Replace that fixed-size buffer with a dynamically
      allocated string so that there is no hard limit.  While at it, remove the
      code for extracting peer_dn, which we weren't using for anything; and
      don't bother to store peer_cn longer than we need it in libpq.
      
      This limit was not so terribly unreasonable when the code was written,
      because we weren't using the result for anything critical, just logging it.
      But now that there are options for checking the common name against the
      server host name (in libpq) or using it as the user's name (in the server),
      this could result in undesirable failures.  In the worst case it even seems
      possible to spoof a server name or user name, if the correct name is
      exactly 32 bytes and the attacker can persuade a trusted CA to issue a
      certificate in which that string is a prefix of the certificate's common
      name.  (To exploit this for a server name, he'd also have to send the
      connection astray via phony DNS data or some such.)  The case that this is
      a realistic security threat is a bit thin, but nonetheless we'll treat it
      as one.
      
      Back-patch to 8.4.  Older releases contain the faulty code, but it's not
      a security problem because the common name wasn't used for anything
      interesting.
      
      Reported and patched by Heikki Linnakangas
      
      Security: CVE-2012-0867
    • Require execute permission on the trigger function for CREATE TRIGGER. · 891e6e7b
      Tom Lane authored
      This check was overlooked when we added function execute permissions to the
      system years ago.  For an ordinary trigger function it's not a big deal,
      since trigger functions execute with the permissions of the table owner,
      so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
      have done anyway.  However, if a trigger function is SECURITY DEFINER,
      that is not the case.  The lack of checking would allow another user to
      install it on his own table and then invoke it with what is, essentially,
      forged input data.  The trigger function is unlikely to detect this, and so
      might do something undesirable, for instance inserting false entries in an
      audit log table.
      
      Reported by Dinesh Kumar, patch by Robert Haas
      
      Security: CVE-2012-0866
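
      In SQL terms (names here are hypothetical), the role issuing CREATE TRIGGER
      now needs EXECUTE on the trigger function:

          -- as the function's owner:
          GRANT EXECUTE ON FUNCTION audit_log_trigger() TO some_user;
          -- as some_user, on a table some_user owns; fails without the GRANT above:
          CREATE TRIGGER audit_insert BEFORE INSERT ON some_user_table
              FOR EACH ROW EXECUTE PROCEDURE audit_log_trigger();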
    • Allow MinGW builds to use standardly-named OpenSSL libraries. · 74e29162
      Tom Lane authored
      In the Fedora variant of MinGW, the openssl libraries have their normal
      names, not libeay32 and libssleay32.  Adjust configure probes to allow
      that, per bug #6486.
      
      Tomasz Ostrowski
    • Remove inappropriate quotes · c9d70044
      Peter Eisentraut authored
      And adjust wording for consistency.
    • Fix build without OpenSSL · 8251670c
      Peter Eisentraut authored
      This is a fixup for commit a445cb92.
    • Don't install hstore--1.0.sql any more. · d4fb2f99
      Robert Haas authored
      Since the current version is 1.1, the 1.0 file isn't really needed.  We do
      need the 1.0--1.1 upgrade file, so people on 1.0 can upgrade.
      
      Per recent discussion on pgsql-hackers.
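
      Sites still on 1.0 can move to the shipped version with the usual extension
      upgrade command:

          ALTER EXTENSION hstore UPDATE TO '1.1';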
    • Make EXPLAIN (BUFFERS) track blocks dirtied, as well as those written. · 22543674
      Robert Haas authored
      Also expose the new counters through pg_stat_statements.
      
      Patch by me.  Review by Fujii Masao and Greg Smith.
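
      Usage is unchanged; the Buffers lines of EXPLAIN (ANALYZE, BUFFERS) output
      simply gain a dirtied figure alongside the written one, along the lines of
      (table name hypothetical):

          EXPLAIN (ANALYZE, BUFFERS) UPDATE accounts SET flagged = true;
          --   Buffers: shared hit=... read=... dirtied=... written=...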
    • Fix typo in comment. · f74f9a27
      Robert Haas authored
      Sandro Santilli
  3. 22 Feb, 2012 4 commits
  4. 21 Feb, 2012 5 commits
  5. 20 Feb, 2012 4 commits
    • Don't reject threaded Python on FreeBSD. · c0efc2c2
      Tom Lane authored
      According to Chris Rees, this has worked for a while, and the current
      FreeBSD port is removing the test anyway.
    • Fix a couple of cases of JSON output. · 83fcaffe
      Andrew Dunstan authored
      First, as noted by Itagaki Takahiro, a datum of type JSON doesn't
      need to be escaped. Second, ensure that numeric output not in
      the form of a legal JSON number is quoted and escaped.
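
      Both points are visible from SQL (a hedged illustration; the f1/f2 labels
      come from the anonymous row type): a json datum is passed through verbatim,
      while NaN, which is not a legal JSON number, is rendered as a quoted string.

          SELECT row_to_json(ROW('{"a":1}'::json, 'NaN'::float8));
          -- expected output along the lines of: {"f1":{"a":1},"f2":"NaN"}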
    • Fix regex back-references that are directly quantified with *. · 5223f96d
      Tom Lane authored
      The syntax "\n*", that is a backref with a * quantifier directly applied
      to it, has never worked correctly in Spencer's library.  This has been an
      open bug in the Tcl bug tracker since 2005:
      https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894
      
      The core of the problem is in parseqatom(), which first changes "\n*" to
      "\n+|" and then applies repeat() to the NFA representing the backref atom.
      repeat() thinks that any arc leading into its "rp" argument is part of the
      sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
      the arc that was intended to represent the empty bypass around "\n+", this
      arc gets moved too, so that it now leads into the state loop created by
      repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
      something that represents zero or more repetitions of the NFA representing
      the backref atom.  In the original example, in place of
      	^([bc])\1*$
      we now have something that acts like
      	^([bc])(\1+|[bc]*)$
      At runtime, the branch involving the actual backref fails, as it's supposed
      to, but then the other branch succeeds anyway.
      
      We could no doubt fix this by some rearrangement of the operations in
      parseqatom(), but that code is plenty ugly already, and what's more the
      whole business of converting "x*" to "x+|" probably needs to go away to fix
      another problem I'll mention in a moment.  Instead, this patch suppresses
      the *-conversion when the target is a simple backref atom, leaving the case
      of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
      one-liner, at the cost of having to tweak cbrdissect() a little.  In the
      event I went a bit further than that and rewrote cbrdissect() to check all
      the string-length-related conditions before it starts comparing characters.
      It seems a bit stupid to possibly iterate through many copies of an
      n-character backreference, only to fail at the end because the target
      string's length isn't a multiple of n --- we could have found that out
      before starting.  The existing coding could only be a win if integer
      division is hugely expensive compared to character comparison, but I don't
      know of any modern machine where that might be true.
      
      This does not fix all the problems with quantified back-references.  In
      particular, the code is still broken for back-references that appear within
      a larger expression that is quantified (so that direct insertion of the
      quantification limits into the BACKREF node doesn't apply).  I think fixing
      that will take some major surgery on the NFA code, specifically introducing
      an explicit iteration node type instead of trying to transform iteration
      into concatenation of modified regexps.
      
      Back-patch to all supported branches.  In HEAD, also add a regression test
      case for this.  (It may seem a bit silly to create a regression test file
      for just one test case; but I'm expecting that we will soon import a whole
      bunch of regex regression tests from Tcl, so might as well create the
      infrastructure now.)
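
      Checking the commit's own example pattern from SQL:

          SELECT 'bb' ~ '^([bc])\1*$';  -- true: "b" followed by repetitions of the backref
          SELECT 'bc' ~ '^([bc])\1*$';  -- false once fixed; the bogus bypass acted like [bc]* and accepted it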
    • Add caching of ctype.h/wctype.h results in regc_locale.c. · e00f68e4
      Tom Lane authored
      While this doesn't save a huge amount of runtime, it still seems worth
      doing, especially since I realized that the data copying I did in my first
      draft was quite unnecessary.  In this version, once we have the results
      cached, getting them back for re-use is really very cheap.
      
      Also, remove the hard-wired limitation to not consider wctype.h results for
      character codes above 255.  It turns out that we can't push the limit as
      far up as I'd originally hoped, because the regex colormap code is not
      efficient enough to cope very well with character classes containing many
      thousand letters, which a Unicode locale is entirely capable of producing.
      Still, we can push it up to U+7FF (which I chose as the limit of 2-byte
      UTF8 characters), which will at least make Eastern Europeans happy pending
      a better solution.  Thus, this commit resolves the specific complaint in
      bug #6457, but not the more general issue that letters of non-western
      alphabets are mostly not recognized as matching [[:alpha:]].
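
      In a UTF-8 database with a suitable locale, the practical effect is roughly
      as follows (results are locale-dependent, so treat this as a sketch):

          SELECT 'ж' ~ '[[:alpha:]]';   -- Cyrillic zhe, U+0436: below U+7FF, now classified via wctype.h
          SELECT '漢' ~ '[[:alpha:]]';  -- U+6F22: above the new limit, still not treated as alphabetic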
  6. 19 Feb, 2012 3 commits
  7. 18 Feb, 2012 5 commits
  8. 17 Feb, 2012 1 commit
    • Fix longstanding error in contrib/intarray's int[] & int[] operator. · 06d9afa6
      Tom Lane authored
      The array intersection code would give wrong results if the first entry of
      the correct output array was "1".  (I think only this value could be
      at risk, since the previous word would always be a lower-bound entry with
      that fixed value.)
      
      Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,
      cosmetic improvements by me.
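
      The operator in question is intarray's sorted intersection; a minimal check
      of the affected case (a result whose first element is 1):

          CREATE EXTENSION intarray;
          SELECT ARRAY[1,2,3] & ARRAY[1,3,5];  -- expected: {1,3}
          SELECT ARRAY[1,2] & ARRAY[1,4];      -- expected: {1}, the case that was broken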
  9. 16 Feb, 2012 5 commits
    • Improve statistics estimation to make some use of DISTINCT in sub-queries. · 4767bc8f
      Tom Lane authored
      Formerly, we just punted when trying to estimate stats for variables coming
      out of sub-queries using DISTINCT, on the grounds that whatever stats we
      might have for underlying table columns would be inapplicable.  But if the
      sub-query has only one DISTINCT column, we can consider its output variable
      as being unique, which is useful information all by itself.  The scope of
      this improvement is pretty narrow, but it costs nearly nothing, so we might
      as well do it.  Per discussion with Andres Freund.
      
      This patch differs from the draft I submitted yesterday in updating various
      comments about vardata.isunique (to reflect its extended meaning) and in
      tweaking the interaction with security_barrier views.  There does not seem
      to be a reason why we can't use this sort of knowledge even when the
      sub-query is such a view.
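
      For example (hypothetical schema), the planner can now treat d.customer_id
      below as unique when estimating the join, instead of punting:

          EXPLAIN
          SELECT *
          FROM   orders o
          JOIN  (SELECT DISTINCT customer_id FROM orders) d
                 ON d.customer_id = o.customer_id;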
    • pg_dump: Miscellaneous tightening based on recent refactorings. · 1cc1b91d
      Robert Haas authored
      Use exit_horribly() and ExecuteSqlQueryForSingleRow() in various
      places where it's equivalent, or nearly equivalent, to the prior
      coding. Apart from being more compact, this also makes the error
      messages for the wrong-number-of-tuples case more consistent.
    • pg_dump: Remove global connection pointer. · 689d0eb7
      Robert Haas authored
      Parallel pg_dump wants to have multiple ArchiveHandle objects, and
      therefore multiple PGconns, in play at the same time.  This should
      be just about the end of the refactoring that we need in order to
      make that workable.
    • Refactor pg_dump.c to avoid duplicating returns-one-row check. · 549e93c9
      Robert Haas authored
      Any patches apt to get broken have probably already been broken by the
      error-handling cleanups I just did, so we might as well clean this up
      at the same time.
    • Invent on_exit_nicely for pg_dump. · e9a22259
      Robert Haas authored
      Per recent discussions on pgsql-hackers regarding parallel pg_dump.
  10. 15 Feb, 2012 1 commit
    • Run a portal's cleanup hook immediately when pushing it to FAILED state. · 4bfe68df
      Tom Lane authored
      This extends the changes of commit 6252c4f9
      so that we run the cleanup hook earlier for failure cases as well as
      success cases.  As before, the point is to avoid an assertion failure from
      an Assert I added in commit a874fe7b, which
      was meant to check that no user-written code can be called during portal
      cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
      could be triggered during backend exit (see the new regression test case),
      and also prevents the possibility that the cleanup hook is run after
      portions of the portal's state have already been recycled.  That doesn't
      really matter in current usage, but it foreseeably could matter in the
      future.
      
      Back-patch to 9.1 where the Assert in question was added.