1. 07 Apr, 2017 17 commits
    • Fix printf format to use %zd when printing sizes · 8acc1e0f
      Alvaro Herrera authored
      Using %ld as we were doing raises compiler warnings on 32-bit platforms.
      
      Reported by Andres Freund.
      Discussion: https://postgr.es/m/20170407214022.fidezl2e6rk3tuiz@alap3.anarazel.de
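      The rule behind this fix is generic C: sizes have type size_t, whose
      width differs from long on 32-bit platforms, and the C99 "z" length
      modifier adapts to size_t automatically. A minimal sketch of the
      pattern (illustrative code and names, not the patched PostgreSQL
      source):

      ```c
      #include <stdio.h>
      #include <stddef.h>

      /* Render a byte count with the C99 "z" length modifier, which matches
       * size_t on every platform.  Using "%ld" instead raises -Wformat
       * warnings on 32-bit systems, where long and size_t can differ. */
      int format_size(char *buf, size_t buflen, size_t nbytes)
      {
          return snprintf(buf, buflen, "allocated %zu bytes", nbytes);
      }

      int main(void)
      {
          char buf[64];

          format_size(buf, sizeof(buf), (size_t) 4096);
          puts(buf);                  /* allocated 4096 bytes */
          return 0;
      }
      ```

      (%zd is the signed counterpart, appropriate when the value being
      printed is a signed size type.)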
    • Reduce the number of pallocs() in BRIN · 8bf74967
      Alvaro Herrera authored
      Instead of allocating memory in brin_deform_tuple and brin_copy_tuple
      over and over during a scan, allow reuse of previously allocated memory.
      This is said to make for a measurable performance improvement.
      
      Author: Jinyu Zhang, Álvaro Herrera
      Reviewed by: Tomas Vondra
      Discussion: https://postgr.es/m/495deb78.4186.1500dacaa63.Coremail.beijing_pg@163.com
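      The optimization is a classic allocate-once pattern: the deform
      function accepts the result of a previous call instead of allocating
      fresh memory each time. A toy analogue in plain C (the names and the
      malloc-based scheme are illustrative; the real code reuses palloc'd
      memory owned by the scan, and the row width is assumed constant here
      for simplicity):

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Toy analogue of the brin_deform_tuple change: rather than
       * allocating a fresh result on every call, the caller can hand back
       * the result of a previous call for reuse. */
      typedef struct DeformedRow
      {
          size_t  nvalues;
          int    *values;
      } DeformedRow;

      DeformedRow *
      deform_row(const int *raw, size_t n, DeformedRow *reuse)
      {
          DeformedRow *row = reuse;

          if (row == NULL)
          {
              /* first call: allocate once */
              row = malloc(sizeof(DeformedRow));
              row->values = malloc(n * sizeof(int));
              row->nvalues = n;
          }
          memcpy(row->values, raw, n * sizeof(int));
          return row;
      }

      int main(void)
      {
          int batch1[] = {1, 2, 3};
          int batch2[] = {4, 5, 6};
          DeformedRow *row;

          row = deform_row(batch1, 3, NULL);  /* allocates */
          row = deform_row(batch2, 3, row);   /* reuses, no new allocation */
          free(row->values);
          free(row);
          return 0;
      }
      ```

      Over a scan touching many ranges, this turns per-tuple allocations
      into a single allocation amortized across the whole scan.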
    • Improve 64bit atomics support. · e8fdbd58
      Andres Freund authored
      When adding atomics back in b64d92f1, I added 64bit support as
      optional; there wasn't yet a direct user in sight.  That turned out to
      be a bit short-sighted; it'd already have been useful a number of times.
      
      Add a fallback implementation of 64bit atomics, just like the one we
      have for 32bit atomics.
      
      Additionally, optimize 64bit reads/writes on a number of platforms
      where aligned writes of that size are atomic. This can now be tested
      with PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY.
      
      Author: Andres Freund
      Reviewed-By: Amit Kapila
      Discussion: https://postgr.es/m/20160330230914.GH13305@awork2.anarazel.de
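      The "fallback implementation" idea can be pictured as a lock-emulated
      64-bit fetch-add: where the platform has no native 8-byte atomic
      instructions, the operation runs under a spinlock. A standalone C11
      sketch (PostgreSQL's real fallback uses its own spinlock/semaphore
      machinery in src/backend/port/atomics.c; atomic_flag merely stands in
      for it here):

      ```c
      #include <stdatomic.h>
      #include <stdint.h>

      /* Emulated 64-bit atomic for platforms lacking native 8-byte atomic
       * instructions: a plain uint64_t guarded by a spinlock. */
      typedef struct
      {
          atomic_flag lock;
          uint64_t    value;
      } fallback_atomic_u64;

      uint64_t
      fallback_fetch_add_u64(fallback_atomic_u64 *a, uint64_t add)
      {
          uint64_t    old;

          while (atomic_flag_test_and_set_explicit(&a->lock,
                                                   memory_order_acquire))
              ;                       /* spin until the lock is free */
          old = a->value;
          a->value = old + add;
          atomic_flag_clear_explicit(&a->lock, memory_order_release);
          return old;
      }

      int main(void)
      {
          fallback_atomic_u64 a = { .value = 0 };

          atomic_flag_clear(&a.lock); /* initialize the emulated lock */
          fallback_fetch_add_u64(&a, 5);
          fallback_fetch_add_u64(&a, 7);  /* a.value is now 12 */
          return 0;
      }
      ```

      The single-copy-atomicity optimization mentioned above is the
      complementary case: where an aligned 8-byte store is already atomic,
      plain reads/writes need no lock at all.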
    • Fix compiler warning · 28afad5c
      Peter Eisentraut authored
      on MSVC 2010
      
      Author: Michael Paquier <michael.paquier@gmail.com>
    • Avoid using a C++ keyword in header file · 0cb2e519
      Peter Eisentraut authored
      per cpluspluscheck
    • Fix new BRIN desummarize WAL record · 817cb100
      Alvaro Herrera authored
      The WAL-writing piece was forgetting to set the pages-per-range value.
      Also, fix the declared type of struct member heapBlk, which I mistakenly
      set as OffsetNumber rather than BlockNumber.
      
      Problem was introduced by commit c655899b (April 1st).  Any system
      that tries to replay the new WAL record written before this fix is
      likely to die on replay and require pg_resetwal.
      
      Reported by Tom Lane.
      Discussion: https://postgr.es/m/20191.1491524824@sss.pgh.pa.us
    • Remove duplicate assignment. · 0c732850
      Heikki Linnakangas authored
      Harmless, but clearly wrong.
      
      Kyotaro Horiguchi
    • Fix planner error (or assert trap) with nested set operations. · 89deca58
      Tom Lane authored
      As reported by Sean Johnston in bug #14614, since 9.6 the planner can fail
      due to trying to look up the referent of a Var with varno 0.  This happens
      because we generate such Vars in generate_append_tlist, for lack of any
      better way to describe the output of a SetOp node.  In typical situations
      nothing really cares about that, but given nested set-operation queries
      we will call estimate_num_groups on the output of the subquery, and that
      wants to know what a Var actually refers to.  That logic used to look at
      subquery->targetList, but in commit 3fc6e2d7 I'd switched it to look at
      subroot->processed_tlist, ie the actual output of the subquery plan not the
      parser's idea of the result.  It seemed like a good idea at the time :-(.
      As a band-aid fix, change it back.
      
      Really we ought to have an honest way of naming the outputs of SetOp steps,
      which suggests that it'd be a good idea for the parser to emit an RTE
      corresponding to each one.  But that's a task for another day, and it
      certainly wouldn't yield a back-patchable fix.
      
      Report: https://postgr.es/m/20170407115808.25934.51866@wrigleys.postgresql.org
    • Use SASLprep to normalize passwords for SCRAM authentication. · 60f11b87
      Heikki Linnakangas authored
      An important step of SASLprep normalization is to convert the string to
      Unicode normalization form NFKC. Unicode normalization requires a fairly
      large table of character decompositions, which is generated from data
      published by the Unicode consortium. The script to generate the table is
      put in src/common/unicode, along with test code for the normalization.
      A pre-generated version of the tables is included in src/include/common,
      so you don't need the code in src/common/unicode to build PostgreSQL
      unless you wish to modify the normalization tables.
      
      The SASLprep implementation depends on the UTF-8 functions from
      src/backend/utils/mb/wchar.c. So to use it, you must also compile and link
      that. That doesn't change anything for the current users of these
      functions, the backend and libpq, as they both already link with wchar.o.
      It would be good to move those functions into a separate file in
      src/common, but I'll leave that for another day.
      
      No documentation changes are included, because there are no details on the
      SCRAM mechanism in the docs anyway. An overview on that in the protocol
      specification would probably be good, even though SCRAM is documented in
      detail in RFC5802. I'll write that as a separate patch. An important thing
      to mention there is that we apply SASLprep even on invalid UTF-8 strings,
      to support other encodings.
      
      Patch by Michael Paquier and me.
      
      Discussion: https://www.postgresql.org/message-id/CAB7nPqSByyEmAVLtEf1KxTRh=PWNKiWKEKQR=e1yGehz=wbymQ@mail.gmail.com
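      The NFKC step can be pictured as a compatibility-decomposition pass:
      code points with compatibility equivalents are replaced so that
      visually or semantically equivalent passwords normalize to the same
      bytes before hashing. The two-entry hardcoded map below is purely
      didactic; the real implementation walks the large decomposition
      tables generated from Unicode data in src/common/unicode:

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Two genuine NFKC compatibility mappings, hardcoded for
       * illustration: U+FB01 (ligature "fi") -> "fi" and
       * U+FF21 (fullwidth "A") -> "A". */
      typedef struct
      {
          const char *from;       /* UTF-8 bytes of the composed form */
          const char *to;         /* NFKC-normalized replacement */
      } compat_entry;

      static const compat_entry compat_map[] = {
          { "\xEF\xAC\x81", "fi" },   /* U+FB01 LATIN SMALL LIGATURE FI */
          { "\xEF\xBC\xA1", "A"  },   /* U+FF21 FULLWIDTH LATIN CAPITAL A */
      };

      void
      toy_nfkc(const char *in, char *out, size_t outlen)
      {
          size_t  o = 0;

          while (*in && o + 4 < outlen)
          {
              int     matched = 0;

              for (size_t i = 0;
                   i < sizeof(compat_map) / sizeof(compat_map[0]); i++)
              {
                  size_t  flen = strlen(compat_map[i].from);

                  if (strncmp(in, compat_map[i].from, flen) == 0)
                  {
                      size_t  tlen = strlen(compat_map[i].to);

                      memcpy(out + o, compat_map[i].to, tlen);
                      o += tlen;
                      in += flen;
                      matched = 1;
                      break;
                  }
              }
              if (!matched)
                  out[o++] = *in++;   /* copy unmapped bytes through */
          }
          out[o] = '\0';
      }

      int main(void)
      {
          char    buf[32];

          toy_nfkc("\xEF\xAC\x81le", buf, sizeof(buf));   /* "ﬁle" */
          puts(buf);                                      /* file */
          return 0;
      }
      ```

      Real NFKC also performs canonical reordering and recomposition,
      which this sketch omits entirely.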
    • Fix typo in comment · 32e33a79
      Magnus Hagander authored
      Masahiko Sawada
    • Remove extraneous comma to satisfy picky compiler · 88dd4e48
      Andrew Dunstan authored
      per buildfarm
    • Make json_populate_record and friends operate recursively · cf35346e
      Andrew Dunstan authored
      With this change array fields are populated from json(b) arrays, and
      composite fields are populated from json(b) objects.
      
      Along the way, some significant code refactoring is done to remove
      redundancy in the way the populate_record[_set] and to_record[_set]
      functions operate, and some significant efficiency gains are made by
      caching tuple descriptors.
      
      Nikita Glukhov, edited some by me.
      
      Reviewed by Aleksander Alekseev and Tom Lane.
    • Remove use of Jade and DSSSL · 510074f9
      Peter Eisentraut authored
      All documentation is now built using XSLT.  Remove all references to
      Jade and DSSSL, as well as JadeTex and some other outdated tooling.
      
      For chunked HTML builds, this changes nothing, but removes the
      transitional "oldhtml" target.  The single-page HTML build is ported
      over to XSLT.  For PDF builds, this removes the JadeTex builds and moves
      the FOP builds in their place.
    • Clean up after insufficiently-researched optimization of tuple conversions. · 3f902354
      Tom Lane authored
      tupconvert.c's functions formerly considered that an explicit tuple
      conversion was necessary if the input and output tupdescs contained
      different type OIDs.  The point of that was to make sure that a composite
      datum resulting from the conversion would contain the destination rowtype
      OID in its composite-datum header.  However, commit 3838074f entirely
      misunderstood what that check was for, thinking that it had something to do
      with presence or absence of an OID column within the tuple.  Removal of the
      check broke the no-op conversion path in ExecEvalConvertRowtype, as
      reported by Ashutosh Bapat.
      
      It turns out that of the dozen or so call sites for tupconvert.c functions,
      ExecEvalConvertRowtype is the only one that cares about the composite-datum
      header fields in the output tuple.  In all the rest, we'd much rather avoid
      an unnecessary conversion whenever the tuples are physically compatible.
      Moreover, the comments in tupconvert.c only promise physical compatibility
      not a metadata match.  So, let's accept the removal of the guarantee about
      the output tuple's rowtype marking, recognizing that this is a API change
      that could conceivably break third-party callers of tupconvert.c.  (So,
      let's remember to mention it in the v10 release notes.)
      
      However, commit 3838074f did have a bit of a point here, in that two
      tuples mustn't be considered physically compatible if one has HEAP_HASOID
      set and the other doesn't.  (Some of the callers of tupconvert.c might not
      really care about that, but we can't assume it in general.)  The previous
      check accidentally covered that issue, because no RECORD types ever have
      OIDs, while if two tupdescs have the same named composite type OID then,
      a fortiori, they have the same tdhasoid setting.  If we're removing the
      type OID match check then we'd better include tdhasoid match as part of
      the physical compatibility check.
      
      Without that hack in tupconvert.c, we need ExecEvalConvertRowtype to take
      responsibility for inserting the correct rowtype OID label whenever
      tupconvert.c decides it need not do anything.  This is easily done with
      heap_copy_tuple_as_datum, which will be considerably faster than a tuple
      disassembly and reassembly anyway; so from a performance standpoint this
      change is a win all around compared to what happened in earlier branches.
      It just means a couple more lines of code in ExecEvalConvertRowtype.
      
      Ashutosh Bapat and Tom Lane
      
      Discussion: https://postgr.es/m/CAFjFpRfvHABV6+oVvGcshF8rHn+1LfRUhj7Jz1CDZ4gPUwehBg@mail.gmail.com
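      The compatibility rule the commit settles on can be sketched as a
      predicate over toy tuple descriptors: identical column layout plus a
      matching has-OID flag means no conversion is needed. All names below
      are illustrative simplifications of TupleDesc/tdhasoid, not the
      actual tupconvert.c code:

      ```c
      #include <stdbool.h>
      #include <string.h>

      /* Simplified stand-in for TupleDesc: a list of attribute type OIDs
       * plus the tdhasoid flag.  Real descriptors also carry typmods,
       * attlen/attalign, dropped columns, etc. */
      #define MAX_ATTS 8

      typedef struct ToyTupleDesc
      {
          int         natts;
          unsigned    atttypid[MAX_ATTS];
          bool        tdhasoid;   /* does the tuple carry HEAP_HASOID? */
      } ToyTupleDesc;

      /* Physical compatibility in the sense of this commit: identical
       * column layout AND identical has-OID setting.  The rowtype OID
       * itself is deliberately NOT compared; labeling the output datum is
       * now the caller's job. */
      bool
      physically_compatible(const ToyTupleDesc *a, const ToyTupleDesc *b)
      {
          if (a->tdhasoid != b->tdhasoid)
              return false;
          if (a->natts != b->natts)
              return false;
          return memcmp(a->atttypid, b->atttypid,
                        a->natts * sizeof(a->atttypid[0])) == 0;
      }

      int main(void)
      {
          ToyTupleDesc x = { 2, {23, 25}, false };
          ToyTupleDesc y = { 2, {23, 25}, false };
          ToyTupleDesc z = { 2, {23, 25}, true };  /* same columns, has OIDs */

          (void) physically_compatible(&x, &y);   /* true: skip conversion */
          (void) physically_compatible(&x, &z);   /* false: tdhasoid differs */
          return 0;
      }
      ```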
  2. 06 Apr, 2017 23 commits