    Permit dump/reload of not-too-large >1GB tuples · fa2fa995
    Our documentation states that our maximum field size is 1 GB, and that
    our maximum row size is 1.6 TB.  However, while this might be attainable
    in theory with enough contortions, it is not workable in practice; for
    starters, pg_dump fails to dump tables containing rows larger than 1 GB,
    even if individual columns are well below the limit; and even if one
    does manage to manufacture a dump file containing a row that large, the
    server refuses to load it anyway.
    
    This commit enables dumping and reloading of such tuples, provided two
    conditions are met:
    
    1. no single column is larger than 1 GB (in output size -- for bytea
       this includes the formatting overhead)
    2. the whole row is not larger than 2 GB
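
    To make the new limits concrete (illustrative sizes, not taken from the
    patch): a row of two 900 MB text columns, roughly 1.8 GB of COPY
    output, now dumps and reloads; a single 1.2 GB column still fails
    condition 1; and four 600 MB columns, roughly 2.4 GB in total, still
    fail condition 2.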
    
    There are three related changes to enable this:
    
    a. StringInfo's API now has two additional functions that allow creating
    a string that grows beyond the typical 1GB limit (a "long" string); see
    the sketch after item (c).  ABI compatibility is maintained.  We still
    limit these strings to 2 GB, though, for reasons explained below.
    
    b. COPY now uses long StringInfos, so that pg_dump doesn't choke
    trying to emit rows longer than 1GB.
    
    c. heap_form_tuple now uses the MCXT_ALLOC_HUGE flag in its allocation
    for the input tuple, which means that large tuples are accepted on
    input.  Note that at this point we do not apply any further limit to the
    input tuple size.
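
    A minimal sketch of how (a) and (c) fit together, assuming -- going by
    the patched stringinfo.h -- that the two new entry points are
    makeLongStringInfo() and initLongStringInfo(), while the ordinary
    makeStringInfo()/initStringInfo() keep the 1GB cap:

        #include "postgres.h"
        #include "access/htup_details.h"
        #include "lib/stringinfo.h"
        #include "utils/memutils.h"

        /*
         * Accumulate COPY text for one row.  Only a "long" StringInfo may
         * grow past 1GB; it is still capped at 2GB, for the protocol
         * reasons explained below.
         */
        static void
        emit_row(const char *colval, int collen)
        {
            StringInfoData row;

            initLongStringInfo(&row);   /* long variant of initStringInfo() */
            appendBinaryStringInfo(&row, colval, collen);
            /* ... remaining columns, separators, trailing newline ... */
            pfree(row.data);
        }

        /*
         * The heap_form_tuple side: allocate the constructed tuple with
         * the huge-allocation flag so that rows past 1GB are accepted on
         * input.  (len stands for the computed tuple length.)
         */
        static HeapTuple
        alloc_wide_tuple(Size len)
        {
            return (HeapTuple)
                MemoryContextAllocExtended(CurrentMemoryContext,
                                           HEAPTUPLESIZE + len,
                                           MCXT_ALLOC_HUGE | MCXT_ALLOC_ZERO);
        }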
    
    The main reason to limit to 2 GB is that the FE/BE protocol uses 32-bit
    length words to describe each row; and because the documentation is
    ambiguous about their signedness and libpq does consider them signed, we
    cannot use the highest-order bit.  Additionally, the StringInfo API uses
    "int" (which is 4 bytes wide on most platforms) in many places, so we
    would need to change that API too in order to go beyond 2 GB, which
    would have lots of fallout.
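
    To put a number on that limit: a signed 32-bit length word tops out at
    2^31 - 1 bytes, just under 2 GB.  A toy standalone illustration (not
    libpq code):

        #include <inttypes.h>
        #include <stdio.h>

        int
        main(void)
        {
            /* FE/BE row lengths travel in 32-bit words; libpq treats them
             * as signed, so the high-order bit cannot carry size. */
            printf("max row message: %" PRId32 " bytes (%.2f GB)\n",
                   INT32_MAX, INT32_MAX / 1073741824.0);

            /* A 2.5 GB length turns negative when read back as signed
             * (two's-complement reinterpretation). */
            uint32_t too_big = 2684354560u;     /* 2.5 GB */
            printf("2.5 GB read as signed: %" PRId32 "\n", (int32_t) too_big);
            return 0;
        }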
    
    Backpatch to 9.5, which is the oldest release that has
    MemoryContextAllocExtended, a necessary piece of infrastructure.  We
    could apply this to 9.4 with very minimal additional effort, but going
    any further back would require backpatching "huge" allocations too.
    
    This is the largest set of changes we could find that can be
    back-patched without breaking compatibility with existing systems.
    Fixing a bigger set of problems (for example, dumping tuples bigger than
    2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
    protocol and/or changing the StringInfo API in an ABI-incompatible way,
    neither of which would be back-patchable.
    
    Authors: Daniel Vérité, Álvaro Herrera
    Reviewed by: Tomas Vondra
    Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql