    Increase number of hash join buckets for underestimate. · 30d7ae3c
    Kevin Grittner authored
    If we expect batching at the very beginning, we size nbuckets for
    "full work_mem": we compute how many tuples fit into work_mem
    without exceeding the NTUP_PER_BUCKET threshold.
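    
    A minimal C sketch of that sizing rule (standalone illustration,
    not the actual nodeHash.c code; work_mem_bytes, tuple_width, the
    1024-bucket floor, and the NTUP_PER_BUCKET value of 1 are
    assumptions here):
    
        #include <stddef.h>
    
        #define NTUP_PER_BUCKET 1   /* assumed target load factor */
    
        /* Pick nbuckets for "full work_mem": how many tuples fit in
         * work_mem, divided by the per-bucket tuple target, rounded
         * up to a power of two so bucketno/batchno math stays cheap. */
        static int
        nbuckets_for_full_work_mem(size_t work_mem_bytes, size_t tuple_width)
        {
            size_t max_tuples = work_mem_bytes / tuple_width;
            size_t target = max_tuples / NTUP_PER_BUCKET;
            int nbuckets = 1024;    /* assumed minimum bucket count */
    
            while ((size_t) nbuckets < target)
                nbuckets <<= 1;
            return nbuckets;
        }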
    
    If we expect to be fine without batching, we start with the 'right'
    nbuckets and track the optimal nbuckets as we go (without actually
    resizing the hash table). Once we hit work_mem (accounting for the
    space the optimal number of buckets would occupy), we freeze that
    value.
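    
    A sketch of that tracking step, under the same assumptions (the
    struct and field names below are illustrative stand-ins and loosely
    follow this message, not the real HashJoinTable layout):
    
        #include <stdbool.h>
        #include <stddef.h>
    
        #define NTUP_PER_BUCKET 1   /* assumed target load factor */
    
        typedef struct
        {
            double ntuples;          /* tuples inserted so far */
            int    nbuckets;         /* physical buckets (untouched here) */
            int    nbuckets_optimal; /* tracked "right" bucket count */
            size_t space_used;       /* bytes consumed by stored tuples */
            size_t space_allowed;    /* the work_mem budget */
            bool   growth_frozen;    /* set once work_mem is reached */
        } HashSketch;
    
        static void
        note_tuple_inserted(HashSketch *ht, size_t tuple_size)
        {
            ht->ntuples += 1;
            ht->space_used += tuple_size;
    
            if (ht->growth_frozen)
                return;
    
            /* double the optimal bucket count whenever the load factor
             * would exceed NTUP_PER_BUCKET; the bucket array itself is
             * not resized */
            while (ht->ntuples > (double) ht->nbuckets_optimal * NTUP_PER_BUCKET)
                ht->nbuckets_optimal <<= 1;
    
            /* once tuples plus an optimal-size bucket array would fill
             * work_mem, keep the current value from then on */
            if (ht->space_used + ht->nbuckets_optimal * sizeof(void *) >=
                ht->space_allowed)
                ht->growth_frozen = true;
        }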
    
    At the end of the first batch, we check whether (nbuckets !=
    nbuckets_optimal) and resize the hash table if needed. We also
    keep that value for all subsequent batches (which is safe because
    it assumes full work_mem, and it keeps the batchno evaluation
    trivial), so the resize happens at most once.
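    
    The end-of-first-batch step might then look like this, continuing
    the HashSketch type from the sketch above (rebuild_bucket_array is
    a hypothetical helper standing in for the actual rehash):
    
        /* Hypothetical helper: reallocate the bucket array at the new
         * size and re-link every tuple stored in the first batch. */
        static void rebuild_bucket_array(HashSketch *ht);
    
        /* Run once, at the end of the first batch: if the tracked
         * optimum outgrew the initial estimate, resize exactly once.
         * Later batches reuse the same nbuckets, so the batchno
         * evaluation never changes. */
        static void
        finish_first_batch(HashSketch *ht)
        {
            if (ht->nbuckets == ht->nbuckets_optimal)
                return;         /* initial estimate was good enough */
    
            ht->nbuckets = ht->nbuckets_optimal;
            rebuild_bucket_array(ht);
        }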
    
    There could be cases where it would improve performance to allow
    the NTUP_PER_BUCKET threshold to be exceeded to keep everything in
    one batch rather than spilling to a second batch, but attempts to
    generate such a case have so far been unsuccessful; that issue may
    be addressed with a follow-on patch after further investigation.
    
    Tomas Vondra, with minor format and comment cleanup by me
    Reviewed by Robert Haas, Heikki Linnakangas, and Kevin Grittner