Commit 86a2218e authored by Thomas Munro

Limit Parallel Hash's bucket array to MaxAllocSize.

Make sure that we don't exceed MaxAllocSize when increasing the number of
buckets.  Perhaps later we'll remove that limit and use DSA_ALLOC_HUGE, but
for now just prevent further increases, as the non-parallel code does.  This
change avoids the error from bug report #15225.

Author: Thomas Munro
Reviewed-By: Tom Lane
Reported-by: Frits Jalvingh
Discussion: https://postgr.es/m/152802081668.26724.16985037679312485972%40wrigleys.postgresql.org
parent f6b95ff4
@@ -2818,9 +2818,12 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,
 		{
 			hashtable->batches[0].shared->ntuples += hashtable->batches[0].ntuples;
 			hashtable->batches[0].ntuples = 0;
+			/* Guard against integer overflow and alloc size overflow */
 			if (hashtable->batches[0].shared->ntuples + 1 >
 				hashtable->nbuckets * NTUP_PER_BUCKET &&
-				hashtable->nbuckets < (INT_MAX / 2))
+				hashtable->nbuckets < (INT_MAX / 2) &&
+				hashtable->nbuckets * 2 <=
+				MaxAllocSize / sizeof(dsa_pointer_atomic))
 			{
 				pstate->growth = PHJ_GROWTH_NEED_MORE_BUCKETS;
 				LWLockRelease(&pstate->lock);
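For illustration, a minimal standalone sketch of the guard logic follows. MAX_ALLOC_SIZE and fake_dsa_pointer_atomic are stand-ins for PostgreSQL's MaxAllocSize (0x3fffffff, i.e. 1 GB - 1) and dsa_pointer_atomic, and the helper can_double_buckets is hypothetical; it demonstrates only the overflow checks, not the surrounding hash-join code.

/*
 * Standalone sketch of the overflow guard added above.  MAX_ALLOC_SIZE and
 * fake_dsa_pointer_atomic are stand-ins for PostgreSQL's MaxAllocSize and
 * dsa_pointer_atomic; can_double_buckets is a hypothetical helper.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ALLOC_SIZE ((size_t) 0x3fffffff)	/* stand-in for MaxAllocSize */
typedef uint64_t fake_dsa_pointer_atomic;	/* stand-in for dsa_pointer_atomic */

/* Return 1 if doubling nbuckets keeps the bucket array allocatable. */
static int
can_double_buckets(int nbuckets)
{
	/* First make sure nbuckets * 2 cannot overflow a signed int... */
	if (nbuckets >= INT_MAX / 2)
		return 0;
	/* ...then make sure the doubled array still fits under the cap. */
	return (size_t) nbuckets * 2 <=
		MAX_ALLOC_SIZE / sizeof(fake_dsa_pointer_atomic);
}

int
main(void)
{
	printf("%d\n", can_double_buckets(1024));	/* 1: room to grow */
	printf("%d\n", can_double_buckets(1 << 27));	/* 0: would exceed the cap */
	return 0;
}

Note the ordering of the two checks: the INT_MAX / 2 comparison runs first so the subsequent doubling is free of signed overflow, and the MaxAllocSize comparison then bounds the byte size of the doubled dsa_pointer_atomic array.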