Commit 92c4c269 authored by David Rowley's avatar David Rowley

Move memory accounting Asserts for Result Cache code

In 9eacee2e, I included some code to verify the cache's memory tracking
is correct by counting up the number of entries and the memory they use
each time we evict something from the cache.  Those values are then
compared to the expected values using Assert.  The problem is that this
requires looping over the entire cache hash table each time we evict an
entry from the cache.  That can be pretty expensive, as noted by Pavel
Stehule.

Here we move this memory accounting code so that, on cassert builds, we
verify it only once, when shutting down the Result Cache node.

Aside from the performance increase, this has two distinct advantages:

1) We do the memory checks at the last possible moment before destroying
   the cache.  This means we'll now catch accounting problems that might
   sneak in after a cache eviction.

2) We now do the memory Assert checks even when there were no cache
   evictions.  This increases the coverage.

One small disadvantage is that we'll now miss any memory tracking issues
that somehow managed to resolve themselves by the end of execution.
However, such a problem seems quite unlikely, and somewhat less harmful
if one did exist.

In passing, adjust the loop over the hash table to use the standard
simplehash.h method of iteration.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRAzgoSkdEiqrKbT=7yG9FA5fjUAP3jmJywuDqYq6Ki5ug@mail.gmail.com
parent a55a9847
@@ -298,41 +298,6 @@ remove_cache_entry(ResultCacheState *rcstate, ResultCacheEntry *entry)
	dlist_delete(&entry->key->lru_node);

#ifdef USE_ASSERT_CHECKING
	/*
	 * Validate the memory accounting code is correct in assert builds. XXX is
	 * this too expensive for USE_ASSERT_CHECKING?
	 */
	{
		int			i,
					count;
		uint64		mem = 0;

		count = 0;
		for (i = 0; i < rcstate->hashtable->size; i++)
		{
			ResultCacheEntry *entry = &rcstate->hashtable->data[i];

			if (entry->status == resultcache_SH_IN_USE)
			{
				ResultCacheTuple *tuple = entry->tuplehead;

				mem += EMPTY_ENTRY_MEMORY_BYTES(entry);
				while (tuple != NULL)
				{
					mem += CACHE_TUPLE_BYTES(tuple);
					tuple = tuple->next;
				}
				count++;
			}
		}

		Assert(count == rcstate->hashtable->members);
		Assert(mem == rcstate->mem_used);
	}
#endif

	/* Remove all of the tuples from this entry */
	entry_purge_tuples(rcstate, entry);
@@ -977,6 +942,35 @@ ExecInitResultCache(ResultCache *node, EState *estate, int eflags)
void
ExecEndResultCache(ResultCacheState *node)
{
#ifdef USE_ASSERT_CHECKING
	/* Validate the memory accounting code is correct in assert builds. */
	{
		int			count;
		uint64		mem = 0;
		resultcache_iterator i;
		ResultCacheEntry *entry;

		resultcache_start_iterate(node->hashtable, &i);

		count = 0;
		while ((entry = resultcache_iterate(node->hashtable, &i)) != NULL)
		{
			ResultCacheTuple *tuple = entry->tuplehead;

			mem += EMPTY_ENTRY_MEMORY_BYTES(entry);
			while (tuple != NULL)
			{
				mem += CACHE_TUPLE_BYTES(tuple);
				tuple = tuple->next;
			}
			count++;
		}

		Assert(count == node->hashtable->members);
		Assert(mem == node->mem_used);
	}
#endif

	/*
	 * When ending a parallel worker, copy the statistics gathered by the
	 * worker back into shared memory so that it can be picked up by the main