Tom Lane authored
Historically, the notices output by DROP CASCADE tended to come out in uncertain order, and in some cases you might get different claims about which object depends on which other one. This is because we just traversed the dependency tree in the order in which pg_depend entries are seen, and nbtree has never promised anything about the order of equal-keyed index entries. We've put up with that for years, hacking regression tests when necessary to prevent them from emitting unstable output. However, it's a problem for pending work that will change nbtree's behavior for equal keys, as that causes unexpected changes in the regression test results.

Hence, adjust findDependentObjects to sort the results of each indexscan before processing them. The sort is on descending OID of the dependent objects, hence more or less reverse creation order. While this rule could still result in bogus regression test failures if an OID wraparound occurred mid-test, that seems unlikely to happen in any plausible development or packaging-test scenario.

This is enough to ensure output stability for ordinary DROP CASCADE commands, but not for DROP OWNED BY, because that has a different code path with the same problem. We might later choose to sort in the DROP OWNED BY code as well, but this patch doesn't do so.

I've also not done anything about reverting the existing hacks to suppress unstable DROP CASCADE output in specific regression tests. It might be worth undoing those, but it seems like a distinct question.

The first indexscan loop in findDependentObjects is not touched, meaning there is a hazard of unstable error reports from that too. However, said hazard is not the fault of that code: it was designed on the assumption that there could be at most one "owning" object to complain about, and that assumption does not seem unreasonable. The recent patch that added the possibility of multiple DEPENDENCY_INTERNAL_AUTO links broke that assumption, but we should fix that situation not band-aid around it. That's a matter for another patch, though.

Discussion: https://postgr.es/m/12244.1547854440@sss.pgh.pa.us
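As a rough illustration of the idea (not the actual patch, which works on the backend's internal object-address arrays), here is a minimal standalone C sketch of sorting the objects found by an index scan in descending-OID order before reporting them. The DependentObject struct, the comparator name, and the sample OIDs are hypothetical stand-ins, not PostgreSQL code.

    /*
     * Minimal sketch: collect the dependent objects returned by an index
     * scan, then sort them by descending OID before processing, so the
     * cascade notices come out in a deterministic, roughly reverse-
     * creation order regardless of how nbtree ordered the equal-keyed
     * pg_depend entries.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;       /* stand-in for the backend's Oid type */

    typedef struct DependentObject
    {
        Oid classId;                /* OID of the owning system catalog */
        Oid objectId;               /* OID of the dependent object itself */
    } DependentObject;

    /* qsort comparator: larger objectId sorts first (descending OID) */
    static int
    dependent_object_cmp(const void *a, const void *b)
    {
        const DependentObject *oa = (const DependentObject *) a;
        const DependentObject *ob = (const DependentObject *) b;

        if (oa->objectId > ob->objectId)
            return -1;
        if (oa->objectId < ob->objectId)
            return 1;
        return 0;
    }

    int
    main(void)
    {
        /* pretend these came back from an index scan in arbitrary order */
        DependentObject found[] = {
            {1259, 16390},          /* e.g. a table */
            {1259, 16402},          /* a later-created view */
            {2606, 16395},          /* a constraint */
        };
        size_t  nfound = sizeof(found) / sizeof(found[0]);

        /* sort before recursing/reporting, so output order is stable */
        qsort(found, nfound, sizeof(DependentObject), dependent_object_cmp);

        for (size_t i = 0; i < nfound; i++)
            printf("drop cascades to object %u (catalog %u)\n",
                   found[i].objectId, found[i].classId);
        return 0;
    }

Descending OID order is what makes this roughly reverse creation order: barring wraparound, newer objects have larger OIDs, so they are reported (and dropped) before the older objects they depend on.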
f1ad067f