Use a separate random seed for SQL random()/setseed() functions.
Previously, the SQL random() function depended on libc's random(3), and setseed() invoked srandom(3). This resulted in interference between these functions and backend-internal uses of random(3). We'd never paid much mind to that, but in the wake of commit 88bdbd3f, which added log_statement_sample_rate, the interference arguably has a security consequence: if log_statement_sample_rate is active, an unprivileged user could probably control which, if any, of his SQL commands get logged, by issuing setseed() at the right times. That seems bad.

To fix this reliably, we need random() and setseed() to use their own private random state variable. Standard random(3) isn't amenable to such usage, so let's switch to pg_erand48(). It's hard to say whether that's more or less "random" than any particular platform's version of random(3), but it does have a wider seed value and a longer period than are required by POSIX, so we can hope that this isn't a big downgrade. Also, we should now have uniform behavior of random() across platforms, which is worth something.

While at it, upgrade the per-process seed initialization method to use pg_strong_random() if available, greatly reducing the predictability of the initial seed value. (I'll separately do something similar for the internal uses of random().)

In addition to forestalling the possible security problem, this has a benefit in the other direction: we can now document setseed() as guaranteeing a reproducible sequence of random() values. Previously, because of the possibility of internal calls of random(3), we could not promise any such thing.

Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
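Below is a minimal standalone sketch of the technique this commit describes: give the SQL-level random()/setseed() pair their own private 48-bit state so that backend-internal uses of random(3) cannot perturb them. It is illustrative only, not the patched code: POSIX erand48() stands in for pg_erand48(), reading /dev/urandom stands in for pg_strong_random(), and the function and variable names (sql_random_state, init_sql_random, sql_setseed, sql_random) are hypothetical.

/*
 * Sketch: private random state for SQL random()/setseed(), kept entirely
 * separate from libc's global random(3)/srandom(3) state.
 */
#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Private state for the SQL-level functions; never touched by random(3). */
static unsigned short sql_random_state[3];

/*
 * Initialize the private seed.  A strong random source (the role played by
 * pg_strong_random() in the real code) is preferred; fall back to a
 * time-derived seed, which is much more predictable.
 */
static void
init_sql_random(void)
{
	FILE	   *f = fopen("/dev/urandom", "rb");

	if (f == NULL ||
		fread(sql_random_state, sizeof(sql_random_state), 1, f) != 1)
	{
		unsigned long t = (unsigned long) time(NULL);

		sql_random_state[0] = (unsigned short) t;
		sql_random_state[1] = (unsigned short) (t >> 16);
		sql_random_state[2] = (unsigned short) (t >> 31);
	}
	if (f)
		fclose(f);
}

/* Analogue of SQL setseed(): resets only the private state. */
static void
sql_setseed(double seed)
{
	/* seed assumed in [0, 1) for this sketch */
	unsigned long iseed = (unsigned long) (seed * 2147483647.0);

	sql_random_state[0] = 0x330e;	/* low word set per srand48() convention */
	sql_random_state[1] = (unsigned short) iseed;
	sql_random_state[2] = (unsigned short) (iseed >> 16);
}

/* Analogue of SQL random(): uniform double in [0, 1). */
static double
sql_random(void)
{
	return erand48(sql_random_state);
}

int
main(void)
{
	init_sql_random();
	sql_setseed(0.5);

	/* An unrelated "internal" use of random(3) no longer disturbs the
	 * sequence produced by sql_random(), so setseed() gives reproducible
	 * results. */
	srandom(12345);
	(void) random();

	for (int i = 0; i < 3; i++)
		printf("%f\n", sql_random());
	return 0;
}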