- 19 Aug, 2021 3 commits
-
-
Andres Freund authored
Previously, log messages late during shutdown could end up either using another backend's PgBackendStatus (multi user) or segfaulting (single user), because pgstat_get_my_query_id()'s check for !MyBEEntry didn't filter out use after pgstat_beshutdown_hook().

This became a bug in 4f0b0966, but was a bit fishy before. Given that there are no known problematic cases before 14, it doesn't seem worth backpatching further.

Also fixes a wrong filename in a comment, introduced in e1025044.

Reported-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/Julien Rouhaud <rjuju123@gmail.com>
Backpatch: 14-
-
Amit Kapila authored
The 'Stream Stop' message is misspelled as 'Stream End' in the docs.

Author: Masahiko Sawada
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
-
Michael Paquier authored
This is a combined revert of the following commits:

- c3826f83, a refactoring piece that moved the hex decoding code to src/common/. This code was cleaned up by aef8948f, as it originally included no overflow checks in the same way as the base64 routines in src/common/ used by SCRAM, making it unsafe for its purpose.
- aef8948f, a more advanced refactoring of the hex encoding/decoding code to src/common/ that added sanity checks on the result buffer for hex decoding and encoding. As reported by Hans Buschmann, those overflow checks are expensive, and it is possible to see a performance drop in the decoding/encoding of bytea or LOs the longer they are. Simple SQLs working on large bytea values show a clear difference in the perf profile.
- ccf4e277, a cleanup made possible by aef8948f.

The reverts of all those commits bring the performance of hex decoding and encoding back to what it was in ~13. For now and post-beta3, this is the simplest option.

Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/1629039545467.80333@nidsa.net
Backpatch-through: 14
-
- 18 Aug, 2021 2 commits
-
-
Tom Lane authored
Recursion into the FILTER clause was mis-implemented, such that a relevant Var or Aggref at the very top of the FILTER clause would be ignored. (Of course, that'd have to be a plain boolean Var or boolean-returning aggregate.) The consequence would be mis-identification of the correct semantic level of the aggregate, which could lead to not-per-spec query behavior. If the FILTER expression is an aggregate, this could also lead to failure to issue an expected "aggregate function calls cannot be nested" error, which would likely result in a core dump later on, since the planner and executor aren't expecting such cases to appear.

The root cause is that commit b560ec1b blindly copied some code that assumed it's recursing into a List, and thus didn't examine the top-level node. To forestall questions about why this call doesn't look like the others, as well as possible future copy-and-paste mistakes, let's change all three check_agg_arguments_walker calls in check_agg_arguments, even though only the one for the filter clause is really broken.

Per bug #17152 from Zhiyong Wu. This has been wrong since we implemented FILTER, so back-patch to all supported versions. (Testing suggests that pre-v11 branches manage to avoid crashing in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg. But I'm not sure how thorough that protection is, and anyway the wrong-behavior issue remains, so fix 9.6 and 10 too.)

Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
-
Daniel Gustafsson authored
The skip options set for all-visible and all-frozen were incorrect as they used a space rather than a hyphen, causing a syntax error when invoked. Also, the option for not skipping any pages at all, "none", was documented but not implemented. Backpatch through 14 where pg_amcheck was introduced.

Bug: #17149
Reported-by: Chen Jiaoqian <chenjq.jy@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/17149-5918ea748da36b15@postgresql.org
Backpatch-through: 14
-
- 17 Aug, 2021 4 commits
-
-
Tom Lane authored
If recordDependencyOnCurrentExtension is invoked on a pre-existing, free-standing object during an extension update script, that object will become owned by the extension. In our current code this is possible in three cases:

* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.

The first of these cases is intentional behavior, as noted by the existing comments for GenerateTypeDependencies. It seems like appropriate behavior for CREATE OR REPLACE too; at least, the obvious alternatives are not better. However, the fact that it happens during ALTER is an artifact of trying to share code (GenerateTypeDependencies and makeOperatorDependencies) between the CREATE and ALTER cases. Since an extension script would be unlikely to ALTER an object that didn't already belong to the extension, this behavior is not very troubling for the direct target object ... but ALTER TYPE SET will recurse to dependent domains, and it is very uncool for those to become owned by the extension if they were not already.

Let's fix this by redefining the ALTER cases to never change extension membership, full stop. We could minimize the behavioral change by only changing the behavior when ALTER TYPE SET is recursing to a domain, but that would complicate the code and it does not seem like a better definition.

Per bug #17144 from Alex Kozhemyakin. Back-patch to v13 where ALTER TYPE SET was added. (The other cases are older, but since they only affect the directly-named object, there's not enough of a problem to justify changing the behavior further back.)

Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
-
Michael Meskes authored
that triggers the warning during regression tests.
-
Daniel Gustafsson authored
In OpenSSL there are two types of BIOs (I/O abstractions): source/sink and filters. A source/sink BIO is a source and/or sink of data, i.e. one acting on a socket or a file. A filter BIO takes a stream of input from another BIO and transforms it. In order for BIO_find_type() to be able to traverse the chain of BIOs and correctly find all BIOs of a certain type, they shall have the type bit set accordingly: source/sink BIOs (what PostgreSQL implements) use BIO_TYPE_SOURCE_SINK and filter BIOs use BIO_TYPE_FILTER. In addition to these, file descriptor based BIOs should have the descriptor bit set, BIO_TYPE_DESCRIPTOR.

The PostgreSQL implementation didn't set the type bits, which went unnoticed for a long time as it's only really relevant for code auditing the OpenSSL installation, or doing similar tasks. It is required by the API though, so this fixes it.

Backpatch through 9.6 as this has been wrong for a long time.

Author: Itamar Gafni
Discussion: https://postgr.es/m/SN6PR06MB39665EC10C34BB20956AE4578AF39@SN6PR06MB3966.namprd06.prod.outlook.com
Backpatch-through: 9.6
-
Heikki Linnakangas authored
The backslash sequences, including \123 and \x12 escapes, are interpreted after encoding conversion. The docs failed to mention that. Backpatch to all supported versions.

Reported-by: Andreas Grob
Discussion: https://www.postgresql.org/message-id/17142-9181542ca1df75ab%40postgresql.org
-
- 16 Aug, 2021 2 commits
-
-
Alvaro Herrera authored
This reverts the following commits:

1b5617eb  Describe (auto-)analyze behavior for partitioned tables
0e69f705  Set pg_class.reltuples for partitioned tables
41badeab  Document ANALYZE storage parameters for partitioned tables
0827e8af  autovacuum: handle analyze for partitioned tables

There are efficiency issues in this code when handling databases with large numbers of partitions, and it doesn't look like there is any trivial way to handle those. There are some other issues as well. It's now too late in the cycle for nontrivial fixes, so we'll have to let Postgres 14 users continue to deal with ANALYZE on their partitioned tables manually, and hopefully we can fix the issues for Postgres 15.

I kept [most of] be280cda ("Don't reset relhasindex for partitioned tables on ANALYZE") because while we added it due to 0827e8af, it is a good bugfix in its own right, since it affects manual analyze as well as autovacuum-induced analyze, and there's no reason to revert it.

I retained the addition of relkind 'p' to tables included by pg_stat_user_tables, because reverting that would require a catversion bump. Also, in pg14 only, I keep a struct member that was added to PgStat_TabStatEntry to avoid breaking compatibility with existing stat files.

Backpatch to 14.

Discussion: https://postgr.es/m/20210722205458.f2bug3z6qzxzpx2s@alap3.anarazel.de
-
Michael Paquier authored
This commit ensures that the wait interval in the replay delay loop waiting for an amount of time defined by recovery_min_apply_delay is correctly handled on reload, recalculating the delay if this GUC value is updated, based on the timestamp of the commit record being replayed.

Previously, replay would keep waiting out the old delay even if the value was reduced or the delay cancelled altogether; and if the delay was increased to a larger value, the wait would still respect the old, smaller value and finish too early.

Author: Soumyadeep Chakraborty, Ashwin Agrawal
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CAE-ML+93zfr-HLN8OuxF0BjpWJ17O5dv1eMvSE5jsj9jpnAXZA@mail.gmail.com
Backpatch-through: 9.6
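For illustration, a minimal sketch of the scenario this fixes, assuming a standby whose delay is being reduced while replay is already waiting on a commit record (the values shown are hypothetical):

    -- on the standby; before this fix, replay kept waiting out the old delay
    ALTER SYSTEM SET recovery_min_apply_delay = '1min';  -- was, say, '1h'
    SELECT pg_reload_conf();  -- the in-progress wait is now recalculated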
-
- 13 Aug, 2021 5 commits
-
-
Tom Lane authored
Like the ARM case, just use gcc's __sync_lock_test_and_set(); that will compile into AMOSWAP.W.AQ which does what we need. At some point it might be worth doing some work on atomic ops for RISC-V, but this should be enough for a creditable port.

Back-patch to all supported branches, just in case somebody wants to try them on RISC-V.

Marek Szuba

Discussion: https://postgr.es/m/dea97b6d-f55f-1f6d-9109-504aa7dfa421@gentoo.org
-
Peter Eisentraut authored
-
Michael Meskes authored
After binding a statement to a connection with DECLARE STATEMENT, the connection was still not used for DEALLOCATE and DESCRIBE statements. This patch fixes that, adds a missing warning and cleans up the code.

Author: Hayato Kuroda
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/TYAPR01MB5866BA57688DF2770E2F95C6F5069%40TYAPR01MB5866.jpnprd01.prod.outlook.com
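A hedged ECPG sketch of the routing behavior described above (the connection name, host variable, and descriptor name are hypothetical):

    EXEC SQL AT con1 DECLARE stmt STATEMENT;            -- bind "stmt" to con1
    EXEC SQL PREPARE stmt FROM :query;                  -- already ran on con1
    EXEC SQL ALLOCATE DESCRIPTOR mydesc;
    EXEC SQL DESCRIBE stmt INTO SQL DESCRIPTOR mydesc;  -- now also routed to con1
    EXEC SQL DEALLOCATE PREPARE stmt;                   -- likewise, after this fix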
-
Daniel Gustafsson authored
The check for sslsni only tested for the existence of the parameter, not its value. This meant that the SNI extension was always turned on. Fix by inspecting the value of sslsni and only activating the SNI extension if sslsni is enabled. Also update the docs to be more in line with how other boolean parameters are documented.

Backpatch to 14 where sslsni was first implemented.

Reviewed-by: Tom Lane
Backpatch-through: 14, where sslsni was added
-
David Rowley authored
This fixes a bug in simplehash.h which caused an incorrect size mask to be used when the hash table grew to SH_MAX_SIZE (2^32). The code was incorrectly setting the size mask to 0 when the hash table reached the maximum possible number of buckets. This would result in always trying to use the 0th bucket, causing an infinite loop of trying to grow the hash table due to there being too many collisions.

Seemingly it's not that common for simplehash tables to ever grow this big, as this bug dates back to v10 and nobody seems to have noticed it before. However, probably the most likely place that people would notice it would be doing a large in-memory Hash Aggregate with something close to at least 2^31 groups. After this fix, the code now works correctly with up to within 98% of 2^32 groups and will fail with the following error when trying to insert any more items into the hash table:

ERROR:  hash table size exceeded

However, the work_mem (or hash_mem_multiplier in newer versions) settings will generally cause Hash Aggregates to spill to disk long before reaching that many groups. The minimal test case I did took a work_mem setting of over 192GB to hit the bug.

simplehash hash tables are used in a few other places, such as Bitmap Index Scans; however, again, the size that the hash table can become there is also limited by work_mem, and it would take a relation of around 16TB (2^31 pages) and a very large work_mem setting to hit this. With smaller work_mem values the table would become lossy and never grow large enough to hit the problem.

Author: Yura Sokolov
Reviewed-by: David Rowley, Ranier Vilela
Discussion: https://postgr.es/m/b1f7f32737c3438136f64b26f4852b96@postgrespro.ru
Backpatch-through: 10, where simplehash.h was added
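Purely for illustration, a hedged sketch of the kind of query that could in principle reach that many groups; the settings and row counts are hypothetical and far beyond any sane configuration:

    SET work_mem = '200GB';  -- absurd, but keeps the Hash Aggregate from spilling
    SELECT i AS g, count(*)
    FROM generate_series(1, 3000000000) AS s(i)
    GROUP BY g;  -- ~3 billion distinct groups, enough to grow past 2^31 buckets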
-
- 12 Aug, 2021 3 commits
-
-
Thomas Munro authored
It's hard to disable ASLR on current macOS releases, for testing with -DEXEC_BACKEND. You could already set the environment variable PG_SHMEM_ADDR to something not likely to collide with mappings created earlier in process startup. Let's also provide a default value that works on current releases and architectures, for developer convenience.

As noted in the pre-existing comment, this is a horrible hack, but -DEXEC_BACKEND is only used by Unix-based PostgreSQL developers for testing some otherwise Windows-only code paths, so it seems excusable.

Back-patch to all supported branches.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
-
Tomas Vondra authored
The FDW batching code was using the same tuple descriptor for all slots (regular and plan slots), but that's incorrect - the subplan may use a different descriptor. Currently this is benign, because batching is used only for INSERTs, and in that case the descriptors always match. But that would change if we allow batching UPDATEs. Fix by copying the appropriate tuple descriptor.

Backpatch to 14, where the FDW batching was implemented.

Author: Amit Langote
Backpatch-through: 14, where FDW batching was added
Discussion: https://postgr.es/m/CA%2BHiwqEWd5B0-e-RvixGGUrNvGkjH2s4m95%3DJcwUnyV%3Df0rAKQ%40mail.gmail.com
-
Heikki Linnakangas authored
It's not sensible to re-evaluate a direct-modify Foreign Update or Delete during EvalPlanQual. However, ExecInitForeignScan() can still get called if a table mixes local and foreign partitions. EvalPlanQualStart() left the es_result_relations array uninitialized in the child EPQ EState, but ExecInitForeignScan() still expected to find it. That caused a segfault.

Fix by skipping the es_result_relations lookup during EvalPlanQual processing. To make things a bit more robust, also skip the BeginDirectModify calls, and add a runtime check that ExecForeignScan() is not called on direct-modify foreign scans during EvalPlanQual processing.

This is new in v14, commit 1375422c. Before that, EvalPlanQualStart() copied the whole ResultRelInfo array to the EPQ EState. Backpatch to v14.

Report and diagnosis by Andrey Lepikhov.

Discussion: https://www.postgresql.org/message-id/cb2b808d-cbaa-4772-76ee-c8809bafcf3d%40postgrespro.ru
-
- 10 Aug, 2021 1 commit
-
-
Tom Lane authored
As a result of confusion about whether the "char" type is signed or unsigned, scans for index searches like "col < 'x'" or "col <= 'x'" would start at the middle of the index, not the left end, thus missing many or all of the entries they should find. Fortunately, this is not a symptom of index corruption. It's only the search logic that is broken, and we can fix it without unpleasant side-effects.

Per report from Jason Kim. This has been wrong since btree_gin's beginning, so back-patch to all supported branches.

Discussion: https://postgr.es/m/20210810001649.htnltbh7c63re42p@jasonk.me
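A hedged sketch of the affected query shape (the table and data are hypothetical):

    CREATE EXTENSION btree_gin;
    CREATE TABLE t (c "char");
    INSERT INTO t SELECT chr(i)::"char" FROM generate_series(1, 127) AS g(i);
    CREATE INDEX t_c_gin ON t USING gin (c);
    SET enable_seqscan = off;
    SELECT count(*) FROM t WHERE c < 'x';  -- formerly began mid-index, missing entries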
-
- 09 Aug, 2021 5 commits
-
-
Tom Lane authored
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1234b3cdae465246e534cc4114129f18d3c04c38
-
David Rowley authored
In ec34040af I added a mention that there was no point in setting maintenance_work_mem to anything higher than 1GB for vacuum, but that was incorrect as ginInsertCleanup() also looks at what maintenance_work_mem is set to during VACUUM, and that's not limited to 1GB.

Here I attempt to make it more clear that the limitation is only around the number of dead tuple identifiers that we can collect during VACUUM.

I've also added a note to autovacuum_work_mem to mention this limitation. I didn't do that in ec34040af as I'd had some wrong-headed ideas about just limiting the maximum value for that GUC to 1GB.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGwOAvunp-E-bN_rbAs3hmxMoasm5pzkYDbf36h73s7w@mail.gmail.com
Backpatch-through: 9.6, same as ec34040af
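A hedged illustration of the distinction the docs now draw (the object names are hypothetical):

    SET maintenance_work_mem = '8GB';
    VACUUM big_table;                 -- dead tuple ID storage is still capped at 1GB
    CREATE INDEX ON big_table (col);  -- an index build may use the full 8GB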
-
David Rowley authored
This saves a few lines of code. Also add a comment to mention why we use ExplainPropertyInteger instead of ExplainPropertyUInteger, given that queryid is a uint64 type.

Author: David Rowley
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAApHDvqhSLYpSU_EqUdN39w9Uvb8ogmHV7_3YhJ0S3aScGBjsg@mail.gmail.com
Backpatch-through: 14, where this code was originally added
-
Bruce Momjian authored
Since commit e462856a7a, pg_upgrade automatically creates a script to update extensions, so mention that instead of ALTER EXTENSION.

Backpatch-through: 9.6
-
- 08 Aug, 2021 4 commits
-
-
Tom Lane authored
Copy-and-pasteo in 665c5855e, evidently. The 9.6 docs toolchain whined about duplicate index entries, though our modern toolchain doesn't. In any case, these GUCs surely are not about the default settings of these values.
-
Tom Lane authored
I had committer's remorse almost immediately after pushing cb76fbd7e, upon finding that removing capturing subexpressions' subREs from the data structure broke my proposed patch for REG_NOSUB optimization. Revert that data structure change. Instead, address the concern about not changing capturing subREs' endpoints by not changing the endpoints. We don't need to, because the point of that bit was just to ensure that the atom has endpoints distinct from the outer state pair that we're stringing the branch between. We already made suitable states in the parenthesized-subexpression case, so the additional ones were just useless overhead. This seems more understandable than Spencer's original coding, and it ought to be a shade faster too by saving a few state creations and arc changes. (I actually see a couple percent improvement on Jacobson's web corpus, though that's barely above the noise floor so I wouldn't put much stock in that result.)

Also, fix the logic added by ea1268f6 to ensure that the subRE recorded in v->subs[subno] is exactly the one with capno == subno. Spencer's original coding recorded the child subRE of the capture node, which is okay so far as having the right endpoint states is concerned, but as of cb76fbd7e the capturing subRE itself always has those endpoints too. I think the inconsistency is confusing for the REG_NOSUB optimization.

As before, backpatch to v14.

Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
-
Tom Lane authored
Up to now, we remembered the definition of a capturing parenthesis subexpression by storing a pointer to the associated subRE node. That was okay before, because that subRE didn't get modified anymore while parsing the rest of the regexp. However, in the wake of commit ea1268f6, that's no longer true: the outer invocation of parseqatom() feels free to scribble on that subRE. This seems to work anyway, because the states we jam into the child atom in the "prepare a general-purpose state skeleton" stanza aren't really semantically different from the original endpoints of the child atom. But that would be mighty easy to break, and it's definitely not how things worked before.

Between this and the issue fixed in the prior commit, it seems best to get rid of this dependence on subRE nodes entirely. We don't need the whole child subRE for future backrefs, only its starting and ending NFA states; so let's just store pointers to those.

Also, in the corner case where we make an extra subRE to handle immediately-nested capturing parentheses, it seems like it'd be smart to have the extra subRE have the same begin/end states as the original child subRE does (s/s2 not lp/rp). I think that linking it from lp to rp might actually be semantically wrong, though since Spencer's original code did it that way, I'm not totally certain. Using s/s2 is certainly not wrong, in any case.

Per report from Mark Dilger. Back-patch to v14 where the problematic patches came in.

Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
-
Tom Lane authored
Commit cebc1d34 taught parseqatom() to optimize cases where a branch contains only one, "messy", atom by getting rid of excess subRE nodes. The way we really should do that is to keep the subRE built for the "messy" child atom; but to avoid changing parseqatom's nominal API, I made it delete that node after copying its fields to the outer subRE made by parsebranch(). It seems that that actually worked at the time; but it became dangerous after ea1268f6, because that later commit allowed the lower invocation of parse() to return a subRE that was also pointed to by some v->subs[] entry. This meant we could wind up with a dangling pointer in v->subs[], allowing a later backref to misbehave, but only if that subRE struct had been reused in between. So the damage seems confined to cases like '((...))...(...\2'.

To fix, do what I should have done before and modify parseqatom's API to make it possible for it to remove the caller's subRE instead of the callee's. That's safer because we know that subRE isn't complete yet, so noplace else will have a pointer to it.

Per report from Mark Dilger. Back-patch to v14 where the problematic patches came in.

Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
-
- 07 Aug, 2021 4 commits
-
-
Peter Eisentraut authored
-
Tom Lane authored
Rather than trying to pick table aliases that won't conflict with any possible user-defined matview column name, adjust the queries' syntax so that the aliases are only used in places where they can't be mistaken for column names. Mostly this consists of writing "alias.*" not just "alias", which adds clarity for humans as well as machines. We do have the issue that "SELECT alias.*" acts differently from "SELECT alias", but we can use the same hack ruleutils.c uses for whole-row variables in SELECT lists: write "alias.*::compositetype".

We might as well revert to the original aliases after doing this; they're a bit easier to read.

Like 75d66d10e, back-patch to all supported branches.

Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
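A minimal sketch of the distinction being exploited, using a hypothetical table t:

    CREATE TABLE t (a int, b text);
    SELECT t FROM t;       -- one composite-valued result column
    SELECT t.* FROM t;     -- expands to two columns, a and b
    SELECT t.*::t FROM t;  -- composite again, and "t" can't be read as a column name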
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- 06 Aug, 2021 4 commits
-
-
Tom Lane authored
Casting a value that's already of a type with a specific typmod to an unspecified typmod doesn't do anything so far as run-time behavior is concerned. However, it really ought to change the exposed type of the expression to match. Up to now, coerce_type_typmod hasn't bothered with that, which creates gotchas in contexts such as recursive unions. If for example one side of the union is numeric(18,3), but it needs to be plain numeric to match the other side, there's no direct way to express that.

This is easy enough to fix, by inserting a RelabelType to update the exposed type of the expression. However, it's a bit nervous-making to change this behavior, because it's stood for a really long time. (I strongly suspect that it's like this in part because the logic pre-dates the introduction of RelabelType in 7.0. The commit log message for 57b30e8e is interesting reading here.) As a compromise, we'll sneak the change into 14beta3, and consider back-patching to stable branches if no complaints emerge in the next three months.

Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
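A hedged example of the recursive-union gotcha described above; previously the extra cast to plain numeric did not update the exposed type, so there was no direct way to make the two sides match:

    WITH RECURSIVE s(x) AS (
        SELECT '1.000'::numeric(18,3)::numeric  -- now exposed as plain numeric
        UNION ALL
        SELECT x + 1 FROM s WHERE x < 3         -- recursive side is plain numeric
    )
    SELECT x FROM s;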
-
Dean Rasheed authored
Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- 05 Aug, 2021 3 commits
-
-
Tom Lane authored
Now that this has been back-patched, it's no longer a new feature for v14.
-
Etsuro Fujita authored
postgres_fdw imported generated columns from the remote tables as plain columns, and caused failures like "ERROR: cannot insert a non-DEFAULT value into column "foo"" when inserting into the foreign tables, as it tried to insert values into the generated columns. To fix, we do the following under the assumption that generated columns in a postgres_fdw foreign table are defined so that they represent generated columns in the underlying remote table:

* Send DEFAULT for the generated columns to the foreign server on insert or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to include column generated expressions in the definitions of foreign tables imported from a foreign server. The option is true by default.

The assumption seems reasonable, because that would make a query of the postgres_fdw foreign table return values for the generated columns that are consistent with the generated expression.

While here, fix another issue in postgresImportForeignSchema(): it tried to include column generated expressions as column default expressions in the foreign table definitions when the import_default option was enabled.

Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated columns were added.

Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
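A hedged sketch of the new option (the server and schema names are hypothetical; the option defaults to true):

    IMPORT FOREIGN SCHEMA public
      FROM SERVER remote_srv INTO local_schema
      OPTIONS (import_generated 'false');  -- omit generated expressions locally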
-
Dean Rasheed authored
This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
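A hedged example of a call that previously failed, using any numeric whose exponent is below -1001 when written in scientific notation:

    SELECT to_char(1.0e-1005::numeric, '9.99EEEE');
    -- before: ERROR:  division by zero
    -- after:  the value is formatted normally, e.g. 1.00e-1005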
-