- 02 Jan, 2017 2 commits
-
-
Tom Lane authored
The advantage of clock_gettime() is that the API allows the result to be precise to nanoseconds, not just microseconds as in gettimeofday(). Now that it's routinely possible to do tens of plan node executions in 1us, we really need more precision than gettimeofday() can offer for EXPLAIN ANALYZE to accumulate statistics with.

Some research shows that clock_gettime() is available on pretty nearly every modern Unix-ish platform, and as far as I have been able to test, it has about the same execution time as gettimeofday(), so there's no loss in switching over. (By the same token, this doesn't do anything to fix the fact that we really wish clock readings were faster. But there's enough win here to justify changing anyway.)

A small side benefit is that on most platforms, we can use CLOCK_MONOTONIC instead of CLOCK_REALTIME and thereby render EXPLAIN impervious to concurrent resets of the system clock. (This means that code must not assume that the contents of struct instr_time have any well-defined interpretation as timestamps, but really that was true before.)

Some platforms offer nonstandard clock IDs that might be of interest. This patch knows we should use CLOCK_MONOTONIC_RAW on macOS, because it provides more precision and is faster to read than their CLOCK_MONOTONIC. If there turn out to be many more cases where we need special rules, it might be appropriate to handle the selection of clock ID in configure, but for the moment that doesn't seem worth the trouble.

Discussion: https://postgr.es/m/31856.1400021891@sss.pgh.pa.us
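As a rough sketch of the pattern (PostgreSQL's actual instr_time macros differ in detail, and the INTERVAL_CLOCK name is made up for the example), interval timing with clock_gettime() and a monotonic clock looks like this:

    #include <stdio.h>
    #include <time.h>

    /* Prefer a monotonic clock so a concurrent reset of the system clock
     * cannot disturb interval measurements; the trade-off is that the raw
     * readings have no well-defined meaning as timestamps. */
    #ifdef CLOCK_MONOTONIC
    #define INTERVAL_CLOCK CLOCK_MONOTONIC
    #else
    #define INTERVAL_CLOCK CLOCK_REALTIME
    #endif

    int
    main(void)
    {
        struct timespec start, stop;

        clock_gettime(INTERVAL_CLOCK, &start);
        /* ... work being timed ... */
        clock_gettime(INTERVAL_CLOCK, &stop);

        long long elapsed_ns =
            (long long) (stop.tv_sec - start.tv_sec) * 1000000000LL +
            (stop.tv_nsec - start.tv_nsec);

        printf("elapsed: %lld ns\n", elapsed_ns);
        return 0;
    }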
-
Tom Lane authored
For aggregated logging, pgbench supposed that printing the integer part of INSTR_TIME_GET_DOUBLE() would produce a Unix timestamp. That was already broken on Windows, and it's about to get broken on most other platforms as well. As in commit 74baa1e3, we can remove the entanglement at the price of one extra syscall per transaction; though here it seems more convenient to use time(NULL) instead of gettimeofday(), since we only need integral-second precision.

I took the time to do some wordsmithing on the documentation about pgbench's logging features, too.

Discussion: https://postgr.es/m/8837.1483216839@sss.pgh.pa.us
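A minimal sketch of the idea, with hypothetical names (the real pgbench logging code tracks more fields): take the log line's timestamp from time(NULL) rather than deriving it from the interval clock.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical aggregated-log writer: stamp the line from time(NULL),
     * since the interval clock's origin need not be the Unix epoch and
     * only integral-second precision is needed here. */
    static void
    emit_agg_line(FILE *logfile, long txn_count, double latency_sum)
    {
        time_t now = time(NULL);

        fprintf(logfile, "%ld %ld %.0f\n",
                (long) now, txn_count, latency_sum);
    }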
-
- 01 Jan, 2017 1 commit
-
-
Tom Lane authored
This code was presuming undue familiarity with the contents of the instr_time struct. That was already broken on Windows, and it's about to get broken on most other platforms as well. The simplest solution that preserves the current output definition is to just do our own gettimeofday() call here. Realistically, the extra cost is probably negligible in comparison to everything else that's going on in a pgbench transaction, so it's not worth sweating over. On Windows, the precision delivered by gettimeofday() is lower than one could wish, but this is still a big improvement over printing zeroes, as the code did before. Discussion: https://postgr.es/m/8837.1483216839@sss.pgh.pa.us
-
- 31 Dec, 2016 1 commit
-
-
Tom Lane authored
Commit 2ac3ef7a added a query with an underdetermined output row order; it has failed multiple times in the buildfarm since then. Add an ORDER BY to fix. Also, don't rely on a DROP CASCADE to drop in a well-determined order; that hasn't failed yet but I don't trust it much, and we're not saving any typing by using CASCADE anyway.
-
- 29 Dec, 2016 4 commits
-
-
Tom Lane authored
Commit f0e44751 added new node tags at a place in the tag numbering where there was no daylight left before the next hard-coded number, resulting in some duplicate tag assignments. This doesn't seem to have caused any big problem so far, but it's surely trouble waiting to happen. We could adjust the manually assigned breakpoints to make more room, but that just leaves the same hazard waiting to strike again in future. What seems like a better idea is to get rid of the manual assignments and leave NodeTags to be automatically assigned, consecutively from one on up. This means that any change in the tag list forces a backend-wide recompile, but realistically that's usually needed anyway. Discussion: https://postgr.es/m/29670.1482942811@sss.pgh.pa.us
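A hedged illustration of the hazard and the fix (tag names invented; the real list is far longer):

    /* Manually assigned breakpoints leave fixed-size gaps; adding tags
     * just before a breakpoint can silently collide with it. */
    typedef enum NodeTagManual
    {
        TM_Invalid = 0,
        TM_IndexInfo = 10,      /* "executor nodes start at 10" */
        TM_ExprContext,
        TM_Plan = 100,          /* "plan nodes start at 100" */
        TM_Result
        /* enough new executor tags added above TM_Plan would silently
         * reach 100 and duplicate it */
    } NodeTagManual;

    /* Automatic assignment numbers the tags consecutively from one on up,
     * so duplicates are impossible; any edit renumbers later tags and
     * forces a backend-wide recompile, which a tag-list change usually
     * needs anyway. */
    typedef enum NodeTagAuto
    {
        TA_Invalid = 0,
        TA_IndexInfo,
        TA_ExprContext,
        TA_Plan,
        TA_Result
    } NodeTagAuto;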
-
Peter Eisentraut authored
-
Peter Eisentraut authored
There is no need to use abbreviations here, so just write it out for consistency.
-
Peter Eisentraut authored
Most code was casting this through a generic Node. By declaring everything as RoleSpec appropriately, we can remove a bunch of casts and ad-hoc node type checking. Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
-
- 27 Dec, 2016 4 commits
-
-
Tom Lane authored
interval_transform() contained two separate bugs that caused it to sometimes mistakenly decide that a cast from interval to restricted interval is a no-op and throw it away.

First, it was wrong to rely on dt.h's field type macros to have an ordering consistent with the field's significance; in one case they do not. This led to mistakenly treating YEAR as less significant than MONTH, so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly discarded.

Second, fls(1<<k) produces k+1 not k, so comparing its output directly to SECOND was wrong. This led to supposing that a cast to INTERVAL MINUTE was really a cast to INTERVAL SECOND and so could be discarded.

To fix, get rid of the use of fls(), and make a function based on intervaltypmodout to produce a field ID code adapted to the need here.

Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform functions were introduced, because this code was born broken.

Discussion: https://postgr.es/m/20161227172307.10135.7747@wrigleys.postgresql.org
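The fls() off-by-one is easy to demonstrate. Since fls() is a nonstandard BSD function, this self-contained demo uses a stand-in with the same semantics:

    #include <stdio.h>

    /* Stand-in for BSD fls(): 1-based index of the highest set bit.
     * Hence fls(1 << k) returns k + 1, not k. */
    static int
    my_fls(unsigned int x)
    {
        int r = 0;

        while (x)
        {
            x >>= 1;
            r++;
        }
        return r;
    }

    int
    main(void)
    {
        /* With illustrative field codes MINUTE = 3 and SECOND = 4,
         * my_fls(1 << MINUTE) yields 4 -- equal to SECOND -- so comparing
         * the result directly to SECOND mislabels a MINUTE mask. */
        printf("my_fls(1 << 3) = %d\n", my_fls(1u << 3));   /* prints 4 */
        printf("my_fls(1 << 4) = %d\n", my_fls(1u << 4));   /* prints 5 */
        return 0;
    }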
-
Andrew Dunstan authored
In addition to space accounted for by tuple_len, dead_tuple_len and free_space, the table_len includes page overhead, the item pointers table and padding bytes. Backpatch to live branches.
-
Magnus Hagander authored
In 56c7d8d4, the behavior of keeping .partial segments around (as they are considered corrupt) in case a connection failure occurs was accidentally removed. This would lead to an incomplete segment being considered complete. Author: Michael Paquier
-
Magnus Hagander authored
Erik Rijkers
-
- 26 Dec, 2016 1 commit
-
-
Tom Lane authored
hashname() asserted that the key string it is given is shorter than NAMEDATALEN. That should surely always be true if the input is in fact a regular value of type "name". However, for reasons of coding convenience, we allow plain old C strings to be treated as "name" values in many places. Some SQL functions accept arbitrary "text" inputs, convert them to C strings, and pass them otherwise-untransformed to syscache lookups for name columns, allowing an overlength input value to trigger hashname's Assert.

This would be a DoS problem, except that it only happens in assert-enabled builds, which aren't recommended for production. In a production build, you'll just get a name lookup error, since regardless of the hash value computed by hashname, the later equality comparison checks can't match. Likewise, if the catalog lookup is done by seqscan or indexscan searches, there will just be a lookup error, since the name comparison functions don't contain any similar length checks, and will see an overlength input as unequal to any stored entry.

After discussion we concluded that we should simply remove this Assert. It's inessential to hashname's own functionality, and having such an assertion in only some paths for name lookup is more of a foot-gun than a useful check. There may or may not be a case for the affected callers to do something other than let the name lookup fail, but we'll consider that separately; in any case we probably don't want to change such behavior in the back branches.

Per report from Tushar Ahuja. Back-patch to all supported branches.

Report: https://postgr.es/m/7d0809ee-6f25-c9d6-8e74-5b2967830d49@enterprisedb.com
Discussion: https://postgr.es/m/17691.1482523168@sss.pgh.pa.us
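A simplified sketch of why the failure mode is benign without the Assert (NAMEDATALEN shown at its default; the real comparison path is more involved):

    #include <stdbool.h>
    #include <string.h>

    #define NAMEDATALEN 64      /* PostgreSQL's default */

    /* A stored "name" is at most NAMEDATALEN - 1 bytes, so an overlength
     * probe key can never compare equal to any entry: whatever hash value
     * is computed for it, the lookup simply fails.  That is why the
     * length Assert in hashname() was inessential. */
    static bool
    name_matches(const char *stored, const char *key)
    {
        if (strlen(key) >= NAMEDATALEN)
            return false;
        return strcmp(stored, key) == 0;
    }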
-
- 25 Dec, 2016 1 commit
-
-
Tom Lane authored
\crosstabview's complaint about multiple entries for the same crosstab cell quoted the wrong row and/or column values. It would accidentally appear to work if the data had been in strcmp() order to start with, which probably explains how we missed noticing this during development. This could be fixed in more than one way, but the way I chose was to hang onto both result pointers from bsearch() and use those to get at the value names. In passing, avoid casting away const in the bsearch comparison functions. No bug there, just poor style. Per bug #14476 from Tomonari Katsumata. Back-patch to 9.6 where \crosstabview was introduced. Report: https://postgr.es/m/20161225021519.10139.45460@wrigleys.postgresql.org
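A sketch of both points in miniature (array contents invented): keep the pointer bsearch() returns and report through it, and write the comparator without casting away const:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Const-correct comparator over an array of char * entries; no need
     * to cast the const qualifiers away. */
    static int
    cmp_entry(const void *a, const void *b)
    {
        const char *const *ea = a;
        const char *const *eb = b;

        return strcmp(*ea, *eb);
    }

    int
    main(void)
    {
        const char *names[] = {"apple", "banana", "cherry"};
        const char *key = "banana";

        /* Hold onto the result pointer and quote values through it, so any
         * complaint names the entry actually found rather than one guessed
         * from a separately maintained index. */
        const char *const *hit =
            bsearch(&key, names, sizeof(names) / sizeof(names[0]),
                    sizeof(names[0]), cmp_entry);

        if (hit != NULL)
            printf("found: %s\n", *hit);
        return 0;
    }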
-
- 24 Dec, 2016 2 commits
-
-
Stephen Frost authored
The -v/--verbose option was not included in the output from --help for pg_dumpall even though it's in the pg_dumpall documentation and has apparently been around since pg_dumpall was reimplemented in C in 2002. Fix that by adding it. Pointed out by Daniel Westermann. Back-patch to all supported branches. Discussion: https://www.postgresql.org/message-id/2020970042.4589542.1482482101585.JavaMail.zimbra%40dbi-services.com
-
Stephen Frost authored
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are including the list of roles as possible options for completion after the GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time and in either order, but the tab completion was only working for one or the other. Lastly, we weren't using the actual list of allowed kinds of objects for default privileges for completion after the 'GRANT X ON', but instead were completing to what a plain 'GRANT X ON' supports, which isn't the same at all.

Address these issues by improving the forward tab completion for ALTER DEFAULT PRIVILEGES and then constraining and correcting how the tail completion is done when it is for ALTER DEFAULT PRIVILEGES.

Back-patch the forward/tail tab completion to 9.6, where we made it easy to handle such cases. For 9.5 and earlier, correct the initial tab completion to at least be correct as far as it goes, and then add a check for GRANT/REVOKE to only tab-complete when the GRANT/REVOKE is the start of the command, so we don't try to do tab completion after we get to the GRANT/REVOKE part of the ALTER DEFAULT PRIVILEGES command, which is better than providing incorrect completions.

Initial patch for master and 9.6 by Gilles Darold, though I cleaned it up and added a few comments. All bugs in the 9.5 and earlier patch are mine.

Discussion: https://www.postgresql.org/message-id/1614593c-e356-5b27-6dba-66320a9bc68b@dalibo.com
-
- 23 Dec, 2016 8 commits
-
-
Tom Lane authored
Now that it has only INH_NO and INH_YES values, it's just weird that it's not a plain bool, so make it that way. Also rename RangeVar.inhOpt to "inh", to be like RangeTblEntry.inh. My recollection is that we gave it a different name specifically because it had a different representation than the derived bool value, but it no longer does. And this is a good forcing function to be sure we catch any places that are affected by the change. Bump catversion because of possible effect on stored RangeVar nodes. I'm not exactly convinced that we ever store RangeVar on disk, but we have a readfuncs function for it, so be cautious. (If we do do so, then commit e13486eb was in error not to bump catversion.) Follow-on to commit e13486eb. Discussion: http://postgr.es/m/CA+TgmoYe+EG7LdYX6pkcNxr4ygkP4+A=jm9o-CPXyOvRiCNwaQ@mail.gmail.com
-
Peter Eisentraut authored
makeNode() is already a macro that has the right result pointer type, so casting it again to the same type is unnecessary.
-
Tom Lane authored
We had an index entry for "median" attached to the percentile_cont function entry, which was pretty useless because a person following the link would never realize that that function was the one they were being hinted to use. Instead, make the index entry point at the example in syntax-aggregates, and add a <seealso> link to "percentile". Also, since that example explicitly claims to be calculating the median, make it use percentile_cont not percentile_disc. This makes no difference in terms of the larger goals of that section, but so far as I can find, nearly everyone thinks that "median" means the continuous not discrete calculation. Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced percentile_cont. Discussion: https://postgr.es/m/20161223102056.25614.1166@wrigleys.postgresql.org
-
Tom Lane authored
I got a little annoyed by reading documentation paragraphs containing both spellings, "descendant" and "descendent", within a few lines of each other. My dictionary says "descendant" is the preferred spelling, and it's certainly the majority usage in our tree, so standardize on that. For one usage in parallel.sgml, I thought it better to rewrite to avoid the term altogether.
-
Peter Eisentraut authored
There was code that attempted to check whether the sequence name stored inside the sequence was the same as the name in pg_class. But that code was already ifdef'ed out, and now that the sequence no longer stores its own name, it's altogether obsolete, so remove it.
-
Robert Haas authored
This backward-compatibility GUC is long overdue for removal. Discussion: http://postgr.es/m/CA+TgmoYe+EG7LdYX6pkcNxr4ygkP4+A=jm9o-CPXyOvRiCNwaQ@mail.gmail.com
-
Robert Haas authored
This is basically for the same reasons I got rid of _hash_wrtbuf() in commit 25216c98: it's not convenient to have a function which encapsulates MarkBufferDirty(), especially as we move towards having hash indexes be WAL-logged. Patch by me, reviewed (but not entirely endorsed) by Amit Kapila.
-
Joe Conway authored
Documentation for pg_restore said COPY TO does not support row security when in fact it should say COPY FROM. Fix that. While at it, make it clear that "COPY FROM" does not allow RLS to be enabled and INSERT should be used instead. Also that SELECT policies will apply to COPY TO statements. Back-patch to 9.5 where RLS first appeared. Author: Joe Conway Reviewed-By: Dean Rasheed and Robert Haas Discussion: https://postgr.es/m/5744FA24.3030008%40joeconway.com
-
- 22 Dec, 2016 14 commits
-
-
Robert Haas authored
The previous coding failed to work correctly when we have a multi-level partitioned hierarchy where tables at successive levels have different attribute numbers for the partition key attributes. To fix, have each PartitionDispatch object store a standalone TupleTableSlot initialized with the TupleDesc of the corresponding partitioned table, along with a TupleConversionMap to map tuples from its parent's rowtype to its own rowtype. After tuple routing chooses a leaf partition, we must use the leaf partition's tuple descriptor, not the root table's. To that end, a dedicated TupleTableSlot for tuple routing is now allocated in EState.

Amit Langote
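A much-simplified sketch of the per-level mapping idea (types and names hypothetical; the real code works with TupleTableSlots and TupleConversionMaps over Datum arrays):

    /* Hypothetical per-level attribute map: for each attribute of the
     * child's rowtype, which 1-based column of the parent's tuple
     * supplies its value. */
    typedef struct AttrMap
    {
        int         natts;      /* attribute count in the child rowtype */
        int        *src;        /* src[i]: parent attno feeding child attr i+1 */
    } AttrMap;

    /* Because attribute numbers can differ at every level of the
     * hierarchy, routing must re-map column values level by level, each
     * time into storage shaped by that level's own tuple descriptor. */
    static void
    convert_values(const int *parent_vals, int *child_vals,
                   const AttrMap *map)
    {
        int         i;

        for (i = 0; i < map->natts; i++)
            child_vals[i] = parent_vals[map->src[i] - 1];
    }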
-
Stephen Frost authored
When we are altering a text search configuration, we are getting the tuple from pg_ts_config and using its OID, so use TSConfigRelationId when invoking any post-alter hooks and setting the object address. Further, in the functions called from AlterTSConfiguration(), we're saving information about the command via EventTriggerCollectAlterTSConfig(), so we should be setting commandCollected to true. Also add a regression test to test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION. Author: Artur Zakirov, a few additional comments by me Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where it was introduced, and the fix for the ObjectAddressSet() call and setting commandCollected to true to 9.5 where those changes to ProcessUtilitySlow() were introduced.
-
Tom Lane authored
Having a WITH OIDS specification should result in the creation of an OID column, but commit b943f502 broke that in the case that there were LIKE tables without OIDS. Commentary in that patch makes it look like this was intentional, but if so it was based on a faulty reading of what inheritance does: the parent tables can add an OID column, but they can't subtract one. AFAICS, the behavior ought to be that you get an OID column if any of the inherited tables, LIKE tables, or WITH clause ask for one. Also, revert that patch's unnecessary split of transformCreateStmt's loop over the tableElts list into two passes. That seems to have been based on a misunderstanding as well: we already have two-pass processing here, we don't need three passes. Per bug #14474 from Jeff Dafoe. Back-patch to 9.6 where the misbehavior was introduced. Report: https://postgr.es/m/20161222145304.25620.47445@wrigleys.postgresql.org
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Tom Lane authored
When the input value to a CoerceToDomain expression node is a read-write expanded datum, we should pass a read-only pointer to any domain CHECK expressions and then return the original read-write pointer as the expression result. Previously we were blindly passing the same pointer to all the consumers of the value, making it possible for a function in CHECK to modify or even delete the expanded value. (Since a plpgsql function will absorb a passed-in read-write expanded array as a local variable value, it will in fact delete the value on exit.)

A similar hazard of passing the same read-write pointer to multiple consumers exists in domain_check() and in ExecEvalCase, so fix those too.

The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate places, which is simple enough except that we need to get the data type's typlen from somewhere. For the domain cases, solve this by redefining DomainConstraintRef.tcache as okay for callers to access; there wasn't any reason for the original convention against that, other than not wanting the API of typcache.c to be any wider than it had to be. For CASE, there's no good solution except to add a syscache lookup during executor start.

Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded values were introduced.

Discussion: https://postgr.es/m/15225.1482431619@sss.pgh.pa.us
-
Andres Freund authored
Some background activity (like checkpoints, archive timeout, standby snapshots) is not supposed to happen on an idle system. Unfortunately so far it was not easy to determine when a system is idle, which defeated some of the attempts to avoid redundant activity on an idle system.

To make that easier, allow individual WAL insertions to be marked as not "important". By checking whether any important activity happened since the last time an action was performed, it now is easy to check whether some action needs to be repeated. Use the new facility for checkpoints, archive timeout and standby snapshots.

The lack of such a facility causes some issues in older releases, but in my opinion the consequences (superfluous checkpoints / archived segments) aren't grave enough to warrant backpatching.

Author: Michael Paquier, editorialized by Andres Freund
Reviewed-By: Andres Freund, David Steele, Amit Kapila, Kyotaro HORIGUCHI
Bug: #13685
Discussion: https://www.postgresql.org/message-id/20151016203031.3019.72930@wrigleys.postgresql.org
https://www.postgresql.org/message-id/CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkYv_45dKwA@mail.gmail.com
Backpatch: -
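In outline, and with invented names, the bookkeeping amounts to this:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical sketch: track the WAL position of the last "important"
     * record; periodic tasks compare against it to detect an idle system. */
    static uint64_t last_important_pos;

    static void
    wal_insert(uint64_t insert_pos, bool important)
    {
        /* background records (standby snapshots etc.) pass important =
         * false so they don't count as activity themselves */
        if (important)
            last_important_pos = insert_pos;
    }

    static bool
    checkpoint_needed(uint64_t last_checkpoint_pos)
    {
        /* nothing important since the last checkpoint means the system was
         * idle: skip the checkpoint (and likewise archive-timeout segment
         * switches and standby snapshots) */
        return last_important_pos > last_checkpoint_pos;
    }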
-
Robert Haas authored
You can't just cast a HashMetaPage to a Page, because the meta page data is stored after the page header, not at offset 0. Fortunately, this didn't break anything because it happens to find hashm_bsize at the offset at which it expects to find pd_pagesize_version, and the values are close enough to the same that this works out. Still, it's a bug, so back-patch to all supported versions. Mithun Cy, revised a bit by me.
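A simplified illustration of the layout issue (struct fields abbreviated; PostgreSQL's real accessor is a macro over its Page type):

    #include <stdint.h>

    /* The generic page header sits at offset 0; access-method metadata is
     * stored after it.  A bare cast of the page pointer therefore aliases
     * header fields as meta fields -- e.g. reading hashm_bsize where
     * pd_pagesize_version lives -- which can deceptively "work" when the
     * two values happen to be close. */
    typedef struct PageHeaderLike
    {
        uint16_t    pd_lower;
        uint16_t    pd_upper;
        uint16_t    pd_pagesize_version;
    } PageHeaderLike;

    typedef struct HashMetaLike
    {
        uint32_t    hashm_magic;
        uint16_t    hashm_bsize;
    } HashMetaLike;

    static HashMetaLike *
    page_get_meta(char *page)
    {
        /* WRONG: return (HashMetaLike *) page; */
        return (HashMetaLike *) (page + sizeof(PageHeaderLike));
    }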
-
Joe Conway authored
When libpq encounters a connection-level error, e.g. runs out of memory while forming a result, there will be no error associated with PGresult, but a message will be placed into PGconn's error buffer. postgres_fdw takes care to use the PGconn error message when PGresult does not have one, but dblink has been negligent in that regard. Modify dblink to mirror what postgres_fdw has been doing. Back-patch to all supported branches. Author: Joe Conway Reviewed-By: Tom Lane Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
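The pattern in question, as a small sketch against libpq's public API:

    #include <libpq-fe.h>

    /* Prefer the PGresult's own message; if a connection-level failure
     * left it empty, fall back to the PGconn error buffer rather than
     * reporting nothing. */
    static const char *
    best_error_message(PGconn *conn, PGresult *res)
    {
        const char *msg = (res != NULL) ? PQresultErrorMessage(res) : NULL;

        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        return msg;
    }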
-
Robert Haas authored
Amit Langote. Most of this reported by Álvaro Herrera.
-
Joe Conway authored
When dblink uses a postgres_fdw server name for its connection, it is possible for the connection to have options that are invalid with dblink (e.g. "updatable"). The recommended way to avoid this problem is to use dblink_fdw servers instead. However there are use cases for using postgres_fdw, and possibly other FDWs, for dblink connection options, therefore protect against trying to use any options that do not apply by using is_valid_dblink_option() when building the connection string from the options. Back-patch to 9.3. Although 9.2 supports FDWs for connection info, is_valid_dblink_option() did not yet exist, and neither did postgres_fdw, at least in the postgres source tree. Given the lack of previous complaints, fixing that seems too invasive/not worth it. Author: Corey Huinker Reviewed-By: Joe Conway Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
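In outline (the option list and helper names here are invented for the example), the filtering looks like this:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for is_valid_dblink_option(): admit only
     * keywords libpq itself understands, so FDW-specific options such as
     * postgres_fdw's "updatable" never reach the connection string. */
    static bool
    is_connection_option(const char *keyword)
    {
        static const char *const valid[] = {
            "host", "port", "dbname", "user", "password", NULL
        };
        int         i;

        for (i = 0; valid[i] != NULL; i++)
            if (strcmp(keyword, valid[i]) == 0)
                return true;
        return false;
    }

    static void
    append_conninfo_option(char *buf, size_t buflen,
                           const char *keyword, const char *value)
    {
        if (is_connection_option(keyword))
            snprintf(buf + strlen(buf), buflen - strlen(buf),
                     "%s=%s ", keyword, value);
    }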
-
Heikki Linnakangas authored
No more indirect blocks. The blocks form a linked list instead. This saves some memory, because we don't need to have a buffer in memory to hold the indirect block (or blocks). To reflect that, TAPE_BUFFER_OVERHEAD is reduced from 3 to 1 buffer, which allows using more memory for building the initial runs. Reviewed by Peter Geoghegan and Robert Haas. Discussion: https://www.postgresql.org/message-id/34678beb-938e-646e-db9f-a7def5c44ada%40iki.fi
-
Tom Lane authored
Before commit b8cc8f94, it was possible to build contrib/uuid-ossp without having told configure you meant to; you could just cd into that directory and "make". That no longer works because the code depends on configure to have done header and library probes, but the ensuing error messages are not so easy to interpret if you're not an old C hand. We've gotten a couple of complaints recently from people trying to do this the low-tech way, so add an explicit #error directing the user to use --with-uuid.

(In principle we might want to do something similar in the other optionally-built contrib modules; but I don't think any of the others have ever worked without preconfiguration, so there are no bad habits to break people of.)

Back-patch to 9.4 where the previous commit came in.

Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
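The guard amounts to something like the following (the HAVE_* macro names here stand in for whichever symbols configure's UUID library probes define):

    /* Fail the compile with a readable hint when configure was never told
     * which UUID library to probe for, instead of emitting a cascade of
     * baffling missing-header errors. */
    #if !defined(HAVE_UUID_OSSP) && !defined(HAVE_UUID_E2FS) && \
        !defined(HAVE_UUID_BSD)
    #error please use configure's --with-uuid switch to select a UUID library
    #endif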
-
Michael Meskes authored
output file naming. Patch by Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
-
- 21 Dec, 2016 2 commits
-
-
Joe Conway authored
When dblink or postgres_fdw detects an error on the remote side of the connection, it will try to construct a local error message as best it can using libpq's PQresultErrorField(). When no primary message is available, it was bailing out with an unhelpful "unknown error". Make that message better and more style guide compliant. Per discussion on hackers. Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3. Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
-
Tom Lane authored
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string. This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax. This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches. Discussion: https://postgr.es/m/19113.1482337898@sss.pgh.pa.us
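A self-contained sketch of the check the lexer needs, operating on already-decoded code units:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool
    is_high_surrogate(uint32_t c)
    {
        return c >= 0xD800 && c <= 0xDBFF;
    }

    static bool
    is_low_surrogate(uint32_t c)
    {
        return c >= 0xDC00 && c <= 0xDFFF;
    }

    /* Every pair start must be followed by a pair trail -- including when
     * the start is the string's final unit, which is the end-of-string
     * case that was silently discarded. */
    static bool
    surrogates_ok(const uint32_t *units, size_t n)
    {
        size_t      i;

        for (i = 0; i < n; i++)
        {
            if (is_high_surrogate(units[i]))
            {
                if (i + 1 >= n || !is_low_surrogate(units[i + 1]))
                    return false;   /* dangling surrogate-pair start */
                i++;                /* consume the trail unit */
            }
            else if (is_low_surrogate(units[i]))
                return false;       /* trail with no preceding start */
        }
        return true;
    }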
-