- 03 Mar, 2015 5 commits
-
Alvaro Herrera authored
The changed routines are mostly those that can be directly called by ProcessUtilitySlow; the intention is to make the affected object information more precise, in support of future event trigger changes. Originally it was envisioned that the OID of the affected object would be enough, and in most cases that is correct, but upon actually implementing the event trigger changes it turned out that ObjectAddress is more widely useful. Additionally, some command execution routines grew an output argument that's an object address which provides further info about the executed command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the schema that originally contained the object
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address of the object added to or dropped from the extension
There's no user-visible change in this commit, and no functional change either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
-
Alvaro Herrera authored
This was missed in my commit f4c4335a of 9.3 vintage, so backpatch to that.
-
Tom Lane authored
There's no reason to make users write an explicit cast to store a json value in a jsonb column or vice versa. We could probably even make these implicit, but that might open us up to problems with ambiguous function calls, so for now just do this.
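For instance, with the new assignment cast a json value can be stored directly into a jsonb column; a minimal sketch (the table name is invented for illustration):

    CREATE TABLE docs (body jsonb);
    INSERT INTO docs VALUES ('{"a": 1}'::json);   -- json value assigned to a jsonb column without an explicit ::jsonb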
-
Robert Haas authored
My commit 878fdcb8 was not quite right. Tom Lane pointed out one of the mistakes fixed here, and I noticed the other myself while reviewing what I'd committed.
-
- 02 Mar, 2015 2 commits
-
Robert Haas authored
Previously, you could do \set variable operand1 operator operand2, but nothing more complicated. Now, you can \set variable expression, which makes it much simpler to do multi-step calculations here. This also adds support for the modulo operator (%), with the same semantics as in C. Robert Haas and Fabien Coelho, reviewed by Álvaro Herrera and Stephen Frost
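As an illustrative sketch (the variable names and numbers here are invented), a pgbench script can now compute a value in one expression:

    \set scale 100
    \set aid (:scale * 100000 + 17) % 1000000
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;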
-
Stephen Frost authored
Since 9.1, we've provided extensions with a way to denote "configuration" tables: tables created by an extension which the user may modify. By marking these as "configuration" tables, the extension is asking for the data in these tables to be pg_dump'd (tables which are not marked in this way are assumed to be entirely handled during CREATE EXTENSION and are not included at all in a pg_dump).

Unfortunately, pg_dump neglected to consider foreign key relationships between extension configuration tables and therefore could end up trying to reload the data in an order which would cause FK violations. This patch teaches pg_dump about these dependencies, so that the data dumped out is done so in the best order possible. Note that there's no way to handle circular dependencies, but those have yet to be seen in the wild.

The release notes for this should include a caution to users that existing pg_dump-based backups may be invalid due to this issue. The data is all there, but restoring from it will require extracting the data for the configuration tables and then loading them in the correct order by hand.

Discussed initially back in bug #6738, more recently brought up by Gilles Darold, who provided an initial patch which was further reworked by Michael Paquier. Further modifications and documentation updates by me. Back-patch to 9.1 where we added the concept of extension configuration tables.
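A rough sketch of the situation this addresses, using hypothetical table names in an extension's install script; with the fix, pg_dump orders the data so the referenced table is loaded first:

    CREATE TABLE parent_cfg (id int PRIMARY KEY);
    CREATE TABLE child_cfg (parent_id int REFERENCES parent_cfg (id));
    -- mark both as configuration tables so their data is included in pg_dump
    SELECT pg_catalog.pg_extension_config_dump('parent_cfg', '');
    SELECT pg_catalog.pg_extension_config_dump('child_cfg', '');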
-
- 01 Mar, 2015 6 commits
-
Stephen Frost authored
In 6f9bd50e, we modified expand_security_quals() to tell expand_security_qual() when the current RTE was the targetRelation. Unfortunately, that commit initialized the targetRelation variable outside of the loop over the RTEs instead of at the start of it. This patch moves the variable and its initialization into the loop, where they should have been to begin with. Pointed out by Dean Rasheed. Back-patch to 9.4 as the original commit was.
-
Tom Lane authored
Previously, we cached domain constraints for the life of a query, or really for the life of the FmgrInfo struct that was used to invoke domain_in() or domain_check(). But plpgsql (and probably other places) are set up to cache such FmgrInfos for the whole lifespan of a session, which meant they could be enforcing really stale sets of constraints. On the other hand, searching pg_constraint once per query gets kind of expensive too: testing says that as much as half the runtime of a trivial query such as "SELECT 0::domaintype" went into that.

To fix this, delegate the responsibility for tracking a domain's constraints to the typcache, which has the infrastructure needed to detect syscache invalidation events that signal possible changes. This not only removes unnecessary repeat reads of pg_constraint, but ensures that we never apply stale constraint data: whatever we use is the current data according to syscache rules.

Unfortunately, the current configuration of the system catalogs means we have to flush cached domain-constraint data whenever either pg_type or pg_constraint changes, which happens rather a lot (eg, creation or deletion of a temp table will do it). It might be worth rearranging things to split pg_constraint into two catalogs, of which the domain constraint one would probably be very low-traffic. That's a job for another patch though, and in any case this patch should improve matters materially even with that handicap.

This patch makes use of the recently-added memory context reset callback feature to manage the lifespan of domain constraint caches, so that we don't risk deleting a cache that might be in the midst of evaluation.

Although this is a bug fix as well as a performance improvement, no back-patch. There haven't been many if any field complaints about stale domain constraint checks, so it doesn't seem worth taking the risk of modifying data structures as basic as MemoryContexts in back branches.
-
Noah Misch authored
This makes "ALTER TABLE tabname ALTER tscol TYPE ... USING tscol AT TIME ZONE 'UTC'" skip rewriting the table when altering from "timestamp" to "timestamptz" or vice versa. While it would be nicer still to optimize this in the absence of the USING clause given timezone==UTC, transform functions must consult IMMUTABLE facts only.
-
Noah Misch authored
When the library already exists in the build directory, "ar" preserves members not named on its command line. This mattered when, for example, a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried the obsolete member until "make clean". Back-patch to 9.0 (all supported versions).
-
Tom Lane authored
Initial experience with this feature suggests that instances of MemoryContextCallback are likely to propagate into some widely-used headers over time. As things stood, that would result in pulling memutils.h or at least memnodes.h into common headers, which does not seem desirable. Instead, let's decide that this feature is part of the "ordinary palloc user" API rather than the "specialized context management" API, and as such should be declared in palloc.h not memutils.h.
-
Alvaro Herrera authored
As evidenced by measles in the buildfarm. Pointed out by Tom.
-
- 28 Feb, 2015 3 commits
-
Tom Lane authored
The main value of this change is to avoid expensive I/O conversions when assigning to a variable that has a typmod specification, if the value to be assigned is already known to have the right typmod. This is particularly valuable for arrays with typmod specifications; formerly, in an assignment to an array element the entire array would invariably get put through double I/O conversion to check the typmod, to absolutely no purpose since we'd already properly coerced the new element value. Extracted from my "expanded arrays" patch; this seems worth committing separately, whatever becomes of that patch, since it's really an independent issue. As long as we're changing the function signatures, take the opportunity to rationalize the argument lists of exec_assign_value, exec_cast_value, and exec_simple_cast_value; that is, put the arguments into a saner order, and get rid of the bizarre choice to pass exec_assign_value's isNull flag by reference.
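A rough sketch of the kind of statement that benefits (the type and values are invented for illustration):

    DO $$
    DECLARE
      arr numeric(10,2)[] := '{}';
    BEGIN
      -- Formerly the entire array was run through numeric I/O again just to
      -- recheck the typmod, even though the new element value was already coerced.
      arr[1] := 42.00;
    END $$;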
-
Tom Lane authored
Part of the intent of the parameterized-path mechanism was to handle star-schema queries efficiently, but some overly-restrictive search limiting logic added in commit e2fa76d8 prevented such cases from working as desired. Fix that and add a regression test about it. Per gripe from Marc Cousin. This is arguably a bug rather than a new feature, so back-patch to 9.2 where parameterized paths were introduced.
-
Tom Lane authored
Add documentation about the new reset callback mechanism. Also, at long last, recast the existing text so that it describes the current context mechanisms as established fact rather than something we're going to implement. Shoulda done that in 2001 or so ...
-
- 27 Feb, 2015 6 commits
-
Tom Lane authored
That is, MemoryContextReset() now means what was formerly meant by MemoryContextResetAndDeleteChildren(), and the latter is now just a macro alias for the former. If you really want the functionality that was formerly provided by MemoryContextReset(), what you have to do is MemoryContextResetChildren() plus MemoryContextResetOnly() (which is a new API to reset *only* the named context and not touch its children).

The reason for this change is that nearly fifteen years of experience has proven that there is noplace where old-style MemoryContextReset() is actually what you want. Making that the default behavior has led to lots of context-leakage bugs, while we've not found anyplace where it's actually necessary to keep the child contexts; at least the standard regression tests do not reveal anyplace where this change breaks anything. And there are upcoming patches that will introduce additional reasons why child contexts need to be removed.

We could change existing calls of MemoryContextResetAndDeleteChildren to be just MemoryContextReset, but for the moment I'll leave them alone; they're not costing anything.
-
Alvaro Herrera authored
The way that columns are added to a view is by calling AlterTableInternal with special subtype AT_AddColumnToView; but that subtype is changed to AT_AddColumnRecurse by ATPrepAddColumn. This has no visible effect in the current code, since views cannot have inheritance children (thus the recursion step is a no-op) and adding a column to a view is executed identically to doing it to a table; but it does make a difference for future event trigger code keeping track of commands, because the current situation leads to confusing the case with a normal ALTER TABLE ADD COLUMN. Fix the problem by passing a flag to ATPrepAddColumn to prevent it from changing the command subtype. The event trigger code can then properly ignore the subcommand. (We could remove the call to ATPrepAddColumn, since views are never typed, and there is never a need for recursion, which are the two conditions that are checked by ATPrepAddColumn; but it seems more future-proof to keep the call in place.)
-
Tom Lane authored
This allows cleanup actions to be registered to be called just before a particular memory context's contents are flushed (either by deletion or MemoryContextReset). The patch in itself has no use-cases for this, but several likely reasons for wanting this exist. In passing, per discussion, rearrange some boolean fields in struct MemoryContextData so as to avoid wasted padding space. For safety, this requires making allowInCritSection's existence unconditional; but I think that's a better approach than what was there anyway.
-
Alvaro Herrera authored
Typo "aggreagate" appeared three times, and the return value of function JsonbIteratorNext() was being assigned to an int variable in a bunch of places.
-
Alvaro Herrera authored
When a composite type being used in a typed table is modified by way of ALTER TYPE, a table rewrite occurs appearing to come from ALTER TYPE. The existing event_trigger.c code was unable to cope with that and raised a spurious error. The fix is just to accept that command tag for the event, and document this properly. Noted while fooling with deparsing of DDL commands. This appears to be an oversight in commit 618c9430. Thanks to Mark Wong for documentation wording help.
- 26 Feb, 2015 6 commits
-
Andrew Dunstan authored
Commit ab14a73a raised an error in these cases, and later the behaviour was copied to jsonb. That is what the XML code, from which this was adopted, does, as the XSD types don't accept infinite values. However, json dates and timestamps are just strings as far as json is concerned, so there is no reason not to render these values as 'infinity'. The json portion of this is backpatched to 9.4 where the behaviour was introduced. The jsonb portion only affects the development branch. Per gripe on pgsql-general.
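A small sketch of the change in behaviour (output shown informally):

    SELECT to_json('infinity'::timestamptz);
    -- formerly: raised an error for the infinite timestamp
    -- now:      "infinity"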
-
Andres Freund authored
Up to now RecordTransactionCommit() waited for WAL to be flushed (if synchronous_commit != off) and to be synchronously replicated (if enabled), even if a transaction did not have a xid assigned. The primary reason for that is that a sequence's nextval() did not assign a xid, yet its effects are worth waiting for on commit.

This can be problematic because sometimes read only transactions do write WAL, e.g. HOT page prune records. That then could lead to read only transactions having to wait during commit. Not something people expect in a read only transaction. This led to such strange symptoms as backends being seemingly stuck during connection establishment when all synchronous replicas are down. Especially annoying when said stuck connection is the standby trying to reconnect to allow syncrep again...

This behavior also is involved in a rather complicated <= 9.4 bug where the transaction started by catchup interrupt processing waited for syncrep using latches, but didn't get the wakeup because it was already running inside the same overloaded signal handler. The fix here doesn't properly solve that issue, merely papers over the problems. In 9.5 catchup interrupts aren't processed out of signal handlers anymore.

To fix all this, make nextval() acquire a top level xid, and only wait for transaction commit if a transaction both acquired a xid and emitted WAL records. If only a xid has been assigned we don't uselessly want to wait just because of writes to temporary/unlogged tables; if only WAL has been written we don't want to wait just because of HOT prunes.

The xid assignment in nextval() is unlikely to cause overhead in real-world workloads. For one, it only happens once every SEQ_LOG_VALS (32) values; for another, only usage of nextval() without using the result in an insert or similar is affected.

Discussion: 20150223165359.GF30784@awork2.anarazel.de, 369698E947874884A77849D8FE3680C2@maumau, 5CF4ABBA67674088B3941894E22A0D25@maumau
Per complaint from maumau and Thom Brown
Backpatch all the way back; 9.0 doesn't have syncrep, but it seems better to have consistent behavior across all maintained branches.
-
Fujii Masao authored
Andrew Gierth and Ali Akbar
-
Noah Misch authored
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed memory. Back-patch to 9.0 (all supported versions). Little code ran between the premature free and the read, so non-assert builds are unlikely to witness user-visible consequences.
-
Stephen Frost authored
The RLS patch added a hasRowSecurity field to PlannerGlobal and PlannedStmt but didn't update nodes/copyfuncs.c and nodes/outfuncs.c to reflect those additional fields. Correct that by adding entries to the appropriate functions for those fields. Pointed out by Robert.
-
Stephen Frost authored
In expand_security_qual(), we were handling locking correctly when a PlanRowMark existed, but not when we were working with the target relation (which doesn't have any PlanRowMarks, but the subquery created for the security barrier quals still needs to lock the rows under it). Noted by Etsuro Fujita when working with the Postgres FDW, which wasn't properly issuing a SELECT ... FOR UPDATE to the remote side under a DELETE. Back-patch to 9.4 where updatable security barrier views were introduced. Per discussion with Etsuro and Dean Rasheed.
-
- 25 Feb, 2015 3 commits
-
Tom Lane authored
The "simple" path for printing VALUES clauses doesn't work if we need to attach nondefault column aliases, because there's noplace to do that in the minimal VALUES() syntax. So modify get_simple_values_rte() to detect nondefault aliases and treat that as a non-simple case. This further exposes that the "non-simple" path never actually worked; it didn't produce valid syntax. Fix that too. Per bug #12789 from Curtis McEnroe, and analysis by Andrew Gierth. Back-patch to all supported branches. Before 9.3, this also requires back-patching the part of commit 092d7ded that created get_simple_values_rte() to begin with; inserting the extra test into the old factorization of that logic would've been too messy.
-
Michael Meskes authored
If a pointer is guaranteed to carry information there is no need to check for NULL again. Patch by Michael Paquier.
- 24 Feb, 2015 4 commits
-
Tom Lane authored
There are a couple of places in our grammar that fail to be strict LALR(1), by requiring more than a single token of lookahead to decide what to do. Up to now we've dealt with that by using a filter between the lexer and parser that merges adjacent tokens into one in the places where two tokens of lookahead are necessary. But that creates a number of user-visible anomalies, for instance that you can't name a CTE "ordinality" because "WITH ordinality AS ..." triggers folding of WITH and ORDINALITY into one token.

I realized that there's a better way. In this patch, we still do the lookahead basically as before, but we never merge the second token into the first; we replace just the first token by a special lookahead symbol when one of the lookahead pairs is seen. This requires a couple extra productions in the grammar, but it involves fewer special tokens, so that the grammar tables come out a bit smaller than before. The filter logic is no slower than before, perhaps a bit faster.

I also fixed the filter logic so that when backing up after a lookahead, the current token's terminator is correctly restored; this eliminates some weird behavior in error message issuance, as is shown by the one change in existing regression test outputs.

I believe that this patch entirely eliminates odd behaviors caused by lookahead for WITH. It doesn't really improve the situation for NULLS followed by FIRST/LAST unfortunately: those sequences still act like a reserved word, even though there are cases where they should be seen as two ordinary identifiers, eg "SELECT nulls first FROM ...". I experimented with additional grammar hacks but couldn't find any simple solution for that. Still, this is better than before, and it seems much more likely that we *could* somehow solve the NULLS case on the basis of this filter behavior than the previous one.
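For example, a CTE named "ordinality", which the old token-merging filter rejected, now parses (a toy query for illustration):

    WITH ordinality AS (SELECT 1 AS n)
    SELECT n FROM ordinality;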
-
Peter Eisentraut authored
The tar format (at least the version we are using) does not support file names or symlink targets longer than 99 bytes. Until now, the tar creation code would silently truncate any names that are too long. (Its original application was pg_dump, where this never happens.) This creates problems when running base backups over the replication protocol.

The most important problem is when a tablespace path is longer than 99 bytes, which will result in a truncated tablespace path being backed up. Less importantly, the basebackup protocol also promises to back up any other files it happens to find in the data directory, which would also lead to file name truncation if someone put a file with a long name in there.

Now both of these cases result in an error during the backup. Add tests for attempting to back up a file name or symlink target that is too long.

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
-
Heikki Linnakangas authored
Kyotaro Horiguchi
-
Heikki Linnakangas authored
Kyotaro Horiguchi
-
- 23 Feb, 2015 5 commits
-
Peter Eisentraut authored
-
Alvaro Herrera authored
Brown paper bag installed permanently.
-
Tom Lane authored
Use a different A_Expr_Kind for LIKE/ILIKE/SIMILAR TO constructs, so that they can be distinguished from direct invocation of the underlying operators. Also, postpone selection of the operator name when transforming "x IN (select)" to "x = ANY (select)", so that those syntaxes can be told apart at parse analysis time.

I had originally thought I'd also have to do something special for the syntaxes IS NOT DISTINCT FROM, IS NOT DOCUMENT, and x NOT IN (SELECT...), which the grammar translates as though they were NOT (construct). On reflection though, we can distinguish those cases reliably by noting whether the parse location shown for the NOT is the same as for its child node. This only requires tweaking the parse locations for NOT IN, which I've done here.

These changes should have no effect outside the parser; they're just in support of being able to give accurate warnings for planned operator precedence changes.
-
Alvaro Herrera authored
COMMENT, SECURITY LABEL, and GRANT/REVOKE now also fire ddl_command_start and ddl_command_end event triggers, when they operate on database-local objects. Reviewed-By: Michael Paquier, Andres Freund, Stephen Frost
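A sketch of how this can be observed; the trigger function and object names below are hypothetical:

    CREATE FUNCTION log_command() RETURNS event_trigger LANGUAGE plpgsql AS $$
    BEGIN
      RAISE NOTICE 'ddl_command_end: %', tg_tag;
    END $$;
    CREATE EVENT TRIGGER log_commands ON ddl_command_end EXECUTE PROCEDURE log_command();
    COMMENT ON TABLE some_table IS 'now fires the event trigger';
    GRANT SELECT ON some_table TO some_role;   -- likewise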
-
Heikki Linnakangas authored
Instead of having a single knob (checkpoint_segments) that both triggers checkpoints, and determines how many WAL segments to recycle, they are now separate concerns. There is still an internal variable called CheckpointSegments, which triggers checkpoints. But it no longer determines how many segments to recycle at a checkpoint. That is now auto-tuned by keeping a moving average of the distance between checkpoints (in bytes), and trying to keep that many segments in reserve. The advantage of this is that you can set max_wal_size very high, but the system won't actually consume that much space if there isn't any need for it. The min_wal_size sets a floor for that; you can effectively disable the auto-tuning behavior by setting min_wal_size equal to max_wal_size.

The max_wal_size setting is now the actual target size of WAL at which a new checkpoint is triggered, instead of the distance between checkpoints. Previously, you could calculate the actual WAL usage with the formula "(2 + checkpoint_completion_target) * checkpoint_segments + 1". With this patch, you set the desired WAL usage with max_wal_size, and the system calculates the appropriate CheckpointSegments with the reverse of that formula. That's a lot more intuitive for administrators to set.

Reviewed by Amit Kapila and Venkata Balaji N.
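As a rough before/after sketch (the numbers are only illustrative): with the old knobs, checkpoint_segments = 32 and checkpoint_completion_target = 0.5 implied about (2 + 0.5) * 32 + 1 = 81 segments, i.e. roughly 1.3 GB of WAL with 16 MB segments. With this patch one states the target directly, for example:

    ALTER SYSTEM SET max_wal_size = '2GB';      -- WAL size at which a checkpoint is triggered
    ALTER SYSTEM SET min_wal_size = '512MB';    -- floor below which segments are recycled rather than removed
    SELECT pg_reload_conf();                    -- both settings can be changed without a restart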
-