- 04 May, 2015 6 commits
-
-
Robert Haas authored
This makes the executor code more consistent. It also removes an apparently superfluous NULL test in nodeGroup.c. Qingqing Zhou, reviewed by Tom Lane, and further revised by me.
-
Tom Lane authored
The text search functions that involve parsing raw text into lexemes are remarkably CPU-intensive, so estimating them at the same cost as most other built-in functions seems like a mistake; moreover, doing so turns out to discourage the optimizer from using functional indexes on these functions. After some debate, we've agreed to raise procost from 1 to 100 for to_tsvector(), plainto_tsquery(), to_tsquery(), ts_headline(), ts_match_tt(), and ts_match_tq(), which are all the text search functions that parse raw text. Also increase procost for the 2-argument form of ts_rewrite() (tsquery_rewrite_query); while this function doesn't do text parsing, it does execute a user-supplied SQL query, so its previous procost of 1 is clearly a drastic underestimate. It seems reasonable to assign it the same cost we assign to PL functions by default, so 100 is the number here too. I did not bother bumping catversion for this change, since it does not break catalog compatibility with the server executable nor result in any regression test changes. Per complaint from Andrew Gierth and subsequent discussion.
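A minimal SQL sketch of the effect described above; the table, column, and index names are hypothetical, and the pg_proc query simply shows where procost lives:

```sql
-- Inspect the cost estimates of the text-parsing functions (pg_proc.procost).
SELECT proname, procost
FROM pg_proc
WHERE proname IN ('to_tsvector', 'plainto_tsquery', 'to_tsquery', 'ts_headline');

-- With a higher procost, the planner is more inclined to use an expression
-- index such as this one instead of re-running to_tsvector() for every row.
CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));

SELECT *
FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'example');
```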
-
Robert Haas authored
Otherwise, if there's another crash, some writes from after the first crash might make it to disk while writes from before the crash fail to make it to disk. This could lead to data corruption. Back-patch to all supported versions. Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me.
-
Heikki Linnakangas authored
prev_regbuf was never set, and therefore the same-rel flag was never set on WAL records. Report and fix by Zhanq Zq
-
Andrew Dunstan authored
The first bug is not releasing a tupdesc when doing an early return out of the function. The second bug is a logic error in choosing when to do an early return if given an empty jsonb object. Bug reports from Pavel Stehule and Tom Lane respectively. Backpatch to 9.4 where these were introduced.
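A hedged sketch of the empty-object case the second bug relates to; jsonb_populate_record() is one of the entry points served by populate_record_worker, and the type name here is made up:

```sql
CREATE TYPE demo_rec AS (a int, b text);

-- An empty jsonb object exercises the early-return path mentioned above;
-- the expected result is a single all-NULL row, not an error.
SELECT * FROM jsonb_populate_record(NULL::demo_rec, '{}'::jsonb);
```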
-
Tom Lane authored
Commit ef3f9e64 suppressed one cause of warnings here, but recent clang on OS X is still unhappy because we're passing a "long" to abs(). The fact that tm_gmtoff is declared as long is no doubt a hangover from days when int might be only 16 bits; but Postgres has never been able to run on such machines, so we can just cast it to int with no worries. For consistency, also cast to int in the other uses of tm_gmtoff in this stanza. Note: this code is still broken on machines that don't follow C99 integer-division-truncates-towards-zero rules. Given the lack of complaints about it, I don't feel a large desire to complicate things enough to cope with the pre-C99 rules.
-
- 03 May, 2015 4 commits
-
-
Tom Lane authored
When altering the deferredness state of a foreign key constraint, we correctly updated the catalogs and then invalidated the relcache state for the target relation ... but that's not the only relation with relevant triggers. Must invalidate the other table as well, or the state change fails to take effect promptly for operations triggered on the other table. Per bug #13224 from Christian Ullrich. In passing, reorganize regression test case for this feature so that it isn't randomly injected into the middle of an unrelated test sequence. Oversight in commit f177cbfe. Back-patch to 9.4 where the faulty code was added.
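A small SQL sketch of the operation in question (ALTER CONSTRAINT was added by commit f177cbfe in 9.4); table and constraint names are illustrative:

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (parent_id int REFERENCES parent (id));

-- Changing the deferredness must invalidate cached trigger state on *both*
-- tables, or operations fired on the referenced table keep using stale state.
ALTER TABLE child ALTER CONSTRAINT child_parent_id_fkey
    DEFERRABLE INITIALLY DEFERRED;
```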
-
Andrew Dunstan authored
-
Andrew Dunstan authored
These modules were all missing essential Windows scaffolding, including resource files and descriptions, and links to the relevant library import files. This latter item means that the modules can't be built with pgxs on Windows, as we don't install the import files. If we ever decide to install them this restriction could probably be removed. Also, as with plperl we need to make sure that perl's CORE directory is last on the include list, as on Windows it appears to contain some headers with names that clash with names of some headers we include.
-
Andrew Dunstan authored
By converting to using forward slashes at configure time we avoid having to repeat the logic anywhere that this is needed, such as in transforms modules for plpython.
-
- 02 May, 2015 8 commits
-
-
Noah Misch authored
This eliminates many seconds of test duration and removes the need to invoke "rm -rf", which is typically unavailable on Windows. Michael Paquier and Noah Misch
-
Noah Misch authored
Commit c67a86f7 caught most of these, but this negative test escaped notice. The test did pass, for the wrong reason, under affected configurations. Michael Paquier
-
Noah Misch authored
coerce_type() has local variables named targetTypeId, baseTypeId, and targetType. targetType has been the Type structure for baseTypeId, so rename it to baseType.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
PORTNAME isn't set until the global makefiles have been included.
-
Peter Eisentraut authored
Apparently, looking for an appropriately named file doesn't work on some older versions, so put back the explicit platform detection.
-
Peter Eisentraut authored
Combine the two places that set CPPFLAGS into one. Also, some settings should be restricted to Windows only. More precisely, -Wno-comment is a GCC-only option, but Windows in a makefile implies GCC at the moment. Also, since -Wno-comment is more properly a preprocessor option, move it to CPPFLAGS to simplify things a bit.
-
Peter Eisentraut authored
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of libperl, libpython, and libtcl, respectively. Previously, this was checked in the makefiles, skipping the PL build with a warning if no shared library was available. Now this is checked in configure, with an error if no shared library is available. The previous situation arose because in the olden days, the configure options --with-perl, --with-python, and --with-tcl controlled whether frontend interfaces for those languages would be built. The procedural languages were added later, and shared libraries were often not available in the beginning. So it was decided to skip the builds of the procedural languages in those cases. The frontend interfaces have since been removed from the tree, and shared libraries are now available most of the time, so that setup makes much less sense now. Also, the new setup allows contrib modules and pgxs users to rely on the respective PLs being available based on configure flags.
-
- 01 May, 2015 7 commits
-
-
Andrew Dunstan authored
This involves moving perl's CORE library to the end of the include list, and adding other compilation settings that plperl uses. This won't completely fix the breakage currently being seen by gcc builds on Windows, but it will let the build get further, and should be wholly benign, if not beneficial, on *nix.
-
Bruce Momjian authored
pg_dump turns tables into views using a method that was not setting pg_class.relreplident properly. Patch by Marko Tiikkaja. Backpatch through 9.4.
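For context, relreplident records a table's REPLICA IDENTITY setting; a quick, illustrative way to see the column pg_dump was failing to preserve (table name hypothetical):

```sql
CREATE TABLE t (id int PRIMARY KEY);
ALTER TABLE t REPLICA IDENTITY FULL;

-- 'd' = default, 'n' = nothing, 'f' = full, 'i' = index
SELECT relname, relreplident FROM pg_class WHERE oid = 't'::regclass;
```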
-
Robert Haas authored
Tom Lane pointed out that this wasn't done, and asked whether that was intentional. Subsequent discussion was in favor of making the change, so here we go.
-
Robert Haas authored
Foreign data wrappers can use this capability for so-called "join pushdown"; that is, instead of executing two separate foreign scans and then joining the results locally, they can generate a path which performs the join on the remote server and then is scanned locally. This commit does not extend postgres_fdw to take advantage of this capability; it just provides the infrastructure. Custom scan providers can use this in a similar way. Previously, it was only possible for a custom scan provider to scan a single relation. Now, it can scan an entire join tree, provided of course that it knows how to produce the same results that the join would have produced if executed normally. KaiGai Kohei, reviewed by Shigeru Hanada, Ashutosh Bapat, and me.
-
Andres Freund authored
Michael Paquier and myself.
-
Andres Freund authored
ParseCommitRecord() accessed xl_xact_origin directly. But the chunks in the commit record's data only have 4 byte alignment, whereas xl_xact_origin's members require 8 byte alignment on some platforms. Update comments to make note of that and copy the record to stack local storage before reading. With help from Stefan Kaltenbrunner in pinning down the buildfarm and verifying the fix.
-
Heikki Linnakangas authored
pg_rewind looks at the control file to determine the server's timeline. If the standby performs a "fast promotion", the timeline ID in the control file is not updated until the next checkpoint. The startup process requests a checkpoint immediately after promotion, so this is unlikely to be an issue in the real world, but the regression suite ran pg_rewind so quickly after promotion that the checkpoint had not yet completed. Reported by Stephen Frost
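A common workaround implied by the entry above: force a checkpoint on the freshly promoted server so its control file reflects the new timeline before pg_rewind reads it. This is just a sketch of the idea, not part of the commit:

```sql
-- Run on the newly promoted server, before invoking pg_rewind against it.
CHECKPOINT;  -- flushes the updated timeline ID into pg_control
```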
-
- 30 Apr, 2015 5 commits
-
-
Alvaro Herrera authored
In commit 31eae602, some documents were not updated to show the new capability; fix that. Also, the error message you get when CURRENT_USER and SESSION_USER are used in a context that doesn't accept them could be clearer about it being a problem only in those contexts; so add the word "here". Author: Kyotaro HORIGUCHI. His patch submission also included changes to GRANT/REVOKE, but those seemed more controversial, so I left them out. We can reconsider these changes later.
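A short, hedged illustration of the capability commit 31eae602 added (object names are made up); contexts that still don't accept the pseudo-role keywords now say so more clearly:

```sql
-- Accepted since 9.5: pseudo-role keywords in commands such as ALTER ... OWNER.
ALTER TABLE example_tbl OWNER TO CURRENT_USER;
ALTER TABLE example_tbl OWNER TO SESSION_USER;
```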
-
Robert Haas authored
This does four basic things. First, it provides convenience routines to coordinate the startup and shutdown of parallel workers. Second, it synchronizes various pieces of state (e.g. GUCs, combo CID mappings, transaction snapshot) from the parallel group leader to the worker processes. Third, it prohibits various operations that would result in unsafe changes to that state while parallelism is active. Finally, it propagates events that would result in an ErrorResponse, NoticeResponse, or NotifyResponse message being sent to the client from the parallel workers back to the master, from which they can then be sent on to the client. Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke. Suggestions and review from Andres Freund, Heikki Linnakangas, Noah Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
-
Alvaro Herrera authored
We need to create the pg_multixact/offsets file deleted by pg_upgrade much earlier than we originally were: it was in TrimMultiXact(), which runs after we exit recovery, but it actually needs to run earlier than the first call to SetMultiXactIdLimit (before recovery), because that routine already wants to read the first offset segment. Per pg_upgrade trouble report from Jeff Janes. While at it, silence a compiler warning about a pointless assert that an unsigned variable was being tested non-negative. This was a signed constant in Thomas Munro's patch which I changed to unsigned before commit. Pointed out by Andres Freund.
-
Magnus Hagander authored
Amit Langote
-
Peter Eisentraut authored
The "check" target no longer needs to depend on "all", because it now runs "install" directly, which in turn depends on "all". Doing both will cause problems with parallel make, because two builds will run next to each other. Also remove the redirection of the temp-install output into a log file. This was appropriate when this was done from within pg_regress, but now it's just a regular make run, and especially with the above changes this will now take the place of running the "all" target before the test suites. problem report by Jeff Janes, patch in part by Michael Paquier
-
- 29 Apr, 2015 10 commits
-
-
Andres Freund authored
We can't rely on UINT16_MAX being present, which is why we introduced PG_UINT16_MAX... Buildfarm animal bowerbird via Andrew Gierth.
-
Robert Haas authored
This file isn't entirely consistent about whether "on" and "off" should be marked up with <literal>, but it doesn't make much sense to be inconsistent within a single sentence. Etsuro Fujita
-
Robert Haas authored
-
Robert Haas authored
When this code was written, catalog scans were normally performed using SnapshotNow, making special handling necessary here. Now, however, all catalog scans use MVCC snapshots, so we can change these cases to look more like what we do for catalog scans elsewhere in the code. Per discussion with Tom Lane and a reminder from Bruce Momjian.
-
Robert Haas authored
-
Andrew Dunstan authored
Currently regression tests for python 3 are disabled on MSVC, and these tests fail with python 3, too, so we have some work to do to enable both. Meanwhile, all the buildfarm hosts seem to be building with python 2 anyway, so this at least gets us some coverage. Original patch from Michael Paquier, significantly modified by me.
-
Andres Freund authored
When implementing a replication solution on top of logical decoding, two related problems exist: * How to safely keep track of replication progress * How to change replication behavior based on the origin of a row, e.g. to avoid loops in bi-directional replication setups. The solution to these problems, as implemented here, consists of three parts: 1) 'replication origins', which identify nodes in a replication setup. 2) 'replication progress tracking', which remembers, for each replication origin, how far replay has progressed in an efficient and crash-safe manner. 3) The ability to filter out changes performed at the behest of a replication origin during logical decoding; this allows complex replication topologies, e.g. by filtering all replayed changes out. Most of this could also be implemented in "userspace", e.g. by inserting additional rows containing origin information, but that ends up being much less efficient and more complicated. We don't want to require various replication solutions to reimplement logic for this independently. The infrastructure is intended to be generic enough to be reusable. This infrastructure also replaces the 'nodeid' infrastructure of commit timestamps. It is intended to provide all the former capabilities, except that there are only 2^16 different origins; but now they integrate with logical decoding. Additionally, more functionality is accessible via SQL. Since the commit timestamp infrastructure has also been introduced in 9.5 (commit 73c986ad), changing the API is not a problem. For now the number of origins for which the replication progress can be tracked simultaneously is determined by the max_replication_slots GUC. That GUC is not a perfect match to configure this, but there doesn't seem to be sufficient reason to introduce a separate new one. Bumps both catversion and wal page magic. Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer. Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer. Discussion: 20150216002155.GI15326@awork2.anarazel.de, 20140923182422.GA15776@alap3.anarazel.de, 20131114172632.GE7522@alap2.anarazel.de
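A hedged sketch of the SQL-level interface mentioned above ("more functionality is accessible via SQL"); the origin name is hypothetical, and the function names are those shipped in 9.5:

```sql
-- Register an origin and associate this session with it, so the changes
-- it makes are stamped as having been replayed from that origin.
SELECT pg_replication_origin_create('node_a');
SELECT pg_replication_origin_session_setup('node_a');

-- Later: inspect how far replay from each origin has progressed.
SELECT * FROM pg_replication_origin_status;
```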
-
Robert Haas authored
Etsuro Fujita
-
Bruce Momjian authored
Previously both hours and minutes displayed as negative. Report by David Pozsar
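An illustrative call (not taken from the commit) showing the kind of formatting the entry above refers to:

```sql
-- Per the entry above, earlier releases rendered both the hour and the minute
-- fields of a negative interval with a minus sign.
SELECT to_char(interval '-4 hours -30 minutes', 'HH24:MI');
```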
-
Bruce Momjian authored
Patch by Craig Ringer
-