- 30 Sep, 2016 6 commits
-
-
Peter Eisentraut authored
Prototypes for functions implementing V1-callable functions are no longer necessary.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
-
Peter Eisentraut authored
atoi() needs stdlib.h.
strcmp() needs string.h.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
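
A minimal sketch of the fix, with a hypothetical helper (not the file that was changed): include the standard headers that declare the functions instead of relying on other headers to drag them in.

    #include <stdlib.h>     /* declares atoi() */
    #include <string.h>     /* declares strcmp() */

    /* Hypothetical helper; the point is only the explicit includes above. */
    static int
    parse_port_arg(const char *arg)
    {
        if (strcmp(arg, "default") == 0)
            return 5432;
        return atoi(arg);
    }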
-
Peter Eisentraut authored
Using exit() requires stdlib.h, which is not included. Use return instead. Also add return type for main().

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
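
A hedged before/after sketch of the pattern (the program below is invented, not the file that was changed): returning from main() avoids the <stdlib.h> dependency that exit() would introduce, and main() gets an explicit return type.

    #include <stdio.h>

    /*
     * Before: "main()" had no return type and ended with exit(1), which
     * requires <stdlib.h>. After: declare the return type and use return.
     */
    int
    main(void)
    {
        if (printf("hello\n") < 0)
            return 1;
        return 0;
    }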
-
Peter Eisentraut authored
Using offsetof() with a run-time computed argument is not allowed in either C or C++. Apparently, gcc allows it, but g++ doesn't.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
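
A small illustration of the restriction, using a hypothetical struct: the member designator passed to offsetof() must be a compile-time construct, so a run-time array index has to be added separately.

    #include <stddef.h>

    typedef struct Slot
    {
        int    header;
        double values[8];
    } Slot;

    /*
     * Not portable (g++ rejects it, and C doesn't sanction it either):
     *     size_t off = offsetof(Slot, values[n]);
     * Portable equivalent: offset of the array plus the element offset.
     */
    static size_t
    slot_value_offset(int n)
    {
        return offsetof(Slot, values) + n * sizeof(double);
    }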
-
Magnus Hagander authored
There is a small window between when the server closes out the existing segment and when the new one is created. Put a loop around the open call in this case to make sure we wait for the new file to actually appear.
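
A rough sketch of the retry loop, under the assumption that ENOENT simply means the server has not created the new segment yet; the function name, delay, and retry count below are invented for illustration.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Keep retrying open() while the file simply isn't there yet. */
    static int
    open_new_segment(const char *path, int max_tries)
    {
        for (int i = 0; i < max_tries; i++)
        {
            int fd = open(path, O_RDONLY);

            if (fd >= 0)
                return fd;              /* the server has created it */
            if (errno != ENOENT)
                return -1;              /* a real error, give up */
            usleep(100 * 1000);         /* wait 100 ms and look again */
        }
        return -1;                      /* never appeared */
    }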
-
Stephen Frost authored
Now that we track initial privileges on extension objects and changes to those permissions, we can drop the superuser() checks from the various functions which are part of the pgstattuple extension and rely on the GRANT system to control access to those functions.

Since a pg_upgrade will preserve the version of the extension which existed prior to the upgrade, we can't simply modify the existing functions but instead need to create new functions which remove the checks and update the SQL-level functions to use the new functions (and to REVOKE EXECUTE rights on those functions from PUBLIC).

Thanks to Tom and Andres for adding support for extensions to follow update paths (see: 40b449ae), allowing this patch to be much smaller since no new base version script needed to be included.

Approach suggested by Noah. Reviewed by Michael Paquier.
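
A bare skeleton of the shape of a new C entry point, not the actual pgstattuple code (the function name and body are placeholders): the hard-coded superuser() test disappears, and access control moves to GRANT/REVOKE in the extension's update script.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(pgstattuple_v1_5);

    Datum
    pgstattuple_v1_5(PG_FUNCTION_ARGS)
    {
        /*
         * The pre-1.5 entry point began with
         *     if (!superuser())
         *         ereport(ERROR, ...);
         * That check is gone; the update script instead does
         * REVOKE EXECUTE ... FROM PUBLIC, and administrators GRANT as needed.
         */
        PG_RETURN_NULL();       /* placeholder for the unchanged real work */
    }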
-
- 29 Sep, 2016 7 commits
-
-
Peter Eisentraut authored
This was missed in bf5bb2e8, because the code is only visible under PG_FLUSH_DATA_WORKS.
-
Tom Lane authored
This patch just exposes COPY's FROM PROGRAM option in contrib/file_fdw.

There don't seem to be any security issues with that that are any worse than what already exist with file_fdw and COPY; as in the existing cases, only superusers are allowed to control what gets executed.

A regression test case might be nice here, but choosing a 100% portable command to run is hard. (We haven't got a test for COPY FROM PROGRAM itself, either.)

Corey Huinker and Adam Gomaa, reviewed by Amit Langote

Discussion: <CADkLM=dGDGmaEiZ=UDepzumWg-CVn7r8MHPjr2NArj8S3TsROQ@mail.gmail.com>
-
Peter Eisentraut authored
On slow machines, this greatly reduces the I/O pressure induced by the tests.

From: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
This is useful for testing, similar to initdb's --nosync.

From: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
Several places weren't careful about fsyncing in the way required. See 1d4a0ab1 and 606e0f98 for details about required fsyncs.

This adds a couple of functions in src/common/ that have an equivalent in the backend: durable_rename(), fsync_parent_path().

From: Michael Paquier <michael.paquier@gmail.com>
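
A minimal sketch of the durable-rename idea, not the actual src/common implementation (which handles more cases and does proper error reporting); the helper name and buffer size are illustrative. The point is that the renamed file and then its parent directory are both fsynced, so the new name itself is crash-safe.

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int
    rename_durably(const char *oldfile, const char *newfile)
    {
        char parent[1024];
        int  fd;

        if (rename(oldfile, newfile) < 0)
            return -1;

        /* make the file contents durable under the new name */
        fd = open(newfile, O_RDONLY);
        if (fd < 0 || fsync(fd) < 0)
        {
            if (fd >= 0)
                close(fd);
            return -1;
        }
        close(fd);

        /* make the directory entry durable as well */
        strncpy(parent, newfile, sizeof(parent) - 1);
        parent[sizeof(parent) - 1] = '\0';
        fd = open(dirname(parent), O_RDONLY);
        if (fd < 0 || fsync(fd) < 0)
        {
            if (fd >= 0)
                close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }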
-
Peter Eisentraut authored
The intention is to use those in other utilities such as pg_basebackup and pg_receivexlog.

From: Michael Paquier <michael.paquier@gmail.com>
-
Heikki Linnakangas authored
That makes the view a lot less disruptive to use on a production system. Without the locks, you don't get a consistent snapshot across all buffers, but that's OK. It wasn't a very useful guarantee in practice.

Ivan Kartyshov, reviewed by Tomas Vondra and Robert Haas.

Discussion: <f9d6cab2-73a7-7a84-55a8-07dcb8516ae5@postgrespro.ru>
-
- 28 Sep, 2016 9 commits
-
-
Peter Eisentraut authored
The list of files and directories that pg_basebackup excludes from the backup was somewhat incomplete and unorganized. Change that by driving the exclusions from tables. Clean up some code around it. Also document the exclusions in more detail so that users of pg_start_backup can make use of them as well.

The contents of these directories are now excluded from the backup: pg_dynshmem, pg_notify, pg_serial, pg_snapshots, pg_subtrans

Also fix a bug where pg_replslot or pg_stat_tmp being a symlink would cause a corrupt tar header to be created. Now such symlinks are included in the backup as empty directories.

Bug found by Ashutosh Sharma <ashu.coek88@gmail.com>.

From: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
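
A hypothetical sketch of what "driven from tables" means here: a static array of directory names with a per-entry flag, consulted by a single lookup helper. The struct, field, and function names are invented; the real list lives in the backend's basebackup code.

    #include <stdbool.h>
    #include <string.h>

    struct exclude_dir
    {
        const char *name;           /* directory under PGDATA to skip */
        bool        keep_as_empty;  /* recreate it as an empty directory */
    };

    static const struct exclude_dir excluded_dirs[] = {
        {"pg_dynshmem", true},
        {"pg_notify", true},
        {"pg_serial", true},
        {"pg_snapshots", true},
        {"pg_stat_tmp", true},
        {"pg_subtrans", true},
        {NULL, false}
    };

    static bool
    dir_is_excluded(const char *name)
    {
        for (int i = 0; excluded_dirs[i].name != NULL; i++)
        {
            if (strcmp(name, excluded_dirs[i].name) == 0)
                return true;
        }
        return false;
    }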
-
Alvaro Herrera authored
Reported by Peter Eisentraut. Coding suggested by Tom Lane.
-
Tom Lane authored
Add a validity flag to DCHCacheEntry and NUMCacheEntry entries, and do not set it true until after we've parsed the supplied format string. This allows dealing with possible errors while parsing the format without the baroque hack that was there before (which only covered errors within NUMDesc_prepare, anyway). We can get rid of the PG_TRY in NUMDesc_prepare, as well as last_NUMCacheEntry and NUM_cache_remove. (Essentially, this reverts commit ff783fba in favor of a less fragile solution; the problems with that approach are well illustrated by later hacking such as 55f927a4.)

In passing, define the size of these caches as DCH_CACHE_ENTRIES not DCH_CACHE_FIELDS + 1 (whoever thought that was a good definition?) and likewise for the NUM cache. Also const-ify format string parameters where convenient, and merge duplicated cache lookup logic.

This is primarily driven by a proposed patch from Artur Zakirov, which introduced some ereport's into format string parsing for the datetime case. He proposed preventing the creation of invalid cache entries by parsing the format string first into a local-variable array, and then copying that to a cache entry. That seemed a bit ugly to me, and anyway randomly different from the way the identical problem had been solved for the numeric case. Let's make the two sets of code more similar not less so.

I'm not sure whether we'll adopt the new error conditions Artur proposes, but this patch seems like good code cleanup and future-proofing in any case. The existing code is critically (and undocumented-ly) dependent on no elog being thrown out of several nontrivial functions, which is trouble waiting to happen, though it doesn't seem to be actively broken today.

Discussion: <b2a39359-3282-b402-f4a3-057aae500ee7@postgrespro.ru>
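
A stripped-down sketch of the validity-flag pattern with hypothetical names (the real entries hold the parsed DCH/NUM descriptions): the entry only becomes eligible for cache hits after parsing has finished, so an error thrown mid-parse cannot leave a half-built entry that later lookups would trust.

    #include <stdbool.h>
    #include <string.h>

    typedef struct FormatCacheEntry
    {
        char formatstr[128];    /* cache key: the raw format string */
        bool valid;             /* true only once parsing completed */
    } FormatCacheEntry;

    static void
    cache_entry_fill(FormatCacheEntry *entry, const char *fmt)
    {
        entry->valid = false;   /* invalidate before modifying the entry */
        strncpy(entry->formatstr, fmt, sizeof(entry->formatstr) - 1);
        entry->formatstr[sizeof(entry->formatstr) - 1] = '\0';

        /*
         * ... parse fmt into the entry here; if an error is thrown at this
         * point, the entry stays invalid and lookups simply skip it ...
         */

        entry->valid = true;    /* now it may be returned from the cache */
    }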
-
Tom Lane authored
Historically, something like to_date('2009-06-40','YYYY-MM-DD') would return '2009-07-10' because there was no prohibition on out-of-range month or day numbers. This has been widely panned, and it also turns out that Oracle throws an error in such cases. Since these functions are nominally Oracle-compatibility features, let's change that.

There's no particular restriction on year (modulo the fact that the scanner may not believe that more than 4 digits are year digits, a matter to be addressed separately if at all). But we now check month, day, hour, minute, second, and fractional-second fields, as well as day-of-year and second-of-day fields if those are used.

Currently, no checks are made on ISO-8601-style week numbers or day numbers; it's not very clear what the appropriate rules would be there, and they're probably so little used that it's not worth sweating over.

Artur Zakirov, reviewed by Amul Sul, further adjustments by me

Discussion: <1873520224.1784572.1465833145330.JavaMail.yahoo@mail.yahoo.com>
See-Also: <57786490.9010201@wars-nicht.de>
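
A rough, hypothetical illustration of the kind of per-field range check now applied to the decoded values; the real code lives in the formatting routines and also covers hour, minute, second, fractional-second, day-of-year, and second-of-day fields.

    #include <stdbool.h>

    static bool
    month_day_in_range(int month, int day)
    {
        static const int month_days[] =
            {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

        if (month < 1 || month > 12)
            return false;   /* to_date('2009-13-01', 'YYYY-MM-DD') now errors */
        if (day < 1 || day > month_days[month - 1])
            return false;   /* to_date('2009-06-40', 'YYYY-MM-DD') now errors */
        return true;
    }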
-
Peter Eisentraut authored
-
Robert Haas authored
Without this, statistics changes accumulated by the worker never get reported to the stats collector, which is bad.

Julien Rouhaud
-
Peter Eisentraut authored
The previous patch broke this by returning NULL for a failed CRC check, which pg_controldata would then try to read. Fix by returning the result of the CRC check in a separate argument.

Michael Paquier and myself
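
A self-contained sketch of the revised interface shape, using a stand-in type and function so it compiles on its own; the real ControlFileData and get_controlfile() live in the PostgreSQL sources and their exact signatures may differ. The point is that the data is always returned and the CRC verdict comes back through an output parameter the caller must check.

    #include <stdbool.h>
    #include <stdio.h>

    struct ControlFileData          /* stand-in for the real struct */
    {
        unsigned int pg_control_version;
    };

    static struct ControlFileData *
    get_controlfile_stub(const char *DataDir, bool *crc_ok)
    {
        static struct ControlFileData cf = {960};

        (void) DataDir;
        *crc_ok = true;             /* a real implementation verifies the CRC here */
        return &cf;                 /* data is returned even if the CRC were bad */
    }

    int
    main(void)
    {
        bool crc_ok;
        struct ControlFileData *cf = get_controlfile_stub("/path/to/pgdata", &crc_ok);

        if (!crc_ok)
            fprintf(stderr, "WARNING: calculated CRC checksum does not match value stored in file\n");
        printf("pg_control version number: %u\n", cf->pg_control_version);
        return 0;
    }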
-
Robert Haas authored
Commit 3fe3511d introduced a new case into this function, but neglected to ensure that the "ondisk" pointer got updated after a possible reallocation as the code does in other cases.

Stas Kelvich, per diagnosis by Konstantin Knizhnik.
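
A generic illustration of the bug class, with realloc() standing in for the backend's allocator and invented names: any pointer computed from the old array base is stale once the array may have been reallocated, and must be re-derived from the new base.

    #include <stdlib.h>

    struct entry
    {
        int key;
    };

    static struct entry *
    add_entry(struct entry **array, size_t *capacity, size_t *count)
    {
        if (*count == *capacity)
        {
            size_t        newcap = *capacity ? *capacity * 2 : 8;
            struct entry *newarray = realloc(*array, newcap * sizeof(struct entry));

            if (newarray == NULL)
                return NULL;
            *array = newarray;      /* the array may have moved */
            *capacity = newcap;
        }

        /*
         * Wrong: reuse a pointer saved before the reallocation above.
         * Right: recompute it from the (possibly new) base address.
         */
        return &(*array)[(*count)++];
    }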
-
Heikki Linnakangas authored
This makes the parameter easier to extend, to support other password-based authentication protocols than MD5. (SCRAM is being worked on.)

The GUC still accepts on/off as aliases for "md5" and "plain", although we may want to remove those once we actually add support for another password hash type.

Michael Paquier, reviewed by David Steele, with some further edits by me.

Discussion: <CAB7nPqSMXU35g=W9X74HVeQp0uvgJxvYOuA4A-A3M+0wfEBv-w@mail.gmail.com>
-
- 27 Sep, 2016 6 commits
-
-
Tom Lane authored
Pushing an upper-level restriction clause into an unflattened subquery-in-FROM is okay when the subquery contains no SRFs in its targetlist, or when it does but the SRFs are unreferenced by the clause *and the clause is not volatile*. Otherwise, we're changing the number of times the clause is evaluated, which is bad for volatile quals, and possibly changing the result, since a volatile qual might succeed for some SRF output rows and not others despite not referencing any of the changing columns. (Indeed, if the clause is something like "random() > 0.5", the user is probably expecting exactly that behavior.)

We had most of these restrictions down, but not the one about the upper clause not being volatile. Fix that, and add a regression test to illustrate the expected behavior.

Although this is definitely a bug, it doesn't seem like back-patch material, since possibly some users don't realize that the broken behavior is broken and are relying on what happens now. Also, while the added test is quite cheap in the wake of commit a4c35ea1, it would be much more expensive (or else messier) in older branches.

Per report from Tom van Tilburg.

Discussion: <CAP3PPDiucxYCNev52=YPVkrQAPVF1C5PFWnrQPT7iMzO1fiKFQ@mail.gmail.com>
-
Tom Lane authored
The only field of this struct that other files have any need to touch is the pointer to the TocEntry a worker is working on. (Well, pg_backup_archiver.c is actually looking at workerStatus too, but that can be finessed by specifying that the TocEntry pointer is NULL for a non-busy worker.)

Hence, move out the TocEntry pointers to a separate array within struct ParallelState, and then we can make struct ParallelSlot private.

I noted the possibility of this previously, but hadn't got round to actually doing it.

Discussion: <1188.1464544443@sss.pgh.pa.us>
-
Tom Lane authored
The existing APIs for creating and parsing command and status messages are rather messy; for example, archive-format modules have to provide code for constructing command messages, which is entirely pointless since the code to read them is hard-wired in WaitForCommands() and hence no format-specific variation is actually possible. But there's little foreseeable reason to need format-specific variation anyway. The situation for status messages is no better; at least those are both constructed and parsed by format-specific code, but said code is quite redundant since there's no actual need for format-specific variation.

To add insult to injury, the first API involves returning pointers to static buffers, which is bad, while the second involves returning pointers to malloc'd strings, which is safer but randomly inconsistent.

Hence, get rid of the MasterStartParallelItem and MasterEndParallelItem APIs, and instead write centralized functions that construct and parse command and status messages. If we ever do need more flexibility, these functions can be the standard implementations of format-specific callback methods, but that's a long way off if it ever happens.

Tom Lane, reviewed by Kevin Grittner

Discussion: <17340.1464465717@sss.pgh.pa.us>
-
Tom Lane authored
The ListenToWorkers/ReapWorkerStatus APIs were messy and hard to use. Instead, make DispatchJobForTocEntry register a callback function that will take care of state cleanup, doing whatever had been done by the caller of ReapWorkerStatus in the old design. (This callback is essentially just the old mark_work_done function in the restore case, and a trivial test for worker failure in the dump case.) Then we can have ListenToWorkers call the callback immediately on receipt of a status message, and return the worker to WRKR_IDLE state; so the WRKR_FINISHED state goes away.

This allows us to design a unified wait-for-worker-messages loop: WaitForWorkers replaces EnsureIdleWorker and EnsureWorkersFinished as well as the mess in restore_toc_entries_parallel. Also, we no longer need the fragile API spec that the caller of DispatchJobForTocEntry is responsible for ensuring there's an idle worker, since DispatchJobForTocEntry can just wait until there is one.

In passing, I got rid of the ParallelArgs struct, which was a net negative in terms of notational verboseness, and didn't seem to be providing any noticeable amount of abstraction either.

Tom Lane, reviewed by Kevin Grittner

Discussion: <1188.1464544443@sss.pgh.pa.us>
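
A hypothetical, much-simplified sketch of the callback-per-job shape described above (types and names invented, not the pg_dump code): each dispatched job carries its own completion callback, and the single wait loop invokes it as soon as a status message arrives, so no separate "finished" worker state is needed.

    typedef void (*JobCompleteFn) (void *job_data, int status);

    struct dispatched_job
    {
        void          *job_data;        /* e.g. the TocEntry being processed */
        JobCompleteFn  on_complete;     /* registered at dispatch time */
    };

    /* Called from the unified wait loop when a worker reports back. */
    static void
    handle_status_message(struct dispatched_job *job, int status)
    {
        job->on_complete(job->job_data, status);    /* caller-specific cleanup */
        /* ...and the worker goes straight back to the idle pool... */
    }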
-
Tom Lane authored
It's always been possible to create a zero-dimensional cube by converting from a zero-length float8 array, but cube_in failed to accept the '()' representation that cube_out produced for that case, resulting in a dump/reload hazard. Make it accept the case. Also fix a couple of other places that didn't behave sanely for zero-dimensional cubes: cube_size would produce 1.0 when surely the answer should be 0.0, and g_cube_distance risked a divide-by-zero failure.

Likewise, it's always been possible to create cubes containing float8 infinity or NaN coordinate values, but cube_in couldn't parse such input, and cube_out produced platform-dependent spellings of the values. Convert them to use float8in_internal and float8out_internal so that the behavior will be the same as for float8, as we recently did for the core geometric types (cf commit 50861cd6). As in that commit, I don't pretend that this patch fixes all insane corner-case behaviors that may exist for NaNs, but it's a step forward. (This change allows removal of the separate cube_1.out and cube_3.out expected-files, as the platform dependency that previously required them is now gone: an underflowing coordinate value will now produce an error not plus or minus zero.)

Make errors from cube_in follow project conventions as to spelling ("invalid input syntax for cube" not "bad cube representation") and errcode (INVALID_TEXT_REPRESENTATION not SYNTAX_ERROR).

Also a few marginal code cleanups and comment improvements.

Tom Lane, reviewed by Amul Sul

Discussion: <15085.1472494782@sss.pgh.pa.us>
-
Alvaro Herrera authored
<sys/select.h> is required by POSIX.1-2001 to get the prototype of select(2), but nearly no systems enforce that because older standards let you get away with including some other headers. Recent OpenBSD hacking has removed that frail touch of friendliness, however, which broke some compiles; fix all the way back to 9.1 by adding the required standard header. Only vacuumdb.c was reported to fail, but it seems easier to fix the whole lot in one fell swoop.

Per bug #14334 by Sean Farrell.
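
A tiny illustrative compilation unit (the helper is hypothetical) showing the point: include <sys/select.h> explicitly instead of relying on other headers to declare select(2) and fd_set.

    #include <sys/select.h>     /* POSIX.1-2001 home of select(2) and fd_set */
    #include <sys/time.h>

    static int
    wait_for_input(int sock)
    {
        fd_set         rfds;
        struct timeval tv = {1, 0};     /* wait up to one second */

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        return select(sock + 1, &rfds, NULL, NULL, &tv);
    }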
-
- 26 Sep, 2016 1 commit
-
-
Peter Eisentraut authored
-
- 27 Sep, 2016 1 commit
-
-
Tom Lane authored
The result of FD_ISSET() doesn't necessarily fit in a bool, though assigning it to one might accidentally work depending on platform and which socket FD number is being inquired of. Rewrite to test it with if(), rather than making any specific assumption about the result width, to match the way every other such call in PG is written.

Don't break out of the input_mask-filling loop after finding the first client that we're waiting for results from. That mostly breaks parallel query management.

Also, if we choose not to call select(), be sure to clear out any bits the mask-filling loop might have set, so that we don't accidentally call doCustom for clients we don't know have input. Doing so would likely be harmless, but it's a waste of cycles and doesn't seem to be intended.

Make this_usec wide enough. (Yeah, the value would usually fit in an int, but then why are we using int64 everywhere else?)

Minor cosmetic adjustments, mostly comment improvements.

Problems introduced by commit 12788ae4. The first issue was discovered by buildfarm testing, the others by code review.
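
A small sketch of the FD_ISSET() point, using a hypothetical helper: the macro's non-zero result may not survive narrowing into a one-byte bool (PostgreSQL's frontend code at the time typedef'd bool as char), so the result is tested directly instead of being stored.

    #include <sys/select.h>

    /*
     * Risky with a char-sized bool: "bool ready = FD_ISSET(sock, mask);"
     * could truncate a value like 0x100 to 0. Testing the result directly
     * makes no assumption about its width.
     */
    static int
    socket_has_input(int sock, fd_set *input_mask)
    {
        if (FD_ISSET(sock, input_mask))
            return 1;
        return 0;
    }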
-
- 26 Sep, 2016 3 commits
-
-
Tom Lane authored
We had thirty different GIN array opclasses sharing the same operators and support functions. That still didn't cover all the built-in types, nor did it cover arrays of extension-added types. What we want is a single polymorphic opclass for "anyarray". There were two missing features needed to make this possible:

1. We have to be able to declare the index storage type as ANYELEMENT when the opclass is declared to index ANYARRAY. This just takes a few more lines in index_create(). Although this currently seems of use only for GIN, there's no reason to make index_create() restrict it to that.

2. We have to be able to identify the proper GIN compare function for the index storage type. This patch proceeds by making the compare function optional in GIN opclass definitions, and specifying that the default btree comparison function for the index storage type will be looked up when the opclass omits it. Again, that seems pretty generically useful.

Since the comparison function lookup is done in initGinState(), making use of the second feature adds an additional cache lookup to GIN index access setup. It seems unlikely that that would be very noticeable given the other costs involved, but maybe at some point we should consider making GinState data persist longer than it now does --- we could keep it in the index relcache entry, perhaps.

Rather fortuitously, we don't seem to need to do anything to get this change to play nice with dump/reload or pg_upgrade scenarios: the new opclass definition is automatically selected to replace existing index definitions, and the on-disk data remains compatible. Also, if a user has created a custom opclass definition for a non-builtin type, this doesn't break that, since CREATE INDEX will prefer an exact match to opcintype over a match to ANYARRAY. However, if there's anyone out there with handwritten DDL that explicitly specifies _bool_ops or one of the other replaced opclass names, they'll need to adjust that.

Tom Lane, reviewed by Enrique Meneses

Discussion: <14436.1470940379@sss.pgh.pa.us>
-
Heikki Linnakangas authored
The doCustom() function had grown into quite a mess. Rewrite it, in a more explicit state machine style, for readability.

This also fixes one minor bug: if a script consisted entirely of meta commands, doCustom() never returned to the caller, so progress reports with the -P option were not printed. I don't want to backpatch this refactoring, and the bug is quite insignificant, so only commit this to master, and leave the bug unfixed in back-branches.

Review and original bug report by Fabien Coelho.

Discussion: <alpine.DEB.2.20.1607090850120.3412@sto>
-
- 25 Sep, 2016 1 commit
-
-
Tom Lane authored
We weren't terribly consistent about whether to call Apple's OS "OS X" or "Mac OS X", and the former is probably confusing to people who aren't Apple users. Now that Apple has rebranded it "macOS", follow their lead to establish a consistent naming pattern. Also, avoid the use of the ancient project name "Darwin", except as the port code name which does not seem desirable to change. (In short, this patch touches documentation and comments, but no actual code.)

I didn't touch contrib/start-scripts/osx/, either. I suspect those are obsolete and due for a rewrite, anyway.

I dithered about whether to apply this edit to old release notes, but those were responsible for quite a lot of the inconsistencies, so I ended up changing them too. Anyway, Apple's being ahistorical about this, so why shouldn't we be?
-
- 24 Sep, 2016 1 commit
-
-
Tom Lane authored
Set release date, document a few recent commits, do one last pass of copy-editing.
-
- 23 Sep, 2016 5 commits
-
-
Tom Lane authored
When configured with --enable-tap-tests, "make install" will now install the Perl support files for TAP testing where PGXS will find them. This allows extensions to rely on $(prove_check) even when being built out-of-tree. Back-patch to 9.4 where we first started to support TAP testing, to reduce the number of cases extension makefiles need to consider.

Craig Ringer

Discussion: <CAMsr+YFXv+2qne6xJW7z_25mYBtktRX5rpkrgrb+DRgQ_FxgHQ@mail.gmail.com>
-
Tom Lane authored
These worked as-is until around 7.0, but fail in newer versions because there are more operators named "#". Besides, it's a bit inconsistent that only two of the examples on this page lack type names on their constants.

Report: <20160923081530.1517.75670@wrigleys.postgresql.org>
-
Tom Lane authored
Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to dumping range constructor functions if they are part of an extension and we're in binary-upgrade mode. Actually, we don't want to dump them separately even then, since CREATE TYPE AS RANGE will create the range's constructor functions regardless.

Per report from Andrew Dunstan. It looks like this mistake was introduced by me, in commit b985d487, in perhaps-overzealous refactoring to reduce code duplication. I'm suitably embarrassed.

Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>
-
Tom Lane authored
Apparent copy-and-pasteo in standby_desc_invalidations() had two entries for msg->id == SHAREDINVALRELMAP_ID.

Aleksander Alekseev

Discussion: <20160923090814.GB1238@e733>
-
Tom Lane authored
We must test GetLastError() even when CreateFileMapping() returns a non-null handle. If that value were left over from some previous system call, we might be fooled into thinking the segment already existed. Experimentation on Windows 7 suggests that CreateFileMapping() clears the error code on success, but it is not documented to do so, so let's not rely on that happening in all Windows releases.

Amit Kapila

Discussion: <20811.1474390987@sss.pgh.pa.us>
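
A sketch of the defensive pattern, not the actual shared-memory code (the function name and parameters are illustrative): reset the thread's error code before the call, then treat ERROR_ALREADY_EXISTS as "someone else created it" even though a valid handle came back.

    #include <windows.h>

    static HANDLE
    create_segment(const char *name, DWORD size_bytes)
    {
        HANDLE hmap;

        SetLastError(0);        /* don't trust a stale value from an earlier call */
        hmap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                  0, size_bytes, name);
        if (hmap == NULL)
            return NULL;        /* genuine failure */
        if (GetLastError() == ERROR_ALREADY_EXISTS)
        {
            CloseHandle(hmap);  /* the segment existed already; not ours */
            return NULL;
        }
        return hmap;
    }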
-