- 18 Feb, 2015 3 commits
-
-
Alvaro Herrera authored
We were neglecting to schema-qualify them. Backpatch to 9.3, where object identities were introduced as a concept by commit f8348ea3.
-
Tom Lane authored
Somebody apparently threw darts at the code to decide where to insert these. They certainly didn't proceed by adding them where other similar SETs were handled. This at least broke pg_restore, and perhaps other use-cases too.
-
Tom Lane authored
cfopen() and cfopen_write() failed to pass the compression level through to zlib, so that you always got the default compression level if you got any at all. In passing, also fix these and related functions so that the correct errno is reliably returned on failure; the original coding supposes that free() cannot change errno, which is untrue on at least some platforms. Per bug #12779 from Christoph Berg. Back-patch to 9.1 where the faulty code was introduced. Michael Paquier
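A minimal sketch of the errno-preservation pattern at issue, using a hypothetical open_compressed() helper rather than the real cfopen() code:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical helper, not the actual pg_dump code: save errno before
 * calling free(), because free() is allowed to change errno on some
 * platforms, and the caller needs the errno from the fopen() failure.
 */
static FILE *
open_compressed(const char *path)
{
    char   *fname = strdup(path);
    FILE   *fp;

    if (fname == NULL)
        return NULL;

    fp = fopen(fname, "rb");
    if (fp == NULL)
    {
        int     save_errno = errno;     /* free() may clobber errno */

        free(fname);
        errno = save_errno;             /* report the fopen() failure */
        return NULL;
    }
    free(fname);
    return fp;
}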
-
- 17 Feb, 2015 5 commits
-
-
Tom Lane authored
The previous coding in EXPLAIN always labeled a ModifyTable node with the name of the target table affected by its first child plan. When originally written, this was necessarily the parent table of the inheritance tree, so everything was unconfusing. But when we added NO INHERIT constraints, it became possible for the parent table to be deleted from the plan by constraint exclusion while still leaving child tables present. This led to the ModifyTable plan node being labeled with the first surviving child, which was deemed confusing. Fix it by retaining the parent table's RT index in a new field in ModifyTable. Etsuro Fujita, reviewed by Ashutosh Bapat and myself
-
Heikki Linnakangas authored
After removal, the next_sibling pointer of a node was sometimes incorrectly left pointing to another node in the heap, which meant that a node was sometimes linked twice into the heap. Surprisingly that didn't cause any crashes in my testing, but it was clearly wrong and could easily segfault in other scenarios. Also always keep the prev_or_parent pointer NULL on the root node. That was not a correctness issue AFAICS, but let's be tidy. Add a debugging function to dump the contents of a pairing heap as a string. It's #ifdef'd out, as it's not used for anything in any normal code, but it was highly useful in debugging this. Let's keep it handy for future reference.
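A minimal sketch of the pointer hygiene involved, with field names following PostgreSQL's pairingheap_node (the unlink logic below is illustrative, not the committed code):

typedef struct pairingheap_node
{
    struct pairingheap_node *first_child;
    struct pairingheap_node *next_sibling;
    struct pairingheap_node *prev_or_parent;
} pairingheap_node;

/*
 * Unlink 'node' from its parent's child list.  The bug class described
 * above: forgetting the final resets can leave a stale next_sibling
 * pointer behind, so the node ends up linked into the heap twice.
 */
static void
unlink_node(pairingheap_node *node)
{
    pairingheap_node *prev = node->prev_or_parent;

    if (prev->first_child == node)      /* prev is the parent */
        prev->first_child = node->next_sibling;
    else                                /* prev is an older sibling */
        prev->next_sibling = node->next_sibling;

    if (node->next_sibling)
        node->next_sibling->prev_or_parent = prev;

    /* don't leave stale links behind */
    node->next_sibling = NULL;
    node->prev_or_parent = NULL;
}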
-
Heikki Linnakangas authored
The part of the comparison function that was supposed to keep heap tuples ahead of index items was backwards. It would not lead to incorrect results, but it is more efficient to return heap tuples first, before scanning more index pages, when both have the same distance. Alexander Korotkov
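A sketch of the intended tie-break, with illustrative names (the real comparison function lives in the GiST ordered-scan code):

#include <stdbool.h>

typedef struct ScanItem
{
    double      distance;
    bool        is_index_page;
} ScanItem;

/*
 * Order items by distance; on equal distance, heap tuples sort ahead
 * of index pages, so already-fetched results are returned before more
 * index pages are scanned.
 */
static int
scanitem_cmp(const ScanItem *a, const ScanItem *b)
{
    if (a->distance < b->distance)
        return -1;
    if (a->distance > b->distance)
        return 1;
    if (a->is_index_page != b->is_index_page)
        return a->is_index_page ? 1 : -1;   /* heap tuple first */
    return 0;
}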
-
Tom Lane authored
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only looked back as far as commit f3aec2c7 where the breakage occurred (which is why I thought the IPv4-in-IPv6 business was undocumented). But actually the logic dates back to commit 3c9bb888 and was simply broken by erroneous refactoring in the later commit. A bit of archives excavation shows that we added the whole business in response to a report that some 2003-era Linux kernels would report IPv4 connections as having IPv4-in-IPv6 addresses. The fact that we've had no complaints since 9.0 seems to be sufficient confirmation that no modern kernels do that, so let's just rip it all out rather than trying to fix it. Do this in the back branches too, thus essentially deciding that our effective behavior since 9.0 is correct. If there are any platforms on which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix would have made for a subtle and potentially security-sensitive change in the effective meaning of IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases. So let's let the post-9.0 behavior stand, and change the documentation to match it. In passing, I failed to resist the temptation to wordsmith the description of pg_hba.conf IPv4 and IPv6 address entries a bit. A lot of this text hasn't been touched since we were IPv4-only.
-
Robert Haas authored
Avoid losing errno if readdir() fails and closedir() works. Consistently return 4 rather than 3 when both a lost+found directory and other files are found, instead of returning one value or the other depending on the order of the directory listing. Update comments to match the actual behavior. These oversights date to commits 6f03927f and 17f15239. Marco Nenciarini
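The errno discipline here is easy to get wrong; a minimal sketch, with a hypothetical scan_directory() standing in for the real code:

#include <dirent.h>
#include <errno.h>

static int
scan_directory(const char *path)
{
    DIR    *dir;
    struct dirent *de;
    int     result = 0;

    dir = opendir(path);
    if (dir == NULL)
        return -1;

    errno = 0;                  /* so we can detect readdir() failure */
    while ((de = readdir(dir)) != NULL)
    {
        /* ... examine each entry, adjust 'result' ... */
    }

    if (errno != 0)
    {
        int     save_errno = errno;     /* readdir() failed */

        (void) closedir(dir);           /* may change errno even on success */
        errno = save_errno;
        return -1;
    }

    if (closedir(dir) != 0)
        return -1;
    return result;
}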
-
- 16 Feb, 2015 9 commits
-
-
Kevin Grittner authored
Where these checks were being done, there was no code path that could leave them NULL. Michael Paquier per Coverity
-
Tom Lane authored
The previous coding copied garbage into a local variable, pretty much ensuring that the intended test of an IPv6 connection address against a promoted IPv4 address from pg_hba.conf would never match. The lack of field complaints likely indicates that nobody realized this was supposed to work, which is unsurprising considering that no user-facing docs suggest it should work. In principle this could have led to a SIGSEGV due to reading off the end of memory, but since the source address would have pointed to somewhere in the function's stack frame, that's quite unlikely. What led to discovery of the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade, which is probably because he is now running a system in which memcpy raises abort() upon detecting overlapping source and destination areas. (You'd have to additionally suppose some things about the stack frame layout to arrive at this conclusion, but it seems plausible.) This has been broken since the code was added, in commit f3aec2c7, so back-patch to all supported branches.
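For reference, the memcpy() hazard in play; a minimal sketch (shift_left() is an illustrative name, not code from the tree):

#include <string.h>

/*
 * memcpy() has undefined behavior when source and destination overlap,
 * and some modern C libraries abort() when they detect it.  memmove()
 * is defined for overlapping ranges.
 */
static void
shift_left(char *buf, size_t len, size_t by)
{
    /* WRONG if the regions overlap: memcpy(buf, buf + by, len - by); */
    memmove(buf, buf + by, len - by);
}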
-
Heikki Linnakangas authored
The comment was copy-pasted from the backend code along with the implementation, but libpq has different reasons for using the BIO.
-
Heikki Linnakangas authored
This reverts the removal of the call in commit 272923a0. It turns out it wasn't superfluous after all: without it, renegotiation fails if a client certificate was used. The rest of the changes in that commit are still OK and are not reverted. Per investigation of bug #12769 by Arne Scheffer, although this doesn't fix the reported bug yet.
-
Tom Lane authored
exec_stmt_return() and exec_stmt_return_next() have fast-path code for handling a simple variable reference (i.e. "return var") without going through the full expression evaluation machinery. For some reason, pl_gram.y was under the impression that this fast path only applied for record/row variables; but in reality code for handling regular scalar variables has been there all along. Adjusting the logic to allow that code to be used actually results in a net savings of code in pl_gram.y (by eliminating some redundancy), and it buys a measurable though not very impressive amount of speedup. Noted while fooling with my expanded-array patch, wherein this makes a much bigger difference because it enables returning an expanded array variable without an extra flattening step. But AFAICS this is a win regardless, so commit it separately.
-
Heikki Linnakangas authored
All the other certificates were created to be valid for 10000 days, because we don't want to have to recreate them. But I missed the root CA cert, and the pre-created certificates included in the repository expired in January. Fix, and re-create all the certificates.
-
Tom Lane authored
The four functions array_ref, array_set, array_get_slice, array_set_slice have traditionally declared their array inputs and results as being of type "ArrayType *". This is a lie, and has been since Berkeley days, because they actually also support "fixed-length array" types such as "name" and "point"; not to mention that the inputs could be toasted. These values should be declared Datum instead to avoid confusion. The current coding already risks possible misoptimization by compilers, and it'll get worse when "expanded" array representations become a valid alternative. However, there's a fair amount of code using array_ref and array_set with arrays that *are* known to be ArrayType structures, and there might be more such places in third-party code. Rather than cluttering those call sites with PointerGetDatum/DatumGetArrayTypeP cruft, what I did was to rename the existing functions to array_get_element/array_set_element, fix their signatures, then reincarnate array_ref/array_set as backwards compatibility wrappers. array_get_slice/array_set_slice have no such constituency in the core code, and probably not in third-party code either, so I just changed their APIs.
-
Fujii Masao authored
Commit 40bede54 moved pg_lzcompress.c to src/common, but forgot to update its path in the docs. This commit fixes that oversight.
-
Tom Lane authored
In commit bf7ca158 I introduced an assumption that an RTE referenced by a whole-row Var must have a valid eref field. This is false for RTEs constructed by DoCopy, and there are other places taking similar shortcuts. Perhaps we should make all those places go through addRangeTableEntryForRelation or its siblings instead of having ad-hoc logic, but the most reliable fix seems to be to make the new code in ExecEvalWholeRowVar cope if there's no eref. We can reasonably assume that there's no need to insert column aliases if no aliases were provided. Add a regression test case covering this, and also verifying that a sane column name is in fact available in this situation. Although the known case only crashes in 9.4 and HEAD, it seems prudent to back-patch the code change to 9.2, since all the ingredients for a similar failure exist in the variant patch applied to 9.3 and 9.2. Per report from Jean-Pierre Pelletier.
-
- 15 Feb, 2015 2 commits
-
-
Andrew Dunstan authored
-
Peter Eisentraut authored
Before, it was writing the processed files into the input directory, which is incorrect in a vpath build.
-
- 14 Feb, 2015 1 commit
-
-
Tom Lane authored
We can't really fix the problem that the result is defined to depend on random(), so it is still going to fail the "unstable input conversion" test in parse_type.c. However, we can at least satisfy valgrind. (It looks like this code used to be valgrind-clean, actually, until somebody did a careless s/strncpy/strlcpy/g on it.) In passing, let's just make real sure that chkpass_out doesn't overrun its output buffer. No need for backpatch, I think, since this is just to satisfy debugging tools. Asif Naeem
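The underlying difference is worth spelling out; a minimal sketch (PostgreSQL supplies strlcpy() via src/port on platforms that lack it):

#include <string.h>

static void
example(void)
{
    char    buf[16];

    /* strncpy() zero-fills the rest of buf: fully initialized */
    strncpy(buf, "abc", sizeof(buf));

    /*
     * strlcpy() copies "abc" plus a NUL and stops, so buf[4..15] stay
     * uninitialized; valgrind complains if those bytes are later read.
     */
    strlcpy(buf, "abc", sizeof(buf));
}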
-
- 13 Feb, 2015 3 commits
-
-
Heikki Linnakangas authored
Rob Rowan. Backpatch to all supported versions, like the patch that added the broken #ifdef.
-
Heikki Linnakangas authored
The client socket is always in non-blocking mode, and if we actually want blocking behaviour, we emulate it by sleeping and retrying. But we had retry loops at different layers for reads and writes, which was confusing. To simplify, remove all the sleeping and retrying code from the lower levels (be_tls_read, secure_raw_read, and secure_raw_write) and put all the logic in secure_read() and secure_write().
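Schematically, the consolidated approach looks like this; a heavily simplified sketch, with a hypothetical wait_for_socket_ready() standing in for the real latch machinery:

#include <errno.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/socket.h>

extern void wait_for_socket_ready(int sock, bool forRead);  /* hypothetical */

/*
 * Blocking emulation lives at this one level only: the lower layer just
 * reports EWOULDBLOCK/EAGAIN, and we sleep and retry here.
 */
static ssize_t
secure_read_sketch(int sock, void *ptr, size_t len)
{
    ssize_t     n;

retry:
    n = recv(sock, ptr, len, 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
    {
        wait_for_socket_ready(sock, true);      /* sleep until readable */
        goto retry;
    }
    return n;
}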
-
Heikki Linnakangas authored
At least in all modern versions of OpenSSL, it is enough to call SSL_renegotiate() once, and then forget about it. Subsequent SSL_write() and SSL_read() calls will finish the handshake. The SSL_set_session_id_context() call is unnecessary too. We only have one SSL context, and the SSL session was created with that to begin with.
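A minimal sketch against the OpenSSL API, assuming an established connection (error handling elided):

#include <openssl/ssl.h>

/*
 * With modern OpenSSL it suffices to flag the renegotiation once; the
 * next SSL_read()/SSL_write() on the connection drives the handshake
 * to completion, so no explicit handshake loop is needed here.
 */
static void
request_renegotiation(SSL *ssl)
{
    if (SSL_renegotiate(ssl) <= 0)
        return;                 /* could not schedule a renegotiation */
}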
-
- 12 Feb, 2015 6 commits
-
-
Bruce Momjian authored
Patch by Greg Sabino Mullane, slight adjustments by me
-
Bruce Momjian authored
This allows the delete script to properly function when special characters appear in directory paths, e.g. spaces. Backpatch through 9.0
-
Bruce Momjian authored
pg_database.datfrozenxid and pg_database.datminmxid were not preserved for the 'postgres' and 'template1' databases. This could cause missing clog file errors on access to user tables and indexes after upgrades in these databases. Backpatch through 9.0
-
Andres Freund authored
Author: Tatsuo Ishii Backpatch to 9.4, where logical decoding was introduced.
-
Tom Lane authored
This omission leaked one PGresult per WAL streaming cycle, which possibly would never be enough to notice in the real world, but it's still a leak. Per Coverity. Back-patch to 9.3 where the error was introduced.
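The fix follows the usual libpq discipline; a minimal sketch (run_query() is illustrative, not the walreceiver code):

#include <libpq-fe.h>

/*
 * Every PGresult must be released with PQclear(), on success and
 * failure paths alike, or it leaks.
 */
static void
run_query(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        /* ... report the error ... */
    }
    PQclear(res);
}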
-
Tom Lane authored
We'd leak the ident_serv data structure if the second pg_getaddrinfo_all (the one for the local address) failed. This is not of great consequence because a failure return here just leads directly to backend exit(), but if this function is going to try to clean up after itself at all, it should not have such holes in the logic. Try to fix it in a future-proof way by having all the failure exits go through the same cleanup path, rather than "optimizing" some of them. Per Coverity. Back-patch to 9.2, which is as far back as this patch applies cleanly.
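The "same cleanup path" style is the usual goto-based pattern; a minimal sketch with hypothetical names:

#include <stdlib.h>

static int
do_lookup(void)
{
    void   *remote = NULL;
    void   *local = NULL;
    int     rc = -1;

    remote = malloc(64);
    if (remote == NULL)
        goto cleanup;
    local = malloc(64);
    if (local == NULL)
        goto cleanup;

    /* ... use remote and local ... */
    rc = 0;

cleanup:
    free(local);                /* free(NULL) is a no-op */
    free(remote);
    return rc;
}

All failure exits funnel through one label, so a later edit cannot forget to free an earlier allocation.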
-
- 11 Feb, 2015 3 commits
-
-
Tom Lane authored
We already had one go at this issue in commit d73b7f97, but we failed to notice that buildACLCommands also leaked several PQExpBuffers along with a simply malloc'd string. This time let's try to make the fix a bit more future-proof by eliminating the separate exit path. It's still not exactly critical because pg_dump will curl up and die on failure; but since the amount of the potential leak is now several KB, it seems worth back-patching as far as 9.2 where the previous fix landed. Per Coverity, which evidently is smarter than clang's static analyzer.
-
Tom Lane authored
Back in 2003 we had a discussion about how to decide which casts to dump. At the time pg_dump really only considered an object's containing schema to decide what to dump (ie, dump whatever's not in pg_catalog), and so we chose a complicated idea involving whether the underlying types were to be dumped (cf commit a6790ce8). But users are allowed to create casts between built-in types, and we failed to dump such casts. Let's get rid of that heuristic, which has accreted even more ugliness since then, in favor of just looking at the cast's OID to decide if it's a built-in cast or not. In passing, also fix some really ancient code that supposed that it had to manufacture a dependency for the cast on its cast function; that's only true when dumping from a pre-7.3 server. This just resulted in some wasted cycles and duplicate dependency-list entries with newer servers, but we might as well improve it. Per gripes from a number of people, most recently Greg Sabino Mullane. Back-patch to all supported branches.
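The simplified test amounts to an OID range check; a minimal sketch (names are illustrative; 16384 is the first OID assigned to ordinary user objects):

typedef unsigned int Oid;

#define FirstNormalObjectId 16384   /* first post-initdb OID */

/*
 * A cast whose OID is below FirstNormalObjectId was created during
 * initdb and is built-in; only user-created casts need dumping.
 */
static int
cast_needs_dump(Oid castoid)
{
    return castoid >= FirstNormalObjectId;
}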
-
Tom Lane authored
Back in commit 400e2c93 I rewrote GEQO's gimme_tree function to improve its heuristic for modifying the given tour into a legal join order. In what can only be called a fit of hubris, I supposed that this new heuristic would *always* find a legal join order, and ripped out the old logic that allowed gimme_tree to sometimes fail. The folly of this is exposed by bug #12760, in which the "greedy" clumping behavior of merge_clump() can lead it into a dead end which could only be recovered from by un-clumping. We have no code for that and wouldn't know exactly what to do with it if we did. Rather than try to improve the heuristic rules still further, let's just recognize that it *is* a heuristic and probably must always have failure cases. So, put back the code removed in the previous commit to allow for failure (but comment it a bit better this time). It's possible that this code was actually fully correct at the time and has only been broken by the introduction of LATERAL. But having seen this example I no longer have much faith in that proposition, so back-patch to all supported branches.
-
- 10 Feb, 2015 2 commits
-
-
Michael Meskes authored
When ecpg was rewritten for the new protocol version, not all variable types were corrected. This patch rewrites the code for these types to fix that. It also fixes the documentation to correctly describe the status of array handling.
-
Heikki Linnakangas authored
This speeds up WAL generation and replay. The new algorithm is significantly faster with large inputs, like full-page images or when inserting wide rows. It is slower with tiny inputs, i.e. less than 10 bytes or so, but the speedup with longer inputs more than makes up for that; even small WAL records have at least a 24-byte header at the front. The output is identical to the current byte-at-a-time computation, so this does not affect compatibility. The new algorithm is only used for the CRC-32C variant, not the legacy version used in tsquery or the "traditional" CRC-32 used in hstore and ltree. Those are not as performance critical, and are usually only applied over small inputs, so it seems better not to carry around the extra lookup tables to speed up those rare cases. Abhijit Menon-Sen
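For contrast, the byte-at-a-time form being replaced is the classic table-driven loop; a sketch with the 256-entry table elided (the committed slicing-by-8 code instead consumes eight bytes per iteration using eight such tables):

#include <stdint.h>
#include <stddef.h>

extern const uint32_t pg_crc32c_table[256];     /* contents elided */

static uint32_t
crc32c_bytewise(uint32_t crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    while (len-- > 0)
        crc = pg_crc32c_table[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
    return crc;
}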
-
- 09 Feb, 2015 4 commits
-
-
Heikki Linnakangas authored
When I moved pg_crc.c from src/port to src/common, I forgot to modify the MSVC build script accordingly.
-
Tom Lane authored
Fix some issues I noticed while fooling with an extension to allow an additional kind of toast pointer. Much of this is just comment improvement, but there are a couple of actual bugs, which might or might not be reachable today depending on what can happen during logical decoding. An example is that toast_flatten_tuple() failed to cover the possibility of an indirection pointer in its input. Back-patch to 9.4 just in case that is reachable now. In HEAD, also correct some really minor issues with recent compression reorganization, such as dangerously underparenthesized macros.
-
Heikki Linnakangas authored
To get CRC functionality in a client program, you now need to link with libpgcommon instead of libpgport. The CRC code has nothing to do with portability, so libpgcommon is a better home. (libpgcommon didn't exist when pg_crc.c was originally moved to src/port.) Remove the possibility of getting CRC functionality by just #including pg_crc_tables.h. I'm not aware of any extensions that actually did that and couldn't simply link with libpgcommon. This also moves the pg_crc.h header file from src/include/utils to src/include/common, which will require changes to any external programs that currently do #include "utils/pg_crc.h". That seems acceptable, as include/common is clearly the right home for it now, and the change needed in any such programs is trivial.
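For an external program the source-side change is mechanical; a sketch, assuming the macro and type names in common/pg_crc.h at this point in time (the link-line comment reflects a typical setup, not a fixed recipe):

#include "postgres_fe.h"        /* frontend code includes this first */
#include "common/pg_crc.h"      /* new location; was "utils/pg_crc.h" */

/* ... and link with libpgcommon, e.g. LDFLAGS: -lpgcommon -lpgport */

static pg_crc32
checksum(const void *data, size_t len)
{
    pg_crc32    crc;

    INIT_CRC32C(crc);
    COMP_CRC32C(crc, data, len);
    FIN_CRC32C(crc);
    return crc;
}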
-
Fujii Masao authored
The metadata of PGLZ, symbolized by PGLZ_Header, is removed, to make the compression and decompression code independent of the backend-only varlena facility. PGLZ_Header was used to store metadata about the data being compressed, such as the raw length of the uncompressed record and some varlena-related data, which made PGLZ hard to move into src/common, as it contained backend-only code paths for the management of varlena structures. The APIs of PGLZ are reworked at the same time to do only compression and decompression of buffers, without the metadata layer, simplifying its use for more general purposes. The on-disk format is preserved as well, so there is no incompatibility with previous major versions of PostgreSQL for TOAST entries. Exposing the compression and decompression APIs of pglz makes possible their use by extensions and contrib modules. In particular, this commit is required for the upcoming WAL compression feature, so that the WAL reader facility can decompress WAL data by using pglz_decompress. Michael Paquier, reviewed by me.
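A minimal usage sketch of the reworked buffer-level API; signatures follow src/common/pg_lzcompress.h as of this commit:

#include "postgres_fe.h"
#include "common/pg_lzcompress.h"
#include <stdlib.h>

static void
roundtrip(const char *raw, int32 rawlen)
{
    char   *compressed = malloc(PGLZ_MAX_OUTPUT(rawlen));   /* worst case */
    char   *restored = malloc(rawlen);
    int32   clen;

    /* no PGLZ_Header: the caller must remember the raw length itself */
    clen = pglz_compress(raw, rawlen, compressed, PGLZ_strategy_default);
    if (clen < 0)
    {
        /* input not compressible enough; store it uncompressed instead */
    }
    else if (pglz_decompress(compressed, clen, restored, rawlen) < 0)
    {
        /* compressed data is corrupt */
    }
    free(restored);
    free(compressed);
}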
-
- 07 Feb, 2015 2 commits
-
-
Noah Misch authored
We reserve space for the full amount, not one less. The affected checks deal with localized month and day names. Today's DCH_MAX_ITEM_SIZ value would suffice for a 60-byte day name, while the longest known is the 49-byte mn_CN.utf-8 word for "Saturday." Thus, the upshot of this change is merely to avoid misdirecting future readers of the code; users are not expected to see errors either way.
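The off-by-one class being fixed, as a minimal sketch, with max_item playing the role of DCH_MAX_ITEM_SIZ:

#include <stdbool.h>
#include <stddef.h>

/*
 * When a field may be up to max_item bytes long, the bounds check must
 * reserve the full amount, not one byte less.
 */
static bool
room_for_item(size_t bytes_free, size_t max_item)
{
    /* WRONG, reserves one too few: return bytes_free >= max_item - 1; */
    return bytes_free >= max_item;
}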
-
Noah Misch authored
Interrupting pq_recvbuf() can break protocol sync, so its callers all deserve this assertion. The one pq_peekbyte() caller suffices already.
-