- 04 Feb, 2015 7 commits
-
-
Heikki Linnakangas authored
It was getting tedious to track and release all the different things that form a scan key. We were leaking at least the queryCategories array, and possibly more, on a rescan. That was visible if a GIN index was used in a nested loop join. This also protects against leaks in the extractQuery method. No backpatching, given the lack of complaints from the field; maybe later, after this has received more field testing.
-
Heikki Linnakangas authored
If an insertion or update had to wait for another transaction to finish, because there was another insertion with conflicting key in progress, we would pass a just-free'd item pointer to XactLockTableWait(). All calls to XactLockTableWait() and MultiXactIdWait() had similar issues. Some passed a pointer to a buffer in the buffer cache, after already releasing the lock. The call in EvalPlanQualFetch had already released the pin too. All but the call in execUtils.c would merely lead to reporting a bogus ctid, however (or an assertion failure, if enabled). All the callers that passed HeapTuple->t_data->t_ctid were slightly bogus anyway: if the tuple was updated (again) in the same transaction, its ctid field would point to the next tuple in the chain, not the tuple itself. Backpatch to 9.4, where the 'ctid' argument to XactLockTableWait was added (in commit f88d4cfc).
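A minimal sketch of the safe calling pattern implied by the fix (variable names here are assumed, not taken from the actual patch): copy the tuple's own TID into a local ItemPointerData while the buffer pin and lock are still held, release the buffer, and only then sleep on the other transaction.

    /* Sketch only: copy the TID before releasing the buffer. */
    ItemPointerData ctid_copy;

    /* Use the tuple's own TID, not t_data->t_ctid, which may point to a
     * newer version of the row if it was updated again. */
    ctid_copy = tuple->t_self;

    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    ReleaseBuffer(buffer);

    /* Safe to sleep now; the copied TID stays valid for error reporting. */
    XactLockTableWait(xwait, relation, &ctid_copy, XLTW_Update);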
-
Robert Haas authored
Remove some unnecessary null-tests, and replace a goto-label construct with an "if" block. Michael Paquier, reviewed by me.
-
Heikki Linnakangas authored
These are fairly obscure cases, but let's keep Coverity happy. Michael Paquier with some further fixes by me.
-
Andres Freund authored
On Windows, _isnan() (which isnan() is redirected to in port/win32.h) is declared in float.h, not math.h. Per buildfarm animal currawong. Backpatch to all supported branches.
-
Fujii Masao authored
Ian Barwick
-
Heikki Linnakangas authored
All the other new SSL information functions had dummy versions in be-secure.c, but I missed PQsslAttributes(). Oops. Surprisingly, even though the function is exported, the linker did not complain about it being missing on most platforms represented in the buildfarm; only a few Windows systems did.
-
- 03 Feb, 2015 15 commits
-
-
Andres Freund authored
It's perfectly fine to have blocked interrupts when ProcessClientWriteInterrupt() is called. In fact it's commonly the case when emitting error reports, and we deal with that correctly. Even if that weren't the case, it'd be a bad location for such an assertion. Because ProcessClientWriteInterrupt() is only called when the socket is blocked, it's hard to hit. Per Heikki and buildfarm animals nightjar and dunlin.
-
Andres Freund authored
Now that nothing sets ImmediateInterruptOK to true anymore, we can remove all the supporting code. Reviewed-By: Heikki Linnakangas
-
Andres Freund authored
The remaining caller (lwlocks) doesn't need that facility, and we plan to remove ImmediateInterruptOK entirely. That means that interrupts can't be serviced race-free and portably anyway, so there's little reason for keeping the feature. Reviewed-By: Heikki Linnakangas
-
Andres Freund authored
Deadlock checking was performed inside signal handlers up to now. While it's a remarkable feat to have made this work reliably, it's quite complex to understand why that is the case. Partially it worked due to the assumption that semaphores are signal safe - which is not actually documented to be the case for sysv semaphores. The reason we had to rely on performing this work inside signal handlers is that semaphores aren't guaranteed to be interruptible by signals on all platforms. But now that latches provide a somewhat similar API, which actually has the guarantee of being interruptible, we can avoid doing so. Signalling between ProcSleep, ProcWakeup, ProcWaitForSignal and ProcSendSignal is now done using latches. This increases the likelihood of spurious wakeups. As spurious wakeups were already possible and aren't likely to be frequent enough to be an actual problem, this seems acceptable. This change would allow for further simplification of the deadlock checking, now that it doesn't have to run in a signal handler. But even if I were motivated to do so right now, it would still be better to do that separately; such a cleanup shouldn't have to be reviewed at the same time as the more fundamental changes in this commit. There is one possible usability regression due to this commit, namely that it is more likely than before that log_lock_waits messages are output more than once. Reviewed-By: Heikki Linnakangas
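As an illustration of the latch-based signalling described here, a wait that previously slept on a semaphore can be written roughly as follows (a sketch using the latch API of that era, which took no wait_event_info argument; the wake-up condition is hypothetical):

    /* Sketch only: sleep until another backend calls SetLatch() on our latch. */
    for (;;)
    {
        WaitLatch(MyLatch, WL_LATCH_SET | WL_POSTMASTER_DEATH, 0);
        ResetLatch(MyLatch);

        /* Spurious wakeups are possible, so re-check the actual condition. */
        if (wait_condition_satisfied())     /* hypothetical check */
            break;
    }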
-
Andres Freund authored
We used to handle authentication_timeout by setting ImmediateInterruptOK to true during large parts of the authentication phase of a new connection. While that happens to work acceptably in practice, it's not particularly nice and has ugly corner cases. Previous commits converted the FE/BE communication to use latches and implemented support for interrupt handling during both send/recv. Building on top of that work we can get rid of ImmediateInterruptOK during authentication, by immediately treating timeouts during authentication as a reason to die. As die interrupts are handled immediately during client communication, that provides a sensibly quick reaction time to authentication timeout. Additionally add a few CHECK_FOR_INTERRUPTS() calls to some more complex authentication methods. More could be added, but this should already provide reasonable coverage. While this overall increases the maximum time until a timeout is reacted to, it greatly reduces complexity and increases reliability. That seems like an overall win. If the increase proves to be noticeable we can deal with those cases by moving to nonblocking network code and adding interrupt checking there. Reviewed-By: Heikki Linnakangas
-
Tom Lane authored
This field has been unreferenced since 1998, and does not appear in lseg values stored on disk (since sizeof(lseg) is only 32 bytes according to pg_type). There was apparently some idea of maintaining it just in values appearing in memory, but the bookkeeping required to make that work would surely far outweigh the cost of recalculating the line's slope when needed. Remove it to (a) simplify matters and (b) suppress some uninitialized-field whining from Coverity.
-
Andres Freund authored
Up to now it was impossible to terminate a backend that was trying to send/recv data to/from the client when the socket's buffer was already full/empty. While the send/recv calls themselves might have gotten interrupted by signals on some platforms, we just immediately retried. That could lead to situations where a backend couldn't be terminated, after a client died without the connection being closed, because it was blocked in send/recv. The problem was far more likely to be hit when sending data than when reading. That's because while reading a command from the client, and during authentication, we processed interrupts immediately. That primarily left COPY FROM STDIN as being problematic for recv. Change things so that we process 'die' events immediately when the appropriate signal arrives. We can't sensibly react to query cancels at that point, because we might lose sync with the client as we could be in the middle of writing a message. We don't interrupt writes as long as the socket's write buffer isn't full (fullness being indicated by write() returning EWOULDBLOCK), as interrupting those would lead to fewer error messages reaching clients. Per discussion with Kyotaro HORIGUCHI and Heikki Linnakangas. Discussion: 20140927191243.GD5423@alap3.anarazel.de
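The write-side behaviour described above corresponds roughly to a retry loop of the following shape (a simplified sketch rather than the actual be-secure.c code; ProcessClientWriteInterrupt(), WaitLatchOrSocket() and the WL_* flags are the real interfaces as of this era, the rest is assumed):

    /* Sketch: retry a blocked write, but let 'die' interrupts through. */
    for (;;)
    {
        ssize_t n = send(sock, buf, len, 0);

        if (n >= 0 || (errno != EWOULDBLOCK && errno != EAGAIN))
            break;                      /* wrote something, or a real error */

        /* Socket buffer is full: sleep until it drains or our latch is set. */
        WaitLatchOrSocket(MyLatch, WL_LATCH_SET | WL_SOCKET_WRITEABLE, sock, 0);
        ResetLatch(MyLatch);

        /* Service pending 'die' interrupts; query cancels are ignored here. */
        ProcessClientWriteInterrupt(true);
    }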
-
Andres Freund authored
Up to now large swathes of backend code ran inside signal handlers while reading commands from the client, to allow for speedy reaction to asynchronous events. Most prominently shared invalidation and NOTIFY handling. That means that complex code like the starting/stopping of transactions is run in signal handlers... The required code was fragile and verbose, and is likely to contain bugs. That approach also severely limited what could be done while communicating with the client. As the read might be happening from within openssl it wasn't safely possible to trigger an error, e.g. to cancel a backend in idle-in-transaction state. We did that in some cases, namely fatal errors, nonetheless. Now that FE/BE communication in the backend employs non-blocking sockets and latches to block, we can quite simply interrupt reads from signal handlers by setting the latch. That allows us to signal an interrupted read, which is supposed to be retried after returning from within the ssl library. As signal handlers now only need to set the latch to guarantee timely interrupt processing, remove a fair amount of complicated & fragile code from async.c and sinval.c. We could now actually start to process some kinds of interrupts, like sinval ones, more often than before, but that seems better done separately. This work will hopefully allow cases like being blocked while sending data, interrupting idle transactions and similar to be handled without too much effort, in addition to allowing us to get rid of ImmediateInterruptOK. Author: Andres Freund Reviewed-By: Heikki Linnakangas
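As a rough sketch of the resulting pattern (the handler name and flag are hypothetical), a signal handler now merely records the request and sets the process latch, leaving the real work to CHECK_FOR_INTERRUPTS() in regular backend code:

    /* Hypothetical handler: note the request and wake any latch wait. */
    static void
    sketch_interrupt_handler(SIGNAL_ARGS)
    {
        int save_errno = errno;

        something_pending = true;   /* assumed volatile sig_atomic_t flag */
        SetLatch(MyLatch);          /* interrupts WaitLatch() / a blocked read */

        errno = save_errno;
    }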
-
Andres Freund authored
This makes it possible to introduce more elaborate handling of interrupts while reading from a socket. Currently some interrupt handlers have to do significant work from inside signal handlers, and it's very hard to correctly write code to do so. Generic signal handler limitations, combined with the fact that we can't safely jump out of a signal handler while reading from the client, have prohibited implementation of features like timeouts for idle-in-transaction. Additionally we use the latch code to wait in a couple of places where we previously only had waiting code on Windows, as other platforms just busy-looped. This can increase the number of system calls happening during FE/BE communication. Benchmarks so far indicate that the impact isn't very high, and there's room for optimization in the latch code. The opportunities for cleanup that the use of latches gives us seem to outweigh the risk of small performance regressions. This commit theoretically can't be used without the next patch in the series, as WaitLatchOrSocket is not defined to be fully signal safe. As we already do that in some cases though, it seems better to keep the commits separate, so they're easier to understand. Author: Andres Freund Reviewed-By: Heikki Linnakangas
-
Tom Lane authored
LINE doesn't have an "m" field (anymore anyway). Also fix unportable assumption that %x can print the result of pointer subtraction. In passing, improve single_decode() in minor ways: * Remove unnecessary leading-whitespace skip (strtod does that already). * Make GEODEBUG message more intelligible. * Remove entirely-useless test to see if strtod returned a silly pointer. * Don't bother computing trailing-whitespace skip unless caller wants an ending pointer. This has been broken since 261c7d4b. Although it's only debug code, might as well fix the 9.4 branch too.
-
Heikki Linnakangas authored
This makes it possible to query for things like the SSL version and cipher used, without depending on OpenSSL functions or macros. That is a good thing if we ever get another SSL implementation. PQgetssl() still works, but it should be considered deprecated, as it only works with OpenSSL. In particular, PQsslInUse() should be used to check if a connection uses SSL, because as soon as we have another implementation, PQgetssl() will return NULL even if SSL is in use.
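A minimal client-side sketch of the new interrogation functions (the connection string is a placeholder; attribute names follow the libpq documentation for this API):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("host=localhost sslmode=require"); /* placeholder */

        if (PQstatus(conn) == CONNECTION_OK && PQsslInUse(conn))
        {
            const char *protocol = PQsslAttribute(conn, "protocol");
            const char *cipher = PQsslAttribute(conn, "cipher");

            printf("protocol: %s\n", protocol ? protocol : "unknown");
            printf("cipher:   %s\n", cipher ? cipher : "unknown");
        }

        PQfinish(conn);
        return 0;
    }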
-
Heikki Linnakangas authored
The logic to compact away removed tuples from a page was duplicated with small differences in PageRepairFragmentation, PageIndexMultiDelete, and PageIndexDeleteNoCompact. Put it into a common function. Reviewed by Peter Geoghegan.
-
Heikki Linnakangas authored
The old wording talked about a "sync command", meaning fsync(), but it was not very clear.
-
Heikki Linnakangas authored
Amit Langote
-
Heikki Linnakangas authored
Commit 13629df5 changed the metaphone() function to return an empty string on empty input, but it left the old error message in place. It's now dead code. Michael Paquier, per Coverity warning.
-
- 02 Feb, 2015 12 commits
-
-
Robert Haas authored
Sometimes it's useful for a background worker to be able to initialize its database connection by OID rather than by name, so provide a way to do that.
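A sketch of how a worker might use this (BackgroundWorkerInitializeConnectionByOid() is the real bgworker call for connecting by OID, shown in its original two-argument form; the worker body around it is hypothetical):

    #include "postgres.h"
    #include "postmaster/bgworker.h"

    /* Hypothetical worker main function; the OIDs are placeholders. */
    void
    sketch_worker_main(Datum main_arg)
    {
        Oid dboid = DatumGetObjectId(main_arg);   /* database OID via bgw_main_arg */
        Oid useroid = InvalidOid;                 /* assumed: run as bootstrap superuser */

        BackgroundWorkerUnblockSignals();

        /* Connect by OID instead of by database/user name. */
        BackgroundWorkerInitializeConnectionByOid(dboid, useroid);

        /* ... do work ... */
    }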
-
Tom Lane authored
Add entries for security issues. Security: CVE-2015-0241 through CVE-2015-0244
-
Heikki Linnakangas authored
If any error occurred while we were in the middle of reading a protocol message from the client, we could lose sync, and incorrectly try to interpret a part of another message as a new protocol message. That will usually lead to an "invalid frontend message" error that terminates the connection. However, this is a security issue because an attacker might be able to deliberately cause an error, inject a Query message in what's supposed to be just user data, and have the server execute it. We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other operations that could ereport(ERROR) in the middle of processing a message, but a query cancel interrupt or statement timeout could nevertheless cause it to happen. Also, the V2 fastpath and COPY handling were not so careful. It's very difficult to recover in the V2 COPY protocol, so we will just terminate the connection on error. In practice, that's what happened previously anyway, as we lost protocol sync. To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set whenever we're in the middle of reading a message. When it's set, we cannot safely ERROR out and continue running, because we might've read only part of a message. PqCommReadingMsg acts somewhat similarly to critical sections in that if an error occurs while it's set, the error handler will force the connection to be terminated, as if the error was FATAL. It's not implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted to PANIC in critical sections, because we want to be able to use PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes advantage of that to prevent an OOM error from terminating the connection. To prevent unnecessary connection terminations, add a holdoff mechanism similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query cancel interrupts, but still allow die interrupts. The rules for which interrupts are processed when are now a bit more complicated, so refactor ProcessInterrupts() and the calls to it in signal handlers so that the signal handlers always call it if ImmediateInterruptOK is set, and ProcessInterrupts() can decide to not do anything if the other conditions are not met. Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund. Backpatch to all supported versions. Security: CVE-2015-0244
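A small sketch of the holdoff idea described above (the surrounding read logic is simplified; HOLD_CANCEL_INTERRUPTS()/RESUME_CANCEL_INTERRUPTS() are assumed to be the macros this mechanism provides):

    int firstchar;

    /*
     * While reading a protocol message we must not ERROR out on a query
     * cancel (we could lose sync with the client), but a 'die' interrupt
     * may still terminate the whole session.
     */
    HOLD_CANCEL_INTERRUPTS();

    firstchar = pq_getbyte();       /* message type byte */
    /* ... read the rest of the message ... */

    RESUME_CANCEL_INTERRUPTS();

    /* Only now is it safe to service a pending query cancel. */
    CHECK_FOR_INTERRUPTS();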
-
Noah Misch authored
This function uses uninitialized stack and heap buffers as supplementary entropy sources. Mark them so Memcheck will not complain. Back-patch to 9.4, where Valgrind Memcheck cooperation first appeared. Marko Tiikkaja
-
Noah Misch authored
This covers alterations to buffer sizing and zeroing made between imath 1.3 and imath 1.20. Valgrind Memcheck identified the buffer overruns and reliance on uninitialized data; their exploit potential is unknown. Builds specifying --with-openssl are unaffected, because they use the OpenSSL BIGNUM facility instead of imath. Back-patch to 9.0 (all supported versions). Security: CVE-2015-0243
-
Noah Misch authored
Most callers pass a stack buffer. The ensuing stack smash can crash the server, and we have not ruled out the viability of attacks that lead to privilege escalation. Back-patch to 9.0 (all supported versions). Marko Tiikkaja Security: CVE-2015-0243
-
Bruce Momjian authored
Prevent port/snprintf() from overflowing its local fixed-size buffer, and make it pad to the desired number of digits with zeros even if the precision is beyond the ability of the native sprintf(). port/snprintf() is only used on systems that lack a native snprintf(). Reported by Bruce Momjian. Patch by Tom Lane. Backpatch to all supported versions. Security: CVE-2015-0242
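To illustrate the class of call this covers (a hedged example, not taken from the patch): a format with a precision far beyond what the platform's native sprintf() supports must still produce the requested number of digits, zero-padded, instead of overrunning a fixed-size local buffer inside port/snprintf().

    char buf[1024];

    /* On a platform using port/snprintf(), this must yield 400 fractional
     * digits, with the tail zero-padded once the native precision runs out,
     * rather than smashing an internal buffer. */
    snprintf(buf, sizeof(buf), "%.400f", 1.0 / 3.0);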
-
Bruce Momjian authored
Previously very long localized month and weekday strings could overflow the allocated buffers, causing a server crash. Reported and patch reviewed by Noah Misch. Backpatch to all supported versions. Security: CVE-2015-0241
-
Bruce Momjian authored
Previously very long field masks for floats could access memory beyond the existing buffer allocated to hold the result. Reported by Andres Freund and Peter Geoghegan. Backpatch to all supported versions. Security: CVE-2015-0241
-
Tom Lane authored
The variable name isn't optional --- looks like a copy-and-paste-o from the \set command, where it is. Dilip Kumar
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 19c72ea8d856d7b1d4f5d759a766c8206bf9ce53
-
Peter Eisentraut authored
The previous wording claimed that the file was always in /etc, but of course this varies with the installation layout. Write instead that it can be found via `pg_config --sysconfdir`. Even though this is still somewhat incorrect because it doesn't account for moved installations, it at least conveys that the location depends on the installation.
-
- 01 Feb, 2015 1 commit
-
-
Tom Lane authored
-
- 31 Jan, 2015 3 commits
-
-
Tom Lane authored
"ECHO all" is ignored for interactive input, and has been for a very long time, though possibly not for as long as the documentation has claimed the opposite. Fix that, and also note that empty lines aren't echoed, which while dubious is another longstanding behavior (it's embedded in our regression test files for one thing). Per bug #12721 from Hans Ginzel. In HEAD, also improve the code comments in this area, and suppress an unnecessary fflush(stdout) when we're not echoing. That would likely be safe to back-patch, but I'll not risk it mere hours before a release wrap.
-
Tom Lane authored
As usual, the release notes for older branches will be made by cutting these down, but put them up for community review first. Note: a significant fraction of these items don't apply to 9.4.1, only to older branches, because the fixes already appeared in 9.4.0. These can be distinguished by noting the branch commits in the associated SGML comments. This will be adjusted tomorrow while copying items into the older release-X.Y.sgml files. In a few cases I've made two separate entries with different wordings for 9.4 than for the equivalent commits in the older branches.
-
Tom Lane authored
DST law changes in Chile and Mexico (state of Quintana Roo). Historical changes for Iceland.
-
- 30 Jan, 2015 2 commits
-
-
Stephen Frost authored
In ALTER POLICY, use 'check_expression' instead of 'expression' for the parameter, to match up with the recent CREATE POLICY change. In CREATE POLICY, frame the discussion as granting access to rows instead of limiting access to rows. Further, clarify that the expression must return true for rows to be visible/allowed and that a false or NULL result will mean the row is not visible/allowed. Per discussion with Dean Rasheed and Robert.
-
Tom Lane authored
We've been trying to support \u0000 in JSON values since commit 78ed8e03, and have introduced increasingly worse hacks to try to make it work, such as commit 0ad1a816. However, it fundamentally can't work in the way envisioned, because the stored representation looks the same as for \\u0000 which is not the same thing at all. It's also entirely bogus to output \u0000 when de-escaped output is called for. The right way to do this would be to store an actual 0x00 byte, and then throw error only if asked to produce de-escaped textual output. However, getting to that point seems likely to take considerable work and may well never be practical in the 9.4.x series. To preserve our options for better behavior while getting rid of the nasty side-effects of 0ad1a816, revert that commit in toto and instead throw error if \u0000 is used in a context where it needs to be de-escaped. (These are the same contexts where non-ASCII Unicode escapes throw error if the database encoding isn't UTF8, so this behavior is by no means without precedent.) In passing, make both the \u0000 case and the non-ASCII Unicode case report ERRCODE_UNTRANSLATABLE_CHARACTER / "unsupported Unicode escape sequence" rather than claiming there's something wrong with the input syntax. Back-patch to 9.4, where we have to do something because 0ad1a816 broke things for many cases having nothing to do with \u0000. 9.3 also has bogus behavior, but only for that specific escape value, so given the lack of field complaints it seems better to leave 9.3 alone.
-