- 04 Jun, 2011 1 commit
-
Tom Lane authored
We were trying to make that strictly an internal implementation detail, but it turns out that it's exposed anyway when dumping a view defined like

    CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

This comes out as

    CREATE VIEW ... ORDER BY "*VALUES*".column1;

which fails to parse when reloading the dump. Hacking ruleutils.c to suppress the column qualification looks like it'd be a risky business, so instead promote the RTE alias to full-fledged usability. Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
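A minimal reproduction, built from the statement quoted in the message (the dump text in the comment is paraphrased):

    -- A view defined directly over a VALUES list, with no FROM clause:
    CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

    -- Dumping this view emits the internal alias in the ORDER BY clause:
    --   CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY "*VALUES*".column1;
    -- Before this fix, that qualified reference failed to parse on reload;
    -- with the RTE alias promoted to full usability, it is now accepted.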
-
- 03 Jun, 2011 5 commits
-
Alvaro Herrera authored
This case was missed when NOT VALID constraints were first introduced in commit 722bf701 by Simon Riggs on 2011-02-08. Among other things, it causes pg_dump to omit the NOT VALID flag when dumping such constraints, which may cause them to fail to load afterwards, if they contained values failing the constraint. Per report from Thom Brown.
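A hedged sketch of the affected case (table and constraint names are invented, not from the report):

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child (parent_id int);
    INSERT INTO child VALUES (42);   -- row with no matching parent

    -- NOT VALID skips checking existing rows, so this succeeds:
    ALTER TABLE child ADD CONSTRAINT child_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;

    -- Before the fix, pg_dump emitted this constraint without the
    -- NOT VALID flag, so restoring the dump re-validated it and
    -- failed on the row inserted above.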
-
Tom Lane authored
The existence of a btree opclass accepting composite types caused us to assume that every composite type is sortable. This isn't true of course; we need to check if the column types are all sortable. There was logic for this for the case of array comparison (ie, check that the element type is sortable), but we missed the point for rowtypes. Per Teodor's report of an ANALYZE failure for an unsortable composite type.

Rather than just add some more ad-hoc logic for this, I moved knowledge of the issue into typcache.c. The typcache will now report array_eq, record_cmp, and friends as usable operators only if the array or composite type will work with those functions. Unfortunately we don't have enough info to do this for anonymous RECORD types; in that case, just assume it will work, and take the runtime failure as before if it doesn't.

This patch might be a candidate for back-patching at some point, but given the lack of complaints from the field, I'd rather just test it in HEAD for now. Note: most of the places touched in this patch will need further work when we get around to supporting hashing of record types.
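A hedged sketch of the failure mode (names invented; point is used because it has no default btree opclass):

    -- A composite type with an unsortable member type:
    CREATE TYPE marker_t AS (label text, loc point);  -- point is not sortable
    CREATE TABLE markers (m marker_t);
    INSERT INTO markers VALUES (ROW('a', point '(0,1)')::marker_t);
    INSERT INTO markers VALUES (ROW('b', point '(2,3)')::marker_t);

    -- Previously this failed, because the composite column was assumed
    -- to be sortable; now typcache reports record_cmp as unusable for
    -- marker_t, and ANALYZE skips the sort-based statistics instead.
    ANALYZE markers;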
-
Peter Eisentraut authored
This is the original DocBook SGML limit, but apparently most installations have changed it or ignore it, which is why few people have run into this problem. Pointed out by Brendan Jurd.
-
Heikki Linnakangas authored
The flag actually means that the transaction has a conflict out to a transaction that committed before the flagged transaction. Kevin Grittner
-
Tom Lane authored
-
- 02 Jun, 2011 11 commits
-
Bruce Momjian authored
Marco Nenciarini
-
Tom Lane authored
We need this now because we allow domains over arrays, and we'll probably allow domains over composites pretty soon, which makes the problem even more obvious.

Although domains over arrays also exist in previous versions, this does not need to be back-patched, because the coding used in older versions successfully "looked through" domains over arrays. The problem is exposed by not treating a domain as having a typelem.

Problem identified by Noah Misch, though I did not use his patch, since it would require additional work to handle domains over composites that way. This approach is more future-proof.
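To show the construct at issue, a small sketch (the domain and table names are invented):

    -- A domain over an array type:
    CREATE DOMAIN nonneg_ints AS int[] CHECK (0 <= ALL (VALUE));

    CREATE TABLE samples (vals nonneg_ints);
    INSERT INTO samples VALUES (ARRAY[1, 2, 3]);

    -- Subscripting must look through the domain to the base array type
    -- to find the element type, since the domain itself has no typelem:
    SELECT vals[2] FROM samples;   -- returns 2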
-
Tom Lane authored
... for lack of the uid_t and gid_t typedefs. Per buildfarm.
-
Tom Lane authored
... on some platforms, anyway. Per buildfarm.
-
Peter Eisentraut authored
Josh Kupershmidt
-
Tom Lane authored
My previous commit disallowed this operation, but did nothing about cleaning up the damage if one had already been done. With the operation disallowed, it's okay to just forcibly clear xmax in a sequence's tuple, since any value seen there could not represent a live transaction's lock. So, any sequence-specific operation will repair the problem automatically, whether or not the user has already seen "could not access status of transaction" failures.
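In other words, the repair needs no special action from the user; a sketch (sequence name invented):

    CREATE SEQUENCE billing_seq;

    -- If an earlier SELECT ... FOR UPDATE had left a transaction XID in
    -- this sequence tuple's xmax (causing "could not access status of
    -- transaction" errors), any sequence-specific operation now forcibly
    -- clears it as a side effect:
    SELECT nextval('billing_seq');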
-
Robert Haas authored
-
Tom Lane authored
We can't allow this because such an operation stores its transaction XID into the sequence tuple's xmax. Because VACUUM doesn't process sequences (and we don't want it to start doing so), such an xmax value won't get frozen, meaning it will eventually refer to nonexistent pg_clog storage, and even wrap around completely. Since the row lock is ignored by nextval and setval, the usefulness of the operation is highly debatable anyway. Per reports of trouble with pgpool 3.0, which had ill-advisedly started using such commands as a form of locking.

In HEAD, also disallow SELECT FOR UPDATE/SHARE on toast tables. Although this does work safely given the current implementation, there seems no good reason to allow it. I refrained from changing that behavior in back branches, however.
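The now-disallowed command, for reference (sequence name invented; error text paraphrased):

    CREATE SEQUENCE job_seq;

    -- Formerly accepted (and used by pgpool 3.0 as a locking primitive);
    -- now rejected, because the row lock would store the transaction's
    -- XID in the sequence tuple's xmax, which VACUUM never freezes:
    SELECT * FROM job_seq FOR UPDATE;
    -- ERROR: cannot lock rows in a sequence (paraphrased)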
-
Tom Lane authored
-
Robert Haas authored
Report by Greg Sabino Mullane, diagnosis and preliminary patch by Andres Freund, corrections by me.
-
Tom Lane authored
This unifies a bunch of ugly #ifdef's in one place. Per discussion, we only need this where HAVE_UNIX_SOCKETS, so no need to cover Windows. Marko Kreen, some adjustment by Tom Lane
-
- 01 Jun, 2011 3 commits
-
Tom Lane authored
Per experimentation with a recent example, in which unreasonable amounts of time could elapse before the backend would respond to a query-cancel. This might be something to back-patch, but the patch doesn't apply cleanly because this code was rewritten for 9.1. Given the lack of field complaints I won't bother for now. Cédric Villemain
-
Peter Eisentraut authored
-
Tom Lane authored
Add a postmaster_is_alive() test to the wait loop, so that we stop waiting if the postmaster dies without removing its pidfile. Unfortunately this only helps after the postmaster has created its pidfile, since until then we don't know which PID to check. But if it never does create the pidfile, we can give up in a relatively short time, so this is a useful addition in practice. Per suggestion from Fujii Masao, though this doesn't look very much like his patch.

In addition, improve pg_ctl's ability to cope with pre-existing pidfiles. Such a file might or might not represent a live postmaster that is going to block our postmaster from starting, but the previous code pre-judged the situation and gave up waiting immediately. Now, we will wait for up to 5 seconds to see if our postmaster overwrites such a file. This issue interacts with Fujii's patch because we would make the wrong conclusion if we did the postmaster_is_alive() test with a pre-existing PID.

All of this could be improved if we rewrote start_postmaster() so that it could report the child postmaster's PID; then we'd know a priori the correct PID to test with postmaster_is_alive(). That looks like a bit too much change for so late in the 9.1 development cycle, unfortunately.
-
- 31 May, 2011 4 commits
-
Tom Lane authored
Apparently sane-looking penalty code might return small negative values, for example because of roundoff error. This will confuse places like gistchoose(). Prevent problems by clamping negative penalty values to zero. (Just to be really sure, I also made it force NaNs to zero.) Back-patch to all supported branches. Alexander Korotkov
-
Peter Eisentraut authored
For consistency, have all non-ASCII characters from contributors' names in the source be in UTF-8. But remove some other more gratuitous uses of non-ASCII characters.
-
Peter Eisentraut authored
This has already been the case for the most part; just some cases had slipped through.
-
Tom Lane authored
It turns out the reason we hadn't found out about the portability issues with our credential-control-message code is that almost no modern platforms use that code at all; the ones that used to need it now offer getpeereid(), which we choose first. The last holdout was NetBSD, and they added getpeereid() as of 5.0.

So far as I can tell, the only live platform on which that code was being exercised was Debian/kFreeBSD, ie, FreeBSD kernel with Linux userland --- since glibc doesn't provide getpeereid(), we fell back to the control message code. However, the FreeBSD kernel provides a LOCAL_PEERCRED socket parameter that's functionally equivalent to Linux's SO_PEERCRED. That is both much simpler to use than control messages, and superior because it doesn't require receiving a message from the other end at just the right time.

Therefore, add code to use LOCAL_PEERCRED when necessary, and rip out all the credential-control-message code in the backend. (libpq still has such code so that it can still talk to pre-9.1 servers ... but eventually we can get rid of it there too.) Clean up related autoconf probes, too.

This means that libpq's requirepeer parameter now works on exactly the same platforms where the backend supports peer authentication, so adjust the documentation accordingly.
-
- 30 May, 2011 9 commits
-
Tom Lane authored
Even though our existing code for handling credentials control messages has been basically unchanged since 2001, it was fundamentally wrong: it did not ensure proper alignment of the supplied buffer, and it was calculating buffer sizes and message sizes incorrectly. This led to failures on platforms where alignment padding is relevant, for instance FreeBSD on 64-bit platforms, as seen in a recent Debian bug report passed on by Martin Pitt (http://bugs.debian.org//cgi-bin/bugreport.cgi?bug=612888).

Rewrite to do the message-whacking using the macros specified in RFC 2292, following a suggestion from Theo de Raadt in that thread. Tested by me on Debian/kFreeBSD-amd64; since OpenBSD and NetBSD document the identical CMSG API, it should work there too. Back-patch to all supported branches.
-
Tom Lane authored
When we added the ability for vacuum to skip heap pages by consulting the visibility map, we made it just not update the reltuples/relpages statistics if it skipped any pages. But this could leave us with extremely out-of-date stats for a table that contains any unchanging areas, especially for TOAST tables which never get processed by ANALYZE. In particular this could result in autovacuum making poor decisions about when to process the table, as in a recent report from Florian Helmberger. And in general it's a bad idea to not update the stats at all.

Instead, use the previous values of reltuples/relpages as an estimate of the tuple density in unvisited pages. This approach results in a "moving average" estimate of reltuples, which should converge to the correct value over multiple VACUUM and ANALYZE cycles even when individual measurements aren't very good.

This new method for updating reltuples is used by both VACUUM and ANALYZE, with the result that we no longer need the grotty interconnections that caused ANALYZE to not update the stats depending on what had happened in the parent VACUUM command.

Also, fix the logic for skipping all-visible pages during VACUUM so that it looks ahead rather than behind to decide what to do, as per a suggestion from Greg Stark. This eliminates useless scanning of all-visible pages at the start of the relation or just after a not-all-visible page. In particular, the first few pages of the relation will not be invariably included in the scanned pages, which seems to help in not overweighting them in the reltuples estimate.

Back-patch to 8.4, where the visibility map was introduced.
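A worked example of the moving-average rule, with invented numbers (this illustrates the idea, not the exact code):

    -- Previous stats: reltuples = 10000 over relpages = 100, i.e. an
    -- assumed density of 100 tuples/page. This VACUUM scanned 20 pages,
    -- counting 2600 live tuples there, and skipped 80 all-visible pages.
    SELECT (10000.0 / 100) * 80   -- unscanned pages at the old density
         + 2600                   -- tuples counted on the scanned pages
        AS new_reltuples;         -- => 10600, the new estimate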
-
Peter Eisentraut authored
This is consistent with the behavior of other global objects such as languages and extensions. Omitting foreign servers also omits the respective user mappings.
-
Magnus Hagander authored
We only support up to version 7.0, so don't recommend upgrading past it. The rest of the documentation around this was already updated, but one spot was missed.
-
Magnus Hagander authored
This makes the behavior compatible with that of hostssl, which also throws an error when there is no SSL support included.
-
Magnus Hagander authored
Since we now include a sample line for replication on local connections in pg_hba.conf, don't include it where local connections aren't available (such as on win32). Also make sure we use authmethodlocal and not authmethod on the sample line.
-
Heikki Linnakangas authored
On further analysis, it turns out that it is not needed to duplicate predicate locks to the new row version at update, the lock on the version that the transaction saw as visible is enough. However, there was a different bug in the code that checks for dangerous structures when a new rw-conflict happens. Fix that bug, and remove all the row-version chaining related code. Kevin Grittner & Dan Ports, with some comment editorialization by me.
-
Alvaro Herrera authored
-
Alvaro Herrera authored
According to perlguts, &PL_sv_undef is not the right thing to use in those cases because it doesn't behave the same way as an undef value via Perl code. Seems the intuitive way to deal with undef values is subtly enough broken that it's hard to notice when misused. The broken uses got inadvertently introduced in commit 87bb2ade by Alexey Klyukin, Alex Hunsaker and myself on 2011-02-17; no backpatch is necessary. Per testing report from Greg Mullane. Author: Alex Hunsaker
-
- 29 May, 2011 2 commits
-
Peter Eisentraut authored
-
Peter Eisentraut authored
The previous claim about when these parameters could be changed was incorrect. Fujii Masao
-
- 28 May, 2011 4 commits
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Tom Lane authored
parse_xml_decl's header comment says you can pass NULL for any unwanted output parameter, but it failed to honor this contract for the "standalone" flag. The only currently-affected caller is xml_recv, so the net effect is that sending a binary XML value containing a standalone parameter in its xml declaration would crash the backend. Per bug #6044 from Christopher Dillard. In passing, remove useless initializations of parse_xml_decl's output parameters in xml_parse. Back-patch to 8.3, where this code was introduced.
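The crash was only reachable through binary input (xml_recv), but the kind of value involved looks like this (hedged illustration; requires a build with libxml):

    -- An xml value whose declaration carries a standalone flag. Received
    -- via the binary protocol, this previously crashed the backend
    -- because xml_recv passed NULL for parse_xml_decl's "standalone"
    -- output parameter, which the function failed to honor.
    SELECT xml '<?xml version="1.0" standalone="yes"?><doc/>';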
-
Alvaro Herrera authored
Cédric Villemain
-
- 27 May, 2011 1 commit
-
Peter Eisentraut authored
-