- 18 Jul, 2017 5 commits
-
-
Tom Lane authored
It seemed a bit silly that each caller of make_tsvector() was laboriously special-casing the situation where no lexemes were found, when it would be easy and much more bullet-proof to make make_tsvector() handle that.
-
Tom Lane authored
In an off-list followup to bug #14745, Bob Jones complained that to_tsvector() on a 2MB jsonb value took an unreasonable amount of time and space --- enough to draw the wrath of the OOM killer on his machine. On my machine, his example proved to require upwards of 18 seconds and 4GB, which seemed pretty bogus considering that to_tsvector() on the same data treated as text took just a couple hundred msec and 10 or so MB. On investigation, the problem is that the implementation scans each string element of the json(b) and converts it to tsvector separately, then applies tsvector_concat() to join those separate tsvectors. The unreasonable memory usage came from leaking every single one of the transient tsvectors --- but even without that mistake, this is an O(N^2) or worse algorithm, because tsvector_concat() has to repeatedly process the words coming from earlier elements. We can fix it by accumulating all the lexeme data and applying make_tsvector() just once. As a side benefit, that also makes the desired adjustment of lexeme positions far cheaper, because we can just tweak the running "pos" counter between JSON elements. In passing, try to make the explanation of that tweak more intelligible. (I didn't think that a barely-readable comment far removed from the actual code was helpful.) And do some minor other code beautification.
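A minimal, self-contained sketch of the accumulate-once idea described above (illustrative types and names, not the actual tsvector code): collect every word with a running position counter as the JSON string elements are scanned, then build the final vector in a single step instead of concatenating per-element vectors repeatedly.

    #include <stdio.h>

    #define MAX_WORDS 64

    typedef struct
    {
        const char *word;
        int         pos;        /* position of the word in the whole document */
    } Entry;

    typedef struct
    {
        Entry       entries[MAX_WORDS];
        int         count;
        int         pos;        /* running position counter across elements */
    } Accumulator;

    /* Add the words of one JSON string element; O(words), no re-merging. */
    static void
    accumulate_element(Accumulator *acc, const char *const words[], int nwords)
    {
        for (int i = 0; i < nwords && acc->count < MAX_WORDS; i++)
        {
            acc->entries[acc->count].word = words[i];
            acc->entries[acc->count].pos = ++acc->pos;
            acc->count++;
        }
        acc->pos++;             /* cheap position gap between JSON elements */
    }

    int
    main(void)
    {
        const char *elem1[] = {"quick", "brown", "fox"};
        const char *elem2[] = {"lazy", "dog"};
        Accumulator acc;

        acc.count = 0;
        acc.pos = 0;
        accumulate_element(&acc, elem1, 3);
        accumulate_element(&acc, elem2, 2);

        /* a single make_tsvector()-style build step would run here, once */
        for (int i = 0; i < acc.count; i++)
            printf("%s:%d\n", acc.entries[i].word, acc.entries[i].pos);
        return 0;
    }

With this shape each word is touched once, rather than being reprocessed by every later concatenation.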
-
Tom Lane authored
s/log_destination/log_directory/, per Jov in bug #14749. Report: https://postgr.es/m/20170718082444.9229.99690@wrigleys.postgresql.org
-
Robert Haas authored
Just as we already do in ExecConstraints, and for the same reason: to improve the quality of error messages. Etsuro Fujita, reviewed by Amit Langote Discussion: http://postgr.es/m/56e0baa8-e458-2bbb-7936-367f7d832e43@lab.ntt.co.jp
-
Robert Haas authored
Before, we always used a dummy value of 1, but that's not right when the partitioned table being modified is inside a WITH clause rather than part of the main query. Amit Langote, reported and reviewed by Etsuro Fujita, with a comment change by me. Discussion: http://postgr.es/m/ee12f648-8907-77b5-afc0-2980bcb0aa37@lab.ntt.co.jp
-
- 17 Jul, 2017 6 commits
-
-
Tom Lane authored
We're throwing people into the guts of the syntax with not much context; let's back up one step and point out that this goes inside a literal in a CREATE FUNCTION command. Per suggestion from Kurt Kartaltepe. Discussion: https://postgr.es/m/CACawnnyWAmH+au8nfZhLiFfWKjXy4d0kY+eZWfcxPRnjVfaa_Q@mail.gmail.com
-
Andrew Dunstan authored
-
Tom Lane authored
It seems pretty confusing to have tests named both largeobject and large_object. The latter is of very recent vintage (commit ff992c07), so get rid of it in favor of merging into the former. Also, enable the LO comment test that was added by commit 70ad7ed4, since the later commit added the then-missing pg_upgrade functionality. The large_object.sql test case is almost completely redundant with that, but not quite: it seems like creating a user-defined LO with an OID in the system range might be an interesting case for pg_upgrade, so let's keep it. Like the earlier patch, back-patch to all supported branches. Discussion: https://postgr.es/m/18665.1500306372@sss.pgh.pa.us
-
Andrew Dunstan authored
select() for pure timeouts is not portable, and in particular doesn't work on Windows. Discussion: https://postgr.es/m/186943e0-3405-978d-b19d-9d3335427c86@2ndQuadrant.com
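A hedged illustration of the portability issue (not the patched code itself): Winsock's select() reports an error when all three fd sets are empty, so using it as a pure sleep silently fails on Windows, whereas a small wrapper over the native sleep primitives works everywhere. Inside the PostgreSQL tree the existing convenience routine for this is pg_usleep().

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <time.h>
    #endif

    static void
    portable_sleep_ms(unsigned int ms)
    {
    #ifdef _WIN32
        Sleep(ms);                              /* milliseconds */
    #else
        struct timespec ts;

        ts.tv_sec = ms / 1000;
        ts.tv_nsec = (long) (ms % 1000) * 1000000L;
        nanosleep(&ts, NULL);
    #endif
    }

    int
    main(void)
    {
        /* sleep 100 ms without relying on select()'s timeout behavior */
        portable_sleep_ms(100);
        return 0;
    }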
-
Robert Haas authored
One, logging for CREATE INDEX was oblivious to the fact that when an unlogged table is created, *only* operations on the init fork should be logged. Two, init fork buffers need to be flushed after they are written; otherwise, a filesystem-level copy following recovery may do the wrong thing. (There may be a better fix for this issue than the one used here, but this is transposed from the similar logic already present in XLogReadBufferForRedoExtended, and a broader refactoring after beta2 seems inadvisable.) Amit Kapila, reviewed by Ashutosh Sharma, Kyotaro Horiguchi, and Michael Paquier Discussion: http://postgr.es/m/CAA4eK1JpcMsEtOL_J7WODumeEfyrPi7FPYHeVdS7fyyrCrgp4w@mail.gmail.com
-
Noah Misch authored
Doing so was useful in 273c458a but became obsolete when 818fd4a6 caused postgres.exe to provide the relevant symbols. No other loadable module links to libpgcommon directly.
-
- 16 Jul, 2017 2 commits
-
-
Andrew Dunstan authored
-
Andrew Dunstan authored
The previous change didn't adjust the publicly visible taptest function, causing buildfarm failures on bowerbird. Backpatch to 9.4, like that change.
-
- 15 Jul, 2017 2 commits
-
-
Tom Lane authored
Given this code's general eagerness to use subexpressions' output variables as temporary workspace, it's not exactly clear that it is safe for FieldStore to tell a newval subexpression that it can write into the same variable that is being supplied as a potential input. Document the chain of assumptions needed for that to be safe.
-
Tom Lane authored
I got confused about why this function doesn't need to recursively search the expression tree for a CaseTestExpr node. After figuring that out, add a comment to save the next person some time.
-
- 14 Jul, 2017 4 commits
-
-
Alvaro Herrera authored
The original wording was impossible to translate correctly. Discussion: https://postgr.es/m/20170523002827.lzc2jkzh2gubclqb@alvherre.pgsql
-
Tom Lane authored
Add missing infrastructure for this node type, notably in ruleutils.c where its lack could demonstrably cause EXPLAIN to fail. Add outfuncs/readfuncs support. (outfuncs support is useful today for debugging purposes. The readfuncs support may never be needed, since at present it would only matter for parallel query and NextValueExpr should never appear in a parallelizable query; but it seems like a bad idea to have a primnode type that isn't fully supported here.) Teach planner infrastructure that NextValueExpr is a volatile, parallel-unsafe, non-leaky expression node with cost cpu_operator_cost. Given its limited scope of usage, there *might* be no live bug today from the lack of that knowledge, but it's certainly going to bite us on the rear someday. Teach pg_stat_statements about the new node type, too. While at it, also teach cost_qual_eval() that MinMaxExpr, SQLValueFunction, XmlExpr, and CoerceToDomain should be charged as cpu_operator_cost. Failing to do this for SQLValueFunction was an oversight in my commit 0bb51aa9. The others are longer-standing oversights, but no time like the present to fix them. (In principle, CoerceToDomain could have cost much higher than this, but it doesn't presently seem worth trying to examine the domain's constraints here.) Modify execExprInterp.c to execute NextValueExpr as an out-of-line function; it seems quite unlikely to me that it's worth insisting that it be inlined in all expression eval methods. Besides, providing the out-of-line function doesn't stop anyone from inlining if they want to. Adjust some places where NextValueExpr support had been inserted with the aid of a dartboard rather than keeping it in the same order as elsewhere. Discussion: https://postgr.es/m/23862.1499981661@sss.pgh.pa.us
-
Tom Lane authored
In the frontend Makefiles that pull in libpgfeutils, we'd generally done it like this: LDFLAGS += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport) That method is badly broken, as seen in bug #14742 from Chris Ruprecht. The -L flag for src/fe_utils ends up being placed after whatever random -L flags are in LDFLAGS already. That puts us at risk of pulling in libpgfeutils.a from some previous installation rather than the freshly built one in src/fe_utils. Also, the lack of an "override" is hazardous if someone tries to specify some LDFLAGS on the make command line. The correct way to do it is like this: override LDFLAGS := -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport) $(LDFLAGS) so that libpgfeutils, along with libpq, libpgport, and libpgcommon, are guaranteed to be pulled in from the build tree and not from any referenced system directory, because their -L flags will appear first. In some places we'd been even lazier and done it like this: LDFLAGS += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpq which is subtly wrong in an additional way: on platforms where we can't restrict the symbols exported by libpq.so, it allows libpgfeutils to latch onto libpgport and libpgcommon symbols from libpq.so, rather than directly from those static libraries as intended. This carries hazards like those explained in the comments for the libpq_pgport macro. In addition to fixing the broken libpgfeutils usages, I tried to standardize on using $(libpq_pgport) like so: override LDFLAGS := $(libpq_pgport) $(LDFLAGS) even where libpgfeutils is not in the picture. This makes no difference right now but will hopefully discourage future mistakes of the same ilk. And it's more like the way we handle CPPFLAGS in libpq-using Makefiles. In passing, just for consistency, make pgbench include PTHREAD_LIBS the same way everyplace else does, ie just after LIBS rather than in some random place in the command line. This might have practical effect if there are -L switches in that macro on some platform. It looks to me like the MSVC build scripts are not affected by this error, but someone more familiar with them than I might want to double check. Back-patch to 9.6 where libpgfeutils was introduced. In 9.6, the hazard this error creates is that a reinstallation might link to the prior installation's copy of libpgfeutils.a and thereby fail to absorb a minor-version bug fix. Discussion: https://postgr.es/m/20170714125106.9231.13772@wrigleys.postgresql.org
-
Heikki Linnakangas authored
When writing a backup to stdout with pg_basebackup on Windows, set stdout to binary mode. Any CR bytes in the output will otherwise be output incorrectly as CR+LF. In passing, standardize on using "_setmode" instead of "setmode", for the sake of consistency. They both do the same thing, but according to MSDN documentation, setmode is deprecated. Fixes bug #14634, reported by Henry Boehlert. Patch by Haribabu Kommi. Backpatch to all supported versions. Discussion: https://www.postgresql.org/message-id/20170428082818.24366.13134@wrigleys.postgresql.org
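A minimal standalone illustration of the Windows fix (not the pg_basebackup patch itself): switch stdout to binary mode before writing raw data, so CR bytes are not expanded to CR+LF by the text-mode translation layer.

    #include <stdio.h>
    #ifdef _WIN32
    #include <io.h>
    #include <fcntl.h>
    #endif

    int
    main(void)
    {
    #ifdef _WIN32
        /* _setmode is the non-deprecated spelling; setmode is the old alias */
        _setmode(_fileno(stdout), _O_BINARY);
    #endif
        fputc('\r', stdout);    /* emitted as a single byte, not CR+LF */
        return 0;
    }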
-
- 13 Jul, 2017 3 commits
-
-
Tom Lane authored
The grammar will only accept something syntactically similar to a function call in a function-in-FROM expression. However, there are various ways to input something that ruleutils.c won't deparse that way, potentially leading to a view or rule that fails dump/reload. Fix by inserting a dummy CAST around anything that isn't going to deparse as a function (which is one of the ways to get something like that in there in the first place). In HEAD, also make use of the infrastructure added by this to avoid emitting unnecessary parentheses in CREATE INDEX deparsing. I did not change that in back branches, thinking that people might find it to be unexpected/unnecessary behavioral change. In HEAD, also fix incorrect logic for when to add extra parens to partition key expressions. Somebody apparently thought they could get away with simpler logic than pg_get_indexdef_worker has, but they were wrong --- a counterexample is PARTITION BY LIST ((a[1])). Ignoring the prettyprint flag for partition expressions isn't exactly a nice solution anyway. This has been broken all along, so back-patch to all supported branches. Discussion: https://postgr.es/m/10477.1499970459@sss.pgh.pa.us
-
Alvaro Herrera authored
The new functions return a list of files in the corresponding directory, not the name of the directory itself. Pointed out by Gianni Ciolli.
-
Heikki Linnakangas authored
The race condition goes like this: 1. GetNewTransactionId advances nextXid e.g. from 100 to 101 2. GetOldestActiveTransactionId reads the new nextXid, 101 3. GetOldestActiveTransactionId loops through the proc array. There are no active XIDs there, so it returns 101 as the oldest active XID. 4. GetNewTransactionId stores XID 100 to MyPgXact->xid So, GetOldestActiveTransactionId returned XID 101, even though 100 only just started and is surely still running. This would be hard to hit in practice, and even harder to spot any ill effect if it happens. GetOldestActiveTransactionId is only used when creating a checkpoint in a master server, and the race condition can only happen on an online checkpoint, as there are no backends running during a shutdown checkpoint. The oldestActiveXid value of an online checkpoint is only used when starting up a hot standby server, to determine the starting point where pg_subtrans is initialized from. For the race condition to happen, there must be no other XIDs in the proc array that would hold back the oldest-active XID value, which means that the missed XID must be a top transaction's XID. However, pg_subtrans is not used for top XIDs, so I believe an off-by-one error is in fact inconsequential. Nevertheless, let's fix it, as it's clearly wrong and the fix is simple. This has been wrong ever since hot standby was introduced, so backport to all supported versions. Discussion: https://www.postgresql.org/message-id/e7258662-82b6-7a45-56d4-99b337a32bf7@iki.fi
-
- 12 Jul, 2017 7 commits
-
-
Tom Lane authored
Further investigation shows that ruleutils isn't quite up to speed either for cases where we have a domain-over-array: it needs to be prepared to look past a CoerceToDomain at the top level of field and element assignments, else it decompiles them incorrectly. Potentially this would result in failure to dump/reload a rule, if it looked like the one in the new test case. (I also added a test for EXPLAIN; that output isn't broken, but clearly we need more test coverage here.) Like commit b1cb32fb, this bug is reachable in cases we already support, so back-patch all the way.
-
Heikki Linnakangas authored
compute_tsvector_stats() detoasted and kept in memory every tsvector value in the sample, but that can be a lot of memory. The original bug report described a case using over 10 gigabytes, with statistics target of 10000 (the maximum). To fix, allocate a separate copy of just the lexemes that we keep around, and free the detoasted tsvector values as we go. This adds some palloc/pfree overhead, when you have a lot of distinct lexemes in the sample, but it's better than running out of memory. Fixes bug #14654 reported by James C. Reviewed by Tom Lane. Backport to all supported versions. Discussion: https://www.postgresql.org/message-id/20170514200602.1451.46797@wrigleys.postgresql.org
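A generic sketch of the copy-then-free pattern described above (illustrative names, not the ANALYZE code): keep a small private copy of just the strings that are needed, so each large temporary value can be released immediately instead of being held for the whole sample.

    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char  **lexemes;
        int     nlexemes;
    } LexemeCopy;

    /* Copy only the strings we need out of a large temporary value. */
    static LexemeCopy
    copy_lexemes(char *const *src, int n)
    {
        LexemeCopy  result;

        result.nlexemes = n;
        result.lexemes = malloc((size_t) n * sizeof(char *));
        for (int i = 0; i < n; i++)
            result.lexemes[i] = strdup(src[i]); /* small, private copies */
        return result;
    }

    int
    main(void)
    {
        /* stand-in for one big detoasted value containing a few lexemes */
        char       *big_value[] = {strdup("postgres"), strdup("tsvector"), strdup("stats")};
        LexemeCopy  kept = copy_lexemes(big_value, 3);

        /* free the big value right away, keeping only the cheap copies */
        for (int i = 0; i < 3; i++)
            free(big_value[i]);

        for (int i = 0; i < kept.nlexemes; i++)
            free(kept.lexemes[i]);
        free(kept.lexemes);
        return 0;
    }

The extra palloc/pfree traffic mentioned in the commit corresponds to the per-lexeme copies here; the payoff is that only the kept strings stay resident.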
-
Alvaro Herrera authored
Otherwise, the script output has a lot of pointless warnings. This was forgotten in 9def031b.
-
Tom Lane authored
If the number of tuples in the heap exceeds approximately INT_MAX/2, this loop's calculation "2*i+1" could overflow, resulting in a crash. Fix it by using unsigned int rather than int for the relevant local variables; that shouldn't cost anything extra on any popular hardware. Per bug #14722 from Sergey Koposov. Original patch by Sergey Koposov, modified by me per a suggestion from Heikki Linnakangas to use unsigned int not int64. Back-patch to 9.4, where tuplesort.c grew the ability to sort as many as INT_MAX tuples in-memory (commit 263865a4). Discussion: https://postgr.es/m/20170629161637.1478.93109@wrigleys.postgresql.org
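A small standalone demonstration of the overflow class being fixed (not tuplesort.c itself): with up to INT_MAX entries, the heap child-index computation 2*i+1 can exceed INT_MAX, which is undefined behavior for signed int but well defined and still in range for unsigned int.

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* an index just past the point where signed arithmetic would overflow */
        unsigned int i = (unsigned int) (INT_MAX / 2) + 1;

        /* with int, 2*i+1 would overflow (undefined behavior); unsigned is fine */
        unsigned int child = 2 * i + 1;

        printf("parent %u -> left child %u\n", i, child);
        return 0;
    }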
-
Heikki Linnakangas authored
Kyotaro Horiguchi Discussion: https://www.postgresql.org/message-id/20170711.163441.241981736.horiguchi.kyotaro@lab.ntt.co.jp
-
Heikki Linnakangas authored
Commit 14e8803f removed the locking in SyncRepWaitForLSN, but that introduced a race condition, where SyncRepWaitForLSN might see syncRepState already set to SYNC_REP_WAIT_COMPLETE while the process was not yet removed from the queue. That tripped the assertion that the process should no longer be in the queue. Reorder the operations in SyncRepWakeQueue to remove the process from the queue first, and update syncRepState only after that, and add a memory barrier in between to make sure the operations are made visible to other processes in that order. Fixes bug #14721 reported by Const Zhang. Analysis and fix by Thomas Munro. Backpatch down to 9.5, where the locking was removed. Discussion: https://www.postgresql.org/message-id/20170629023623.1480.26508%40wrigleys.postgresql.org
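A self-contained analogue of the reordering, written with C11 atomics rather than PostgreSQL's barrier macros (names here are illustrative): the waker clears the "queued" flag first and only then publishes the completed state with release/acquire ordering, so a waiter that observes the completed state can rely on already having been unlinked.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        atomic_bool queued;     /* is the waiter still linked in the queue? */
        atomic_int  state;      /* 0 = waiting, 1 = wait complete */
    } Waiter;

    static void
    wake(Waiter *w)
    {
        /* unlink first ... */
        atomic_store_explicit(&w->queued, false, memory_order_relaxed);
        /* ... then publish completion; release orders the store above before it */
        atomic_store_explicit(&w->state, 1, memory_order_release);
    }

    static void
    check(Waiter *w)
    {
        /* acquire pairs with the release store in wake() */
        if (atomic_load_explicit(&w->state, memory_order_acquire) == 1)
        {
            /* guaranteed: the queued flag is already seen as cleared here */
            printf("complete, queued=%d\n",
                   (int) atomic_load_explicit(&w->queued, memory_order_relaxed));
        }
    }

    int
    main(void)
    {
        Waiter      w;

        atomic_init(&w.queued, true);
        atomic_init(&w.state, 0);
        wake(&w);
        check(&w);
        return 0;
    }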
-
Heikki Linnakangas authored
Mostly in the new subscription-related commands. Backport the few that were also present in older versions. Thomas Munro Discussion: https://www.postgresql.org/message-id/CAEepm=3CyW1QmXcXJXmqiJXtXzFDc8SvSfnxkEGD3Bkv2SrkeQ@mail.gmail.com
-
- 11 Jul, 2017 1 commit
-
-
Tom Lane authored
We allow INSERT and UPDATE commands to assign to the same column more than once, as long as the assignments are to subfields or elements rather than the whole column. However, this failed when the target column was a domain over array rather than plain array. Fix by teaching process_matched_tle() to look through CoerceToDomain nodes, and add relevant test cases. Also add a group of test cases exercising domains over array of composite. It's doubtless accidental that CREATE DOMAIN allows this case while not allowing straight domain over composite; but it does, so we'd better make sure we don't break it. (I could not find any documentation mentioning either side of that, so no doc changes.) It's been like this for a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
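A hypothetical mock of the "look through the coercion" step (node types invented for illustration; the real code walks PostgreSQL parse nodes): before deciding whether two assignments target subfields or elements of the same column, strip any domain-coercion wrappers so the underlying assignment node is what gets examined.

    #include <stdio.h>

    typedef enum
    {
        NODE_FIELDSTORE,        /* assignment to a subfield */
        NODE_SUBSCRIPT,         /* assignment to an array element */
        NODE_COERCE_TO_DOMAIN   /* wrapper added for a domain over array */
    } NodeTag;

    typedef struct Node
    {
        NodeTag     tag;
        struct Node *arg;       /* child expression, if any */
    } Node;

    /* Descend through any number of domain-coercion wrappers. */
    static Node *
    strip_domain_coercions(Node *node)
    {
        while (node != NULL && node->tag == NODE_COERCE_TO_DOMAIN)
            node = node->arg;
        return node;
    }

    int
    main(void)
    {
        Node        subscript = {NODE_SUBSCRIPT, NULL};
        Node        coercion = {NODE_COERCE_TO_DOMAIN, &subscript};

        printf("underlying tag = %d\n",
               (int) strip_domain_coercions(&coercion)->tag);
        return 0;
    }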
-
- 10 Jul, 2017 10 commits
-
-
Tom Lane authored
-
Alvaro Herrera authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: c5a8de3653bb1af6b0eb41cc6bf090c5522df52b
-
Tom Lane authored
We've heard occasional reports of backend launch failing because pgwin32_ReserveSharedMemoryRegion() fails, indicating that something has already used that address space in the child process. It's not very clear what, given that we disable ASLR in Windows builds, but suspicion falls on antivirus products. It'd be better if we didn't have to disable ASLR, anyway. So let's try to ameliorate the problem by retrying the process launch after such a failure, up to 100 times. Patch by me, based on previous work by Amit Kapila and others. This is a longstanding issue, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAA4eK1+R6hSx6t_yvwtx+NRzneVp+MRqXAdGJZChcau8Uij-8g@mail.gmail.com
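A minimal sketch of the bounded-retry idea (hypothetical helper names, not the actual backend-launch code): if the launch step fails because the expected address range is unexpectedly occupied in the new process, simply try again, giving up only after a fixed number of attempts.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_LAUNCH_ATTEMPTS 100

    /* stand-in for the real launch step, which can fail transiently */
    static bool
    try_launch(void)
    {
        static int  calls = 0;

        return ++calls > 1;     /* pretend only the first attempt fails */
    }

    static bool
    launch_with_retry(void)
    {
        for (int attempt = 1; attempt <= MAX_LAUNCH_ATTEMPTS; attempt++)
        {
            if (try_launch())
                return true;
            fprintf(stderr, "launch attempt %d failed, retrying\n", attempt);
        }
        return false;           /* give up after the last attempt */
    }

    int
    main(void)
    {
        return launch_with_retry() ? 0 : 1;
    }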
-
Heikki Linnakangas authored
Masahiko Sawada Discussion: https://www.postgresql.org/message-id/CAD21AoBCwcTNMdrVWq8T0hoOs2mWSYq9PRJ_fr6SH8HdO+m=0g@mail.gmail.com
-
Heikki Linnakangas authored
As reported by Arthur Zakirov, GCC 7.1 complained about this with -Wpointer-compare. Discussion: https://www.postgresql.org/message-id/CAKNkYnybV_NFVacGbW=VspzAo3TwRJFNi+9iBob66YqQMZopwg@mail.gmail.com
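A generic example of the mistake class that GCC 7.1's -Wpointer-compare flags (not the specific PostgreSQL code involved): comparing the pointer itself against a character constant instead of dereferencing it first.

    #include <stddef.h>

    static int
    is_empty(const char *s)
    {
        /* writing "s == '\0'" here would compare the pointer itself against
         * the zero character constant, which is what -Wpointer-compare warns
         * about; the intended test dereferences the pointer */
        return s == NULL || *s == '\0';
    }

    int
    main(void)
    {
        return is_empty("") ? 0 : 1;
    }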
-
Andrew Gierth authored
Commit c46c0e52 failed to pass the TransitionCaptureState object to ExecARInsertTriggers() in the case where it's using heap_multi_insert and there are indexes. Repair. Thomas Munro, from a report by David Fetter Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
-
Heikki Linnakangas authored
Also fix two other issues, while we're at it: * In the error message on connection failure, if multiple network addresses were given as the host option, as in "host=127.0.0.1,127.0.0.2", the error message printed the address twice. * If there were more ports than hostnames, the error message would always claim that there was just one port too many, even if there were several. For example, if you gave 2 hostnames and 5 ports, the error message claimed that you gave 2 hostnames and 3 ports. Discussion: https://www.postgresql.org/message-id/10badbc6-4d5a-a769-623a-f7ada43e14dd@iki.fi
-
Tom Lane authored
Original wording had confusingly vague antecedent for "they", so replace that with a more repetitive but clearer formulation. In passing, make the link to the installation requirements section more specific. Per gripe from Martin Mai, though this is not the fix he initially proposed. Discussion: https://postgr.es/m/CAN_NWRu-cWuNaiXUjV3m4H-riWURuPW=j21bSaLADs6rjjzXgQ@mail.gmail.com
-
Tom Lane authored
Improve many item descriptions, improve markup, relocate some items that seemed to be in the wrong section.
-