- 09 Nov, 2009 1 commit
-
-
Tom Lane authored
like the core parser's code. In particular, track locations at the character rather than line level during parsing, allowing many more parse-time error conditions to be reported with precise error pointers rather than just "near line N".

Also, exploit the fact that we no longer need to substitute $N for variable references by making extracted SQL queries and expressions be exact copies of subranges of the function text, rather than having random whitespace changes within them. This makes it possible to directly map parse error positions from the core parser onto positions in the function text, which lets us report them without the previous kluge of showing the intermediate internal-query form. (Later it might be good to do that for core parse-analysis errors too, but this patch is just touching plpgsql's lexer/parser, not what happens at runtime.)

In passing, make plpgsql's lexer use palloc not malloc.

These changes make plpgsql's parse-time error reports noticeably nicer (as illustrated by the regression test changes), and will also simplify the planned removal of plpgsql's separate lexer by reducing the impedance mismatch between what it does and what the core lexer does.
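A minimal sketch of the user-visible effect, with a made-up function name: a plpgsql parse error can now be reported with a character-level error cursor into the function source rather than only a line number.

```sql
-- Hypothetical example: the mismatched END LOOP is a plpgsql parse-time
-- error; with character-level location tracking it can be reported with
-- an error pointer at the offending token instead of just "near line N".
CREATE FUNCTION broken() RETURNS int LANGUAGE plpgsql AS $$
BEGIN
    IF true THEN
        RETURN 1;
    END LOOP;   -- should be END IF
END
$$;
```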
-
- 07 Nov, 2009 2 commits
-
-
Tom Lane authored
This was long ago superseded by the standard build process and main SGML documentation.
-
Tom Lane authored
* Pull the responsibility for %TYPE and %ROWTYPE out of the scanner, letting read_datatype manage it instead.
* Avoid unnecessary scanner-driven lookups of plpgsql variables in places where it's not needed, which is actually most of the time; we do not need it in DECLARE sections nor in text that is a SQL query or expression.
* Rationalize the set of token types returned by the scanner: distinguishing T_SCALAR, T_RECORD, T_ROW seems to complicate the grammar in more places than it simplifies it, so merge these into one token type T_DATUM; but split T_ERROR into T_DBLWORD and T_TRIPWORD for clarity and simplicity of later processing.

Some of this will need to be revisited again when we try to make plpgsql use the core scanner, but this patch gets some of the bigger stumbling blocks out of the way.
-
- 06 Nov, 2009 2 commits
-
-
Andrew Dunstan authored
Keep track of language's trusted flag in InlineCodeBlock. Needed to support DO blocks for languages that have both trusted and untrusted variants.
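An illustration of why the flag matters, using plperl/plperlu as a stand-in for any language with trusted and untrusted variants (whether a particular PL already provides an inline handler is outside this commit; the language names here are assumptions for the sketch):

```sql
-- The inline handler can now tell which variant invoked it via the
-- trusted flag carried in InlineCodeBlock.
DO LANGUAGE plperl  $$ elog(NOTICE, 'trusted variant');   $$;
DO LANGUAGE plperlu $$ elog(NOTICE, 'untrusted variant'); $$;
```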
-
Tom Lane authored
into SQL expressions, to using the newly added parser callback hooks. This allows us to do the substitutions in a more semantically-aware way: a variable reference will only be recognized where it can validly go, ie, a place where a column value or parameter would be legal, instead of the former behavior that would replace any textual match including table names and column aliases (leading to syntax errors later on). A release-note-worthy fine point is that plpgsql variable names that match fully-reserved words will now need to be quoted.

This commit preserves the former behavior that variable references take precedence over any possible match to a column name. The infrastructure is in place to support the reverse precedence or throwing an error on ambiguity, but those behaviors aren't accessible yet.

Most of the code changes here are associated with making the namespace data structure persist so that it can be consulted at runtime, instead of throwing it away at the end of initial function parsing. The plpgsql scanner is still doing name lookups, but that behavior is now irrelevant for SQL expressions. A future commit will deal with removing unnecessary lookups.
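A sketch of the release-note-worthy point, using OFFSET (a fully-reserved keyword) as the variable name; the function itself is hypothetical:

```sql
CREATE FUNCTION shifted(x int) RETURNS int LANGUAGE plpgsql AS $$
DECLARE
    "offset" int := 10;   -- variable name collides with a reserved word
BEGIN
    -- The reference must now be quoted; an unquoted offset would be
    -- taken as the OFFSET keyword by the core parser.
    RETURN x + "offset";
END
$$;
```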
-
- 05 Nov, 2009 4 commits
-
-
Tom Lane authored
it works just as well to have them be ordinary identifiers, and this gets rid of a number of ugly special cases. Plus we aren't interfering with non-rule usage of these names. catversion bump because the names change internally in stored rules.
-
Peter Eisentraut authored
Pointed out by Debian's lintian.
-
Tom Lane authored
behavior, and is so little used that no one has been interested in fixing it. To ensure that possible uses are covered, remove the ALIAS declaration's arbitrary restriction that only $n identifiers can be aliased. (We could alternatively make RENAME act just like ALIAS, but per discussion having two different ways to do the same thing is probably more confusing than helpful.)
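A short sketch of the relaxed ALIAS rule (trigger function and variable names are illustrative): aliasing is no longer limited to $n parameters, which covers the main use case RENAME served.

```sql
CREATE FUNCTION audit_row() RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
    changed ALIAS FOR NEW;   -- aliasing a non-$n identifier is now allowed
BEGIN
    RETURN changed;
END
$$;
```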
-
Tom Lane authored
that are not handled by find_coercion_pathway, notably composite->RECORD. Now that 8.4 supports composites as primary keys, it's worth dealing with this case.
-
- 04 Nov, 2009 5 commits
-
-
Tom Lane authored
tarballs under 100 characters. This should avoid failures with certain untarring tools (WinZip and Midnight Commander have been mentioned as likely suspects). Per my proposal of yesterday. catversion bumped since the initial contents of pg_proc change.
-
Tom Lane authored
at the first keyword of the expression, rather than drawing a rather artificial distinction between the ESCAPE subclause and the rest. Per gripe from Gokulakannan Somasundaram and subsequent discussion.
-
Tom Lane authored
As proof of concept, modify plpgsql to use the hooks. plpgsql is still inserting $n symbols textually, but the "back end" of the parsing process now goes through the ParamRef hook instead of using a fixed parameter-type array, and then execution only fetches actually-referenced parameters, using a hook added to ParamListInfo. Although there's a lot left to be done in plpgsql, this already cures the "if (TG_OP = 'INSERT' and NEW.foo ...)" problem, as illustrated by the changed regression test.
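A sketch mirroring the example cited in the message (table and column names are placeholders): since parameter values are now fetched only when actually evaluated, the NEW reference no longer has to be resolvable when the trigger fires for a DELETE.

```sql
CREATE FUNCTION flag_check() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- Previously this failed on DELETE because NEW was fetched up front,
    -- even though the right-hand side of the AND is never reached.
    IF (TG_OP = 'INSERT' AND NEW.foo > 0) THEN
        RAISE NOTICE 'positive foo inserted';
    END IF;
    RETURN NULL;
END
$$;
```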
-
Heikki Linnakangas authored
Windows doesn't do signal processing like other platforms do. It never really worked, but recent changes to the signal handling made it crash. This fixes bug #4961. Patch by Fujii Masao.
-
Heikki Linnakangas authored
Itagaki Takahiro, with small changes by me and Simon.
-
- 03 Nov, 2009 5 commits
-
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
When the elog functions (plpy.info etc.) get a single argument, just print that argument instead of printing the single-member tuple like ('foo',).
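A small sketch of the changed behavior (the function name is made up):

```sql
CREATE FUNCTION report() RETURNS void LANGUAGE plpythonu AS $$
plpy.info('foo')           # now reported as: INFO:  foo
plpy.info('foo', 'bar')    # multiple arguments still print as a tuple
$$;
```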
-
Peter Eisentraut authored
The rationale is that view definitions tend to be long and obscure the main information about the view.
-
Peter Eisentraut authored
In PLy_output(), when the elog() call in the TRY branch throws an exception (this can happen when a statement timeout kicks in, for example), the PyErr_SetString() call in the CATCH branch can cause a segfault, because the Py_XDECREF(so) call before it releases memory that is still used by the sv variable that PyErr_SetString() uses as argument, because sv points into memory owned by so. Backpatched back to 8.0, where this code was introduced. I also threw in a couple of volatile declarations for variables that are used before and after the TRY. I don't think they caused the crash that I observed, but they could become issues.
-
- 01 Nov, 2009 2 commits
-
-
Tom Lane authored
that it can scribble on scan->xs_ctup.t_self while following HOT chains, so we can't rely on that to stay valid between hashgettuple() calls. Introduce a private variable in HashScanOpaque, instead.
-
Tom Lane authored
hash indexes keep entries sorted by hash value. First, the original plans for concurrency assumed that insertions would happen only at the end of a page, which is no longer true; this could cause scans to transiently fail to find index entries in the presence of concurrent insertions. We can compensate by teaching scans to re-find their position after re-acquiring read locks.

Second, neither the bucket split nor the bucket compaction logic had been fixed to preserve hashvalue ordering, so application of either of those processes could lead to permanent corruption of an index, in the sense that searches might fail to find entries that are present. This patch fixes the split and compaction logic to preserve hashvalue ordering, but it cannot do anything about pre-existing corruption. We will need to recommend reindexing all hash indexes in the 8.4.2 release notes.

To buy back the performance loss hereby induced in split and compaction, fix them to use PageIndexMultiDelete instead of retail PageIndexDelete operations. We might later want to do something with qsort'ing the page contents rather than doing a binary search for each insertion, but that seemed more invasive than I cared to risk in a back-patch.

Per bug #5157 from Jeff Janes and subsequent investigation.
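For an installation that may already have hit the pre-existing corruption described above, the recovery step is an ordinary reindex (the index name is a placeholder):

```sql
REINDEX INDEX some_hash_index;   -- rebuilds the hash index with correctly ordered entries
```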
-
- 31 Oct, 2009 2 commits
-
-
Tom Lane authored
plperl_call_handler, in both the normal and error-exit paths. Per report from Alexey Klyukin.
-
Tom Lane authored
recent proposal. As proof of concept, remove knowledge of Params from the core parser, arranging for them to be handled entirely by parser hook functions. It turns out we need an additional hook for that --- I had forgotten about the code that handles inferring a parameter's type from context. This is a preliminary step towards letting plpgsql handle its variables through parser hooks. Additional work remains to be done to expose the facility through SPI, but I think this is all the changes needed in the core parser.
-
- 30 Oct, 2009 1 commit
-
-
Tom Lane authored
The original coding ensured nbuckets and nbatch didn't exceed INT_MAX, which while not insane on its own terms did nothing to protect subsequent code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size estimates might well be planner error rather than reality, it seems best to constrain the initial sizes to be not more than work_mem/sizeof(pointer), thus ensuring the allocated arrays don't exceed work_mem. We will allow nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches calls, but we should still guard against integer overflow in those palloc requests. Per bug #5145 from Bernt Marius Johnsen. Although the given test case only seems to fail back to 8.2, previous releases have variants of this issue, so patch all supported branches.
-
- 29 Oct, 2009 1 commit
-
-
Peter Eisentraut authored
-
- 28 Oct, 2009 4 commits
-
-
Tom Lane authored
adding the ModifyTable node type --- I had been thinking ModifyTable should replace Append as a special case in push_plan(), but actually both of them have to be special-cased.
-
Tom Lane authored
This has always worked, up until somebody's thinko here: http://archives.postgresql.org/pgsql-committers/2009-04/msg00233.php Per bug #5143 from Piotr Wolinski.
-
Tom Lane authored
when FOR UPDATE is propagated down into a sub-select expanded from a view. Similar bug to parser's isLockedRel issue that I fixed yesterday; likewise seems not quite worth the effort to back-patch.
-
Tom Lane authored
underneath the Limit node, not atop it. This fixes the old problem that such a query might unexpectedly return fewer rows than the LIMIT says, due to LockRows discarding updated rows.

There is a related problem that LockRows might destroy the sort ordering produced by earlier steps; but fixing that by pushing LockRows below Sort would create serious performance problems that are unjustified in many real-world applications, as well as potential deadlock problems from locking many more rows than expected. Instead, keep the present semantics of applying FOR UPDATE after ORDER BY within a single query level; but allow the user to specify the other way by writing FOR UPDATE in a sub-select. To make that work, track whether FOR UPDATE appeared explicitly in sub-selects or got pushed down from the parent, and don't flatten a sub-select that contained an explicit FOR UPDATE.
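A sketch of the two behaviors now available (table and column names are illustrative):

```sql
-- Default semantics: FOR UPDATE is applied after ORDER BY, and with
-- LockRows placed below Limit the query no longer comes up short of 5
-- rows merely because some candidates were concurrently updated.
SELECT * FROM jobs ORDER BY priority LIMIT 5 FOR UPDATE;

-- To lock before the outer ORDER BY instead, write FOR UPDATE in a
-- sub-select, which is no longer flattened away:
SELECT * FROM (SELECT * FROM jobs WHERE state = 'pending' FOR UPDATE) j
ORDER BY priority LIMIT 5;
```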
-
- 27 Oct, 2009 3 commits
-
-
Tom Lane authored
that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair. The RI cascade triggers suppress that overhead on the assumption that they are always run non-deferred, so it's possible to violate the condition if someone mistakenly changes pg_trigger to mark such a trigger deferred. We don't really care about supporting that, but throwing an error instead of crashing seems desirable. Per report from Marcelo Costa.
-
Tom Lane authored
for example in

    WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE

the FOR UPDATE will now affect bar but not foo. This is more useful and consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE into the WITH query but always failed due to assorted implementation restrictions. Even though we are in process of removing those restrictions, it seems correct on philosophical grounds to not let the outer query's FOR UPDATE affect the WITH query.

In passing, fix isLockedRel which frequently got things wrong in nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the current query level, not subqueries. This has been broken for a long time, but it doesn't seem worth back-patching further than 8.4 because the actual consequences are minimal. At worst the parser would sometimes get RowShareLock on a relation when it should be AccessShareLock or vice versa. That would only make a difference if someone were using ExclusiveLock concurrently, which no standard operation does, and anyway FOR UPDATE doesn't result in visible changes so it's not clear that the someone would notice any problem. Between that and the fact that FOR UPDATE barely works with subqueries at all in existing releases, I'm not excited about worrying about it.
-
Alvaro Herrera authored
Per note from Zoltan Boszormenyi.
-
- 26 Oct, 2009 4 commits
-
-
Peter Eisentraut authored
files in one run.
-
Peter Eisentraut authored
-
Heikki Linnakangas authored
those accepted by date_in(). I confused Julian day numbers and number of days since the postgres epoch 2000-01-01 in the original patch. I just noticed that it's still easy to get such out-of-range values into the database using to_date or +- operators, but this patch doesn't do anything about those functions. Per report from James Pye.
-
Tom Lane authored
a lot of strange behaviors that occurred in join cases. We now identify the "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the appropriate row into each scan node in the rechecking plan, forcing it to emit only that one row. The former behavior could rescan the whole of each joined relation for each recheck, which was terrible for performance, and what's much worse could result in duplicated output tuples.

Also, the original implementation of EvalPlanQual could not re-use the recheck execution tree --- it had to go through a full executor init and shutdown for every row to be tested. To avoid this overhead, I've associated a special runtime Param with each LockRows or ModifyTable plan node, and arranged to make every scan node below such a node depend on that Param. Thus, by signaling a change in that Param, the EPQ machinery can just rescan the already-built test plan.

This patch also adds a prohibition on set-returning functions in the targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the duplicate-output-tuple problem. It seems fairly reasonable since the other restrictions on SELECT FOR UPDATE are meant to ensure that there is a unique correspondence between source tuples and result tuples, which an output SRF destroys as much as anything else does.
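A sketch of the new restriction described in the last paragraph (the table name is hypothetical):

```sql
-- A set-returning function in the target list of SELECT FOR UPDATE/SHARE
-- is now rejected, since it would break the one-to-one correspondence
-- between source tuples and result tuples.
SELECT generate_series(1, 3), id FROM accounts FOR UPDATE;   -- now an error
```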
-
- 23 Oct, 2009 1 commit
-
-
Peter Eisentraut authored
child tables. This was found to be useless and confusing in virtually all cases, and also contrary to the SQL standard.
-
- 21 Oct, 2009 3 commits
-
-
Tom Lane authored
style by default. Per discussion, there seems to be hardly anything that really relies on being able to change the regex flavor, so the ability to select it via embedded options ought to be enough for any stragglers. Also, if we didn't remove the GUC, we'd really be morally obligated to mark the regex functions non-immutable, which'd possibly create performance issues.
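With the GUC gone, per-expression selection of a different regex flavor still works through embedded options, for example:

```sql
SELECT 'a.c' ~ '(?q)a.c' AS literal_match,   -- (?q): rest of the pattern is a literal string
       'abc' ~ '(?e)a.c' AS ere_match;       -- (?e): rest of the pattern is a plain ERE
```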
-
Tom Lane authored
Per recent discussion, add_missing_from has been deprecated for long enough to consider removing, and it's getting in the way of planned parser refactoring. The system now always behaves as though add_missing_from were OFF.
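A sketch of the adjustment required by queries that relied on the old implicit-FROM behavior (table names are illustrative):

```sql
-- With add_missing_from effectively always OFF, this now fails with a
-- "missing FROM-clause entry" error instead of silently adding t2:
--   SELECT t1.id FROM t1 WHERE t1.id = t2.id;
-- Every referenced table must be listed explicitly:
SELECT t1.id FROM t1, t2 WHERE t1.id = t2.id;
```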
-
Peter Eisentraut authored
-