- 06 Aug, 2009 2 commits
-
Tom Lane authored
by supporting conversions in places that used to demand exact rowtype match. Since this issue is certain to come up elsewhere (in fact, already has, in ExecEvalConvertRowtype), factor out the support code into new core functions for tuple conversion. I chose to put these in a new source file since heaptuple.c is already overly long. Heavily revised version of a patch by Pavel Stehule.
-
Magnus Hagander authored
backend startup on Win32. Instead, log the error and just forget about the potentially dangling process, since we can't do anything about it anyway.
-
- 05 Aug, 2009 5 commits
-
Alvaro Herrera authored
This patch adds declarations so that they end up in section 3, and adds them to the Makefiles so they get installed. Also, some synopses needed reflowing so that they look nice in 80-column terminals.
-
Tom Lane authored
Sergey Karpov
-
Heikki Linnakangas authored
fsync() fails, say "file" rather than "relation" when printing the filename. This makes messages that display block numbers a bit confusing. For example, in the message 'could not read block 150000 of file "base/1234/5678.1"', 150000 is the block number from the beginning of the relation, i.e. from segment 0, not the 150000th block within that segment. Per discussion, users aren't usually interested in the exact location within the file, so we can live with that. To ease constructing error messages, add a FilePathName(File) function that returns the pathname of a virtual fd.
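For reference, the segment arithmetic behind that example works out as follows, assuming the default 8 kB block size and 1 GB segment size (131072 blocks per segment); the OIDs are just the ones from the sample message:

  -- Block 150000 of relation base/1234/5678 lives in segment 1,
  -- i.e. file "base/1234/5678.1", at block 18928 within that file:
  SELECT 150000 / 131072 AS segment_no,       -- 1
         150000 % 131072 AS block_in_segment; -- 18928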
-
Joe Conway authored
Adds the ability to retrieve async notifications using dblink, via the addition of the function dblink_get_notify(). Original patch by Marcus Kempe, suggestions by Tom Lane and Alvaro Herrera, patch review and adjustments by Joe Conway.
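A minimal usage sketch; the connection name and channel here are made up for illustration, and the exact result columns of dblink_get_notify() are documented with the module:

  SELECT dblink_connect('conn1', 'dbname=mydb');
  SELECT dblink_exec('conn1', 'LISTEN my_channel');
  -- ... once some other session has executed NOTIFY my_channel ...
  SELECT * FROM dblink_get_notify('conn1');  -- fetch pending notifications
  SELECT dblink_disconnect('conn1');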
-
Michael Meskes authored
-
- 04 Aug, 2009 10 commits
-
Peter Eisentraut authored
This switches the man page building process to use the DocBook XSL stylesheet toolchain. The previous targets for Docbook2X are removed. configure has been updated to look for the new tools. The Documentation appendix contains the new build instructions. There are also a few isolated tweaks in the documentation to improve places that came out strangely in the man pages.
-
Tom Lane authored
The previous implementation got it right in most cases but failed in one: if you pg_dump into an archive with standard_conforming_strings enabled, then pg_restore to a script file (not directly to a database), the script will set standard_conforming_strings = on but then emit large object data as nonstandardly-escaped strings. At the moment the code is made to emit hex-format bytea strings when dumping to a script file. We might want to change to old-style escaping for backwards compatibility, but that would be slower and bulkier. If we do, it's just a matter of reimplementing appendByteaLiteral(). This has been broken for a long time, but given the lack of field complaints I'm not going to worry about back-patching.
-
Alvaro Herrera authored
source files that need it.
-
Tom Lane authored
-
Tom Lane authored
run when built with --with-openssl).
-
Tom Lane authored
-
Tom Lane authored
Add bytea.h inclusions as needed. Some of the contrib regression tests need to be de-hexified, too. Per buildfarm.
-
Tom Lane authored
to a server >= 8.5. Per my proposal in discussion of hex-format patch.
-
Tom Lane authored
Both hex format and the traditional "escape" format are automatically handled on input. The output format is selected by the new GUC variable bytea_output. As committed, bytea_output defaults to HEX, which is an *incompatible change*. We will keep it this way for a while for testing purposes, but should consider whether to switch to the more backwards-compatible default of ESCAPE before 8.5 is released. Peter Eisentraut
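In practice the change looks roughly like this; both formats remain accepted on input regardless of the setting:

  SET bytea_output = 'hex';     -- the new default as committed
  SELECT 'abc'::bytea;          -- \x616263
  SET bytea_output = 'escape';  -- the traditional format
  SELECT 'abc'::bytea;          -- abc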
-
Tom Lane authored
already treating it as text anyway, to the point that I couldn't find anything to change except the datatype markings in catalog/*.h. The only effect that the bytea declaration had was to cause byteaout() to be invoked when pg_dump (or another client program) inspected the column value. Since pg_dump wasn't expecting that, but just treating what it got as text, the net result is that dump and reload would mangle any backslashes or non-ASCII characters in the filename string for a C-language function. That is a very long-standing bug, but given the lack of field complaints it doesn't seem worth trying to find a back-patchable fix. We'll just make this change to fix it going forward. This change will also forestall problems after the planned change to let bytea emit hex output instead of escaped characters.
-
- 03 Aug, 2009 3 commits
-
Joe Conway authored
Add family of functions that did not exist earlier, mainly due to historical omission. Original patch by Abhijit Menon-Sen, with review and modifications by Joe Conway. catversion.h bumped.
-
Tom Lane authored
-
Tatsuo Ishii authored
reviewed by Greg Smith and Josh Williams. Following is the proposal from ITAGAKI Takahiro:

Pgbench is a famous tool for measuring postgres performance, but nowadays it does not work well because it cannot use multiple CPUs. The postgres server, on the other hand, uses CPUs very well, so the bottleneck of the workload is *in pgbench*. Multi-threading would be a solution.

The attached patch adds a -j (number of jobs) option to pgbench. If the value N is greater than 1, pgbench runs with N threads, and connections are divided equally among them (e.g. -c64 -j4 => 4 threads with 16 connections each). It runs on POSIX platforms with pthreads and on Windows with win32 threads.

Here are results of multi-threaded pgbench runs on Fedora 11 with an Intel Core i7 (8 logical cores = 4 physical cores * HT). -j8 (8 threads) was the best, with 4.5 times the tps of -j1, which is a typical result:

  $ pgbench -i -s10
  $ pgbench -n -S -c64 -j1  => tps = 11600.158593
  $ pgbench -n -S -c64 -j2  => tps = 17947.100954
  $ pgbench -n -S -c64 -j4  => tps = 26571.124001
  $ pgbench -n -S -c64 -j8  => tps = 52725.470403
  $ pgbench -n -S -c64 -j16 => tps = 38976.675319
  $ pgbench -n -S -c64 -j32 => tps = 28998.499601
  $ pgbench -n -S -c64 -j64 => tps = 26701.877815

Is it acceptable to use pthreads in a contrib module? If ok, I will add the patch to the next commitfest.
-
- 02 Aug, 2009 1 commit
-
Tom Lane authored
Robert Haas
-
- 01 Aug, 2009 2 commits
- 31 Jul, 2009 1 commit
-
Tom Lane authored
This patch gets us out from under the Unix limitation of two user-defined signal types. We already had done something similar for signals directed to the postmaster process; this adds multiplexing for signals directed to backends and auxiliary processes (so long as they're connected to shared memory). As proof of concept, replace the former usage of SIGUSR1 and SIGUSR2 for backends with use of the multiplexing mechanism. There are still some hard-wired definitions of SIGUSR1 and SIGUSR2 for other process types, but getting rid of those doesn't seem interesting at the moment. Fujii Masao
-
- 30 Jul, 2009 2 commits
-
Magnus Hagander authored
header files. Josh Williams
-
Tom Lane authored
This was foreseen to be a good idea long ago, but nobody had got round to doing it. The recent patch for deferred unique constraints made transformConstraintAttrs() ugly enough that I decided it was time. This change will also greatly simplify parsing of deferred CHECK constraints, if anyone ever gets around to implementing that. While at it, add a location field to Constraint, and use that to provide an error cursor for some of the constraint-related error messages.
-
- 29 Jul, 2009 3 commits
-
Tom Lane authored
include a fractional part in the output for MILLISECOND and SECOND cases, rather than truncating the source value. This is what the float-timestamp code has always done, and it was clearly the code author's intent to do the same for integer timestamps, but he forgot about integer division in C. The other datatypes supported by EXTRACT() already do this correctly. Backpatch to 8.4, so that the default (integer) behavior of that branch will match the default (float) behavior of older branches. Arguably we should patch further back, but it's possible that applications are expecting the broken behavior in older branches. 8.4 is new enough that expectations shouldn't be too settled. Per report from Greg Stark.
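A quick illustration of the corrected behavior, which integer-timestamp builds now share with float-timestamp builds:

  SELECT EXTRACT(SECOND      FROM TIMESTAMP '2009-08-04 12:00:01.25');  -- 1.25, not 1
  SELECT EXTRACT(MILLISECOND FROM TIMESTAMP '2009-08-04 12:00:01.25');  -- 1250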
-
Tom Lane authored
The current implementation fires an AFTER ROW trigger for each tuple that looks like it might be non-unique according to the index contents at the time of insertion. This works well as long as there aren't many conflicts, but won't scale to massive unique-key reassignments. Improving that case is a TODO item. Dean Rasheed
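A sketch of the usage this enables (table and values are illustrative):

  CREATE TABLE items (id int UNIQUE DEFERRABLE INITIALLY DEFERRED);
  INSERT INTO items VALUES (1), (2);
  BEGIN;
  UPDATE items SET id = id + 1;  -- may transiently produce a duplicate id = 2
  COMMIT;                        -- the uniqueness check fires here instead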
-
Tom Lane authored
we should ignore NULL array entries, not non-NULL ones. This had the effect of disabling commit_delay, and could have caused a crash in the rare race condition the patch was intended to fix. Bug report and diagnosis by Jeff Janes, in bug #4952.
-
- 28 Jul, 2009 3 commits
-
Teodor Sigaev authored
-
Teodor Sigaev authored
Aaron Marcuse-Kubitza <aaronmk@blackducksoftware.com>
-
Tom Lane authored
conindid is the index supporting a constraint. We can use this not only for unique/primary-key constraints, but also foreign-key constraints, which depend on the unique index that constrains the referenced columns. tgconstrindid is just copied from the constraint's conindid field, or is zero for triggers not associated with constraints. This is mainly intended as infrastructure for upcoming patches, but it has some virtue in itself, since it exposes a relationship that you formerly had to grovel in pg_depend to determine. I simplified one information_schema view accordingly. (There is a pg_dump query that could also use conindid, but I left it alone because it wasn't clear it'd get any faster.)
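For example, the constraint-to-index relationship is now directly queryable instead of requiring a pg_depend lookup:

  SELECT conname, contype, conindid::regclass AS supporting_index
  FROM pg_constraint
  WHERE conindid <> 0;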
-
- 27 Jul, 2009 6 commits
-
Magnus Hagander authored
since it's only called during process startup, thus no backpatch. Found by TAKATSUKA Haruka, patch by Magnus Hagander and Andrew Chernow
-
Magnus Hagander authored
affects the C compiler step - we still only build one target at a time.
-
Tom Lane authored
After a patch originally submitted by Nobuhiro Iwamatsu, but corrected (I think) to match our guidelines for safe use of asm fragments. This should be considered untested ...
-
Tom Lane authored
-
Tom Lane authored
We should not try to load old statistics when re-attaching to existing shared memory. Per bug #4941. Itagaki Takahiro
-
Tom Lane authored
This is a simple test to see whether COSTS OFF will help much with getting EXPLAIN output that's sufficiently platform-independent for use in the regression tests. The planner does have some freedom of choice in these examples (plain vs. bitmap indexscan), so I'm not sure what will happen.
-
- 26 Jul, 2009 1 commit
-
Tom Lane authored
The original syntax made it difficult to add options without making them into reserved words. This change parenthesizes the options to avoid that problem, and makes provision for an explicit (and perhaps non-Boolean) value for each option. The original syntax is still supported, but only for the two original options ANALYZE and VERBOSE. As a test case, add a COSTS option that can suppress the planner cost estimates. This may be useful for including EXPLAIN output in the regression tests, which are otherwise unable to cope with cross-platform variations in cost estimates. Robert Haas
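The new form looks like this (using the regression database's tenk1 table for illustration):

  EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM tenk1;              -- parenthesized options
  EXPLAIN (COSTS OFF) SELECT * FROM tenk1 WHERE unique1 = 42;  -- suppress cost estimates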
-
- 25 Jul, 2009 1 commit
-
Tom Lane authored
QUOTE * as a variety of FORCE QUOTE, and update psql documentation to include the option. (The actual psql code doesn't seem to need any changes.)
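So a command like the following (table name illustrative) now quotes every non-NULL field in the CSV output:

  COPY mytable TO STDOUT WITH CSV FORCE QUOTE *;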
-