- 21 Mar, 2003 6 commits
-
Michael Meskes authored
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Bruce Momjian authored
> * -Change NUMERIC data type to use base 10,000 internally
-
Bruce Momjian authored
-
Tom Lane authored
Reimplement NUMERIC datatype using base-10000 arithmetic; also improve some of the algorithms for higher functions. I see about a factor of ten speedup on the 'numeric' regression test, but it's unlikely that that test is representative of real-world applications. initdb forced due to change of on-disk representation for NUMERIC.
-
- 20 Mar, 2003 34 commits
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Bruce Momjian authored
effectively used to mean a default value that could also be spelled out explicitly. (ACLs behave that way, and useconfig/datconfig do too IIRC.) It's a bit of a hack, but it saves table space and backend code --- without this convention the default would have to be inserted "manually" since we have no mechanism to supply defaults when C code is forming a new catalog tuple. I'm inclined to leave the code alone. But Alvaro is right that it'd be good to point out the 'infinity' option in the CREATE USER and ALTER USER man pages. (Doc patch please?) Alvaro Herrera
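For the man pages, a doc example along these lines might do (a sketch only; the role name is hypothetical):

    CREATE USER fred VALID UNTIL 'infinity';   -- spell out the implicit default
    ALTER USER fred VALID UNTIL '2004-01-01';  -- or set an explicit expiry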
-
Bruce Momjian authored
join is defined as:

    from_item [ NATURAL ] join_type from_item
        [ ON join_condition | USING ( join_column_list ) ]

However, if the join_type is an INNER or OUTER join, an ON, USING, or NATURAL clause *must* be specified (it's not optional, as that segment of the docs suggests). I'm not exactly sure what the best way to fix this is, so I've attached a patch adding a FIXME comment to the relevant section of the SGML. If anyone has any ideas on the proper way to outline join syntax, please speak up. Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
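For illustration, the forms the grammar actually accepts (a sketch; table and column names are hypothetical):

    SELECT * FROM a JOIN b ON a.id = b.id;  -- ON condition
    SELECT * FROM a JOIN b USING (id);      -- USING column list
    SELECT * FROM a NATURAL JOIN b;         -- NATURAL
    SELECT * FROM a CROSS JOIN b;           -- only CROSS JOIN takes no condition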
-
Bruce Momjian authored
btree_gist now supports int2! Thanks to Janko Richter for the contribution.
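A minimal sketch of what this enables, assuming contrib/btree_gist is installed (table and column names hypothetical):

    CREATE TABLE items (qty int2);
    CREATE INDEX items_qty_gist ON items USING gist (qty);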
-
Bruce Momjian authored
a trigger as its parameter. It is basically copied from the pg_dump code. Christopher Kings-Lynne
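Assuming the function in question is pg_get_triggerdef(), taking a trigger OID as pg_dump does internally, usage would look roughly like:

    -- reconstruct the CREATE TRIGGER statement for each trigger
    SELECT pg_get_triggerdef(oid) FROM pg_trigger;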
-
Bruce Momjian authored
be simplified (I'd thought that it could even be removed). This patch does that. Alvaro Herrera
-
Bruce Momjian authored
Alvaro Herrera
-
Bruce Momjian authored
queries while the rest remain blank. Kevin Brown
-
Bruce Momjian authored
changed as per discussion on the patches list). This version should be a good bit better. It addresses all the issues pointed out by Neil Conway. Vacuum and Analyze are now handled separately. It now monitors for xid wraparound. The number of database connections and queries has been significantly reduced compared to the previous version. I have moved it from bin to contrib. More details on the changes are in the TODO file. I have not tested the xid wraparound code, as I would have to let my AthlonXP 1600 run select 1 in a tight loop for approx. two days in order to perform the required 500,000,000 xacts. Matthew T. O'Connor
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Michael Meskes authored
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Bruce Momjian authored
default datestyle. This is not portable between installations. This patch sets DATESTYLE to ISO at the start of a pg_dump, so that the dates written into the dump will be restorable onto any database, regardless of how its default datestyle is set. Oliver Elphick
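Roughly the idea (a sketch; the exact line pg_dump emits may differ):

    SET DATESTYLE TO ISO;
    -- dates now dump as e.g. 2003-03-20 00:00:00 and restore correctly
    -- regardless of the target database's default datestyle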
-
Bruce Momjian authored
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values. Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.

New files:

    doc/src/sgml/ref/alter_sequence.sgml
    src/test/regress/expected/sequence.out
    src/test/regress/sql/sequence.sql

ALTER SEQUENCE is NOT transactional. It behaves similarly to setval(). It matches the proposed SQL200N spec, as well as Oracle in most ways -- Oracle lacks RESTART WITH for some strange reason.

-- Rod Taylor <rbt@rbt.ca>
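A quick sketch of the new command (sequence name and values hypothetical):

    CREATE SEQUENCE order_seq;
    ALTER SEQUENCE order_seq
        INCREMENT BY 10 MINVALUE 0 MAXVALUE 100000 CACHE 5 CYCLE;
    ALTER SEQUENCE order_seq RESTART WITH 50;  -- nontransactional, like setval()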
-
Bruce Momjian authored
> o -Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
-
Bruce Momjian authored
version of crosstab. This fixes a major deficiency in real-world use of the original version. Easiest to understand with an illustration:

Data:
-------------------------------------------------------------------
select * from cth;
 id | rowid |        rowdt        |   attribute    |      val
----+-------+---------------------+----------------+---------------
  1 | test1 | 2003-03-01 00:00:00 | temperature    | 42
  2 | test1 | 2003-03-01 00:00:00 | test_result    | PASS
  3 | test1 | 2003-03-01 00:00:00 | volts          | 2.6987
  4 | test2 | 2003-03-02 00:00:00 | temperature    | 53
  5 | test2 | 2003-03-02 00:00:00 | test_result    | FAIL
  6 | test2 | 2003-03-02 00:00:00 | test_startdate | 01 March 2003
  7 | test2 | 2003-03-02 00:00:00 | volts          | 3.1234
(7 rows)

Original crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
  'SELECT rowid, attribute, val FROM cth ORDER BY 1,2', 4)
AS c(rowid text, temperature text, test_result text, test_startdate text, volts text);
 rowid | temperature | test_result | test_startdate | volts
-------+-------------+-------------+----------------+--------
 test1 | 42          | PASS        | 2.6987         |
 test2 | 53          | FAIL        | 01 March 2003  | 3.1234
(2 rows)

Hashed crosstab:
-------------------------------------------------------------------
SELECT * FROM crosstab(
  'SELECT rowid, attribute, val FROM cth ORDER BY 1',
  'SELECT DISTINCT attribute FROM cth ORDER BY 1')
AS c(rowid text, temperature int4, test_result text, test_startdate timestamp, volts float8);
 rowid | temperature | test_result |   test_startdate    | volts
-------+-------------+-------------+---------------------+--------
 test1 |          42 | PASS        |                     | 2.6987
 test2 |          53 | FAIL        | 2003-03-01 00:00:00 | 3.1234
(2 rows)

Notice that the original crosstab slides data over to the left in the result tuple when it encounters missing data. To work around this you have to make your source sql do all sorts of contortions (cartesian join of distinct rowid with distinct attribute; left join that back to the real source data). The new version avoids this by building a hash table using a second distinct-attribute query. The new version also allows for "extra" columns (see the README) and allows the result columns to be coerced into differing datatypes if they are suitable (as shown above). In testing a "real-world" data set (69 distinct rowids, 27 distinct categories/attributes, multiple missing data points) I saw about a 5-fold improvement in execution time (from about 2200 ms old, to 440 ms new). I left the original version intact because: 1) backwards compatibility, 2) it is probably slightly faster if you know that you have no missing attributes. README and regression test adjustments included. If there are no objections, please apply. Joe Conway
-
Bruce Momjian authored
now, my changes seem to work. Some possible minor bugs got squished on the way but I can't be sure without more feedback from people who really put the code to the test. The new patch mostly simplifies variable handling and reduces code duplication. Changes in the command parser eliminate some redundant variables (boolean state + depth counter), replaces some "else if" constructs with switches, and so on. It is meant to be applied together with my previous patch, although I hope they don't conflict; I went back to the CVS version for this one. One more thing I thought should perhaps be changed: an IGNOREEOF value of n will ignore only n-1 EOFs. I didn't want to touch this for fear of breaking existing applications, but it does seem a tad illogical. Jeroen T. Vermeulen
-
Bruce Momjian authored
changes to the SQL to retrieve attributes for older versions of Postgres is probably wise. Also, please make sure that I have mapped the storage types to the correct storage names, as this is relatively poorly documented. I think that this patch might need to be considered for back-porting to 7.3.3 since at the moment, people will be losing valuable information after upgrades.

Will dump:

    CREATE TABLE test (
        a text,
        b text,
        c text,
        d text
    );

    ALTER TABLE ONLY test ALTER COLUMN a SET STATISTICS 55;
    ALTER TABLE ONLY test ALTER COLUMN a SET STORAGE PLAIN;
    ALTER TABLE ONLY test ALTER COLUMN b SET STATISTICS 1000;
    ALTER TABLE ONLY test ALTER COLUMN c SET STORAGE EXTERNAL;
    ALTER TABLE ONLY test ALTER COLUMN d SET STORAGE MAIN;

Christopher Kings-Lynne
-
Bruce Momjian authored
Lennert Buytenhek
-
Bruce Momjian authored
what is capable using integer-datetime timestamps. It does attempt to exercise the maximum allowable timestamp range. Also added is a small error check, when converting a timestamp from external to internal format, that prevents out-of-range timestamps from being entered.

Files patched:

    src/backend/utils/adt/timestamp.c
        Added range check to prevent out-of-range timestamps from being used.

    src/test/regress/sql/horology.sql
    src/test/regress/expected/horology-no-DST-before-1970.out
    src/test/regress/expected/horology-solaris-1947.out
        Limited the range of timestamps being checked to Jan 1, 4713 BC through Dec 31, 294276.

In creating this patch, I have seen some definite problems with integer timestamps and how they react when used near their limits. For example, the following statement gives the correct result:

    SELECT timestamp without time zone 'Jan 1, 4713 BC'
           + interval '109203489 days' AS "Dec 31, 294276";

However, this statement, which is the logical inverse of the above, gives incorrect results:

    SELECT timestamp without time zone '12/31/294276'
           - timestamp without time zone 'Jan 1, 4713 BC' AS "109203489 Days";

John Cochran
-
Bruce Momjian authored
7.3.2). It removes some code duplication and #ifdeffing, and some unstructured ugliness such as tacky breaks and an unneeded continue. Breaks up a large function into smaller functions and reduces required nesting levels, and kills a variable or two. Jeroen T. Vermeulen
-
Bruce Momjian authored
patch fixes it -- but this patch doesn't contain tests or docs fixes. I will send them later.

Fixed outputs:

    select to_char(x, '9999.999') as x,
           to_char(x, 'S9999.999') as s,
           to_char(x, 'SG9999.999') as sg,
           to_char(x, 'MI9999.999') as mi,
           to_char(x, 'PL9999.999') as pl,
           to_char(x, 'PLMI9999.999') as plmi,
           to_char(x, '9999.999SG') as sg2,
           to_char(x, '9999.999PL') as pl2,
           to_char(x, '9999.999MI') as mi2
    from num;

Karel Zak
-
Bruce Momjian authored
> > - Add check in pg_dump to see if the value returned is the max/min
> > values and replace with NO MAXVALUE, NO MINVALUE.
> >
> > - Change START and INCREMENT to use START WITH and INCREMENT BY syntax.
> > This makes it a touch easier to port to other databases with sequences
> > (Oracle). PostgreSQL supports both syntaxes already.
>
> + char bufm[100],
> +      bufx[100];
>
> This seems to be an arbitrary size. Why not set it to the actual maximum
> length?
>
> Also:
>
> + snprintf(bufm, 100, INT64_FORMAT, SEQ_MINVALUE);
> + snprintf(bufx, 100, INT64_FORMAT, SEQ_MAXVALUE);
>
> sizeof(bufm), sizeof(bufx) is probably the more
> maintenance-friendly/standard way to do it.

I changed the code to use sizeof - but will wait for a response from Peter before changing the size. It's consistent throughout the sequence code to be 100 for this purpose.

Rod Taylor <rbt@rbt.ca>
-
Bruce Momjian authored
- Add domain check constraints to "check_constraints" view
- Create "domains" view
- Create "domain_constraints" view

-- Rod Taylor <rbt@rbt.ca>
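The new views query like any other information_schema view; for example (a sketch):

    SELECT domain_name, data_type FROM information_schema.domains;
    SELECT * FROM information_schema.domain_constraints;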
-
Bruce Momjian authored
Environment and Files section, explained exactly what -w does.) This is a patch which allows pg_ctl to make an intelligent guess as to the proper port when running 'psql -l' to determine if the database has started up (the -w flag). The environment variable PGPORT is used. If that is not found, it checks if a specific port has been set inside the postgresql.conf file. If it has not, it uses the port that Postgres was compiled with. Greg Sabino Mullane greg@turnstep.com
-
Bruce Momjian authored
src/test/regress/regress.c (e.g. removing K & R style parameter declarations, improving sprintf() usage, etc.) Neil Conway
-
Bruce Momjian authored
> weird behavior across fork boundaries; (b) the additional memory space
> that has to be duplicated into child processes will cost something per
> child launch, even if the child never uses it. But these are only
> arguments that it might not *always* be a prudent thing to do, not that
> we shouldn't give the DBA the tool to do it if he wants. So fire away.

Here is a patch for the above, including a documentation update. It creates a new GUC variable "preload_libraries" that accepts a list in the form:

    preload_libraries = '$libdir/mylib1:initfunc,$libdir/mylib2'

If ":initfunc" is omitted or not found, no initialization function is executed, but the library is still preloaded. If "$libdir/mylib" isn't found, the postmaster refuses to start.

In my testing with PL/R, it reduces the first call to a PL/R function (after connecting) from almost 2 seconds down to about 8 ms.

Joe Conway
-
Bruce Momjian authored
> like that patch still needs some work...

Yeah. I'm really, really, *really* sorry for submitting it in the state it was in. I shouldn't have done that just before moving to another country. I found the problem last night, but couldn't get to a Net connection until now.

The problem is in src/bin/psql/common.c, around line 250-335 somewhere depending on the version. The 2nd and 3rd clauses of the "while" loop condition:

    (rstatus == PGRES_COPY_IN) &&
    (rstatus == PGRES_COPY_OUT))

should of course be:

    (rstatus != PGRES_COPY_IN) &&
    (rstatus != PGRES_COPY_OUT))

Jeroen T. Vermeulen
-
Bruce Momjian authored
Gavin Sherry
-
Bruce Momjian authored
. replace CREATE OR REPLACE AGGREGATE with a separate DROP and CREATE
. add DROP for all CREATE OPERATORs
. use IMMUTABLE and STRICT instead of WITH (isStrict)
. add IMMUTABLE and STRICT to int_array_aggregate's accumulator function

Gregory Stark
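For context, a minimal use of this contrib module's aggregate (assuming contrib/intagg is installed; the table is hypothetical):

    -- collapse a column of int4 values into a single int4[]
    SELECT int_array_aggregate(id) FROM t;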
-
Bruce Momjian authored
when running it with older (pre-7.3.x) versions of PostgreSQL. Backpatched to 7.3.X. Steven Singer
-