Commit c30446b9 authored by Tom Lane

Proofreading for Bruce's recent round of documentation proofreading.

Most of those changes were good, but some not so good ...
parent e8d78d35

doc/src/sgml/advanced.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/advanced.sgml,v 1.58 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/advanced.sgml,v 1.59 2009/06/17 21:58:48 tgl Exp $ -->
 <chapter id="tutorial-advanced">
 <title>Advanced Features</title>
@@ -19,7 +19,7 @@
 <para>
 This chapter will on occasion refer to examples found in <xref
 linkend="tutorial-sql"> to change or improve them, so it will be
-good if you have read that chapter. Some examples from
+useful to have read that chapter. Some examples from
 this chapter can also be found in
 <filename>advanced.sql</filename> in the tutorial directory. This
 file also contains some sample data to load, which is not
@@ -173,7 +173,7 @@ UPDATE branches SET balance = balance + 100.00
 </para>
 <para>
-The details of these commands are not important; the important
+The details of these commands are not important here; the important
 point is that there are several separate updates involved to accomplish
 this rather simple operation. Our bank's officers will want to be
 assured that either all these updates happen, or none of them happen.
...
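As a rough sketch of the multi-statement update the hunk above is talking about (the column names here are assumptions, not taken from the tutorial's schema), wrapping the statements in a transaction is what makes them all-or-nothing:

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00
        WHERE name = 'Alice';
    UPDATE branches SET balance = balance - 100.00
        WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Alice');
    -- ... the matching credits to the receiving account and branch ...
    COMMIT;

If any statement fails before COMMIT, the whole group is rolled back and none of the updates take effect.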

doc/src/sgml/array.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/array.sgml,v 1.69 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/array.sgml,v 1.70 2009/06/17 21:58:48 tgl Exp $ -->
 <sect1 id="arrays">
 <title>Arrays</title>
@@ -60,18 +60,17 @@ CREATE TABLE tictactoe (
 </para>
 <para>
-In addition, the current implementation does not enforce the declared
+The current implementation does not enforce the declared
 number of dimensions either. Arrays of a particular element type are
 all considered to be of the same type, regardless of size or number
-of dimensions. So, declaring the number of dimensions or sizes in
-<command>CREATE TABLE</command> is simply documentation, it does not
+of dimensions. So, declaring the array size or number of dimensions in
+<command>CREATE TABLE</command> is simply documentation; it does not
 affect run-time behavior.
 </para>
 <para>
 An alternative syntax, which conforms to the SQL standard by using
-they keyword <literal>ARRAY</>, can
-be used for one-dimensional arrays;
+the keyword <literal>ARRAY</>, can be used for one-dimensional arrays.
 <structfield>pay_by_quarter</structfield> could have been defined
 as:
 <programlisting>
@@ -109,7 +108,7 @@ CREATE TABLE tictactoe (
 for the type, as recorded in its <literal>pg_type</literal> entry.
 Among the standard data types provided in the
 <productname>PostgreSQL</productname> distribution, all use a comma
-(<literal>,</>), except for the type <literal>box</> which uses a semicolon
+(<literal>,</>), except for type <type>box</> which uses a semicolon
 (<literal>;</>). Each <replaceable>val</replaceable> is
 either a constant of the array element type, or a subarray. An example
 of an array constant is:
@@ -121,7 +120,7 @@ CREATE TABLE tictactoe (
 </para>
 <para>
-To set an element of an array to NULL, write <literal>NULL</>
+To set an element of an array constant to NULL, write <literal>NULL</>
 for the element value. (Any upper- or lower-case variant of
 <literal>NULL</> will do.) If you want an actual string value
 <quote>NULL</>, you must put double quotes around it.
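To make the NULL-versus-"NULL" distinction concrete, a couple of bare literals (not tied to the sal_emp example):

    SELECT '{1,NULL,3}'::int[];        -- the middle element is a true NULL
    SELECT '{NULL,"NULL"}'::text[];    -- first element is a NULL, second is the string 'NULL'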
@@ -211,7 +210,7 @@ INSERT INTO sal_emp
 First, we show how to access a single element of an array.
 This query retrieves the names of the employees whose pay changed in
 the second quarter:
 <programlisting>
 SELECT name FROM sal_emp WHERE pay_by_quarter[1] &lt;&gt; pay_by_quarter[2];
@@ -230,7 +229,7 @@ SELECT name FROM sal_emp WHERE pay_by_quarter[1] &lt;&gt; pay_by_quarter[2];
 <para>
 This query retrieves the third quarter pay of all employees:
 <programlisting>
 SELECT pay_by_quarter[3] FROM sal_emp;
@@ -248,7 +247,7 @@ SELECT pay_by_quarter[3] FROM sal_emp;
 <literal><replaceable>lower-bound</replaceable>:<replaceable>upper-bound</replaceable></literal>
 for one or more array dimensions. For example, this query retrieves the first
 item on Bill's schedule for the first two days of the week:
 <programlisting>
 SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';
@@ -417,14 +416,14 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]];
 </para>
 <para>
-The concatenation operator allows a single element to be pushed to the
+The concatenation operator allows a single element to be pushed onto the
 beginning or end of a one-dimensional array. It also accepts two
 <replaceable>N</>-dimensional arrays, or an <replaceable>N</>-dimensional
 and an <replaceable>N+1</>-dimensional array.
 </para>
 <para>
-When a single element is pushed to either the beginning or end of a
+When a single element is pushed onto either the beginning or end of a
 one-dimensional array, the result is an array with the same lower bound
 subscript as the array operand. For example:
 <programlisting>
@@ -463,7 +462,7 @@ SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]);
 </para>
 <para>
-When an <replaceable>N</>-dimensional array is pushed to the beginning
+When an <replaceable>N</>-dimensional array is pushed onto the beginning
 or end of an <replaceable>N+1</>-dimensional array, the result is
 analogous to the element-array case above. Each <replaceable>N</>-dimensional
 sub-array is essentially an element of the <replaceable>N+1</>-dimensional
@@ -601,9 +600,9 @@ SELECT * FROM
 around the array value plus delimiter characters between adjacent items.
 The delimiter character is usually a comma (<literal>,</>) but can be
 something else: it is determined by the <literal>typdelim</> setting
-for the array's element type. (Among the standard data types provided
-in the <productname>PostgreSQL</productname> distribution, all
-use a comma, except for <literal>box</>, which uses a semicolon (<literal>;</>).)
+for the array's element type. Among the standard data types provided
+in the <productname>PostgreSQL</productname> distribution, all use a comma,
+except for type <type>box</>, which uses a semicolon (<literal>;</>).
 In a multidimensional array, each dimension (row, plane,
 cube, etc.) gets its own level of curly braces, and delimiters
 must be written between adjacent curly-braced entities of the same level.
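A quick illustration of the output format this hunk describes (the box case assumes the type's default text representation):

    SELECT ARRAY[[1,2,3],[4,5,6]];
    -- {{1,2,3},{4,5,6}}   one level of braces per dimension, comma delimiters

    SELECT '{(1,1),(0,0);(3,3),(2,2)}'::box[];
    -- the two boxes are separated by a semicolon, since typdelim for box is ';'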
@@ -657,7 +656,7 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
 As shown previously, when writing an array value you can use double
 quotes around any individual array element. You <emphasis>must</> do so
 if the element value would otherwise confuse the array-value parser.
-For example, elements containing curly braces, commas (or the matching
+For example, elements containing curly braces, commas (or the data type's
 delimiter character), double quotes, backslashes, or leading or trailing
 whitespace must be double-quoted. Empty strings and strings matching the
 word <literal>NULL</> must be quoted, too. To put a double quote or
@@ -668,7 +667,7 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
 </para>
 <para>
-You can use whitespace before a left brace or after a right
+You can add whitespace before a left brace or after a right
 brace. You can also add whitespace before or after any individual item
 string. In all of these cases the whitespace will be ignored. However,
 whitespace within double-quoted elements, or surrounded on both sides by
...
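An input-side counterpart to the quoting rules in the last two hunks, again as a bare literal:

    SELECT '{ "has, a comma", "has {braces}", "  padded  ", unquoted }'::text[];
    -- elements containing the delimiter, braces, or significant whitespace need
    -- double quotes; the unquoted element has its surrounding whitespace stripped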

doc/src/sgml/config.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.219 2009/06/03 20:34:29 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.220 2009/06/17 21:58:48 tgl Exp $ -->
 <chapter Id="runtime-config">
 <title>Server Configuration</title>
@@ -1252,8 +1252,8 @@ SET ENABLE_SEQSCAN TO OFF;
 Asynchronous I/O depends on an effective <function>posix_fadvise</>
 function, which some operating systems lack. If the function is not
 present then setting this parameter to anything but zero will result
-in an error. On some operating systems the function is present but
-does not actually do anything (e.g., Solaris).
+in an error. On some operating systems (e.g., Solaris), the function
+is present but does not actually do anything.
 </para>
 </listitem>
 </varlistentry>
...

doc/src/sgml/ddl.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.86 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.87 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="ddl">
 <title>Data Definition</title>
@@ -557,8 +557,8 @@ CREATE TABLE products (
 comparison. That means even in the presence of a
 unique constraint it is possible to store duplicate
 rows that contain a null value in at least one of the constrained
-columns. This behavior conforms to the SQL standard, but there
-might be other SQL databases might not follow this rule. So be
+columns. This behavior conforms to the SQL standard, but we have
+heard that other SQL databases might not follow this rule. So be
 careful when developing applications that are intended to be
 portable.
 </para>
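A minimal demonstration of the behavior described in that hunk; the column list is only an assumption about the chapter's products example:

    CREATE TABLE products (
        product_no integer UNIQUE,
        name       text
    );
    INSERT INTO products VALUES (NULL, 'widget');
    INSERT INTO products VALUES (NULL, 'gadget');  -- also accepted: two NULLs are not considered equal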
@@ -1802,7 +1802,7 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC;
 such names, to ensure that you won't suffer a conflict if some
 future version defines a system table named the same as your
 table. (With the default search path, an unqualified reference to
-your table name would be resolved as a system table instead.)
+your table name would then be resolved as the system table instead.)
 System tables will continue to follow the convention of having
 names beginning with <literal>pg_</>, so that they will not
 conflict with unqualified user-table names so long as users avoid
@@ -2571,14 +2571,14 @@ CREATE TRIGGER insert_measurement_trigger
 CREATE OR REPLACE FUNCTION measurement_insert_trigger()
 RETURNS TRIGGER AS $$
 BEGIN
 IF ( NEW.logdate &gt;= DATE '2006-02-01' AND
 NEW.logdate &lt; DATE '2006-03-01' ) THEN
 INSERT INTO measurement_y2006m02 VALUES (NEW.*);
 ELSIF ( NEW.logdate &gt;= DATE '2006-03-01' AND
 NEW.logdate &lt; DATE '2006-04-01' ) THEN
 INSERT INTO measurement_y2006m03 VALUES (NEW.*);
 ...
 ELSIF ( NEW.logdate &gt;= DATE '2008-01-01' AND
 NEW.logdate &lt; DATE '2008-02-01' ) THEN
 INSERT INTO measurement_y2008m01 VALUES (NEW.*);
 ELSE
@@ -2709,9 +2709,9 @@ SELECT count(*) FROM measurement WHERE logdate &gt;= DATE '2008-01-01';
 Without constraint exclusion, the above query would scan each of
 the partitions of the <structname>measurement</> table. With constraint
 exclusion enabled, the planner will examine the constraints of each
-partition and try to determine which partitions need not
-be scanned because they cannot not contain any rows meeting the query's
-<literal>WHERE</> clause. When the planner can determine this, it
+partition and try to prove that the partition need not
+be scanned because it could not contain any rows meeting the query's
+<literal>WHERE</> clause. When the planner can prove this, it
 excludes the partition from the query plan.
 </para>
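A sketch of exercising constraint exclusion on the example query (the partition name just follows the measurement example's naming scheme):

    SET constraint_exclusion = on;
    EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
    -- with exclusion working, the plan should visit only measurement_y2008m01 and
    -- later partitions, not every child table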
@@ -2906,7 +2906,7 @@ ANALYZE measurement;
 <listitem>
 <para>
-Keep the partitioning constraints simple or else the planner may not be
+Keep the partitioning constraints simple, else the planner may not be
 able to prove that partitions don't need to be visited. Use simple
 equality conditions for list partitioning, or simple
 range tests for range partitioning, as illustrated in the preceding
...
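A partition constraint of the simple range-test form recommended above might look like this (again borrowing the measurement naming):

    ALTER TABLE measurement_y2008m01 ADD CHECK (
        logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01'
    );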

doc/src/sgml/dml.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/dml.sgml,v 1.18 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/dml.sgml,v 1.19 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="dml">
 <title>Data Manipulation</title>
@@ -248,10 +248,7 @@ DELETE FROM products WHERE price = 10;
 <programlisting>
 DELETE FROM products;
 </programlisting>
-then all rows in the table will be deleted! (<xref
-linkend="sql-truncate" endterm="sql-truncate-title"> can also be used
-to delete all rows.)
-Caveat programmer.
+then all rows in the table will be deleted! Caveat programmer.
 </para>
 </sect1>
 </chapter>

doc/src/sgml/docguide.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/docguide.sgml,v 1.75 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/docguide.sgml,v 1.76 2009/06/17 21:58:49 tgl Exp $ -->
 <appendix id="docguide">
 <title>Documentation</title>
@@ -358,7 +358,7 @@ CATALOG "dsssl/catalog"
 Create the directory
 <filename>/usr/local/share/sgml/docbook-4.2</filename> and change
 to it. (The exact location is irrelevant, but this one is
-reasonable within the layout we are following here.):
+reasonable within the layout we are following here.)
 <screen>
 <prompt>$ </prompt><userinput>mkdir /usr/local/share/sgml/docbook-4.2</userinput>
 <prompt>$ </prompt><userinput>cd /usr/local/share/sgml/docbook-4.2</userinput>
@@ -421,7 +421,7 @@ perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat
 To install the style sheets, unzip and untar the distribution and
 move it to a suitable place, for example
 <filename>/usr/local/share/sgml</filename>. (The archive will
-automatically create a subdirectory.):
+automatically create a subdirectory.)
 <screen>
 <prompt>$</prompt> <userinput>gunzip docbook-dsssl-1.<replaceable>xx</>.tar.gz</userinput>
 <prompt>$</prompt> <userinput>tar -C /usr/local/share/sgml -xf docbook-dsssl-1.<replaceable>xx</>.tar</userinput>
...

doc/src/sgml/indices.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/indices.sgml,v 1.77 2009/04/27 16:27:35 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/indices.sgml,v 1.78 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="indexes">
 <title id="indexes-title">Indexes</title>
@@ -36,7 +36,7 @@ SELECT content FROM test1 WHERE id = <replaceable>constant</replaceable>;
 matching entries. If there are many rows in
 <structname>test1</structname> and only a few rows (perhaps zero
 or one) that would be returned by such a query, this is clearly an
-inefficient method. But if the system maintains an
+inefficient method. But if the system has been instructed to maintain an
 index on the <structfield>id</structfield> column, it can use a more
 efficient method for locating matching rows. For instance, it
 might only have to walk a few levels deep into a search tree.
@@ -73,7 +73,7 @@ CREATE INDEX test1_id_index ON test1 (id);
 <para>
 Once an index is created, no further intervention is required: the
 system will update the index when the table is modified, and it will
-use the index in queries when it thinks it would be more efficient
+use the index in queries when it thinks doing so would be more efficient
 than a sequential table scan. But you might have to run the
 <command>ANALYZE</command> command regularly to update
 statistics to allow the query planner to make educated decisions.
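Putting the two steps together for the chapter's test1 example (the index definition is the one from the hunk header above):

    CREATE INDEX test1_id_index ON test1 (id);
    ANALYZE test1;  -- refresh statistics so the planner can judge when the index pays off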
@@ -294,7 +294,7 @@ CREATE TABLE test2 (
 <programlisting>
 SELECT name FROM test2 WHERE major = <replaceable>constant</replaceable> AND minor = <replaceable>constant</replaceable>;
 </programlisting>
-then it might be appropriate to define an index on columns
+then it might be appropriate to define an index on the columns
 <structfield>major</structfield> and
 <structfield>minor</structfield> together, e.g.:
 <programlisting>
@@ -384,16 +384,16 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor);
 <para>
 The planner will consider satisfying an <literal>ORDER BY</> specification
-by either scanning an available index that matches the specification,
+either by scanning an available index that matches the specification,
 or by scanning the table in physical order and doing an explicit
 sort. For a query that requires scanning a large fraction of the
-table, the explicit sort is likely to be faster than using an index
+table, an explicit sort is likely to be faster than using an index
 because it requires
-less disk I/O due to a sequential access pattern. Indexes are
+less disk I/O due to following a sequential access pattern. Indexes are
 more useful when only a few rows need be fetched. An important
 special case is <literal>ORDER BY</> in combination with
 <literal>LIMIT</> <replaceable>n</>: an explicit sort will have to process
-all data to identify the first <replaceable>n</> rows, but if there is
+all the data to identify the first <replaceable>n</> rows, but if there is
 an index matching the <literal>ORDER BY</>, the first <replaceable>n</>
 rows can be retrieved directly, without scanning the remainder at all.
 </para>
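A sketch of the ORDER BY plus LIMIT case, reusing the test2_mm_idx definition shown in the hunk header:

    -- the first ten rows can be read straight off the (major, minor) index,
    -- with no sort of the whole table
    SELECT * FROM test2 ORDER BY major, minor LIMIT 10;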
@@ -433,14 +433,14 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
 <literal>ORDER BY x DESC, y DESC</> if we scan backward.
 But it might be that the application frequently needs to use
 <literal>ORDER BY x ASC, y DESC</>. There is no way to get that
-ordering from a simpler index, but it is possible if the index is defined
+ordering from a plain index, but it is possible if the index is defined
 as <literal>(x ASC, y DESC)</> or <literal>(x DESC, y ASC)</>.
 </para>
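A concrete version of that mixed ordering, with placeholder names (tab, x, and y are not from the chapter's examples):

    CREATE INDEX tab_x_asc_y_desc ON tab (x ASC, y DESC);
    SELECT * FROM tab ORDER BY x ASC, y DESC LIMIT 10;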
 <para>
 Obviously, indexes with non-default sort orderings are a fairly
 specialized feature, but sometimes they can produce tremendous
-speedups for certain queries. Whether it's worth creating such an
+speedups for certain queries. Whether it's worth maintaining such an
 index depends on how often you use queries that require a special
 sort ordering.
 </para>
@@ -584,9 +584,9 @@ CREATE UNIQUE INDEX <replaceable>name</replaceable> ON <replaceable>table</repla
 </indexterm>
 <para>
-An index column need not be just a column of an underlying table,
+An index column need not be just a column of the underlying table,
 but can be a function or scalar expression computed from one or
-more columns of a table. This feature is useful to obtain fast
+more columns of the table. This feature is useful to obtain fast
 access to tables based on the results of computations.
 </para>
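A typical expression index of the kind this paragraph describes, assuming the chapter's test1 table as the base:

    CREATE INDEX test1_lower_content_idx ON test1 (lower(content));
    SELECT * FROM test1 WHERE lower(content) = 'some value';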
@@ -666,8 +666,8 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name));
 values. Since a query searching for a common value (one that
 accounts for more than a few percent of all the table rows) will not
 use the index anyway, there is no point in keeping those rows in the
-index. A partial index reduces the size of the index, which speeds
-up queries that use the index. It will also speed up many table
+index at all. This reduces the size of the index, which will speed
+up those queries that do use the index. It will also speed up many table
 update operations because the index does not need to be
 updated in all cases. <xref linkend="indexes-partial-ex1"> shows a
 possible application of this idea.
@@ -701,7 +701,7 @@ CREATE TABLE access_log (
 such as this:
 <programlisting>
 CREATE INDEX access_log_client_ip_ix ON access_log (client_ip)
 WHERE NOT (client_ip &gt; inet '192.168.100.0' AND
 client_ip &lt; inet '192.168.100.255');
 </programlisting>
 </para>
@@ -724,14 +724,14 @@ WHERE client_ip = inet '192.168.100.23';
 <para>
 Observe that this kind of partial index requires that the common
 values be predetermined, so such partial indexes are best used for
-data distribution that do not change. The indexes can be recreated
+data distributions that do not change. The indexes can be recreated
 occasionally to adjust for new data distributions, but this adds
-maintenance overhead.
+maintenance effort.
 </para>
 </example>
 <para>
-Another possible use for partial indexes is to exclude values from the
+Another possible use for a partial index is to exclude values from the
 index that the
 typical query workload is not interested in; this is shown in <xref
 linkend="indexes-partial-ex2">. This results in the same
...

doc/src/sgml/installation.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.323 2009/06/12 15:53:32 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.324 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="installation">
 <title><![%standalone-include[<productname>PostgreSQL</>]]>
@@ -85,7 +85,7 @@ su - postgres
 <listitem>
 <para>
-You need an <acronym>ISO</>/<acronym>ANSI</> C compiler (minimum
+You need an <acronym>ISO</>/<acronym>ANSI</> C compiler (at least
 C89-compliant). Recent
 versions of <productname>GCC</> are recommendable, but
 <productname>PostgreSQL</> is known to build using a wide variety
@@ -118,7 +118,7 @@ su - postgres
 command you type, and allows you to use arrow keys to recall and
 edit previous commands. This is very helpful and is strongly
 recommended. If you don't want to use it then you must specify
-the <option>--without-readline</option> option of
+the <option>--without-readline</option> option to
 <filename>configure</>. As an alternative, you can often use the
 BSD-licensed <filename>libedit</filename> library, originally
 developed on <productname>NetBSD</productname>. The
@@ -422,11 +422,10 @@ su - postgres
 On systems that have <productname>PostgreSQL</> started at boot time,
 there is probably a start-up file that will accomplish the same thing. For
 example, on a <systemitem class="osname">Red Hat Linux</> system one
-might find that:
+might find that this works:
 <screen>
 <userinput>/etc/rc.d/init.d/postgresql stop</userinput>
 </screen>
-works.
 </para>
 </step>
@@ -471,7 +470,7 @@ su - postgres
 <step>
 <para>
-Start the database server, again the special database user
+Start the database server, again using the special database user
 account:
 <programlisting>
 <userinput>/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data</>
@@ -1648,7 +1647,7 @@ All of PostgreSQL is successfully made. Ready to install.
 later on. To reset the source tree to the state in which it was
 distributed, use <command>gmake distclean</>. If you are going to
 build for several platforms within the same source tree you must do
-this and rebuild for each platform. (Alternatively, use
+this and re-configure for each platform. (Alternatively, use
 a separate build tree for each platform, so that the source tree
 remains unmodified.)
 </para>
@@ -1675,7 +1674,7 @@ All of PostgreSQL is successfully made. Ready to install.
 </indexterm>
 <para>
-On several systems with shared libraries
+On some systems with shared libraries
 you need to tell the system how to find the newly installed
 shared libraries. The systems on which this is
 <emphasis>not</emphasis> necessary include <systemitem
...

doc/src/sgml/monitoring.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.69 2009/04/27 16:27:36 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.70 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="monitoring">
 <title>Monitoring Database Activity</title>
@@ -929,7 +929,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <function>read()</> calls issued for the table, index, or
 database; the number of actual physical reads is usually
 lower due to kernel-level buffering. The <literal>*_blks_read</>
-statistics columns uses this subtraction, i.e., fetched minus hit.
+statistics columns use this subtraction, i.e., fetched minus hit.
 </para>
 </note>
...

doc/src/sgml/perform.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.70 2009/04/27 16:27:36 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.71 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="performance-tips">
 <title>Performance Tips</title>
@@ -45,8 +45,9 @@
 table access methods: sequential scans, index scans, and bitmap index
 scans. If the query requires joining, aggregation, sorting, or other
 operations on the raw rows, then there will be additional nodes
-above the scan nodes to perform these operations. Other nodes types
-are also supported. The output
+above the scan nodes to perform these operations. Again,
+there is usually more than one possible way to do these operations,
+so different node types can appear here too. The output
 of <command>EXPLAIN</command> has one line for each node in the plan
 tree, showing the basic node type plus the cost estimates that the planner
 made for the execution of that plan node. The first line (topmost node)
@@ -83,24 +84,24 @@ EXPLAIN SELECT * FROM tenk1;
 <itemizedlist>
 <listitem>
 <para>
-Estimated start-up cost, e.g., time expended before the output scan can start,
-time to do the sorting in a sort node
+Estimated start-up cost (time expended before the output scan can start,
+e.g., time to do the sorting in a sort node)
 </para>
 </listitem>
 <listitem>
 <para>
-Estimated total cost if all rows were to be retrieved (though they might
-not be, e.g., a query with a <literal>LIMIT</> clause will stop
-short of paying the total cost of the <literal>Limit</> node's
+Estimated total cost (if all rows are retrieved, though they might
+not be; e.g., a query with a <literal>LIMIT</> clause will stop
+short of paying the total cost of the <literal>Limit</> plan node's
 input node)
 </para>
 </listitem>
 <listitem>
 <para>
-Estimated number of rows output by this plan node (Again, only if
-executed to completion.)
+Estimated number of rows output by this plan node (again, only if
+executed to completion)
 </para>
 </listitem>
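For reference, the line those numbers appear on looks roughly like this (the figures are illustrative, not taken from this commit):

    EXPLAIN SELECT * FROM tenk1;
    --                       QUERY PLAN
    -- ------------------------------------------------------------
    --  Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)
    -- cost = startup cost .. total cost, rows = estimated output rows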
@@ -129,18 +130,18 @@ EXPLAIN SELECT * FROM tenk1;
 the cost only reflects things that the planner cares about.
 In particular, the cost does not consider the time spent transmitting
 result rows to the client, which could be an important
-factor in the total elapsed time; but the planner ignores it because
+factor in the real elapsed time; but the planner ignores it because
 it cannot change it by altering the plan. (Every correct plan will
 output the same row set, we trust.)
 </para>
 <para>
-The <command>EXPLAIN</command> <literal>rows=</> value is a little tricky
+The <literal>rows</> value is a little tricky
 because it is <emphasis>not</emphasis> the
 number of rows processed or scanned by the plan node. It is usually less,
 reflecting the estimated selectivity of any <literal>WHERE</>-clause
 conditions that are being
-applied to the node. Ideally the top-level rows estimate will
+applied at the node. Ideally the top-level rows estimate will
 approximate the number of rows actually returned, updated, or deleted
 by the query.
 </para>
@@ -197,7 +198,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 7000;
 </para>
 <para>
-The actual number of rows this query would select is 7000, but the <literal>rows=</>
+The actual number of rows this query would select is 7000, but the <literal>rows</>
 estimate is only approximate. If you try to duplicate this experiment,
 you will probably get a slightly different estimate; moreover, it will
 change after each <command>ANALYZE</command> command, because the
@@ -234,7 +235,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 100;
 <para>
 If the <literal>WHERE</> condition is selective enough, the planner might
-switch to a <emphasis>simple</> index scan plan:
+switch to a <quote>simple</> index scan plan:
 <programlisting>
 EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 3;
@@ -248,8 +249,8 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 3;
 In this case the table rows are fetched in index order, which makes them
 even more expensive to read, but there are so few that the extra cost
 of sorting the row locations is not worth it. You'll most often see
-this plan type in queries that fetch just a single row, and for queries
-with an <literal>ORDER BY</> condition that matches the index
+this plan type for queries that fetch just a single row, and for queries
+that have an <literal>ORDER BY</> condition that matches the index
 order.
 </para>
@@ -320,7 +321,7 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 </para>
 <para>
-In this nested-loop join, the outer scan (upper) is the same bitmap index scan we
+In this nested-loop join, the outer (upper) scan is the same bitmap index scan we
 saw earlier, and so its cost and row count are the same because we are
 applying the <literal>WHERE</> clause <literal>unique1 &lt; 100</literal>
 at that node.
@@ -409,7 +410,7 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 </screen>
 Note that the <quote>actual time</quote> values are in milliseconds of
-real time, whereas the <literal>cost=</> estimates are expressed in
+real time, whereas the <literal>cost</> estimates are expressed in
 arbitrary units; so they are unlikely to match up.
 The thing to pay attention to is whether the ratios of actual time and
 estimated costs are consistent.
@@ -419,11 +420,11 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 In some query plans, it is possible for a subplan node to be executed more
 than once. For example, the inner index scan is executed once per outer
 row in the above nested-loop plan. In such cases, the
-<literal>loops=</> value reports the
+<literal>loops</> value reports the
 total number of executions of the node, and the actual time and rows
 values shown are averages per-execution. This is done to make the numbers
 comparable with the way that the cost estimates are shown. Multiply by
-the <literal>loops=</> value to get the total time actually spent in
+the <literal>loops</> value to get the total time actually spent in
 the node.
 </para>
@@ -780,7 +781,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 </indexterm>
 <para>
-When doing <command>INSERT</>s, turn off autocommit and just do
+When using multiple <command>INSERT</>s, turn off autocommit and just do
 one commit at the end. (In plain
 SQL, this means issuing <command>BEGIN</command> at the start and
 <command>COMMIT</command> at the end. Some client libraries might
@@ -824,7 +825,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 <para>
 Note that loading a large number of rows using
 <command>COPY</command> is almost always faster than using
-<command>INSERT</command>, even if the <command>PREPARE ... INSERT</> is used and
+<command>INSERT</command>, even if <command>PREPARE</> is used and
 multiple insertions are batched into a single transaction.
 </para>
...
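A sketch of the comparison being drawn in the last hunk (table and file names are placeholders):

    -- many INSERTs batched in one transaction: good
    BEGIN;
    INSERT INTO mytable VALUES (1, 'one');
    INSERT INTO mytable VALUES (2, 'two');
    -- ... thousands more ...
    COMMIT;

    -- a single COPY of the same data: usually faster still
    COPY mytable FROM '/tmp/mytable.dat';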

doc/src/sgml/postgres.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/postgres.sgml,v 1.87 2009/04/27 16:27:36 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/postgres.sgml,v 1.88 2009/06/17 21:58:49 tgl Exp $ -->
 <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
@@ -134,7 +134,7 @@
 The first few chapters are written so they can be understood
 without prerequisite knowledge, so new users who need to set
 up their own server can begin their exploration with this part.
-The rest of this part is about tuning and management; the material
+The rest of this part is about tuning and management; that material
 assumes that the reader is familiar with the general use of
 the <productname>PostgreSQL</> database system. Readers are
 encouraged to look at <xref linkend="tutorial"> and <xref
...

doc/src/sgml/query.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/query.sgml,v 1.52 2009/04/27 16:27:36 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/query.sgml,v 1.53 2009/06/17 21:58:49 tgl Exp $ -->
 <chapter id="tutorial-sql">
 <title>The <acronym>SQL</acronym> Language</title>
@@ -53,7 +53,7 @@
 </screen>
 The <literal>\i</literal> command reads in commands from the
-specified file. The <command>psql</command> <literal>-s</> option puts you in
+specified file. <command>psql</command>'s <literal>-s</> option puts you in
 single step mode which pauses before sending each statement to the
 server. The commands used in this section are in the file
 <filename>basics.sql</filename>.
@@ -150,7 +150,7 @@ CREATE TABLE weather (
 <type>int</type> is the normal integer type. <type>real</type> is
 a type for storing single precision floating-point numbers.
 <type>date</type> should be self-explanatory. (Yes, the column of
-type <type>date</type> is also named <literal>date</literal>.
+type <type>date</type> is also named <structfield>date</structfield>.
 This might be convenient or confusing &mdash; you choose.)
 </para>
@@ -165,7 +165,7 @@ CREATE TABLE weather (
 and a rich set of geometric types.
 <productname>PostgreSQL</productname> can be customized with an
 arbitrary number of user-defined data types. Consequently, type
-names are not special key words in the syntax except where required to
+names are not key words in the syntax, except where required to
 support special cases in the <acronym>SQL</acronym> standard.
 </para>
@@ -291,7 +291,7 @@ COPY weather FROM '/home/user/weather.txt';
 tables from which to retrieve the data), and an optional
 qualification (the part that specifies any restrictions). For
 example, to retrieve all the rows of table
-<classname>weather</classname>, type:
+<structname>weather</structname>, type:
 <programlisting>
 SELECT * FROM weather;
 </programlisting>
@@ -450,9 +450,10 @@ SELECT DISTINCT city
 of the same or different tables at one time is called a
 <firstterm>join</firstterm> query. As an example, say you wish to
 list all the weather records together with the location of the
-associated city. To do that, we need to compare the city column of
-each row of the <literal>weather</> table with the name column of all rows in
-the <literal>cities</> table, and select the pairs of rows where these values match.
+associated city. To do that, we need to compare the <structfield>city</>
+column of each row of the <structname>weather</> table with the
+<structfield>name</> column of all rows in the <structname>cities</>
+table, and select the pairs of rows where these values match.
 <note>
 <para>
 This is only a conceptual model. The join is usually performed
@@ -485,8 +486,8 @@ SELECT *
 <para>
 There is no result row for the city of Hayward. This is
 because there is no matching entry in the
-<classname>cities</classname> table for Hayward, so the join
-ignores the unmatched rows in the <literal>weather</> table. We will see
+<structname>cities</structname> table for Hayward, so the join
+ignores the unmatched rows in the <structname>weather</> table. We will see
 shortly how this can be fixed.
 </para>
 </listitem>
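The join being described would look roughly like this (a sketch of the comparison in the text, not necessarily the exact queries the tutorial lists next):

    SELECT * FROM weather, cities WHERE city = name;

    -- the outer-join variant discussed a few hunks below keeps the unmatched Hayward row
    SELECT * FROM weather LEFT OUTER JOIN cities ON (weather.city = cities.name);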
@@ -494,9 +495,9 @@ SELECT *
 <listitem>
 <para>
 There are two columns containing the city name. This is
-correct because the columns from the
-<classname>weather</classname> and the
-<classname>cities</classname> tables are concatenated. In
+correct because the lists of columns from the
+<structname>weather</structname> and
+<structname>cities</structname> tables are concatenated. In
 practice this is undesirable, though, so you will probably want
 to list the output columns explicitly rather than using
 <literal>*</literal>:
@@ -556,10 +557,10 @@ SELECT *
 Now we will figure out how we can get the Hayward records back in.
 What we want the query to do is to scan the
-<classname>weather</classname> table and for each row to find the
-matching <classname>cities</classname> row(s). If no matching row is
+<structname>weather</structname> table and for each row to find the
+matching <structname>cities</structname> row(s). If no matching row is
 found we want some <quote>empty values</quote> to be substituted
-for the <classname>cities</classname> table's columns. This kind
+for the <structname>cities</structname> table's columns. This kind
 of query is called an <firstterm>outer join</firstterm>. (The
 joins we have seen so far are inner joins.) The command looks
 like this:
@@ -603,10 +604,10 @@ SELECT *
 to find all the weather records that are in the temperature range
 of other weather records. So we need to compare the
 <structfield>temp_lo</> and <structfield>temp_hi</> columns of
-each <classname>weather</classname> row to the
+each <structname>weather</structname> row to the
 <structfield>temp_lo</structfield> and
 <structfield>temp_hi</structfield> columns of all other
-<classname>weather</classname> rows. We can do this with the
+<structname>weather</structname> rows. We can do this with the
 following query:
 <programlisting>
@@ -756,7 +757,7 @@ SELECT city, max(temp_lo)
 </screen>
 which gives us the same results for only the cities that have all
-<literal>temp_lo</> values below 40. Finally, if we only care about
+<structfield>temp_lo</> values below 40. Finally, if we only care about
 cities whose
 names begin with <quote><literal>S</literal></quote>, we might do:
...
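The aggregate query referred to in the last hunk is presumably of this shape (a guess based on the surrounding text and the tutorial's weather table):

    SELECT city, max(temp_lo)
        FROM weather
        GROUP BY city
        HAVING max(temp_lo) < 40;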
<!-- $PostgreSQL: pgsql/doc/src/sgml/rowtypes.sgml,v 2.10 2009/04/27 16:27:36 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/rowtypes.sgml,v 2.11 2009/06/17 21:58:49 tgl Exp $ -->
<sect1 id="rowtypes"> <sect1 id="rowtypes">
<title>Composite Types</title> <title>Composite Types</title>
...@@ -41,7 +41,7 @@ CREATE TYPE inventory_item AS ( ...@@ -41,7 +41,7 @@ CREATE TYPE inventory_item AS (
NULL</>) can presently be included. Note that the <literal>AS</> keyword NULL</>) can presently be included. Note that the <literal>AS</> keyword
is essential; without it, the system will think a different kind is essential; without it, the system will think a different kind
of <command>CREATE TYPE</> command is meant, and you will get odd syntax of <command>CREATE TYPE</> command is meant, and you will get odd syntax
error. errors.
</para> </para>
<para> <para>
...@@ -68,8 +68,8 @@ SELECT price_extension(item, 10) FROM on_hand; ...@@ -68,8 +68,8 @@ SELECT price_extension(item, 10) FROM on_hand;
</para> </para>
<para> <para>
Whenever you create a table, a composite type is automatically Whenever you create a table, a composite type is also automatically
created also, with the same name as the table, to represent the table's created, with the same name as the table, to represent the table's
row type. For example, had we said: row type. For example, had we said:
<programlisting> <programlisting>
CREATE TABLE inventory_item ( CREATE TABLE inventory_item (
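-- The remaining column definitions are outside this hunk; the lines
-- below are an illustrative completion only, with assumed column names.
    name            text,
    supplier_id     integer,
    price           numeric CHECK (price > 0)
);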
...@@ -250,7 +250,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); ...@@ -250,7 +250,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
The external text representation of a composite value consists of items that The external text representation of a composite value consists of items that
are interpreted according to the I/O conversion rules for the individual are interpreted according to the I/O conversion rules for the individual
field types, plus decoration that indicates the composite structure. field types, plus decoration that indicates the composite structure.
The decoration consists of parentheses The decoration consists of parentheses (<literal>(</> and <literal>)</>)
around the whole value, plus commas (<literal>,</>) between adjacent around the whole value, plus commas (<literal>,</>) between adjacent
items. Whitespace outside the parentheses is ignored, but within the items. Whitespace outside the parentheses is ignored, but within the
parentheses it is considered part of the field value, and might or might not be parentheses it is considered part of the field value, and might or might not be
...@@ -264,7 +264,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); ...@@ -264,7 +264,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
</para> </para>
<para> <para>
As shown previously, when writing a composite value you can use double As shown previously, when writing a composite value you can write double
quotes around any individual field value. quotes around any individual field value.
You <emphasis>must</> do so if the field value would otherwise You <emphasis>must</> do so if the field value would otherwise
confuse the composite-value parser. In particular, fields containing confuse the composite-value parser. In particular, fields containing
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/start.sgml,v 1.49 2009/04/27 16:27:36 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/start.sgml,v 1.50 2009/06/17 21:58:49 tgl Exp $ -->
<chapter id="tutorial-start"> <chapter id="tutorial-start">
<title>Getting Started</title> <title>Getting Started</title>
...@@ -74,7 +74,7 @@ ...@@ -74,7 +74,7 @@
<para> <para>
A server process, which manages the database files, accepts A server process, which manages the database files, accepts
connections to the database from client applications, and connections to the database from client applications, and
performs database actions on the behalf of the clients. The performs database actions on behalf of the clients. The
database server program is called database server program is called
<filename>postgres</filename>. <filename>postgres</filename>.
<indexterm><primary>postgres</primary></indexterm> <indexterm><primary>postgres</primary></indexterm>
...@@ -164,8 +164,8 @@ ...@@ -164,8 +164,8 @@
createdb: command not found createdb: command not found
</screen> </screen>
then <productname>PostgreSQL</> was not installed properly. Either it was not then <productname>PostgreSQL</> was not installed properly. Either it was not
installed at all or your shell's search path was not set correctly. Try installed at all or your shell's search path was not set to include it.
calling the command with an absolute path instead: Try calling the command with an absolute path instead:
<screen> <screen>
<prompt>$</prompt> <userinput>/usr/local/pgsql/bin/createdb mydb</userinput> <prompt>$</prompt> <userinput>/usr/local/pgsql/bin/createdb mydb</userinput>
</screen> </screen>
...@@ -177,8 +177,7 @@ createdb: command not found ...@@ -177,8 +177,7 @@ createdb: command not found
<para> <para>
Another response could be this: Another response could be this:
<screen> <screen>
createdb: could not connect to database postgres: could not connect createdb: could not connect to database postgres: could not connect to server: No such file or directory
to server: No such file or directory
Is the server running locally and accepting Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"? connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
</screen> </screen>
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/storage.sgml,v 1.28 2009/05/16 22:03:53 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/storage.sgml,v 1.29 2009/06/17 21:58:49 tgl Exp $ -->
<chapter id="storage"> <chapter id="storage">
...@@ -135,8 +135,9 @@ main file (a/k/a main fork), each table and index has a <firstterm>free space ...@@ -135,8 +135,9 @@ main file (a/k/a main fork), each table and index has a <firstterm>free space
map</> (see <xref linkend="storage-fsm">), which stores information about free map</> (see <xref linkend="storage-fsm">), which stores information about free
space available in the relation. The free space map is stored in a file named space available in the relation. The free space map is stored in a file named
with the filenode number plus the suffix <literal>_fsm</>. Tables also have a with the filenode number plus the suffix <literal>_fsm</>. Tables also have a
visibility map fork, with the suffix <literal>_vm</>, to track which pages are <firstterm>visibility map</>, stored in a fork with the suffix
known to have no dead tuples and therefore need no vacuuming. <literal>_vm</>, to track which pages are known to have no dead tuples.
The visibility map is described further in <xref linkend="storage-vm">.
</para> </para>
<caution> <caution>
...@@ -417,6 +418,38 @@ information stored in free space maps (see <xref linkend="pgfreespacemap">). ...@@ -417,6 +418,38 @@ information stored in free space maps (see <xref linkend="pgfreespacemap">).
</sect1> </sect1>
<sect1 id="storage-vm">
<title>Visibility Map</title>
<indexterm>
<primary>Visibility Map</primary>
</indexterm>
<indexterm><primary>VM</><see>Visibility Map</></indexterm>
<para>
Each heap relation has a Visibility Map
(VM) to keep track of which pages contain only tuples that are known to be
visible to all active transactions. It's stored
alongside the main relation data in a separate relation fork, named after the
filenode number of the relation, plus a <literal>_vm</> suffix. For example,
if the filenode of a relation is 12345, the VM is stored in a file called
<filename>12345_vm</>, in the same directory as the main relation file.
Note that indexes do not have VMs.
</para>
<para>
The visibility map simply stores one bit per heap page. A set bit means
that all tuples on the page are known to be visible to all transactions.
This means that the page does not contain any tuples that need to be vacuumed;
in the future it might also be used to avoid visiting the page for visibility
checks. The map is conservative in the sense that we
make sure that whenever a bit is set, we know the condition is true, but if
a bit is not set, it might or might not be true.
</para>
</sect1>
<sect1 id="storage-page-layout"> <sect1 id="storage-page-layout">
<title>Database Page Layout</title> <title>Database Page Layout</title>
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/syntax.sgml,v 1.132 2009/05/05 18:32:17 petere Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/syntax.sgml,v 1.133 2009/06/17 21:58:49 tgl Exp $ -->
<chapter id="sql-syntax"> <chapter id="sql-syntax">
<title>SQL Syntax</title> <title>SQL Syntax</title>
...@@ -442,7 +442,7 @@ SELECT 'foo' 'bar'; ...@@ -442,7 +442,7 @@ SELECT 'foo' 'bar';
</caution> </caution>
<para> <para>
The zero-byte (null byte) character cannot be in a string constant. The character with the code zero cannot be in a string constant.
</para> </para>
</sect3> </sect3>
...@@ -929,8 +929,8 @@ CAST ( '<replaceable>string</replaceable>' AS <replaceable>type</replaceable> ) ...@@ -929,8 +929,8 @@ CAST ( '<replaceable>string</replaceable>' AS <replaceable>type</replaceable> )
</para> </para>
<para> <para>
Comment are removed from the input stream before further syntax A comment is removed from the input stream before further syntax
analysis and are effectively replaced by whitespace. analysis and is effectively replaced by whitespace.
</para> </para>
</sect2> </sect2>
...@@ -1244,9 +1244,9 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; ...@@ -1244,9 +1244,9 @@ SELECT 3 OPERATOR(pg_catalog.+) 4;
<listitem> <listitem>
<para> <para>
Another value expression in parentheses, useful to group Another value expression in parentheses (used to group
subexpressions and override subexpressions and override
precedence.<indexterm><primary>parenthesis</></> precedence<indexterm><primary>parenthesis</></>)
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
...@@ -1725,7 +1725,7 @@ CAST ( <replaceable>expression</replaceable> AS <replaceable>type</replaceable> ...@@ -1725,7 +1725,7 @@ CAST ( <replaceable>expression</replaceable> AS <replaceable>type</replaceable>
casts that are marked <quote>OK to apply implicitly</> casts that are marked <quote>OK to apply implicitly</>
in the system catalogs. Other casts must be invoked with in the system catalogs. Other casts must be invoked with
explicit casting syntax. This restriction is intended to prevent explicit casting syntax. This restriction is intended to prevent
surprising conversions from being silently applied. surprising conversions from being applied silently.
</para> </para>
<para> <para>
...@@ -1805,7 +1805,7 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) ...@@ -1805,7 +1805,7 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
<para> <para>
An array constructor is an expression that builds an An array constructor is an expression that builds an
array using values for its member elements. A simple array array value using values for its member elements. A simple array
constructor constructor
consists of the key word <literal>ARRAY</literal>, a left square bracket consists of the key word <literal>ARRAY</literal>, a left square bracket
<literal>[</>, a list of expressions (separated by commas) for the <literal>[</>, a list of expressions (separated by commas) for the
...@@ -1936,7 +1936,7 @@ SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%'); ...@@ -1936,7 +1936,7 @@ SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
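To make the array-constructor description above concrete (element values are made up for illustration):
<programlisting>
-- a simple one-dimensional array constructor; element expressions
-- are evaluated, so the result is {1,2,7} of type integer[]
SELECT ARRAY[1, 2, 3 + 4];
</programlisting>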
</indexterm> </indexterm>
<para> <para>
A row constructor is an expression that builds a row (also A row constructor is an expression that builds a row value (also
called a composite value) using values called a composite value) using values
for its member fields. A row constructor consists of the key word for its member fields. A row constructor consists of the key word
<literal>ROW</literal>, a left parenthesis, zero or more <literal>ROW</literal>, a left parenthesis, zero or more
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.51 2009/04/27 16:27:36 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.52 2009/06/17 21:58:49 tgl Exp $ -->
<chapter id="textsearch"> <chapter id="textsearch">
<title id="textsearch-title">Full Text Search</title> <title id="textsearch-title">Full Text Search</title>
...@@ -389,7 +389,7 @@ text @@ text ...@@ -389,7 +389,7 @@ text @@ text
<para> <para>
Text search parsers and templates are built from low-level C functions; Text search parsers and templates are built from low-level C functions;
therefore C programming ability is required to develop new ones, and therefore it requires C programming ability to develop new ones, and
superuser privileges to install one into a database. (There are examples superuser privileges to install one into a database. (There are examples
of add-on parsers and templates in the <filename>contrib/</> area of the of add-on parsers and templates in the <filename>contrib/</> area of the
<productname>PostgreSQL</> distribution.) Since dictionaries and <productname>PostgreSQL</> distribution.) Since dictionaries and
...@@ -519,7 +519,7 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector(config_name, body)); ...@@ -519,7 +519,7 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector(config_name, body));
recording which configuration was used for each index entry. This recording which configuration was used for each index entry. This
would be useful, for example, if the document collection contained would be useful, for example, if the document collection contained
documents in different languages. Again, documents in different languages. Again,
queries that wish to use the index must be phrased to match, e.g., queries that are meant to use the index must be phrased to match, e.g.,
<literal>WHERE to_tsvector(config_name, body) @@ 'a &amp; b'</>. <literal>WHERE to_tsvector(config_name, body) @@ 'a &amp; b'</>.
</para> </para>
...@@ -860,7 +860,8 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C'); ...@@ -860,7 +860,8 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
<term> <term>
<synopsis> <synopsis>
ts_rank(<optional> <replaceable class="PARAMETER">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="PARAMETER">vector</replaceable> <type>tsvector</>, <replaceable class="PARAMETER">query</replaceable> <type>tsquery</> <optional>, <replaceable class="PARAMETER">normalization</replaceable> <type>integer</> </optional>) returns <type>float4</> ts_rank(<optional> <replaceable class="PARAMETER">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="PARAMETER">vector</replaceable> <type>tsvector</>,
<replaceable class="PARAMETER">query</replaceable> <type>tsquery</> <optional>, <replaceable class="PARAMETER">normalization</replaceable> <type>integer</> </optional>) returns <type>float4</>
</synopsis> </synopsis>
</term> </term>
...@@ -1042,7 +1043,7 @@ LIMIT 10; ...@@ -1042,7 +1043,7 @@ LIMIT 10;
Ranking can be expensive since it requires consulting the Ranking can be expensive since it requires consulting the
<type>tsvector</type> of each matching document, which can be I/O bound and <type>tsvector</type> of each matching document, which can be I/O bound and
therefore slow. Unfortunately, it is almost impossible to avoid since therefore slow. Unfortunately, it is almost impossible to avoid since
practical queries often result in a large number of matches. practical queries often result in large numbers of matches.
</para> </para>
</sect2> </sect2>
...@@ -1068,7 +1069,7 @@ LIMIT 10; ...@@ -1068,7 +1069,7 @@ LIMIT 10;
<para> <para>
<function>ts_headline</function> accepts a document along <function>ts_headline</function> accepts a document along
with a query, and returns an excerpt of with a query, and returns an excerpt from
the document in which terms from the query are highlighted. The the document in which terms from the query are highlighted. The
configuration to be used to parse the document can be specified by configuration to be used to parse the document can be specified by
<replaceable>config</replaceable>; if <replaceable>config</replaceable> <replaceable>config</replaceable>; if <replaceable>config</replaceable>
...@@ -1085,8 +1086,8 @@ LIMIT 10; ...@@ -1085,8 +1086,8 @@ LIMIT 10;
<itemizedlist spacing="compact" mark="bullet"> <itemizedlist spacing="compact" mark="bullet">
<listitem> <listitem>
<para> <para>
<literal>StartSel</>, <literal>StopSel</literal>: the strings to delimit <literal>StartSel</>, <literal>StopSel</literal>: the strings with
query words appearing in the document, to distinguish which to delimit query words appearing in the document, to distinguish
them from other excerpted words. You must double-quote these strings them from other excerpted words. You must double-quote these strings
if they contain spaces or commas. if they contain spaces or commas.
</para> </para>
...@@ -1188,7 +1189,7 @@ SELECT id, ts_headline(body, q), rank ...@@ -1188,7 +1189,7 @@ SELECT id, ts_headline(body, q), rank
FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank
FROM apod, to_tsquery('stars') q FROM apod, to_tsquery('stars') q
WHERE ti @@ q WHERE ti @@ q
ORDER BY rank DESC ORDER BY rank DESC
LIMIT 10) AS foo; LIMIT 10) AS foo;
</programlisting> </programlisting>
</para> </para>
...@@ -1678,9 +1679,9 @@ SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title &amp; body'); ...@@ -1678,9 +1679,9 @@ SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title &amp; body');
</para> </para>
<para> <para>
A limitation of built-in triggers is that they treat all the A limitation of these built-in triggers is that they treat all the
input columns alike. To process columns differently &mdash; for input columns alike. To process columns differently &mdash; for
example, to weigh title differently from body &mdash; it is necessary example, to weight title differently from body &mdash; it is necessary
to write a custom trigger. Here is an example using to write a custom trigger. Here is an example using
<application>PL/pgSQL</application> as the trigger language: <application>PL/pgSQL</application> as the trigger language:
...@@ -1722,8 +1723,8 @@ ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger(); ...@@ -1722,8 +1723,8 @@ ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger();
</para> </para>
<synopsis> <synopsis>
ts_stat(<replaceable class="PARAMETER">sqlquery</replaceable> <type>text</>, <optional> <replaceable class="PARAMETER">weights</replaceable> <type>text</>, ts_stat(<replaceable class="PARAMETER">sqlquery</replaceable> <type>text</>, <optional> <replaceable class="PARAMETER">weights</replaceable> <type>text</>, </optional>
</optional> OUT <replaceable class="PARAMETER">word</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">ndoc</replaceable> <type>integer</>, OUT <replaceable class="PARAMETER">word</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">ndoc</replaceable> <type>integer</>,
OUT <replaceable class="PARAMETER">nentry</replaceable> <type>integer</>) returns <type>setof record</> OUT <replaceable class="PARAMETER">nentry</replaceable> <type>integer</>) returns <type>setof record</>
</synopsis> </synopsis>
...@@ -2087,7 +2088,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h ...@@ -2087,7 +2088,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
by the parser, each dictionary in the list is consulted in turn, by the parser, each dictionary in the list is consulted in turn,
until some dictionary recognizes it as a known word. If it is identified until some dictionary recognizes it as a known word. If it is identified
as a stop word, or if no dictionary recognizes the token, it will be as a stop word, or if no dictionary recognizes the token, it will be
discarded and not indexed or searched. discarded and not indexed or searched for.
The general rule for configuring a list of dictionaries The general rule for configuring a list of dictionaries
is to place first the most narrow, most specific dictionary, then the more is to place first the most narrow, most specific dictionary, then the more
general dictionaries, finishing with a very general dictionary, like general dictionaries, finishing with a very general dictionary, like
...@@ -2439,7 +2440,7 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple ( ...@@ -2439,7 +2440,7 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
<programlisting> <programlisting>
ALTER TEXT SEARCH CONFIGURATION russian ALTER TEXT SEARCH CONFIGURATION russian
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
WITH thesaurus_simple; WITH thesaurus_simple;
</programlisting> </programlisting>
</para> </para>
...@@ -2679,9 +2680,9 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( ...@@ -2679,9 +2680,9 @@ CREATE TEXT SEARCH DICTIONARY english_stem (
</para> </para>
<para> <para>
As an example, we will create a configuration As an example we will create a configuration
<literal>pg</literal> by duplicating the built-in <literal>pg</literal>, starting by duplicating the built-in
<literal>english</> configuration. <literal>english</> configuration:
<programlisting> <programlisting>
CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english ); CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english );
...@@ -3137,7 +3138,7 @@ SELECT plainto_tsquery('supernovae stars'); ...@@ -3137,7 +3138,7 @@ SELECT plainto_tsquery('supernovae stars');
</indexterm> </indexterm>
<para> <para>
There are two kinds of indexes which can be used to speed up full text There are two kinds of indexes that can be used to speed up full text
searches. searches.
Note that indexes are not mandatory for full text searching, but in Note that indexes are not mandatory for full text searching, but in
cases where a column is searched on a regular basis, an index is cases where a column is searched on a regular basis, an index is
...@@ -3204,7 +3205,7 @@ SELECT plainto_tsquery('supernovae stars'); ...@@ -3204,7 +3205,7 @@ SELECT plainto_tsquery('supernovae stars');
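For concreteness, the two index types mentioned above are created roughly like this (table, column, and index names are placeholders):
<programlisting>
-- GIN index on a tsvector expression
CREATE INDEX pgweb_gin_idx ON pgweb USING gin(to_tsvector('english', body));

-- GiST index on the same expression; lossy, as discussed below
CREATE INDEX pgweb_gist_idx ON pgweb USING gist(to_tsvector('english', body));
</programlisting>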
to check the actual table row to eliminate such false matches. to check the actual table row to eliminate such false matches.
(<productname>PostgreSQL</productname> does this automatically when needed.) (<productname>PostgreSQL</productname> does this automatically when needed.)
GiST indexes are lossy because each document is represented in the GiST indexes are lossy because each document is represented in the
index using a fixed-length signature. The signature is generated by hashing index by a fixed-length signature. The signature is generated by hashing
each word into a random bit in an n-bit string, with all these bits OR-ed each word into a random bit in an n-bit string, with all these bits OR-ed
together to produce an n-bit document signature. When two words hash to together to produce an n-bit document signature. When two words hash to
the same bit position there will be a false match. If all words in the same bit position there will be a false match. If all words in
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/typeconv.sgml,v 1.59 2009/04/27 16:27:36 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/typeconv.sgml,v 1.60 2009/06/17 21:58:49 tgl Exp $ -->
<chapter Id="typeconv"> <chapter Id="typeconv">
<title>Type Conversion</title> <title>Type Conversion</title>
...@@ -161,7 +161,7 @@ categories</firstterm>, including <type>boolean</type>, <type>numeric</type>, ...@@ -161,7 +161,7 @@ categories</firstterm>, including <type>boolean</type>, <type>numeric</type>,
user-defined. (For a list see <xref linkend="catalog-typcategory-table">; user-defined. (For a list see <xref linkend="catalog-typcategory-table">;
but note it is also possible to create custom type categories.) Within each but note it is also possible to create custom type categories.) Within each
category there can be one or more <firstterm>preferred types</firstterm>, which category there can be one or more <firstterm>preferred types</firstterm>, which
are selected when there is ambiguity. With careful selection are preferred when there is a choice of possible types. With careful selection
of preferred types and available implicit casts, it is possible to ensure that of preferred types and available implicit casts, it is possible to ensure that
ambiguous expressions (those with multiple candidate parsing solutions) can be ambiguous expressions (those with multiple candidate parsing solutions) can be
resolved in a useful way. resolved in a useful way.
...@@ -189,7 +189,7 @@ calls in the query. ...@@ -189,7 +189,7 @@ calls in the query.
<para> <para>
Additionally, if a query usually requires an implicit conversion for a function, and Additionally, if a query usually requires an implicit conversion for a function, and
if then the user defines a new function with the correct argument types, the parser if then the user defines a new function with the correct argument types, the parser
should use this new function and no longer do implicit conversion using the old function. should use this new function and no longer do implicit conversion to use the old function.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
...@@ -206,10 +206,12 @@ should use this new function and no longer do implicit conversion using the old ...@@ -206,10 +206,12 @@ should use this new function and no longer do implicit conversion using the old
</indexterm> </indexterm>
<para> <para>
The specific operator invoked is determined by the following The specific operator that is referenced by an operator expression
steps. Note that this procedure is affected is determined using the following procedure.
by the precedence of the involved operators. See <xref Note that this procedure is indirectly affected
linkend="sql-precedence"> for more information. by the precedence of the involved operators, since that will determine
which sub-expressions are taken to be the inputs of which operators.
See <xref linkend="sql-precedence"> for more information.
</para> </para>
<procedure> <procedure>
...@@ -220,7 +222,7 @@ should use this new function and no longer do implicit conversion using the old ...@@ -220,7 +222,7 @@ should use this new function and no longer do implicit conversion using the old
Select the operators to be considered from the Select the operators to be considered from the
<classname>pg_operator</classname> system catalog. If a non-schema-qualified <classname>pg_operator</classname> system catalog. If a non-schema-qualified
operator name was used (the usual case), the operators operator name was used (the usual case), the operators
considered are those with a matching name and argument count that are considered are those with the matching name and argument count that are
visible in the current search path (see <xref linkend="ddl-schemas-path">). visible in the current search path (see <xref linkend="ddl-schemas-path">).
If a qualified operator name was given, only operators in the specified If a qualified operator name was given, only operators in the specified
schema are considered. schema are considered.
...@@ -250,8 +252,8 @@ operators considered), use it. ...@@ -250,8 +252,8 @@ operators considered), use it.
<para> <para>
If one argument of a binary operator invocation is of the <type>unknown</type> type, If one argument of a binary operator invocation is of the <type>unknown</type> type,
then assume it is the same type as the other argument for this check. then assume it is the same type as the other argument for this check.
Cases involving two <type>unknown</type> types will never find a match at Invocations involving two <type>unknown</type> inputs, or a unary operator
this step. with an <type>unknown</type> input, will never find a match at this step.
</para> </para>
</step> </step>
</substeps> </substeps>
...@@ -390,9 +392,9 @@ In this case there is no initial hint for which type to use, since no types ...@@ -390,9 +392,9 @@ In this case there is no initial hint for which type to use, since no types
are specified in the query. So, the parser looks for all candidate operators are specified in the query. So, the parser looks for all candidate operators
and finds that there are candidates accepting both string-category and and finds that there are candidates accepting both string-category and
bit-string-category inputs. Since string category is preferred when available, bit-string-category inputs. Since string category is preferred when available,
that category is selected, and the that category is selected, and then the
preferred type for strings, <type>text</type>, is used as the specific preferred type for strings, <type>text</type>, is used as the specific
type to resolve the unknown literals. type to resolve the unknown literals as.
</para> </para>
</example> </example>
...@@ -459,8 +461,8 @@ SELECT ~ CAST('20' AS int8) AS "negation"; ...@@ -459,8 +461,8 @@ SELECT ~ CAST('20' AS int8) AS "negation";
</indexterm> </indexterm>
<para> <para>
The specific function to be invoked is determined The specific function that is referenced by a function call
according to the following steps. is determined using the following procedure.
</para> </para>
<procedure> <procedure>
...@@ -471,7 +473,7 @@ SELECT ~ CAST('20' AS int8) AS "negation"; ...@@ -471,7 +473,7 @@ SELECT ~ CAST('20' AS int8) AS "negation";
Select the functions to be considered from the Select the functions to be considered from the
<classname>pg_proc</classname> system catalog. If a non-schema-qualified <classname>pg_proc</classname> system catalog. If a non-schema-qualified
function name was used, the functions function name was used, the functions
considered are those with a matching name and argument count that are considered are those with the matching name and argument count that are
visible in the current search path (see <xref linkend="ddl-schemas-path">). visible in the current search path (see <xref linkend="ddl-schemas-path">).
If a qualified function name was given, only functions in the specified If a qualified function name was given, only functions in the specified
schema are considered. schema are considered.
...@@ -554,7 +556,7 @@ Look for the best match. ...@@ -554,7 +556,7 @@ Look for the best match.
<substeps> <substeps>
<step performance="required"> <step performance="required">
<para> <para>
Discard candidate functions in which the input types do not match Discard candidate functions for which the input types do not match
and cannot be converted (using an implicit conversion) to match. and cannot be converted (using an implicit conversion) to match.
<type>unknown</type> literals are <type>unknown</type> literals are
assumed to be convertible to anything for this purpose. If only one assumed to be convertible to anything for this purpose. If only one
...@@ -615,9 +617,10 @@ Some examples follow. ...@@ -615,9 +617,10 @@ Some examples follow.
<title>Rounding Function Argument Type Resolution</title> <title>Rounding Function Argument Type Resolution</title>
<para> <para>
There is only one <function>round</function> function which takes two There is only one <function>round</function> function that takes two
arguments; it takes a first argument of <type>numeric</type> and arguments; it takes a first argument of type <type>numeric</type> and
a second argument of <type>integer</type>. So the following query automatically converts a second argument of type <type>integer</type>.
So the following query automatically converts
the first argument of type <type>integer</type> to the first argument of type <type>integer</type> to
<type>numeric</type>: <type>numeric</type>:
......