Commit 5baf6da7 authored by Peter Eisentraut

Documentation spell and markup checking

parent 5d0109bd
@@ -3886,7 +3886,7 @@
 particular operator family is applicable to a particular indexable column
 data type. The set of operators from the family that are actually usable
 with the indexed column are whichever ones accept the column's data type
-as their lefthand input.
+as their left-hand input.
 </para>
 <para>
@@ -4431,7 +4431,7 @@
 The function has no side effects. No information about the
 arguments is conveyed except via the return value. Any function
 that might throw an error depending on the values of its arguments
-is not leakproof.
+is not leak-proof.
 </entry>
 </row>
...
@@ -317,12 +317,12 @@ make install
 <note>
 <para>
 Some users have reported encountering a segmentation fault using
-openjade 1.4devel to build the PDFs, with a message like:
+OpenJade 1.4devel to build the PDFs, with a message like:
 <screen>
 openjade:./stylesheet.dsl:664:2:E: flow object not accepted by port; only display flow objects accepted
 make: *** [postgres-A4.tex-pdf] Segmentation fault
 </screen>
-Downgrading to openjade 1.3 should get rid of this error.
+Downgrading to OpenJade 1.3 should get rid of this error.
 </para>
 </note>
...
@@ -2752,7 +2752,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr);
 <para>
 In general, the input string can contain any combination of an allowed
 date specification, a whitespace character and an allowed time
-specification. Note that timezones are not supported by ECPG. It can
+specification. Note that time zones are not supported by ECPG. It can
 parse them but does not apply any calculation as the
 <productname>PostgreSQL</> server does for example. Timezone
 specifiers are silently discarded.
@@ -9164,7 +9164,7 @@ int rtypwidth(int sqltype, int sqllen);
 int rsetnull(int t, char *ptr);
 </synopsis>
 The function receives an integer that indicates the type of the
-variable and a pointer to the variable itself that is casted to a C
+variable and a pointer to the variable itself that is cast to a C
 char* pointer.
 </para>
 <para>
@@ -9249,7 +9249,7 @@ int risnull(int t, char *ptr);
 </synopsis>
 The function receives the type of the variable to test (<literal>t</>)
 as well a pointer to this variable (<literal>ptr</>). Note that the
-latter needs to be casted to a char*. See the function <xref
+latter needs to be cast to a char*. See the function <xref
 linkend="rsetnull"> for a list of possible variable types.
 </para>
 <para>
...
@@ -191,7 +191,7 @@ ExplainForeignScan (ForeignScanState *node,
 related functions to add fields to the <command>EXPLAIN</> output.
 The flag fields in <literal>es</> can be used to determine what to
 print, and the state of the <structname>ForeignScanState</> node
-can be inspected to provide runtime statistics in the <command>EXPLAIN
+can be inspected to provide run-time statistics in the <command>EXPLAIN
 ANALYZE</> case.
 </para>
@@ -489,20 +489,20 @@ GetForeignServerByName(const char *name, bool missing_ok);
 The <structfield>fdw_private</> list has no other restrictions and is
 not interpreted by the core backend in any way. The
 <structfield>fdw_exprs</> list, if not NIL, is expected to contain
-expression trees that are intended to be executed at runtime. These
+expression trees that are intended to be executed at run time. These
 trees will undergo post-processing by the planner to make them fully
 executable.
 </para>
 <para>
-In <function>GetForeignPlan</>, generally the passed-in targetlist can
+In <function>GetForeignPlan</>, generally the passed-in target list can
 be copied into the plan node as-is. The passed scan_clauses list
 contains the same clauses as <literal>baserel-&gt;baserestrictinfo</>,
 but may be re-ordered for better execution efficiency. In simple cases
 the FDW can just strip <structname>RestrictInfo</> nodes from the
 scan_clauses list (using <function>extract_actual_clauses</>) and put
 all the clauses into the plan node's qual list, which means that all the
-clauses will be checked by the executor at runtime. More complex FDWs
+clauses will be checked by the executor at run time. More complex FDWs
 may be able to check some of the clauses internally, in which case those
 clauses can be removed from the plan node's qual list so that the
 executor doesn't waste time rechecking them.
@@ -523,9 +523,9 @@ GetForeignServerByName(const char *name, bool missing_ok);
 to ensure that it gets massaged into executable form. It would probably
 also put control information into the plan node's
 <structfield>fdw_private</> field to tell the execution functions what
-to do at runtime. The query transmitted to the remote server would
+to do at run time. The query transmitted to the remote server would
 involve something like <literal>WHERE <replaceable>foreign_variable</> =
-$1</literal>, with the parameter value obtained at runtime from
+$1</literal>, with the parameter value obtained at run time from
 evaluation of the <structfield>fdw_exprs</> expression tree.
 </para>
@@ -541,7 +541,7 @@ GetForeignServerByName(const char *name, bool missing_ok);
 <literal>required_outer</> and list the specific join clause(s) in
 <literal>param_clauses</>. In <function>GetForeignPlan</>, the
 <replaceable>local_variable</> portion of the join clause would be added
-to <structfield>fdw_exprs</>, and then at runtime the case works the
+to <structfield>fdw_exprs</>, and then at run time the case works the
 same as for an ordinary restriction clause.
 </para>
...
@@ -122,7 +122,7 @@
 <listitem>
 <para>
-This is a boolean option. If true, it specifies that values of the
+This is a Boolean option. If true, it specifies that values of the
 column should not be matched against the null string (that is, the
 file-level <literal>null</literal> option). This has the same effect
 as listing the column in <command>COPY</>'s
@@ -184,7 +184,7 @@ CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;
 <para>
 Now you are ready to create the foreign data table. Using the
 <command>CREATE FOREIGN TABLE</> command, you will need to define
-the columns for the table, the CSV filename, and its format:
+the columns for the table, the CSV file name, and its format:
 <programlisting>
 CREATE FOREIGN TABLE pglog (
...
@@ -9648,9 +9648,9 @@ table2-mapping
 <literal>array_to_json(anyarray [, pretty_bool])</literal>
 </entry>
 <entry>
-Returns the array as JSON. A Postgres multi-dimensional array
+Returns the array as JSON. A PostgreSQL multidimensional array
 becomes a JSON array of arrays. Line feeds will be added between
-dimension 1 elements if pretty_bool is true.
+dimension 1 elements if <parameter>pretty_bool</parameter> is true.
 </entry>
 <entry><literal>array_to_json('{{1,5},{99,100}}'::int[])</literal></entry>
 <entry><literal>[[1,5],[99,100]]</literal></entry>
@@ -9664,7 +9664,7 @@ table2-mapping
 </entry>
 <entry>
 Returns the row as JSON. Line feeds will be added between level
-1 elements if pretty_bool is true.
+1 elements if <parameter>pretty_bool</parameter> is true.
 </entry>
 <entry><literal>row_to_json(row(1,'foo'))</literal></entry>
 <entry><literal>{"f1":1,"f2":"foo"}</literal></entry>
@@ -13813,7 +13813,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype);
 <entry><literal><function>pg_get_viewdef(<parameter>view_name</parameter>, <parameter>pretty_bool</>)</function></literal></entry>
 <entry><type>text</type></entry>
 <entry>get underlying <command>SELECT</command> command for view,
-lines with fields are wrapped to 80 columns if pretty_bool is true (<emphasis>deprecated</emphasis>)</entry>
+lines with fields are wrapped to 80 columns if <parameter>pretty_bool</parameter> is true (<emphasis>deprecated</emphasis>)</entry>
 </row>
 <row>
 <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>)</function></literal></entry>
@@ -13824,7 +13824,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype);
 <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>pretty_bool</>)</function></literal></entry>
 <entry><type>text</type></entry>
 <entry>get underlying <command>SELECT</command> command for view,
-lines with fields are wrapped to 80 columns if pretty_bool is true</entry>
+lines with fields are wrapped to 80 columns if <parameter>pretty_bool</parameter> is true</entry>
 </row>
 <row>
 <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>wrap_int</>)</function></literal></entry>
@@ -13845,7 +13845,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype);
 <row>
 <entry><literal><function>pg_tablespace_location(<parameter>tablespace_oid</parameter>)</function></literal></entry>
 <entry><type>text</type></entry>
-<entry>get the path in the filesystem that this tablespace is located in</entry>
+<entry>get the path in the file system that this tablespace is located in</entry>
 </row>
 <row>
 <entry><literal><function>pg_typeof(<parameter>any</parameter>)</function></literal></entry>
...
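The func.sgml hunk above documents that `array_to_json` turns a PostgreSQL multidimensional array into a JSON array of arrays, matching the `[[1,5],[99,100]]` example in the table. As a rough illustration of that documented mapping only (using Python's standard `json` module, not PostgreSQL itself):

```python
import json

# A 2x2 array like '{{1,5},{99,100}}'::int[] maps to nested JSON arrays.
matrix = [[1, 5], [99, 100]]

# Compact form, the same shape as the documented example output.
compact = json.dumps(matrix, separators=(",", ":"))
print(compact)  # [[1,5],[99,100]]

# With pretty_bool true the function adds line feeds between dimension-1
# elements; json.dumps(indent=...) gives a loosely similar pretty form.
print(json.dumps(matrix, indent=1))
```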
@@ -107,9 +107,9 @@
 If any of the keys can be null, also palloc an array of
 <literal>*nkeys</> booleans, store its address at
 <literal>*nullFlags</>, and set these null flags as needed.
-<literal>*nullFlags</> can be left NULL (its initial value)
+<literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value)
 if all keys are non-null.
-The return value can be NULL if the item contains no keys.
+The return value can be <symbol>NULL</symbol> if the item contains no keys.
 </para>
 </listitem>
 </varlistentry>
@@ -313,7 +313,7 @@
 </para>
 <para>
-Multi-column <acronym>GIN</acronym> indexes are implemented by building
+Multicolumn <acronym>GIN</acronym> indexes are implemented by building
 a single B-tree over composite values (column number, key value). The
 key values for different columns can be of different types.
 </para>
...
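The gin.sgml hunk above says a multicolumn GIN index keeps keys for all columns in one B-tree ordered on composite (column number, key value) pairs. A toy sketch of that composite ordering in Python, assuming nothing about GIN's actual on-disk layout (the sample values are invented):

```python
# Keys from different columns, with a different key type per column,
# all live in one ordered structure keyed by (column_number, key).
entries = [
    (1, "banana"),   # column 1 holds text keys
    (0, 42),         # column 0 holds integer keys
    (0, 7),
    (1, "apple"),
]

# Tuple ordering compares the column number first, then the key value,
# so keys of different types never get compared with each other --
# mirroring how one B-tree can serve every indexed column.
entries.sort()
print(entries)  # [(0, 7), (0, 42), (1, 'apple'), (1, 'banana')]
```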
@@ -665,7 +665,7 @@ my_distance(PG_FUNCTION_ARGS)
 a lot of random I/O. Beginning in version 9.2, PostgreSQL supports a more
 efficient method to build GiST indexes based on buffering, which can
 dramatically reduce the number of random I/Os needed for non-ordered data
-sets. For well-ordered datasets the benefit is smaller or non-existent,
+sets. For well-ordered data sets the benefit is smaller or non-existent,
 because only a small number of pages receive new tuples at a time, and
 those pages fit in cache even if the index as whole does not.
 </para>
...
@@ -363,7 +363,7 @@ amrescan (IndexScanDesc scan,
 ScanKey orderbys,
 int norderbys);
 </programlisting>
-Start or restart an indexscan, possibly with new scan keys. (To restart
+Start or restart an index scan, possibly with new scan keys. (To restart
 using previously-passed keys, NULL is passed for <literal>keys</> and/or
 <literal>orderbys</>.) Note that it is not allowed for
 the number of keys or order-by operators to be larger than
...
@@ -805,7 +805,7 @@ postgresql:///mydb?host=localhost&amp;port=5433
 </para>
 <para>
-The host part may be either hostname or an IP address. To specify an
+The host part may be either host name or an IP address. To specify an
 IPv6 host address, enclose it in square brackets:
 <synopsis>
 postgresql://[2001:db8::1234]/database
...
@@ -865,10 +865,10 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400
 </para>
 <para>
-<ulink url="http://pgfouine.projects.postgresql.org/">pgFouine</ulink>
+<ulink url="http://pgfouine.projects.postgresql.org/"><productname>pgFouine</productname></ulink>
 is an external project that does sophisticated log file analysis.
 <ulink
-url="http://bucardo.org/wiki/Check_postgres">check_postgres</ulink>
+url="http://bucardo.org/wiki/Check_postgres"><productname>check_postgres</productname></ulink>
 provides Nagios alerts when important messages appear in the log
 files, as well as detection of many other extraordinary conditions.
 </para>
...
@@ -210,7 +210,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 the collector just before going idle; so a query or transaction still in
 progress does not affect the displayed totals. Also, the collector itself
 emits a new report at most once per <varname>PGSTAT_STAT_INTERVAL</varname>
-milliseconds (500 msec unless altered while building the server). So the
+milliseconds (500 ms unless altered while building the server). So the
 displayed information lags behind actual activity. However, current-query
 information collected by <varname>track_activities</varname> is
 always up-to-date.
@@ -472,7 +472,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <table id="pg-stat-activity-view" xreflabel="pg_stat_activity">
-<title>pg_stat_activity view</title>
+<title><structname>pg_stat_activity</structname> View</title>
 <tgroup cols="3">
 <thead>
@@ -649,7 +649,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </note>
 <table id="pg-stat-bgwriter-view" xreflabel="pg_stat_bgwriter">
-<title>pg_stat_bgwriter view</title>
+<title><structname>pg_stat_bgwriter</structname> View</title>
 <tgroup cols="3">
 <thead>
@@ -736,7 +736,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-database-view" xreflabel="pg_stat_database">
-<title>pg_stat_database view</title>
+<title><structname>pg_stat_database</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -787,7 +787,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><type>bigint</></entry>
 <entry>Number of times disk blocks were found already in the buffer
 cache, so that a read was not necessary (this only includes hits in the
-PostgreSQL buffer cache, not the operating system's filesystem cache)
+PostgreSQL buffer cache, not the operating system's file system cache)
 </entry>
 </row>
 <row>
@@ -873,7 +873,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-all-tables-view" xreflabel="pg_stat_all_tables">
-<title>pg_stat_all_tables view</title>
+<title><structname>pg_stat_all_tables</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1011,7 +1011,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-all-indexes-view" xreflabel="pg_stat_all_indexes">
-<title>pg_stat_all_indexes view</title>
+<title><structname>pg_stat_all_indexes</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1104,7 +1104,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </note>
 <table id="pg-statio-all-tables-view" xreflabel="pg_statio_all_tables">
-<title>pg_statio_all_tables view</title>
+<title><structname>pg_statio_all_tables</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1185,7 +1185,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-statio-all-indexes-view" xreflabel="pg_statio_all_indexes">
-<title>pg_statio_all_indexes view</title>
+<title><structname>pg_statio_all_indexes</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1246,7 +1246,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-statio-all-sequences-view" xreflabel="pg_statio_all_sequences">
-<title>pg_statio_all_sequences view</title>
+<title><structname>pg_statio_all_sequences</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1293,7 +1293,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-user-functions-view" xreflabel="pg_stat_user_functions">
-<title>pg_stat_user_functions view</title>
+<title><structname>pg_stat_user_functions</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1348,7 +1348,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-replication-view" xreflabel="pg_stat_replication">
-<title>pg_stat_replication view</title>
+<title><structname>pg_stat_replication</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
@@ -1462,7 +1462,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 </para>
 <table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">
-<title>pg_stat_database_conflicts view</title>
+<title><structname>pg_stat_database_conflicts</structname> View</title>
 <tgroup cols="3">
 <thead>
 <row>
...
@@ -265,7 +265,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND stringu1 = 'xxx';
 </screen>
 The added condition <literal>stringu1 = 'xxx'</literal> reduces the
-output-rowcount estimate, but not the cost because we still have to visit
+output row count estimate, but not the cost because we still have to visit
 the same set of rows. Notice that the <literal>stringu1</> clause
 cannot be applied as an index condition, since this index is only on
 the <literal>unique1</> column. Instead it is applied as a filter on
@@ -385,7 +385,7 @@ WHERE t1.unique1 &lt; 10 AND t1.unique2 = t2.unique2;
 <literal>SELECT ... WHERE t2.unique2 = <replaceable>constant</></> case.
 (The estimated cost is actually a bit lower than what was seen above,
 as a result of caching that's expected to occur during the repeated
-indexscans on <literal>t2</>.) The
+index scans on <literal>t2</>.) The
 costs of the loop node are then set on the basis of the cost of the outer
 scan, plus one repetition of the inner scan for each outer row (10 * 7.87,
 here), plus a little CPU time for join processing.
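The loop-node costing just described (outer scan cost plus one inner-scan repetition per outer row) can be checked with the figures quoted in the text; this is only arithmetic on those numbers, not planner code:

```python
# Nested-loop join cost sketch using the figures from the text:
# 10 outer rows, each triggering one inner index scan on t2.
outer_rows = 10
inner_scan_cost = 7.87   # estimated cost per inner-scan repetition

# Total inner-scan contribution (about 78.7); the planner then adds the
# outer scan's cost and a little per-row CPU time for join processing.
repeated_inner = outer_rows * inner_scan_cost
print(repeated_inner)
```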
@@ -489,8 +489,8 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 the rows in the correct order, but a sequential scan and sort is preferred
 for <literal>onek</>, because there are many more rows to be visited in
 that table.
-(Seqscan-and-sort frequently beats an indexscan for sorting many rows,
-because of the nonsequential disk access required by the indexscan.)
+(Sequential-scan-and-sort frequently beats an index scan for sorting many rows,
+because of the nonsequential disk access required by the index scan.)
 </para>
 <para>
@@ -499,7 +499,7 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 flags described in <xref linkend="runtime-config-query-enable">.
 (This is a crude tool, but useful. See
 also <xref linkend="explicit-joins">.)
-For example, if we're unconvinced that seqscan-and-sort is the best way to
+For example, if we're unconvinced that sequential-scan-and-sort is the best way to
 deal with table <literal>onek</> in the previous example, we could try
 <screen>
@@ -519,7 +519,7 @@ WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 </screen>
 which shows that the planner thinks that sorting <literal>onek</> by
-indexscanning is about 12% more expensive than seqscan-and-sort.
+index-scanning is about 12% more expensive than sequential-scan-and-sort.
 Of course, the next question is whether it's right about that.
 We can investigate that using <command>EXPLAIN ANALYZE</>, as discussed
 below.
@@ -573,7 +573,7 @@ WHERE t1.unique1 &lt; 10 AND t1.unique2 = t2.unique2;
 comparable with the way that the cost estimates are shown. Multiply by
 the <literal>loops</> value to get the total time actually spent in
 the node. In the above example, we spent a total of 0.480 milliseconds
-executing the indexscans on <literal>tenk2</>.
+executing the index scans on <literal>tenk2</>.
 </para>
 <para>
@@ -634,7 +634,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE ten &lt; 7;
 <para>
 A case similar to filter conditions occurs with <quote>lossy</>
-indexscans. For example, consider this search for polygons containing a
+index scans. For example, consider this search for polygons containing a
 specific point:
 <screen>
@@ -649,9 +649,9 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @&gt; polygon '(0.5,2.0)';
 </screen>
 The planner thinks (quite correctly) that this sample table is too small
-to bother with an indexscan, so we have a plain sequential scan in which
+to bother with an index scan, so we have a plain sequential scan in which
 all the rows got rejected by the filter condition. But if we force an
-indexscan to be used, we see:
+index scan to be used, we see:
 <screen>
 SET enable_seqscan TO off;
@@ -808,9 +808,9 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000
 Total runtime: 2.857 ms
 </screen>
-the estimated cost and rowcount for the Index Scan node are shown as
+the estimated cost and row count for the Index Scan node are shown as
 though it were run to completion. But in reality the Limit node stopped
-requesting rows after it got two, so the actual rowcount is only 2 and
+requesting rows after it got two, so the actual row count is only 2 and
 the runtime is less than the cost estimate would suggest. This is not
an estimation error, only a discrepancy in the way the estimates and true an estimation error, only a discrepancy in the way the estimates and true
values are displayed. values are displayed.
...@@ -827,13 +827,13 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000 ...@@ -827,13 +827,13 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000
the inner (second) child is backed up and rescanned for the portion of its the inner (second) child is backed up and rescanned for the portion of its
rows matching that key value. <command>EXPLAIN ANALYZE</> counts these rows matching that key value. <command>EXPLAIN ANALYZE</> counts these
repeated emissions of the same inner rows as if they were real additional repeated emissions of the same inner rows as if they were real additional
rows. When there are many outer duplicates, the reported actual rowcount rows. When there are many outer duplicates, the reported actual row count
for the inner child plan node can be significantly larger than the number for the inner child plan node can be significantly larger than the number
of rows that are actually in the inner relation. of rows that are actually in the inner relation.
</para> </para>
<para> <para>
BitmapAnd and BitmapOr nodes always report their actual rowcounts as zero, BitmapAnd and BitmapOr nodes always report their actual row counts as zero,
due to implementation limitations. due to implementation limitations.
</para> </para>
</sect2> </sect2>
......
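The `loops` accounting in the EXPLAIN ANALYZE hunk above (per-loop actual time multiplied by the loop count) can be illustrated with a trivial calculation. The figures below are made up to match the 0.480 ms total quoted in the text, not taken from a real plan:

```python
# Sketch of EXPLAIN ANALYZE per-node time accounting: actual-time
# figures are reported per loop, so the total time spent in a node
# is the per-loop time multiplied by the loops value.
def total_node_time_ms(actual_time_ms_per_loop, loops):
    """Total wall time spent in a plan node across all loops."""
    return actual_time_ms_per_loop * loops

# Illustrative numbers: 0.048 ms per loop, 10 loops -> 0.480 ms total,
# matching the tenk2 index-scan total quoted in the documentation text.
print(round(total_node_time_ms(0.048, 10), 3))  # prints 0.48
```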
@@ -52,22 +52,22 @@ pgrowlocks(text) returns setof record
<row>
<entry><structfield>locker</structfield></entry>
<entry><type>xid</type></entry>
-<entry>Transaction ID of locker, or multixact ID if multi-transaction</entry>
+<entry>Transaction ID of locker, or multixact ID if multitransaction</entry>
</row>
<row>
<entry><structfield>multi</structfield></entry>
<entry><type>boolean</type></entry>
-<entry>True if locker is a multi-transaction</entry>
+<entry>True if locker is a multitransaction</entry>
</row>
<row>
<entry><structfield>xids</structfield></entry>
<entry><type>xid[]</type></entry>
-<entry>Transaction IDs of lockers (more than one if multi-transaction)</entry>
+<entry>Transaction IDs of lockers (more than one if multitransaction)</entry>
</row>
<row>
<entry><structfield>pids</structfield></entry>
<entry><type>integer[]</type></entry>
-<entry>Process IDs of locking backends (more than one if multi-transaction)</entry>
+<entry>Process IDs of locking backends (more than one if multitransaction)</entry>
</row>
</tbody>
......
@@ -31,7 +31,7 @@
idea of what the fastest <xref linkend="guc-wal-sync-method"> is on your
specific system,
as well as supplying diagnostic information in the event of an
-identified I/O problem.  However, differences shown by pg_test_fsync
+identified I/O problem.  However, differences shown by <application>pg_test_fsync</application>
might not make any difference in real database throughput, especially
since many database servers are not speed-limited by their transaction
logs.
......
@@ -101,7 +101,7 @@ Histogram of timing durations:
When the query executor is running a statement using
<command>EXPLAIN ANALYZE</command>, individual operations are timed as well
as showing a summary.  The overhead of your system can be checked by
-counting rows with the psql program:
+counting rows with the <application>psql</application> program:
<screen>
CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
@@ -226,7 +226,7 @@ Histogram of timing durations:
reliable.  There are several ways that TSC can fail to provide an accurate
timing source, making it unreliable.  Older systems can have a TSC clock that
varies based on the CPU temperature, making it unusable for timing.  Trying
-to use TSC on some older multi-core CPUs can give a reported time that's
+to use TSC on some older multicore CPUs can give a reported time that's
inconsistent among multiple cores.  This can result in the time going
backwards, a problem this program checks for.  And even the newest systems
can fail to provide accurate TSC timing with very aggressive power saving
......
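The "histogram of timing durations" that pg_test_timing produces can be approximated in any language by calling the clock in a tight loop and bucketing the successive deltas. This pure-Python sketch carries far more per-call overhead than the C loop in pg_test_timing, so it only illustrates the idea, not the real numbers:

```python
import time

def timing_histogram(n=100_000):
    """Call the monotonic clock n times and bucket each successive
    delta by its decimal digit count in nanoseconds (1 = under 10 ns,
    2 = under 100 ns, and so on), loosely mimicking pg_test_timing."""
    buckets = {}
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        key = len(str(max(now - prev, 1)))  # digits in the nanosecond delta
        buckets[key] = buckets.get(key, 0) + 1
        prev = now
    return buckets

for digits, count in sorted(timing_histogram().items()):
    print(f"< 10^{digits} ns: {count}")
```

A clock whose deltas frequently land in the high buckets (or come back negative, which this sketch clamps) would be the kind of unreliable timing source the TSC discussion above warns about.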
@@ -1783,13 +1783,13 @@ The commands accepted in walsender mode are:
<listitem>
<para>
<filename>pg_xlog</>, including subdirectories. If the backup is run
-with wal files included, a synthesized version of pg_xlog will be
+with WAL files included, a synthesized version of <filename>pg_xlog</filename> will be
included, but it will only contain the files necessary for the
backup to work, not the rest of the contents.
</para>
</listitem>
</itemizedlist>
-Owner, group and file mode are set if the underlying filesystem on
+Owner, group and file mode are set if the underlying file system on
the server supports it.
</para>
<para>
......
@@ -473,7 +473,7 @@ with existing key (during)=([ 2010-01-01 14:30:00, 2010-01-01 15:30:00 )).
<para>
You can use the <link linkend="btree-gist"><literal>btree_gist</></link>
-extension to define exclusion constraints on plain scalar datatypes, which
+extension to define exclusion constraints on plain scalar data types, which
can then be combined with range exclusions for maximum flexibility.  For
example, after <literal>btree_gist</literal> is installed, the following
constraint will reject overlapping ranges only if the meeting room numbers
......
@@ -249,7 +249,7 @@ PostgreSQL documentation
<term><option>--no-readline</></term>
<listitem>
<para>
-Do not use readline for line editing and do not use the history.
+Do not use <application>readline</application> for line editing and do not use the history.
This can be useful to turn off tab expansion when cutting and pasting.
</para>
</listitem>
@@ -289,7 +289,7 @@ PostgreSQL documentation
Specifies printing options, in the style of
<command>\pset</command>.  Note that here you
have to separate name and value with an equal sign instead of a
-space. For example, to set the output format to LaTeX, you could write
+space. For example, to set the output format to <application>LaTeX</application>, you could write
<literal>-P format=latex</literal>.
</para>
</listitem>
@@ -607,7 +607,7 @@ PostgreSQL documentation
$ <userinput>psql "service=myservice sslmode=require"</userinput>
$ <userinput>psql postgresql://dbmaster:5433/mydb?sslmode=require</userinput>
</programlisting>
-This way you can also use LDAP for connection parameter lookup as
+This way you can also use <acronym>LDAP</acronym> for connection parameter lookup as
described in <xref linkend="libpq-ldap">.
See <xref linkend="libpq-connect"> for more information on all the
available connection options.
@@ -1670,9 +1670,9 @@ Tue Oct 26 21:40:57 CEST 1999
<listitem>
<para>
The <literal>\ir</> command is similar to <literal>\i</>, but resolves
-relative pathnames differently. When executing in interactive mode,
+relative file names differently. When executing in interactive mode,
the two commands behave identically. However, when invoked from a
-script, <literal>\ir</literal> interprets pathnames relative to the
+script, <literal>\ir</literal> interprets file names relative to the
directory in which the script is located, rather than the current
working directory.
</para>
@@ -2001,7 +2001,7 @@ lo_import 152801
formats put out tables that are intended to
be included in documents using the respective mark-up
language. They are not complete documents! (This might not be
-so dramatic in <acronym>HTML</acronym>, but in LaTeX you must
+so dramatic in <acronym>HTML</acronym>, but in <application>LaTeX</application> you must
have a complete document wrapper.)
</para>
</listitem>
@@ -3031,7 +3031,7 @@ testdb=&gt; <userinput>\set content `cat my_file.txt`</userinput>
testdb=&gt; <userinput>INSERT INTO my_table VALUES (:'content');</userinput>
</programlisting>
(Note that this still won't work if <filename>my_file.txt</filename> contains NUL bytes.
-psql does not support embedded NUL bytes in variable values.)
+<application>psql</application> does not support embedded NUL bytes in variable values.)
</para>
<para>
@@ -3370,7 +3370,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line '
<listitem>
<para>
-Alternative location for the command history file. Tilde ("~") expansion is performed.
+Alternative location for the command history file. Tilde (<literal>~</literal>) expansion is performed.
</para>
</listitem>
</varlistentry>
@@ -3380,7 +3380,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line '
<listitem>
<para>
-Alternative location of the user's .psqlrc file. Tilde ("~") expansion is performed.
+Alternative location of the user's <filename>.psqlrc</filename> file. Tilde (<literal>~</literal>) expansion is performed.
</para>
</listitem>
</varlistentry>
@@ -3445,7 +3445,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line '
<listitem>
<para>
Both the system-wide <filename>psqlrc</filename> file and the user's
-<filename>~/.psqlrc</filename> file can be made psql-version-specific
+<filename>~/.psqlrc</filename> file can be made <application>psql</application>-version-specific
by appending a dash and the <productname>PostgreSQL</productname>
major or minor <application>psql</application> release number,
for example <filename>~/.psqlrc-9.2</filename> or
......
@@ -1892,7 +1892,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS
<para>
The query planner has more flexibility when dealing with functions that
-have no side effects. Such functions are referred to as LEAKPROOF, and
+have no side effects. Such functions are referred to as <literal>LEAKPROOF</literal>, and
include many simple, commonly used operators, such as many equality
operators. The query planner can safely allow such functions to be evaluated
at any point in the query execution process, since invoking them on rows
@@ -1910,11 +1910,11 @@ CREATE VIEW phone_number WITH (security_barrier) AS
in the limited sense that the contents of the invisible tuples will not be
passed to possibly-insecure functions. The user may well have other means
of making inferences about the unseen data; for example, they can see the
-query plan using <command>EXPLAIN</command>, or measure the runtime of
+query plan using <command>EXPLAIN</command>, or measure the run time of
queries against the view. A malicious attacker might be able to infer
something about the amount of unseen data, or even gain some information
about the data distribution or most common values (since these things may
-affect the runtime of the plan; or even, since they are also reflected in
+affect the run time of the plan; or even, since they are also reflected in
the optimizer statistics, the choice of plan). If these types of "covert
channel" attacks are of concern, it is probably unwise to grant any access
to the data at all.
......
@@ -178,7 +178,7 @@ $ for DBNAME in template0 template1 postgres; do
Once built, install this policy package using the
<command>semodule</> command, which loads supplied policy packages
into the kernel.  If the package is correctly installed,
-<literal><command>semodule</> -l</> should list sepgsql-regtest as an
+<literal><command>semodule</> -l</> should list <literal>sepgsql-regtest</literal> as an
available policy package:
</para>
@@ -467,7 +467,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100;
<sect3>
<title>Trusted Procedures</title>
<para>
-Trusted procedures are similar to security definer functions or set-uid
+Trusted procedures are similar to security definer functions or setuid
commands. <productname>SELinux</> provides a feature to allow trusted
code to run using a security label different from that of the client,
generally for the purpose of providing highly controlled access to
......
@@ -102,12 +102,12 @@
<note>
<para>
-The <acronym>SP-GiST</acronym> core code takes care of NULL entries.
+The <acronym>SP-GiST</acronym> core code takes care of null entries.
Although <acronym>SP-GiST</acronym> indexes do store entries for nulls
in indexed columns, this is hidden from the index operator class code:
no null index entries or search conditions will ever be passed to the
operator class methods.  (It is assumed that <acronym>SP-GiST</acronym>
-operators are strict and so cannot succeed for NULL values.)  NULLs
+operators are strict and so cannot succeed for null values.)  Null values
are therefore not discussed further here.
</para>
</note>
@@ -136,7 +136,7 @@
<listitem>
<para>
Returns static information about the index implementation, including
-the datatype OIDs of the prefix and node label data types.
+the data type OIDs of the prefix and node label data types.
</para>
<para>
The <acronym>SQL</> declaration of the function must look like this:
@@ -163,7 +163,7 @@ typedef struct spgConfigOut
</programlisting>
<structfield>attType</> is passed in order to support polymorphic
-index operator classes; for ordinary fixed-data-type opclasses, it
+index operator classes; for ordinary fixed-data-type operator classes, it
will always have the same value and so can be ignored.
</para>
@@ -626,7 +626,7 @@ typedef struct spgLeafConsistentOut
<para>
This section covers implementation details and other tricks that are
-useful for implementors of <acronym>SP-GiST</acronym> operator classes to
+useful for implementers of <acronym>SP-GiST</acronym> operator classes to
know.
</para>
......
@@ -2292,8 +2292,8 @@ SELECT ts_lexize('public.simple_dict','The');
word with a synonym. Phrases are not supported (use the thesaurus
template (<xref linkend="textsearch-thesaurus">) for that). A synonym
dictionary can be used to overcome linguistic problems, for example, to
-prevent an English stemmer dictionary from reducing the word 'Paris' to
-'pari'. It is enough to have a <literal>Paris paris</literal> line in the
+prevent an English stemmer dictionary from reducing the word <quote>Paris</quote> to
+<quote>pari</quote>. It is enough to have a <literal>Paris paris</literal> line in the
synonym dictionary and put it before the <literal>english_stem</>
dictionary. For example:
......
@@ -1768,7 +1768,7 @@ typedef struct
Finally, all variable-length types must also be passed
by reference.  All variable-length types must begin
with an opaque length field of exactly 4 bytes, which will be set
-by SET_VARSIZE; never set this field directly! All data to
+by <symbol>SET_VARSIZE</symbol>; never set this field directly! All data to
be stored within that type must be located in the memory
immediately following that length field.  The
length field contains the total length of the structure,
......
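The layout rule in the hunk above (an opaque 4-byte length word counting itself plus the payload, with the data immediately following) can be sketched outside of C. This Python snippet mimics only the byte layout described in the text; the real C code would use the <symbol>SET_VARSIZE</symbol> and <symbol>VARSIZE</symbol> macros from <filename>postgres.h</filename>, whose exact header encoding involves details not shown here:

```python
import struct

def make_varlena(payload: bytes) -> bytes:
    """Build a PostgreSQL-style variable-length datum: a 4-byte length
    word holding the total size (header included), then the payload.
    A schematic stand-in for what SET_VARSIZE arranges in C."""
    total = 4 + len(payload)
    return struct.pack("<I", total) + payload

def varlena_size(datum: bytes) -> int:
    """Read back the length word, as the VARSIZE macro would."""
    return struct.unpack_from("<I", datum)[0]

d = make_varlena(b"hello")
print(varlena_size(d), len(d))  # prints: 9 9
```

The invariant to notice is that the stored length always equals the full allocation, 4-byte header plus data, which is why the text insists the field is set via the macro rather than by hand.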