Commit 256f6ba7 authored by Peter Eisentraut

Documentation spell checking and markup improvements

parent 30b5ede7
@@ -223,7 +223,7 @@ include 'filename'
 <secondary>in configuration file</secondary>
 </indexterm>
 The <filename>postgresql.conf</> file can also contain
-<firstterm>include_dir directives</>, which specify an entire directory
+<literal>include_dir</literal> directives, which specify an entire directory
 of configuration files to include. It is used similarly:
 <programlisting>
 include_dir 'directory'
@@ -234,7 +234,7 @@ include 'filename'
 names end with the suffix <literal>.conf</literal> will be included. File
 names that start with the <literal>.</literal> character are also excluded,
 to prevent mistakes as they are hidden on some platforms. Multiple files
-within an include directory are processed in filename order. The filenames
+within an include directory are processed in file name order. The file names
 are ordered by C locale rules, ie. numbers before letters, and uppercase
 letters before lowercase ones.
 </para>
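As a sketch of the include_dir mechanism this hunk documents (the directory and file names here are hypothetical):

# postgresql.conf
include_dir 'conf.d'
# Files conf.d/00-memory.conf and conf.d/10-logging.conf would then be
# read in that order, per the C-locale file name ordering described above.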
@@ -1211,7 +1211,7 @@ include 'filename'
 Specifies the maximum amount of disk space that a session can use
 for temporary files, such as sort and hash temporary files, or the
 storage file for a held cursor. A transaction attempting to exceed
-this limit will be cancelled.
+this limit will be canceled.
 The value is specified in kilobytes, and <literal>-1</> (the
 default) means no limit.
 Only superusers can change this setting.
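A short sketch of the setting described above (the values are arbitrary):

SET temp_file_limit = 1048576;  -- limit this session to about 1 GB, in kilobytes; superuser only
SET temp_file_limit = -1;       -- the default: no limit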
@@ -3358,7 +3358,7 @@ local0.*    /var/log/postgresql
 <para>
 When <varname>logging_collector</varname> is enabled,
 this parameter sets the file names of the created log files. The value
-is treated as a <systemitem>strftime</systemitem> pattern,
+is treated as a <function>strftime</function> pattern,
 so <literal>%</literal>-escapes can be used to specify time-varying
 file names. (Note that if there are
 any time-zone-dependent <literal>%</literal>-escapes, the computation
......
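The log_filename hunk above concerns a strftime pattern; a sketch of the stock configuration (this pattern is the shipped default):

logging_collector = on
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'  # %-escapes expand when each file is created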
@@ -4098,7 +4098,7 @@ SET xmloption TO { DOCUMENT | CONTENT };
 representations of XML values, such as in the above examples.
 This would ordinarily mean that encoding declarations contained in
 XML data can become invalid as the character data is converted
-to other encodings while travelling between client and server,
+to other encodings while traveling between client and server,
 because the embedded encoding declaration is not changed. To cope
 with this behavior, encoding declarations contained in
 character strings presented for input to the <type>xml</type> type
......
@@ -450,7 +450,7 @@ ExecForeignInsert (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
@@ -495,7 +495,7 @@ ExecForeignUpdate (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
@@ -538,7 +538,7 @@ ExecForeignDelete (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
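The slot-and-row-count contract these hunks describe is what makes ordinary SQL behave correctly against a foreign table; a sketch, assuming a foreign table ft already exists:

-- Each slot the FDW returns feeds one RETURNING row and increments the
-- command tag's row count (here, INSERT 0 2).
INSERT INTO ft (a, b) VALUES (1, 'x'), (2, 'y') RETURNING a;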
......
@@ -9928,7 +9928,7 @@ table2-mapping
 </indexterm>
 <literal>array_to_json(anyarray [, pretty_bool])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
 Returns the array as JSON. A PostgreSQL multidimensional array
 becomes a JSON array of arrays. Line feeds will be added between
@@ -9944,7 +9944,7 @@ table2-mapping
 </indexterm>
 <literal>row_to_json(record [, pretty_bool])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
 Returns the row as JSON. Line feeds will be added between level
 1 elements if <parameter>pretty_bool</parameter> is true.
@@ -9959,12 +9959,12 @@ table2-mapping
 </indexterm>
 <literal>to_json(anyelement)</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
-Returns the value as JSON. If the data type is not builtin, and there
-is a cast from the type to json, the cast function will be used to
+Returns the value as JSON. If the data type is not built in, and there
+is a cast from the type to <type>json</type>, the cast function will be used to
 perform the conversion. Otherwise, for any value other than a number,
-a boolean or NULL, the text representation will be used, escaped and
+a Boolean, or a null value, the text representation will be used, escaped and
 quoted so that it is legal JSON.
 </entry>
 <entry><literal>to_json('Fred said "Hi."'::text)</literal></entry>
@@ -9977,9 +9977,9 @@ table2-mapping
 </indexterm>
 <literal>json_array_length(json)</literal>
 </entry>
-<entry>int</entry>
+<entry><type>int</type></entry>
 <entry>
-Returns the number of elements in the outermost json array.
+Returns the number of elements in the outermost JSON array.
 </entry>
 <entry><literal>json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]')</literal></entry>
 <entry><literal>5</literal></entry>
@@ -9991,9 +9991,9 @@ table2-mapping
 </indexterm>
 <literal>json_each(json)</literal>
 </entry>
-<entry>SETOF key text, value json</entry>
+<entry><type>SETOF key text, value json</type></entry>
 <entry>
-Expands the outermost json object into a set of key/value pairs.
+Expands the outermost JSON object into a set of key/value pairs.
 </entry>
 <entry><literal>select * from json_each('{"a":"foo", "b":"bar"}')</literal></entry>
 <entry>
@@ -10012,9 +10012,9 @@ table2-mapping
 </indexterm>
 <literal>json_each_text(from_json json)</literal>
 </entry>
-<entry>SETOF key text, value text</entry>
+<entry><type>SETOF key text, value text</type></entry>
 <entry>
-Expands the outermost json object into a set of key/value pairs. The
+Expands the outermost JSON object into a set of key/value pairs. The
 returned value will be of type text.
 </entry>
 <entry><literal>select * from json_each_text('{"a":"foo", "b":"bar"}')</literal></entry>
@@ -10034,9 +10034,9 @@ table2-mapping
 </indexterm>
 <literal>json_extract_path(from_json json, VARIADIC path_elems text[])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
-Returns json object pointed to by <parameter>path_elems</parameter>.
+Returns JSON object pointed to by <parameter>path_elems</parameter>.
 </entry>
 <entry><literal>json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4')</literal></entry>
 <entry><literal>{"f5":99,"f6":"foo"}</literal></entry>
@@ -10048,9 +10048,9 @@ table2-mapping
 </indexterm>
 <literal>json_extract_path_text(from_json json, VARIADIC path_elems text[])</literal>
 </entry>
-<entry>text</entry>
+<entry><type>text</type></entry>
 <entry>
-Returns json object pointed to by <parameter>path_elems</parameter>.
+Returns JSON object pointed to by <parameter>path_elems</parameter>.
 </entry>
 <entry><literal>json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6')</literal></entry>
 <entry><literal>foo</literal></entry>
@@ -10062,9 +10062,9 @@ table2-mapping
 </indexterm>
 <literal>json_object_keys(json)</literal>
 </entry>
-<entry>SETOF text</entry>
+<entry><type>SETOF text</type></entry>
 <entry>
-Returns set of keys in the json object. Only the "outer" object will be displayed.
+Returns set of keys in the JSON object. Only the <quote>outer</quote> object will be displayed.
 </entry>
 <entry><literal>json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')</literal></entry>
 <entry>
@@ -10083,11 +10083,11 @@ table2-mapping
 </indexterm>
 <literal>json_populate_record(base anyelement, from_json json, [, use_json_as_text bool=false]</literal>
 </entry>
-<entry>anyelement</entry>
+<entry><type>anyelement</type></entry>
 <entry>
-Expands the object in from_json to a row whose columns match
+Expands the object in <replaceable>from_json</replaceable> to a row whose columns match
 the record type defined by base. Conversion will be best
-effort; columns in base with no corresponding key in from_json
+effort; columns in base with no corresponding key in <replaceable>from_json</replaceable>
 will be left null. A column may only be specified once.
 </entry>
 <entry><literal>select * from json_populate_record(null::x, '{"a":1,"b":2}')</literal></entry>
@@ -10106,12 +10106,12 @@ table2-mapping
 </indexterm>
 <literal>json_populate_recordset(base anyelement, from_json json, [, use_json_as_text bool=false]</literal>
 </entry>
-<entry>SETOF anyelement</entry>
+<entry><type>SETOF anyelement</type></entry>
 <entry>
-Expands the outermost set of objects in from_json to a set
+Expands the outermost set of objects in <replaceable>from_json</replaceable> to a set
 whose columns match the record type defined by base.
 Conversion will be best effort; columns in base with no
-corresponding key in from_json will be left null. A column
+corresponding key in <replaceable>from_json</replaceable> will be left null. A column
 may only be specified once.
 </entry>
 <entry><literal>select * from json_populate_recordset(null::x, '[{"a":1,"b":2},{"a":3,"b":4}]')</literal></entry>
@@ -10131,9 +10131,9 @@ table2-mapping
 </indexterm>
 <literal>json_array_elements(json)</literal>
 </entry>
-<entry>SETOF json</entry>
+<entry><type>SETOF json</type></entry>
 <entry>
-Expands a json array to a set of json elements.
+Expands a JSON array to a set of JSON elements.
 </entry>
 <entry><literal>json_array_elements('[1,true, [2,false]]')</literal></entry>
 <entry>
@@ -10152,8 +10152,8 @@ table2-mapping
 <note>
 <para>
-The <xref linkend="hstore"> extension has a cast from hstore to
-json, so that converted hstore values are represented as json objects,
+The <xref linkend="hstore"> extension has a cast from <type>hstore</type> to
+<type>json</type>, so that converted <type>hstore</type> values are represented as JSON objects,
 not as string values.
 </para>
 </note>
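The json_populate_record rows above reference a composite type x without defining it; a self-contained sketch:

CREATE TYPE x AS (a int, b int);
SELECT * FROM json_populate_record(null::x, '{"a":1,"b":2}');
-- returns one row: a = 1, b = 2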
@@ -10161,7 +10161,7 @@ table2-mapping
 <para>
 See also <xref linkend="functions-aggregate"> about the aggregate
 function <function>json_agg</function> which aggregates record
-values as json efficiently.
+values as JSON efficiently.
 </para>
 </sect1>
@@ -11546,7 +11546,7 @@ SELECT NULLIF(value, '(none)') ...
 <entry>
 <type>json</type>
 </entry>
-<entry>aggregates records as a json array of objects</entry>
+<entry>aggregates records as a JSON array of objects</entry>
 </row>
 <row>
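A brief sketch of the json_agg aggregate that this row documents:

SELECT json_agg(t) FROM (VALUES (1, 'red'), (2, 'blue')) AS t(id, color);
-- one json value: an array of two objects, e.g. [{"id":1,"color":"red"}, {"id":2,"color":"blue"}]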
@@ -14904,7 +14904,7 @@ SELECT set_config('log_statement_stats', 'off', false);
 </sect2>
 <sect2 id="functions-admin-signal">
-<title>Server Signalling Functions</title>
+<title>Server Signaling Functions</title>
 <indexterm>
 <primary>pg_cancel_backend</primary>
@@ -14932,7 +14932,7 @@ SELECT set_config('log_statement_stats', 'off', false);
 </para>
 <table id="functions-admin-signal-table">
-<title>Server Signalling Functions</title>
+<title>Server Signaling Functions</title>
 <tgroup cols="3">
 <thead>
 <row><entry>Name</entry> <entry>Return Type</entry> <entry>Description</entry>
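A sketch of the signaling functions this section covers (the pid is illustrative):

SELECT pg_cancel_backend(12345);   -- ask backend 12345 to cancel its current query
SELECT pg_terminate_backend(pid)   -- end whole sessions instead
FROM pg_stat_activity WHERE state = 'idle';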
......
@@ -105,7 +105,7 @@
 Returns a palloc'd array of keys given an item to be indexed. The
 number of returned keys must be stored into <literal>*nkeys</>.
 If any of the keys can be null, also palloc an array of
-<literal>*nkeys</> booleans, store its address at
+<literal>*nkeys</> <type>bool</type> fields, store its address at
 <literal>*nullFlags</>, and set these null flags as needed.
 <literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value)
 if all keys are non-null.
@@ -130,11 +130,11 @@
 <literal>query</> and the method it should use to extract key values.
 The number of returned keys must be stored into <literal>*nkeys</>.
 If any of the keys can be null, also palloc an array of
-<literal>*nkeys</> booleans, store its address at
+<literal>*nkeys</> <type>bool</type> fields, store its address at
 <literal>*nullFlags</>, and set these null flags as needed.
-<literal>*nullFlags</> can be left NULL (its initial value)
+<literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value)
 if all keys are non-null.
-The return value can be NULL if the <literal>query</> contains no keys.
+The return value can be <symbol>NULL</symbol> if the <literal>query</> contains no keys.
 </para>
 <para>
@@ -168,8 +168,8 @@
 an array of <literal>*nkeys</> booleans and store its address at
 <literal>*pmatch</>. Each element of the array should be set to TRUE
 if the corresponding key requires partial match, FALSE if not.
-If <literal>*pmatch</> is set to NULL then GIN assumes partial match
-is not required. The variable is initialized to NULL before call,
+If <literal>*pmatch</> is set to <symbol>NULL</symbol> then GIN assumes partial match
+is not required. The variable is initialized to <symbol>NULL</symbol> before call,
 so this argument can simply be ignored by operator classes that do
 not support partial match.
 </para>
@@ -181,7 +181,7 @@
 To use it, <function>extractQuery</> must allocate
 an array of <literal>*nkeys</> Pointers and store its address at
 <literal>*extra_data</>, then store whatever it wants to into the
-individual pointers. The variable is initialized to NULL before
+individual pointers. The variable is initialized to <symbol>NULL</symbol> before
 call, so this argument can simply be ignored by operator classes that
 do not require extra data. If <literal>*extra_data</> is set, the
 whole array is passed to the <function>consistent</> method, and
@@ -215,7 +215,7 @@
 and so are the <literal>queryKeys[]</> and <literal>nullFlags[]</>
 arrays previously returned by <function>extractQuery</>.
 <literal>extra_data</> is the extra-data array returned by
-<function>extractQuery</>, or NULL if none.
+<function>extractQuery</>, or <symbol>NULL</symbol> if none.
 </para>
 <para>
@@ -261,7 +261,7 @@
 that generated the partial match query is provided, in case its
 semantics are needed to determine when to end the scan. Also,
 <literal>extra_data</> is the corresponding element of the extra-data
-array made by <function>extractQuery</>, or NULL if none.
+array made by <function>extractQuery</>, or <symbol>NULL</symbol> if none.
 Null keys are never passed to this function.
 </para>
 </listitem>
@@ -305,9 +305,9 @@
 </para>
 <para>
-As of <productname>PostgreSQL</productname> 9.1, NULL key values can be
-included in the index. Also, placeholder NULLs are included in the index
-for indexed items that are NULL or contain no keys according to
+As of <productname>PostgreSQL</productname> 9.1, null key values can be
+included in the index. Also, placeholder nulls are included in the index
+for indexed items that are null or contain no keys according to
 <function>extractValue</>. This allows searches that should find empty
 items to do so.
 </para>
@@ -471,11 +471,11 @@
 <para>
 <acronym>GIN</acronym> assumes that indexable operators are strict. This
-means that <function>extractValue</> will not be called at all on a NULL
+means that <function>extractValue</> will not be called at all on a null
 item value (instead, a placeholder index entry is created automatically),
-and <function>extractQuery</function> will not be called on a NULL query
+and <function>extractQuery</function> will not be called on a null query
 value either (instead, the query is presumed to be unsatisfiable). Note
-however that NULL key values contained within a non-null composite item
+however that null key values contained within a non-null composite item
 or query value are supported.
 </para>
 </sect1>
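From SQL, the extractValue/extractQuery machinery above is exercised whenever a GIN index is scanned; a minimal sketch with a hypothetical table:

CREATE TABLE docs (id serial PRIMARY KEY, tags text[]);
CREATE INDEX docs_tags_gin ON docs USING gin (tags);
-- extractQuery decomposes the right-hand array into index keys here:
SELECT id FROM docs WHERE tags @> ARRAY['postgres'];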
......
@@ -325,7 +325,7 @@ b
 <row>
 <entry><function>hstore_to_json(hstore)</function></entry>
 <entry><type>json</type></entry>
-<entry>get <type>hstore</type> as a json value</entry>
+<entry>get <type>hstore</type> as a <type>json</type> value</entry>
 <entry><literal>hstore_to_json('"a key"=&gt;1, b=&gt;t, c=&gt;null, d=&gt;12345, e=&gt;012345, f=&gt;1.234, g=&gt;2.345e+4')</literal></entry>
 <entry><literal>{"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}</literal></entry>
 </row>
@@ -333,7 +333,7 @@ b
 <row>
 <entry><function>hstore_to_json_loose(hstore)</function></entry>
 <entry><type>json</type></entry>
-<entry>get <type>hstore</type> as a json value, but attempting to distinguish numerical and boolean values so they are unquoted in the json</entry>
+<entry>get <type>hstore</type> as a <type>json</type> value, but attempting to distinguish numerical and Boolean values so they are unquoted in the JSON</entry>
 <entry><literal>hstore_to_json_loose('"a key"=&gt;1, b=&gt;t, c=&gt;null, d=&gt;12345, e=&gt;012345, f=&gt;1.234, g=&gt;2.345e+4')</literal></entry>
 <entry><literal>{"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}</literal></entry>
 </row>
......
@@ -113,8 +113,8 @@
 <structfield>amoptionalkey</structfield> false.
 One reason that an index AM might set
 <structfield>amoptionalkey</structfield> false is if it doesn't index
-NULLs. Since most indexable operators are
-strict and hence cannot return TRUE for NULL inputs,
+null values. Since most indexable operators are
+strict and hence cannot return true for null inputs,
 it is at first sight attractive to not store index entries for null values:
 they could never be returned by an index scan anyway. However, this
 argument fails when an index scan has no restriction clause for a given
......
@@ -13,7 +13,7 @@
 information schema is defined in the SQL standard and can therefore
 be expected to be portable and remain stable &mdash; unlike the system
 catalogs, which are specific to
-<productname>PostgreSQL</productname> and are modelled after
+<productname>PostgreSQL</productname> and are modeled after
 implementation concerns. The information schema views do not,
 however, contain information about
 <productname>PostgreSQL</productname>-specific features; to inquire
......
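A sketch of the portability point made in the information-schema hunk above:

-- Works on any database that implements the SQL-standard information schema,
-- unlike queries against PostgreSQL's own pg_catalog.
SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'public';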
@@ -233,7 +233,7 @@ $ENV{PATH}=$ENV{PATH} . ';c:\some\where\bison\bin';
 spaces in the name, such as the default location on English
 installations <filename>C:\Program Files\GnuWin32</filename>.
 Consider installing into <filename>C:\GnuWin32</filename> or use the
-NTFS shortname path to GnuWin32 in your PATH environment setting
+NTFS short name path to GnuWin32 in your PATH environment setting
 (e.g. <filename>C:\PROGRA~1\GnuWin32</filename>).
 </para>
 </note>
......
@@ -2734,9 +2734,9 @@ char *PQresultErrorField(const PGresult *res, int fieldcode);
 <term><symbol>PG_DIAG_DATATYPE_NAME</></term>
 <listitem>
 <para>
-If the error was associated with a specific datatype, the name
-of the datatype. (When this field is present, the schema name
-field provides the name of the datatype's schema.)
+If the error was associated with a specific data type, the name
+of the data type. (When this field is present, the schema
+name field provides the name of the data type's schema.)
 </para>
 </listitem>
 </varlistentry>
@@ -2787,7 +2787,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode);
 <note>
 <para>
-The fields for schema name, table name, column name, datatype
+The fields for schema name, table name, column name, data type
 name, and constraint name are supplied only for a limited number
 of error types; see <xref linkend="errcodes-appendix">.
 </para>
......
@@ -33,7 +33,7 @@
 a path from the root of a hierarchical tree to a particular node. The
 length of a label path must be less than 65Kb, but keeping it under 2Kb is
 preferable. In practice this is not a major limitation; for example,
-the longest label path in the DMOZ catalogue (<ulink
+the longest label path in the DMOZ catalog (<ulink
 url="http://www.dmoz.org"></ulink>) is about 240 bytes.
 </para>
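A short sketch of the label paths discussed above, assuming the ltree extension is installed:

CREATE EXTENSION ltree;
SELECT nlevel('Top.Science.Astronomy'::ltree);          -- 3 labels in the path
SELECT subpath('Top.Science.Astronomy'::ltree, 0, 2);   -- Top.Science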
......
@@ -263,9 +263,9 @@
 <important>
 <para>
 Some <productname>PostgreSQL</productname> data types and functions have
-special rules regarding transactional behaviour. In particular, changes
-made to a <literal>SEQUENCE</literal> (and therefore the counter of a
-column declared using <literal>SERIAL</literal>) are immediately visible
+special rules regarding transactional behavior. In particular, changes
+made to a sequence (and therefore the counter of a
+column declared using <type>serial</type>) are immediately visible
 to all other transactions and are not rolled back if the transaction
 that made the changes aborts. See <xref linkend="functions-sequence">
 and <xref linkend="datatype-serial">.
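The non-transactional sequence behavior noted above is easy to demonstrate (the sequence name is illustrative):

CREATE SEQUENCE demo_seq;
BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;
SELECT nextval('demo_seq');  -- returns 2: the rolled-back increment is not undone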
......
@@ -675,7 +675,7 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @&gt; polygon '(0.5,2.0)';
 <para>
 <command>EXPLAIN</> has a <literal>BUFFERS</> option that can be used with
-<literal>ANALYZE</> to get even more runtime statistics:
+<literal>ANALYZE</> to get even more run time statistics:
 <screen>
 EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000;
@@ -735,7 +735,7 @@ ROLLBACK;
 So above, we see the same sort of bitmap table scan we've seen already,
 and its output is fed to an Update node that stores the updated rows.
 It's worth noting that although the data-modifying node can take a
-considerable amount of runtime (here, it's consuming the lion's share
+considerable amount of run time (here, it's consuming the lion's share
 of the time), the planner does not currently add anything to the cost
 estimates to account for that work. That's because the work to be done is
 the same for every correct query plan, so it doesn't affect planning
@@ -811,7 +811,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000
 the estimated cost and row count for the Index Scan node are shown as
 though it were run to completion. But in reality the Limit node stopped
 requesting rows after it got two, so the actual row count is only 2 and
-the runtime is less than the cost estimate would suggest. This is not
+the run time is less than the cost estimate would suggest. This is not
 an estimation error, only a discrepancy in the way the estimates and true
 values are displayed.
 </para>
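The estimate-versus-actual discrepancy described above can be reproduced with a query of this shape (tenk1 is the regression-test table these hunks reference):

EXPLAIN ANALYZE
SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2;
-- The inner Index Scan still shows cost and rows as if run to completion,
-- while its actual rows=2 because the Limit node stopped requesting early.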
......
@@ -36,7 +36,7 @@
 difference in real database throughput, especially since many database servers
 are not speed-limited by their transaction logs.
 <application>pg_test_fsync</application> reports average file sync operation
-time in microseconds for each wal_sync_method, which can also be used to
+time in microseconds for each <literal>wal_sync_method</literal>, which can also be used to
 inform efforts to optimize the value of <xref linkend="guc-commit-delay">.
 </para>
 </refsect1>
......
@@ -432,7 +432,7 @@ rows = (outer_cardinality * inner_cardinality) * selectivity
 <structname>tenk2</>. But this is not the case: the join relation size
 is estimated before any particular join plan has been considered. If
 everything is working well then the two ways of estimating the join
-size will produce about the same answer, but due to roundoff error and
+size will produce about the same answer, but due to round-off error and
 other factors they sometimes diverge significantly.
 </para>
......
@@ -201,7 +201,7 @@ select returns_array();
 <para>
 Perl passes <productname>PostgreSQL</productname> arrays as a blessed
-PostgreSQL::InServer::ARRAY object. This object may be treated as an array
+<type>PostgreSQL::InServer::ARRAY</type> object. This object may be treated as an array
 reference or a string, allowing for backward compatibility with Perl
 code written for <productname>PostgreSQL</productname> versions below 9.1 to
 run. For example:
@@ -228,7 +228,7 @@ SELECT concat_array_elements(ARRAY['PL','/','Perl']);
 <note>
 <para>
-Multi-dimensional arrays are represented as references to
+Multidimensional arrays are represented as references to
 lower-dimensional arrays of references in a way common to every Perl
 programmer.
 </para>
@@ -278,7 +278,7 @@ SELECT * FROM perl_row();
 <para>
 PL/Perl functions can also return sets of either scalar or
 composite types. Usually you'll want to return rows one at a
-time, both to speed up startup time and to keep from queueing up
+time, both to speed up startup time and to keep from queuing up
 the entire result set in memory. You can do this with
 <function>return_next</function> as illustrated below. Note that
 after the last <function>return_next</function>, you must put
......
@@ -1292,7 +1292,7 @@ EXECUTE 'UPDATE tbl SET '
 </para>
 <para>
-Because <function>quote_literal</function> is labelled
+Because <function>quote_literal</function> is labeled
 <literal>STRICT</literal>, it will always return null when called with a
 null argument. In the above example, if <literal>newvalue</> or
 <literal>keyvalue</> were null, the entire dynamic query string would
@@ -2107,11 +2107,11 @@ EXIT <optional> <replaceable>label</replaceable> </optional> <optional> WHEN <re
 When used with a
 <literal>BEGIN</literal> block, <literal>EXIT</literal> passes
 control to the next statement after the end of the block.
-Note that a label must be used for this purpose; an unlabelled
+Note that a label must be used for this purpose; an unlabeled
 <literal>EXIT</literal> is never considered to match a
 <literal>BEGIN</literal> block. (This is a change from
 pre-8.4 releases of <productname>PostgreSQL</productname>, which
-would allow an unlabelled <literal>EXIT</literal> to match
+would allow an unlabeled <literal>EXIT</literal> to match
 a <literal>BEGIN</literal> block.)
 </para>
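A minimal sketch of the labeled-EXIT rule described above:

DO $$
<<blk>>
BEGIN
  EXIT blk;                      -- allowed: the EXIT names the BEGIN block's label
  RAISE NOTICE 'never reached';
END;
$$;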
......
@@ -236,11 +236,11 @@
 <para>
 When <literal>use_remote_estimate</literal> is true,
-<filename>postgres_fdw</> obtains rowcount and cost estimates from the
+<filename>postgres_fdw</> obtains row count and cost estimates from the
 remote server and then adds <literal>fdw_startup_cost</literal> and
 <literal>fdw_tuple_cost</literal> to the cost estimates. When
 <literal>use_remote_estimate</literal> is false,
-<filename>postgres_fdw</> performs local rowcount and cost estimation
+<filename>postgres_fdw</> performs local row count and cost estimation
 and then adds <literal>fdw_startup_cost</literal> and
 <literal>fdw_tuple_cost</literal> to the cost estimates. This local
 estimation is unlikely to be very accurate unless local copies of the
......
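The postgres_fdw options discussed in the hunk above are set as server (or foreign-table) options; a sketch with a hypothetical server name:

ALTER SERVER film_server
  OPTIONS (ADD use_remote_estimate 'true',
           ADD fdw_startup_cost '200',
           ADD fdw_tuple_cost '0.02');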
@@ -4813,9 +4813,9 @@ message.
 </term>
 <listitem>
 <para>
-Datatype name: if the error was associated with a specific datatype,
-the name of the datatype. (When this field is present, the schema
-name field provides the name of the datatype's schema.)
+Data type name: if the error was associated with a specific data type,
+the name of the data type. (When this field is present, the schema
+name field provides the name of the data type's schema.)
 </para>
 </listitem>
 </varlistentry>
@@ -4874,7 +4874,7 @@ message.
 <note>
 <para>
-The fields for schema name, table name, column name, datatype name, and
+The fields for schema name, table name, column name, data type name, and
 constraint name are supplied only for a limited number of error types;
 see <xref linkend="errcodes-appendix">.
 </para>
......
@@ -121,8 +121,8 @@ COPY { <replaceable class="parameter">table_name</replaceable> [ ( <replaceable
 <term><replaceable class="parameter">filename</replaceable></term>
 <listitem>
 <para>
-The path name of the input or output file. An input filename can be
-an absolute or relative path, but an output filename must be an absolute
+The path name of the input or output file. An input file name can be
+an absolute or relative path, but an output file name must be an absolute
 path. Windows users might need to use an <literal>E''</> string and
 double any backslashes used in the path name.
 </para>
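A sketch of the path rules quoted above (paths and table name are illustrative):

-- Server-side COPY: the file is read or written by the server process.
COPY mytable FROM '/var/lib/postgresql/import/data.csv' WITH (FORMAT csv);
COPY mytable TO '/tmp/export.csv' WITH (FORMAT csv);   -- output path must be absolute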
......
@@ -364,7 +364,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
 constraints copied by <literal>LIKE</> are not merged with similarly
 named columns and constraints.
 If the same name is specified explicitly or in another
-<literal>LIKE</literal> clause, an error is signalled.
+<literal>LIKE</literal> clause, an error is signaled.
 </para>
 <para>
 The <literal>LIKE</literal> clause can also be used to copy columns from
......
@@ -136,7 +136,7 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
 be any type with an associated b-tree operator class (to determine the
 ordering of values for the range type). Normally the subtype's default
 b-tree operator class is used to determine ordering; to use a non-default
-opclass, specify its name with <replaceable
+operator class, specify its name with <replaceable
 class="parameter">subtype_opclass</replaceable>. If the subtype is
 collatable, and you want to use a non-default collation in the range's
 ordering, specify the desired collation with the <replaceable
......
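The subtype_opclass parameter from the CREATE TYPE hunk above fits into a range definition like this sketch (shaped after the documentation's floatrange example):

CREATE TYPE floatrange AS RANGE (
  subtype = float8,
  subtype_diff = float8mi
  -- a subtype_opclass entry here would override the default b-tree operator class
);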
@@ -75,7 +75,7 @@ EXPLAIN [ ANALYZE ] [ VERBOSE ] <replaceable class="parameter">statement</replac
 <para>
 The <literal>ANALYZE</literal> option causes the statement to be actually
-executed, not only planned. Then actual runtime statistics are added to
+executed, not only planned. Then actual run time statistics are added to
 the display, including the total elapsed time expended within each plan
 node (in milliseconds) and the total number of rows it actually returned.
 This is useful for seeing whether the planner's estimates
......
@@ -183,7 +183,7 @@ LOCK [ TABLE ] [ ONLY ] <replaceable class="PARAMETER">name</replaceable> [ * ]
 the mode names involving <literal>ROW</> are all misnomers. These
 mode names should generally be read as indicating the intention of
 the user to acquire row-level locks within the locked table. Also,
-<literal>ROW EXCLUSIVE</> mode is a sharable table lock. Keep in
+<literal>ROW EXCLUSIVE</> mode is a shareable table lock. Keep in
 mind that all the lock modes have identical semantics so far as
 <command>LOCK TABLE</> is concerned, differing only in the rules
 about which modes conflict with which. For information on how to
......
@@ -194,7 +194,7 @@ PostgreSQL documentation
 <listitem>
 <para>
-Write a minimal recovery.conf in the output directory (or into
+Write a minimal <filename>recovery.conf</filename> in the output directory (or into
 the base archive file when using tar format) to ease setting
 up a standby server.
 </para>
......
@@ -323,10 +323,10 @@ PostgreSQL documentation
 <para>
 For a consistent backup, the database server needs to support synchronized snapshots,
 a feature that was introduced in <productname>PostgreSQL</productname> 9.2. With this
-feature, database clients can ensure they see the same dataset even though they use
+feature, database clients can ensure they see the same data set even though they use
 different connections. <command>pg_dump -j</command> uses multiple database
 connections; it connects to the database once with the master process and
-once again for each worker job. Without the sychronized snapshot feature, the
+once again for each worker job. Without the synchronized snapshot feature, the
 different worker jobs wouldn't be guaranteed to see the same data in each connection,
 which could lead to an inconsistent backup.
 </para>
......
@@ -156,7 +156,7 @@ gmake installcheck
 <para>
 The source distribution also contains regression tests of the static
-behaviour of Hot Standby. These tests require a running primary server
+behavior of Hot Standby. These tests require a running primary server
 and a running standby server that is accepting new WAL changes from the
 primary using either file-based log shipping or streaming replication.
 Those servers are not automatically created for you, nor is the setup
@@ -185,9 +185,9 @@ gmake standbycheck
 </para>
 <para>
-Some extreme behaviours can also be generated on the primary using the
+Some extreme behaviors can also be generated on the primary using the
 script: <filename>src/test/regress/sql/hs_primary_extremes.sql</filename>
-to allow the behaviour of the standby to be tested.
+to allow the behavior of the standby to be tested.
 </para>
 <para>
......
@@ -700,7 +700,7 @@
 <listitem>
 <para>
-Allow a multi-row <link
+Allow a multirow <link
 linkend="SQL-VALUES"><literal>VALUES</></link> clause in a rule
 to reference <literal>OLD</>/<literal>NEW</> (Tom Lane)
 </para>
@@ -911,7 +911,7 @@
 <para>
 Allow text <link linkend="datatype-timezones">timezone
 designations</link>, e.g. <quote>America/Chicago</> when using
-the <acronym>ISO</> <quote>T</> timestamptz format (Bruce Momjian)
+the <acronym>ISO</> <quote>T</> <type>timestamptz</type> format (Bruce Momjian)
 </para>
 </listitem>
@@ -1128,7 +1128,7 @@
 </para>
 <para>
-This allows plpy.debug(rv) to output something reasonable.
+This allows <literal>plpy.debug(rv)</literal> to output something reasonable.
 </para>
 </listitem>
@@ -1538,7 +1538,7 @@
 <listitem>
 <para>
-Add emacs macro to match <productname>PostgreSQL</> perltidy
+Add Emacs macro to match <productname>PostgreSQL</> perltidy
 formatting (Peter Eisentraut)
 </para>
 </listitem>
@@ -1783,7 +1783,7 @@
 <listitem>
 <para>
-Have <application>pg_upgrade</> create unix-domain sockets in
+Have <application>pg_upgrade</> create Unix-domain sockets in
 the current directory (Bruce Momjian, Tom Lane)
 </para>
......
@@ -315,7 +315,7 @@ $ sudo semodule -r sepgsql-regtest
 control rules as relationships between a subject entity (typically,
 a client of the database) and an object entity (such as a database
 object), each of which is
-identified by a security label. If access to an unlabelled object is
+identified by a security label. If access to an unlabeled object is
 attempted, the object is treated as if it were assigned the label
 <literal>unlabeled_t</>.
 </para>
@@ -397,7 +397,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100;
 user tries to execute a function as a part of query, or using fast-path
 invocation. If this function is a trusted procedure, it also checks
 <literal>db_procedure:{entrypoint}</> permission to check whether it
-can perform as entrypoint of trusted procedure.
+can perform as entry point of trusted procedure.
 </para>
 <para>
......
@@ -148,7 +148,7 @@
 there is little it can do to make sure the data has arrived at a truly
 non-volatile storage area. Rather, it is the
 administrator's responsibility to make certain that all storage components
-ensure integrity for both data and filesystem metadata.
+ensure integrity for both data and file-system metadata.
 Avoid disk controllers that have non-battery-backed write caches.
 At the drive level, disable write-back caching if the
 drive cannot guarantee the data will be written before shutdown.
@@ -200,8 +200,8 @@
 </listitem>
 <listitem>
 <para>
-Internal data structures such as pg_clog, pg_subtrans, pg_multixact,
-pg_serial, pg_notify, pg_stat, pg_snapshots are not directly
+Internal data structures such as <filename>pg_clog</filename>, <filename>pg_subtrans</filename>, <filename>pg_multixact</filename>,
+<filename>pg_serial</filename>, <filename>pg_notify</filename>, <filename>pg_stat</filename>, <filename>pg_snapshots</filename> are not directly
 checksummed, nor are pages protected by full page writes. However, where
 such data structures are persistent, WAL records are written that allow
 recent changes to be accurately rebuilt at crash recovery and those
@@ -210,7 +210,7 @@
 </listitem>
 <listitem>
 <para>
-Individual state files in pg_twophase are protected by CRC-32.
+Individual state files in <filename>pg_twophase</filename> are protected by CRC-32.
 </para>
 </listitem>
 <listitem>
......