Commit 90fbf7c5 authored by Michael Paquier

Fix typos and grammar in docs and comments

This fixes several areas of the documentation and some comments in
matters of style, grammar, or even format.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20201222041153.GK30237@telsasoft.com
parent 6ecf488d
@@ -191,7 +191,7 @@ typedef struct Counters
 	double		usage;			/* usage factor */
 	int64		wal_records;	/* # of WAL records generated */
 	int64		wal_fpi;		/* # of WAL full page images generated */
-	uint64		wal_bytes;		/* total amount of WAL bytes generated */
+	uint64		wal_bytes;		/* total amount of WAL generated in bytes */
 } Counters;
 /*
......
@@ -525,7 +525,7 @@ SET client_min_messages = DEBUG1;
    designed to diagnose corruption without undue risk. It cannot guard
    against all causes of backend crashes, as even executing the calling
    query could be unsafe on a badly corrupted system. Access to <link
-   linkend="catalogs-overview">catalog tables</link> are performed and could
+   linkend="catalogs-overview">catalog tables</link> is performed and could
    be problematic if the catalogs themselves are corrupted.
   </para>
......
@@ -4478,7 +4478,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
       inherited columns are to be arranged. The count starts at 1.
      </para>
      <para>
-      Indexes can not have multiple inheritance, since they can only inherit
+      Indexes cannot have multiple inheritance, since they can only inherit
       when using declarative partitioning.
      </para></entry>
     </row>
......
@@ -321,7 +321,7 @@ SELECT c FROM test ORDER BY c ~&gt; 3 DESC LIMIT 5;
        Makes a one dimensional cube.
       </para>
       <para>
-        <literal>cube(1,2)</literal>
+        <literal>cube(1, 2)</literal>
        <returnvalue>(1),(2)</returnvalue>
       </para></entry>
     </row>
......
@@ -1274,7 +1274,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
        (truncates towards zero)
       </para>
       <para>
-        <literal>div(9,4)</literal>
+        <literal>div(9, 4)</literal>
        <returnvalue>2</returnvalue>
       </para></entry>
     </row>
@@ -1493,7 +1493,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
        <type>bigint</type>, and <type>numeric</type>
       </para>
       <para>
-        <literal>mod(9,4)</literal>
+        <literal>mod(9, 4)</literal>
        <returnvalue>1</returnvalue>
       </para></entry>
     </row>
@@ -1975,7 +1975,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
        result in radians
       </para>
       <para>
-        <literal>atan2(1,0)</literal>
+        <literal>atan2(1, 0)</literal>
        <returnvalue>1.5707963267948966</returnvalue>
       </para></entry>
     </row>
@@ -1995,7 +1995,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
        result in degrees
       </para>
       <para>
-        <literal>atan2d(1,0)</literal>
+        <literal>atan2d(1, 0)</literal>
        <returnvalue>90</returnvalue>
       </para></entry>
     </row>
......
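The documented return values in the func.sgml hunks above are easy to cross-check outside the database. As a sketch, Python's stdlib `math` module (used here purely as an illustrative stand-in for the SQL functions, not anything from the patch) gives matching results:

```python
import math

# div(9, 4) truncates towards zero; mod(9, 4) is the remainder.
assert 9 // 4 == 2
assert 9 % 4 == 1

# atan2(1, 0) is pi/2 radians, matching the documented return value.
assert math.atan2(1, 0) == 1.5707963267948966

# atan2d(1, 0) is the same angle in degrees.
assert math.isclose(math.degrees(math.atan2(1, 0)), 90.0)
```

Note that `1.5707963267948966` is simply the closest double to pi/2, which is why the SGML example can show it as an exact return value.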
@@ -953,11 +953,11 @@ stream_commit_cb(...); &lt;-- commit of the streamed transaction
    <para>
     Similar to spill-to-disk behavior, streaming is triggered when the total
     amount of changes decoded from the WAL (for all in-progress transactions)
-    exceeds limit defined by <varname>logical_decoding_work_mem</varname> setting.
-    At that point the largest toplevel transaction (measured by amount of memory
+    exceeds the limit defined by <varname>logical_decoding_work_mem</varname> setting.
+    At that point, the largest toplevel transaction (measured by the amount of memory
     currently used for decoded changes) is selected and streamed. However, in
-    some cases we still have to spill to the disk even if streaming is enabled
-    because if we cross the memory limit but we still have not decoded the
+    some cases we still have to spill to disk even if streaming is enabled
+    because we exceed the memory threshold but still have not decoded the
     complete tuple e.g., only decoded toast table insert but not the main table
     insert.
    </para>
......
@@ -3470,7 +3470,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
       <structfield>wal_bytes</structfield> <type>numeric</type>
      </para>
      <para>
-      Total amount of WAL bytes generated
+      Total amount of WAL generated in bytes
      </para></entry>
     </row>
@@ -3479,7 +3479,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
      <structfield>wal_buffers_full</structfield> <type>bigint</type>
      </para>
      <para>
-      Number of times WAL data was written to the disk because WAL buffers got full
+      Number of times WAL data was written to disk because WAL buffers became full
      </para></entry>
     </row>
......
@@ -360,7 +360,7 @@
       <structfield>wal_bytes</structfield> <type>numeric</type>
      </para>
      <para>
-      Total amount of WAL bytes generated by the statement
+      Total amount of WAL generated by the statement in bytes
      </para></entry>
     </row>
    </tbody>
......
@@ -29,7 +29,7 @@
  <para>
   Every range type has a corresponding multirange type. A multirange is
-  an ordered list of non-continguous, non-empty, non-null ranges. Most
+  an ordered list of non-contiguous, non-empty, non-null ranges. Most
   range operators also work on multiranges, and they have a few functions
   of their own.
  </para>
......
@@ -180,10 +180,10 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
    The optional <replaceable class="parameter">multirange_type_name</replaceable>
    parameter specifies the name of the corresponding multirange type. If not
    specified, this name is chosen automatically as follows.
-   If range type name contains <literal>range</literal> substring, then
-   multirange type name is formed by replacement of the <literal>range</literal>
-   substring with <literal>multirange</literal> substring in the range
-   type name. Otherwise, multirange type name is formed by appending
+   If the range type name contains the substring <literal>range</literal>, then
+   the multirange type name is formed by replacement of the <literal>range</literal>
+   substring with <literal>multirange</literal> in the range
+   type name. Otherwise, the multirange type name is formed by appending a
    <literal>_multirange</literal> suffix to the range type name.
   </para>
  </refsect2>
......
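The default-naming rule described in the CREATE TYPE hunk above can be sketched in a few lines. This is an illustrative Python rendering only, not PostgreSQL's actual C implementation, and it assumes the first occurrence of "range" is the one replaced (the doc text does not specify which occurrence):

```python
def default_multirange_name(range_type_name: str) -> str:
    """Sketch of the automatic multirange type naming described above.

    Hypothetical helper for illustration; not part of PostgreSQL.
    """
    if "range" in range_type_name:
        # Replace the "range" substring with "multirange"
        # (assumption: first occurrence).
        return range_type_name.replace("range", "multirange", 1)
    # No "range" substring: append the "_multirange" suffix.
    return range_type_name + "_multirange"
```

For example, `default_multirange_name("int4range")` yields `"int4multirange"`, while a range type named without the substring, say `"mytype"`, would get `"mytype_multirange"`.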
@@ -198,9 +198,9 @@ ROLLBACK;
    <listitem>
     <para>
      Include information on WAL record generation. Specifically, include the
-     number of records, number of full page images (fpi) and amount of WAL
-     bytes generated. In text format, only non-zero values are printed. This
-     parameter may only be used when <literal>ANALYZE</literal> is also
+     number of records, number of full page images (fpi) and the amount of WAL
+     generated in bytes. In text format, only non-zero values are printed.
+     This parameter may only be used when <literal>ANALYZE</literal> is also
      enabled. It defaults to <literal>FALSE</literal>.
     </para>
    </listitem>
......
@@ -621,7 +621,7 @@ PostgreSQL documentation
    <listitem>
     <para>
      Specify the compression level to use. Zero means no compression.
-     For the custom archive format, this specifies compression of
+     For the custom and directory archive formats, this specifies compression of
      individual table-data segments, and the default is to compress
      at a moderate level.
      For plain text output, setting a nonzero compression level causes
......
@@ -40,7 +40,7 @@ PostgreSQL documentation
  <para>
   It is important to note that the validation which is performed by
-  <application>pg_verifybackup</application> does not and can not include
+  <application>pg_verifybackup</application> does not and cannot include
   every check which will be performed by a running server when attempting
   to make use of the backup. Even if you use this tool, you should still
   perform test restores and verify that the resulting databases work as
......
@@ -184,7 +184,7 @@ EXPLAIN EXECUTE <replaceable>name</replaceable>(<replaceable>parameter_values</r
    analysis and planning of the statement, <productname>PostgreSQL</productname> will
    force re-analysis and re-planning of the statement before using it
    whenever database objects used in the statement have undergone
-   definitional (DDL) changes or the planner statistics of them have
+   definitional (DDL) changes or their planner statistics have
    been updated since the previous use of the prepared
    statement. Also, if the value of <xref linkend="guc-search-path"/> changes
    from one use to the next, the statement will be re-parsed using the new
......
@@ -103,7 +103,7 @@ less -x4
     message text. In addition there are optional elements, the most
     common of which is an error identifier code that follows the SQL spec's
     SQLSTATE conventions.
-    <function>ereport</function> itself is just a shell macro, that exists
+    <function>ereport</function> itself is just a shell macro that exists
     mainly for the syntactic convenience of making message generation
     look like a single function call in the C source code. The only parameter
     accepted directly by <function>ereport</function> is the severity level.
......
@@ -580,7 +580,7 @@
     Independently of <varname>max_wal_size</varname>,
     the most recent <xref linkend="guc-wal-keep-size"/> megabytes of
     WAL files plus one additional WAL file are
-    kept at all times. Also, if WAL archiving is used, old segments can not be
+    kept at all times. Also, if WAL archiving is used, old segments cannot be
     removed or recycled until they are archived. If WAL archiving cannot keep up
     with the pace that WAL is generated, or if <varname>archive_command</varname>
     fails repeatedly, old WAL files will accumulate in <filename>pg_wal</filename>
......
@@ -10418,7 +10418,7 @@ get_sync_bit(int method)
 	 *
 	 * Never use O_DIRECT in walreceiver process for similar reasons; the WAL
 	 * written by walreceiver is normally read by the startup process soon
-	 * after its written. Also, walreceiver performs unaligned writes, which
+	 * after it's written. Also, walreceiver performs unaligned writes, which
 	 * don't work with O_DIRECT, so it is required for correctness too.
 	 */
 	if (!XLogIsNeeded() && !AmWalReceiverProcess())
......
@@ -3119,7 +3119,7 @@ get_matching_range_bounds(PartitionPruneContext *context,
 	/*
 	 * If the smallest partition to return has MINVALUE (negative infinity) as
 	 * its lower bound, increment it to point to the next finite bound
-	 * (supposedly its upper bound), so that we don't advertently end up
+	 * (supposedly its upper bound), so that we don't inadvertently end up
 	 * scanning the default partition.
 	 */
 	if (minoff < boundinfo->ndatums && partindices[minoff] < 0)
@@ -3138,7 +3138,7 @@ get_matching_range_bounds(PartitionPruneContext *context,
 	 * If the previous greatest partition has MAXVALUE (positive infinity) as
 	 * its upper bound (something only possible to do with multi-column range
 	 * partitioning), we scan switch to it as the greatest partition to
-	 * return. Again, so that we don't advertently end up scanning the
+	 * return. Again, so that we don't inadvertently end up scanning the
 	 * default partition.
 	 */
 	if (maxoff >= 1 && partindices[maxoff] < 0)
......
@@ -329,10 +329,15 @@ struct _archiveHandle
 	DumpId	   *tableDataId;	/* TABLE DATA ids, indexed by table dumpId */
 	struct _tocEntry *currToc;	/* Used when dumping data */
-	int			compression;	/* Compression requested on open Possible
-								 * values for compression: -1
-								 * Z_DEFAULT_COMPRESSION 0 COMPRESSION_NONE
-								 * 1-9 levels for gzip compression */
+	int			compression;	/*---------
+								 * Compression requested on open().
+								 * Possible values for compression:
+								 * -2	ZSTD_COMPRESSION
+								 * -1	Z_DEFAULT_COMPRESSION
+								 * 0	COMPRESSION_NONE
+								 * 1-9	levels for gzip compression
+								 *---------
+								 */
 	bool		dosync;			/* data requested to be synced on sight */
 	ArchiveMode mode;			/* File mode - r or w */
 	void	   *formatData;		/* Header data specific to file format */
......
@@ -4,7 +4,7 @@
  *
  *	A directory format dump is a directory, which contains a "toc.dat" file
  *	for the TOC, and a separate file for each data entry, named "<oid>.dat".
- *	Large objects (BLOBs) are stored in separate files named "blob_<uid>.dat",
+ *	Large objects (BLOBs) are stored in separate files named "blob_<oid>.dat",
  *	and there's a plain-text TOC file for them called "blobs.toc". If
  *	compression is used, each data file is individually compressed and the
  *	".gz" suffix is added to the filenames. The TOC files are never
......
@@ -7018,10 +7018,7 @@ getInherits(Archive *fout, int *numInherits)
 	int			i_inhrelid;
 	int			i_inhparent;
-	/*
-	 * Find all the inheritance information, excluding implicit inheritance
-	 * via partitioning.
-	 */
+	/* find all the inheritance information */
 	appendPQExpBufferStr(query, "SELECT inhrelid, inhparent FROM pg_inherits");
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
......
@@ -247,8 +247,8 @@ output_completion_banner(char *deletion_script_file_name)
 	}
 	pg_log(PG_REPORT,
-		   "Optimizer statistics are not transferred by pg_upgrade so,\n"
-		   "once you start the new server, consider running:\n"
+		   "Optimizer statistics are not transferred by pg_upgrade.\n"
+		   "Once you start the new server, consider running:\n"
 		   "    %s/vacuumdb %s--all --analyze-in-stages\n\n", new_cluster.bindir, user_specification.data);
 	if (deletion_script_file_name)
......
@@ -81,7 +81,7 @@ VSObjectFactory.pm factory module providing the code to create the
 Description of the internals of the Visual Studio build process
 ---------------------------------------------------------------
 By typing 'build' the user starts the build.bat wrapper which simply passes
-it's arguments to build.pl.
+its arguments to build.pl.
 In build.pl the user's buildenv.pl is used to set up the build environment
 (i. e. path to bison and flex). In addition his config.pl file is merged into
 config_default.pl to create the configuration arguments.
......