Commit 7ef8b52c authored by Michael Paquier

Fix typos and grammar in comments and docs

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210416070310.GG3315@telsasoft.com
parent c731f918
@@ -930,7 +930,7 @@ check_tuple_visibility(HeapCheckContext *ctx)
* If xmin_status happens to be XID_IS_CURRENT_XID, then in theory
* any such DDL changes ought to be visible to us, so perhaps
* we could check anyway in that case. But, for now, let's be
* conservate and treat this like any other uncommitted insert.
* conservative and treat this like any other uncommitted insert.
*/
return false;
}
@@ -730,7 +730,7 @@ LOG: request for BRIN range summarization for index "brin_wi_idx" page 128 was
for <xref linkend="sql-altertable"/>. When set to a positive value,
each block range is assumed to contain this number of distinct non-null
values. When set to a negative value, which must be greater than or
equal to -1, the number of distinct non-null is assumed linear with
equal to -1, the number of distinct non-null values is assumed to grow linearly with
the maximum possible number of tuples in the block range (about 290
rows per block). The default value is <literal>-0.1</literal>, and
the minimum number of distinct non-null values is <literal>16</literal>.
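The opclass parameter described in this brin.sgml hunk appears to be n_distinct_per_range on the BRIN bloom operator classes (PostgreSQL 14). A minimal sketch of setting it, with invented table, column, and index names that are not part of this commit:

CREATE TABLE orders (order_id bigint, customer_id integer);
-- Assume roughly 200 distinct customers per block range rather than the
-- default -0.1 scaling rule described above.
CREATE INDEX orders_customer_bloom_idx ON orders
    USING brin (customer_id int4_bloom_ops(n_distinct_per_range = 200));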
@@ -833,7 +833,7 @@ typedef struct BrinOpcInfo
Returns whether all the ScanKey entries are consistent with the given
indexed values for a range.
The attribute number to use is passed as part of the scan key.
Multiple scan keys for the same attribute may be passed at once, the
Multiple scan keys for the same attribute may be passed at once; the
number of entries is determined by the <literal>nkeys</literal> parameter.
</para>
</listitem>
@@ -1214,7 +1214,7 @@ typedef struct BrinOpcInfo
<para>
The minmax-multi operator class is also intended for data types implementing
a totally ordered sets, and may be seen as a simple extension of the minmax
a totally ordered set, and may be seen as a simple extension of the minmax
operator class. While minmax operator class summarizes values from each block
range into a single contiguous interval, minmax-multi allows summarization
into multiple smaller intervals to improve handling of outlier values.
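A minimal sketch of the minmax-multi opclass this hunk documents, assuming the values_per_range opclass parameter and invented table and index names (neither appears in this commit):

CREATE TABLE sensor_data (recorded_at timestamptz, reading numeric);
-- Allow up to 16 summary values per block range, so that a handful of
-- outlier timestamps are kept as separate small intervals instead of
-- stretching a single min/max interval across the whole range.
CREATE INDEX sensor_data_brin_idx ON sensor_data
    USING brin (recorded_at timestamptz_minmax_multi_ops(values_per_range = 16));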
@@ -354,15 +354,15 @@ current=testdb1 (should be testdb1)
</para>
<para>
The third option is to declare sql identifier linked to
The third option is to declare a SQL identifier linked to
the connection, for example:
<programlisting>
EXEC SQL AT <replaceable>connection-name</replaceable> DECLARE <replaceable>statement-name</replaceable> STATEMENT;
EXEC SQL PREPARE <replaceable>statement-name</replaceable> FROM :<replaceable>dyn-string</replaceable>;
</programlisting>
Once you link a sql identifier to a connection, you execute a dynamic SQL
without AT clause. Note that this option behaves like preprocessor directives,
therefore the link is enabled only in the file.
Once you link a SQL identifier to a connection, you execute dynamic SQL
without an AT clause. Note that this option behaves like preprocessor
directives, therefore the link is enabled only in the file.
</para>
<para>
Here is an example program using this option:
@@ -6911,15 +6911,15 @@ EXEC SQL [ AT <replaceable class="parameter">connection_name</replaceable> ] DEC
<title>Description</title>
<para>
<command>DECLARE STATEMENT</command> declares SQL statement identifier.
<command>DECLARE STATEMENT</command> declares a SQL statement identifier.
SQL statement identifier can be associated with the connection.
When the identifier is used by dynamic SQL statements, these SQLs are executed
by using the associated connection.
The namespace of the declaration is the precompile unit, and multiple declarations to
the same SQL statement identifier is not allowed.
Note that if the precompiler run in the Informix compatibility mode and some SQL statement
is declared, "database" can not be used as a cursor name.
When the identifier is used by dynamic SQL statements, the statements
are executed using the associated connection.
The namespace of the declaration is the precompile unit, and multiple
declarations to the same SQL statement identifier are not allowed.
Note that if the precompiler runs in Informix compatibility mode and
some SQL statement is declared, "database" can not be used as a cursor
name.
</para>
</refsect1>
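To make the DECLARE STATEMENT flow described above concrete, a minimal hedged sketch follows; the connection name con1, statement name stmt1, cursor name cur1, and the host variables dyn_string and dbname are invented for illustration and are assumed to be declared in an EXEC SQL BEGIN/END DECLARE SECTION.

EXEC SQL CONNECT TO testdb1 AS con1;
/* Link the SQL identifier stmt1 to con1; dynamic SQL referencing stmt1
   now runs on con1 even when no AT clause is given. */
EXEC SQL AT con1 DECLARE stmt1 STATEMENT;
EXEC SQL PREPARE stmt1 FROM :dyn_string;
EXEC SQL DECLARE cur1 CURSOR FOR stmt1;
EXEC SQL OPEN cur1;
EXEC SQL FETCH cur1 INTO :dbname;
EXEC SQL CLOSE cur1;
EXEC SQL DISCONNECT con1;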
@@ -596,7 +596,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
* and if we're violating them. In that case we can
* terminate early, without invoking the support function.
*
* As there may be more keys, we can only detemine
* As there may be more keys, we can only determine
* mismatch within this loop.
*/
if (bdesc->bd_info[attno - 1]->oi_regular_nulls &&
@@ -636,7 +636,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
/*
* Collation from the first key (has to be the same for
* all keys for the same attribue).
* all keys for the same attribute).
*/
collation = keys[attno - 1][0]->sk_collation;
@@ -409,7 +409,7 @@ typedef struct BloomOpaque
{
/*
* XXX At this point we only need a single proc (to compute the hash), but
* let's keep the array just like inclusion and minman opclasses, for
* let's keep the array just like inclusion and minmax opclasses, for
* consistency. We may need additional procs in the future.
*/
FmgrInfo extra_procinfos[BLOOM_MAX_PROCNUMS];
@@ -248,7 +248,7 @@ typedef struct DistanceValue
} DistanceValue;
/* Cache for support and strategy procesures. */
/* Cache for support and strategy procedures. */
static FmgrInfo *minmax_multi_get_procinfo(BrinDesc *bdesc, uint16 attno,
uint16 procnum);
@@ -1311,7 +1311,7 @@ compare_distances(const void *a, const void *b)
}
/*
* Given an array of expanded ranges, compute distance of the gaps betwen
* Given an array of expanded ranges, compute distance of the gaps between
* the ranges - for ncranges there are (ncranges-1) gaps.
*
* We simply call the "distance" function to compute the (max-min) for pairs
@@ -1623,7 +1623,7 @@ ensure_free_space_in_buffer(BrinDesc *bdesc, Oid colloid,
*
* We don't simply check against range->maxvalues again. The deduplication
* might have freed very little space (e.g. just one value), forcing us to
* do depuplication very often. In that case it's better to do compaction
* do deduplication very often. In that case it's better to do compaction
* and reduce more space.
*/
if (2 * range->nranges + range->nvalues <= range->maxvalues * MINMAX_BUFFER_LOAD_FACTOR)
@@ -115,7 +115,7 @@ typedef struct
/*
* In sorted build, we use a stack of these structs, one for each level,
* to hold an in-memory buffer of the righmost page at the level. When the
* to hold an in-memory buffer of the rightmost page at the level. When the
* page fills up, it is written out and a new page is allocated.
*/
typedef struct GistSortedBuildPageState
@@ -633,7 +633,7 @@ systable_endscan(SysScanDesc sysscan)
* Currently we do not support non-index-based scans here. (In principle
* we could do a heapscan and sort, but the uses are in places that
* probably don't need to still work with corrupted catalog indexes.)
* For the moment, therefore, these functions are merely the thinest of
* For the moment, therefore, these functions are merely the thinnest of
* wrappers around index_beginscan/index_getnext_slot. The main reason for
* their existence is to centralize possible future support of lossy operators
* in catalog scans.
@@ -1398,7 +1398,7 @@ _bt_delitems_delete(Relation rel, Buffer buf, TransactionId latestRemovedXid,
* _bt_delitems_delete. These steps must take place before each function's
* critical section begins.
*
* updatabable and nupdatable are inputs, though note that we will use
* updatable and nupdatable are inputs, though note that we will use
* _bt_update_posting() to replace the original itup with a pointer to a final
* version in palloc()'d memory. Caller should free the tuples when its done.
*
@@ -1504,7 +1504,7 @@ _bt_delitems_cmp(const void *a, const void *b)
* some extra index tuples that were practically free for tableam to check in
* passing (when they actually turn out to be safe to delete). It probably
* only makes sense for the tableam to go ahead with these extra checks when
* it is block-orientated (otherwise the checks probably won't be practically
* it is block-oriented (otherwise the checks probably won't be practically
* free, which we rely on). The tableam interface requires the tableam side
* to handle the problem, though, so this is okay (we as an index AM are free
* to make the simplifying assumption that all tableams must be block-based).
@@ -997,7 +997,7 @@ makeMultirangeTypeName(const char *rangeTypeName, Oid typeNamespace)
* makeUniqueTypeName
* Generate a unique name for a prospective new type
*
* Given a typeName, return a new palloc'ed name by preprending underscores
* Given a typeName, return a new palloc'ed name by prepending underscores
* until a non-conflicting name results.
*
* If tryOriginal, first try with zero underscores.
@@ -660,7 +660,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
{
/*
* Partitioned tables don't have storage, so we don't set any fields in
* their pg_class entries except for relpages, which is necessary for
* their pg_class entries except for reltuples, which is necessary for
* auto-analyze to work properly.
*/
vac_update_relstats(onerel, -1, totalrows,
@@ -661,9 +661,9 @@ ExecIncrementalSort(PlanState *pstate)
/*
* We're in full sort mode accumulating a minimum number of tuples
* and not checking for prefix key equality yet, so we can't
* assume the group pivot tuple will reamin the same -- unless
* assume the group pivot tuple will remain the same -- unless
* we're using a minimum group size of 1, in which case the pivot
* is obviously still the pviot.
* is obviously still the pivot.
*/
if (nTuples != minGroupSize)
ExecClearTuple(node->group_pivot);
@@ -1162,7 +1162,7 @@ ExecReScanIncrementalSort(IncrementalSortState *node)
}
/*
* If chgParam of subnode is not null, theni the plan will be re-scanned
* If chgParam of subnode is not null, then the plan will be re-scanned
* by the first ExecProcNode.
*/
if (outerPlan->chgParam == NULL)
@@ -59,7 +59,7 @@
* SQL standard actually does it in that more complicated way), but the
* internal representation allows us to construct it this way.)
*
* With a search caluse
* With a search clause
*
* SEARCH DEPTH FIRST BY col1, col2 SET sqc
*
@@ -972,7 +972,7 @@ find_strongest_dependency(MVDependencies **dependencies, int ndependencies,
/*
* clauselist_apply_dependencies
* Apply the specified functional dependencies to a list of clauses and
* return the estimated selecvitity of the clauses that are compatible
* return the estimated selectivity of the clauses that are compatible
* with any of the given dependencies.
*
* This will estimate all not-already-estimated clauses that are compatible
@@ -1450,7 +1450,7 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
if (!bms_is_member(listidx, *estimatedclauses))
{
/*
* If it's a simple column refrence, just extract the attnum. If
* If it's a simple column reference, just extract the attnum. If
* it's an expression, assign a negative attnum as if it was a
* system attribute.
*/
@@ -358,7 +358,7 @@ statext_compute_stattarget(int stattarget, int nattrs, VacAttrStats **stats)
*/
for (i = 0; i < nattrs; i++)
{
/* keep the maximmum statistics target */
/* keep the maximum statistics target */
if (stats[i]->attr->attstattarget > stattarget)
stattarget = stats[i]->attr->attstattarget;
}
@@ -2131,7 +2131,7 @@ GetSnapshotDataReuse(Snapshot snapshot)
* older than this are known not running any more.
*
* And try to advance the bounds of GlobalVis{Shared,Catalog,Data,Temp}Rels
* for the benefit of theGlobalVisTest* family of functions.
* for the benefit of the GlobalVisTest* family of functions.
*
* Note: this function should probably not be called with an argument that's
* not statically allocated (see xip allocation below).
@@ -2020,7 +2020,7 @@ NISortAffixes(IspellDict *Conf)
(const unsigned char *) Affix->repl,
(ptr - 1)->len))
{
/* leave only unique and minimals suffixes */
/* leave only unique and minimal suffixes */
ptr->affix = Affix->repl;
ptr->len = Affix->replen;
ptr->issuffix = issuffix;
@@ -1032,10 +1032,10 @@ pgstat_get_my_queryid(void)
if (!MyBEEntry)
return 0;
/* There's no need for a look around pgstat_begin_read_activity /
/* There's no need for a lock around pgstat_begin_read_activity /
* pgstat_end_read_activity here as it's only called from
* pg_stat_get_activity which is already protected, or from the same
* backend which mean that there won't be concurrent write.
* backend which means that there won't be concurrent writes.
*/
return MyBEEntry->st_queryid;
}
@@ -553,7 +553,7 @@ multirange_get_typcache(FunctionCallInfo fcinfo, Oid mltrngtypid)
/*
* Estimate size occupied by serialized multirage.
* Estimate size occupied by serialized multirange.
*/
static Size
multirange_size_estimate(TypeCacheEntry *rangetyp, int32 range_count,
@@ -4039,7 +4039,7 @@ estimate_multivariate_ndistinct(PlannerInfo *root, RelOptInfo *rel,
/*
* Process a simple Var expression, by matching it to keys
* directly. If there's a matchine expression, we'll try
* directly. If there's a matching expression, we'll try
* matching it later.
*/
if (IsA(varinfo->var, Var))
@@ -605,7 +605,7 @@ perform_rewind(filemap_t *filemap, rewind_source *source,
* and the target. But if the source is a standby server, it's possible
* that the last common checkpoint is *after* the standby's restartpoint.
* That implies that the source server has applied the checkpoint record,
* but hasn't perfomed a corresponding restartpoint yet. Make sure we
* but hasn't performed a corresponding restartpoint yet. Make sure we
* start at the restartpoint's redo point in that case.
*
* Use the old version of the source's control file for this. The server
@@ -323,7 +323,7 @@ WALDumpCloseSegment(XLogReaderState *state)
}
/*
* pg_waldump's WAL page rader
* pg_waldump's WAL page reader
*
* timeline and startptr specifies the LSN, and reads up to endptr.
*/
@@ -34,7 +34,7 @@
/*
* In backend, use an allocation in TopMemoryContext to count for resowner
* cleanup handling if necesary. For versions of OpenSSL where HMAC_CTX is
* cleanup handling if necessary. For versions of OpenSSL where HMAC_CTX is
* known, just use palloc(). In frontend, use malloc to be able to return
* a failure status back to the caller.
*/
@@ -147,7 +147,7 @@
*
* For each subsequent entry in the history list, the "good_match"
* is lowered by 10%. So the compressor will be more happy with
* short matches the farer it has to go back in the history.
* short matches the further it has to go back in the history.
* Another "speed against ratio" preference characteristic of
* the algorithm.
*
@@ -375,7 +375,7 @@ main(int argc, char *const argv[])
}
cur = NULL;
/* remove old delared statements if any are still there */
/* remove old declared statements if any are still there */
for (list = g_declared_list; list != NULL;)
{
struct declared_list *this = list;
@@ -43,7 +43,7 @@
* is odd, moving left simply involves halving lim: e.g., when lim
* is 5 we look at item 2, so we change lim to 2 so that we will
* look at items 0 & 1. If lim is even, the same applies. If lim
* is odd, moving right again involes halving lim, this time moving
* is odd, moving right again involves halving lim, this time moving
* the base up one item past p: e.g., when lim is 5 we change base
* to item 3 and make lim 2 so that we will look at items 3 and 4.
* If lim is even, however, we have to shrink it by one before