Abuhujair Javed / Postgres FD Implementation / Commits / f5678e8e

Commit f5678e8e, authored Dec 28, 2007 by Tom Lane

    Update examples in planstats.sgml for 8.3, and improve some aspects of
    that discussion.  Add a link from perform.sgml.

parent 45c9be3c
Changes: 2 changed files, with 296 additions and 194 deletions (+296 -194)

  doc/src/sgml/perform.sgml     +19  -17
  doc/src/sgml/planstats.sgml   +277 -177
doc/src/sgml/perform.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.67 2007/11/28 15:42:31 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.68 2007/12/28 21:03:31 tgl Exp $ -->
 <chapter id="performance-tips">
  <title>Performance Tips</title>
...
@@ -8,7 +8,7 @@
 </indexterm>

 <para>
  Query performance can be affected by many things.  Some of these can
  be manipulated by the user, while others are fundamental to the underlying
  design of the system.  This chapter provides some hints about understanding
  and tuning <productname>PostgreSQL</productname> performance.
...
@@ -138,7 +138,7 @@ EXPLAIN SELECT * FROM tenk1;
 Rows output is a little tricky because it is <emphasis>not</emphasis> the
 number of rows processed or scanned by the plan node.  It is usually less,
 reflecting the estimated selectivity of any <literal>WHERE</>-clause
 conditions that are being
 applied at the node.  Ideally the top-level rows estimate will
 approximate the number of rows actually returned, updated, or deleted
 by the query.
...
@@ -469,8 +469,8 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 100 AND t
 One component of the statistics is the total number of entries in
 each table and index, as well as the number of disk blocks occupied
 by each table and index.  This information is kept in the table
 <link linkend="catalog-pg-class"><structname>pg_class</structname></link>, in
 the columns <structfield>reltuples</structfield> and
 <structfield>relpages</structfield>.  We can look at it with
 queries similar to this one:
...
@@ -493,7 +493,7 @@ SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 't
 </para>

 <para>
  For efficiency reasons, <structfield>reltuples</structfield>
  and <structfield>relpages</structfield> are not updated on-the-fly,
  and so they usually contain somewhat out-of-date values.
  They are updated by <command>VACUUM</>, <command>ANALYZE</>, and a
...
@@ -517,7 +517,8 @@ SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 't
 <firstterm>selectivity</> of <literal>WHERE</> clauses, that is,
 the fraction of rows that match each condition in the
 <literal>WHERE</> clause.  The information used for this task is
-stored in the <link linkend="catalog-pg-statistic"><structname>pg_statistic</structname></link>
+stored in the
+<link linkend="catalog-pg-statistic"><structname>pg_statistic</structname></link>
 system catalog.  Entries in <structname>pg_statistic</structname>
 are updated by the <command>ANALYZE</> and <command>VACUUM
 ANALYZE</> commands, and are always approximate even when freshly
...
@@ -530,7 +531,8 @@ SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 't
 <para>
  Rather than look at <structname>pg_statistic</structname> directly,
-  it's better to look at its view <structname>pg_stats</structname>
+  it's better to look at its view
+  <link linkend="view-pg-stats"><structname>pg_stats</structname></link>
  when examining the statistics manually.  <structname>pg_stats</structname>
  is designed to be more easily readable.  Furthermore,
  <structname>pg_stats</structname> is readable by all, whereas
...
@@ -553,13 +555,8 @@ SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'ro
 </para>

 <para>
-  <structname>pg_stats</structname> is described in detail in
-  <xref linkend="view-pg-stats">.
- </para>
-
- <para>
-  The amount of information stored in <structname>pg_statistic</structname>,
-  in particular the maximum number of entries in the
+  The amount of information stored in <structname>pg_statistic</structname>
+  by <command>ANALYZE</>, in particular the maximum number of entries in the
  <structfield>most_common_vals</> and <structfield>histogram_bounds</>
  arrays for each column, can be set on a
  column-by-column basis using the <command>ALTER TABLE SET STATISTICS</>
...
@@ -570,7 +567,12 @@ SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'ro
 columns with irregular data distributions, at the price of consuming
 more space in <structname>pg_statistic</structname> and slightly more
 time to compute the estimates.  Conversely, a lower limit might be
-appropriate for columns with simple data distributions.
+sufficient for columns with simple data distributions.
 </para>
+
+<para>
+ Further details about the planner's use of statistics can be found in
+ <xref linkend="planner-stats-details">.
+</para>
 </sect1>
...
@@ -913,7 +915,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 are designed not to write WAL at all if <varname>archive_mode</varname>
 is off.  (They can guarantee crash safety more cheaply by doing an
 <function>fsync</> at the end than by writing WAL.)
 This applies to the following commands:
 <itemizedlist>
  <listitem>
   <para>
...
doc/src/sgml/planstats.sgml

-<!-- $PostgreSQL: pgsql/doc/src/sgml/planstats.sgml,v 1.8 2007/01/31 20:56:18 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/planstats.sgml,v 1.9 2007/12/28 21:03:31 tgl Exp $ -->
 <chapter id="planner-stats-details">
  <title>How the Planner Uses Statistics</title>

 <para>
-  This chapter builds on the material covered in <xref linkend="using-explain">
-  and <xref linkend="planner-stats">, and shows how the planner uses the
-  system statistics to estimate the number of rows each stage in a query might
-  return.  This is a significant part of the planning / optimizing process,
+  This chapter builds on the material covered in <xref
+  linkend="using-explain"> and <xref linkend="planner-stats"> to show some
+  additional details about how the planner uses the
+  system statistics to estimate the number of rows each part of a query might
+  return.  This is a significant part of the planning process,
   providing much of the raw material for cost calculation.
 </para>

 <para>
-  The intent of this chapter is not to document the code —
-  better done in the code itself, but to present an overview of how it works.
+  The intent of this chapter is not to document the code in detail,
+  but to present an overview of how it works.
   This will perhaps ease the learning curve for someone who subsequently
-  wishes to read the code.  As a consequence, the approach chosen is to analyze
-  a series of incrementally more complex examples.
-</para>
-
-<para>
-  The outputs and algorithms shown below are taken from version 8.0.
-  The behavior of earlier (or later) versions might vary.
+  wishes to read the code.
 </para>
 <sect1 id="row-estimation-examples">
...
@@ -33,160 +28,163 @@
 </indexterm>

 <para>
-  Using examples drawn from the regression test database, let's start with a
-  very simple query:
+  The examples shown below use tables in the <productname>PostgreSQL</>
+  regression test database.
+  The outputs shown are taken from version 8.3.
+  The behavior of earlier (or later) versions might vary.
+  Note also that since <command>ANALYZE</> uses random sampling
+  while producing statistics, the results will change slightly after
+  any new <command>ANALYZE</>.
+</para>
+
+<para>
+  Let's start with a very simple query:
<programlisting>
EXPLAIN SELECT * FROM tenk1;

                         QUERY PLAN
-------------------------------------------------------------
- Seq Scan on tenk1  (cost=0.00..445.00 rows=10000 width=244)
+ Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)
</programlisting>
-How the planner determines the cardinality of <classname>tenk1</classname>
-is covered in <xref linkend="using-explain">, but is repeated here for
-completeness.  The number of rows is looked up from
-<classname>pg_class</classname>:
+How the planner determines the cardinality of <structname>tenk1</structname>
+is covered in <xref linkend="planner-stats">, but is repeated here for
+completeness.  The number of pages and rows is looked up in
+<structname>pg_class</structname>:

<programlisting>
-SELECT reltuples, relpages FROM pg_class WHERE relname = 'tenk1';
+SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1';

 relpages | reltuples
----------+-----------
-      345 |     10000
+      358 |     10000
</programlisting>

-The planner will check the <structfield>relpages</structfield>
-estimate (this is a cheap operation) and if incorrect might scale
-<structfield>reltuples</structfield> to obtain a row estimate.  In this
-case it does not, thus:
-<programlisting>
-rows = 10000
-</programlisting>
+These numbers are current as of the last <command>VACUUM</> or
+<command>ANALYZE</> on the table.  The planner then fetches the
+actual current number of pages in the table (this is a cheap operation,
+not requiring a table scan).  If that is different from
+<structfield>relpages</structfield> then
+<structfield>reltuples</structfield> is scaled accordingly to
+arrive at a current number-of-rows estimate.  In this case the values
+are correct so the rows estimate is the same as
+<structfield>reltuples</structfield>.
</para>
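The page-count scaling rule described in the new text can be sketched in a few lines of Python (an illustration only, not the planner's actual C code; `estimate_rows` is a hypothetical helper):

```python
# Hypothetical sketch of the row-estimate scaling described above:
# reltuples/relpages give the tuple density at the last VACUUM/ANALYZE;
# multiplying by the current page count gives an up-to-date row estimate.

def estimate_rows(reltuples, relpages, current_pages):
    """Scale the stored tuple count by the observed change in page count."""
    if relpages == 0:
        return float(reltuples)
    return reltuples * current_pages / relpages

# Values match the pg_class output above: the page count has not changed,
# so the estimate is simply reltuples.
print(estimate_rows(10000, 358, 358))   # 10000.0

# If the table had grown to 400 pages, the estimate would scale up.
print(estimate_rows(10000, 358, 400))
```

The sketch assumes a uniform tuple density across pages, which is exactly the assumption the text describes.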
<para>
-let's move on to an example with a range condition in its
+Let's move on to an example with a range condition in its
 <literal>WHERE</literal> clause:
<programlisting>
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000;

-                        QUERY PLAN
--------------------------------------------------------------
- Seq Scan on tenk1  (cost=0.00..470.00 rows=1031 width=244)
-   Filter: (unique1 < 1000)
+                                   QUERY PLAN
+--------------------------------------------------------------------------------
+ Bitmap Heap Scan on tenk1  (cost=24.06..394.64 rows=1007 width=244)
+   Recheck Cond: (unique1 < 1000)
+   ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..23.80 rows=1007 width=0)
+         Index Cond: (unique1 < 1000)
</programlisting>

-The planner examines the <literal>WHERE</literal> clause condition:
-<programlisting>
-unique1 < 1000
-</programlisting>
-and looks up the restriction function for the operator
-<literal><</literal> in <classname>pg_operator</classname>.
-This is held in the column <structfield>oprrest</structfield>,
-and the result in this case is <function>scalarltsel</function>.
-The <function>scalarltsel</function> function retrieves the histogram for
-<structfield>unique1</structfield> from <classname>pg_statistics</classname>
-- we can follow this by using the simpler <classname>pg_stats</classname>
+The planner examines the <literal>WHERE</literal> clause condition
+and looks up the selectivity function for the operator
+<literal><</literal> in <structname>pg_operator</structname>.
+This is held in the column <structfield>oprrest</structfield>,
+and the entry in this case is <function>scalarltsel</function>.
+The <function>scalarltsel</function> function retrieves the histogram for
+<structfield>unique1</structfield> from
+<structname>pg_statistics</structname>.  For manual queries it is more
+convenient to look in the simpler <structname>pg_stats</structname>
 view:
<programlisting>
SELECT histogram_bounds FROM pg_stats
WHERE tablename='tenk1' AND attname='unique1';

                   histogram_bounds
------------------------------------------------------
- {1,970,1943,2958,3971,5069,6028,7007,7919,8982,9995}
+ {0,993,1997,3050,4040,5036,5957,7057,8029,9016,9995}
</programlisting>
 Next the fraction of the histogram occupied by <quote>< 1000</quote>
 is worked out.  This is the selectivity.  The histogram divides the range
 into equal frequency buckets, so all we have to do is locate the bucket
 that our value is in and count <emphasis>part</emphasis> of it and
 <emphasis>all</emphasis> of the ones before.  The value 1000 is clearly in
-the second (970 - 1943) bucket, so by assuming a linear distribution of
-values inside each bucket we can calculate the selectivity as:
+the second bucket (993-1997).  Assuming a linear distribution of
+values inside each bucket, we can calculate the selectivity as:

<programlisting>
-selectivity = (1 + (1000 - bckt[2].min)/(bckt[2].max - bckt[2].min))/num_bckts
-            = (1 + (1000 - 970)/(1943 - 970))/10
-            = 0.1031
+selectivity = (1 + (1000 - bucket[2].min)/(bucket[2].max - bucket[2].min))/num_buckets
+            = (1 + (1000 - 993)/(1997 - 993))/10
+            = 0.100697
</programlisting>
 that is, one whole bucket plus a linear fraction of the second, divided by
 the number of buckets.  The estimated number of rows can now be calculated as
 the product of the selectivity and the cardinality of
-<classname>tenk1</classname>:
+<structname>tenk1</structname>:

<programlisting>
rows = rel_cardinality * selectivity
-     = 10000 * 0.1031
-     = 1031
+     = 10000 * 0.100697
+     = 1007  (rounding off)
</programlisting>
</para>
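The bucket arithmetic above can be sketched in Python (a simplified illustration of the described rule, not the actual `scalarltsel` C code; `histogram_selectivity` is a hypothetical helper, and the bounds are the `pg_stats` values shown in the new text):

```python
# Sketch: fraction of the population estimated to be < value, assuming
# equal-frequency buckets and a linear spread of values within a bucket.

def histogram_selectivity(bounds, value):
    """Whole buckets below the value, plus a linear fraction of the
    bucket containing it, divided by the number of buckets."""
    num_buckets = len(bounds) - 1
    if value <= bounds[0]:
        return 0.0
    if value >= bounds[-1]:
        return 1.0
    for i in range(num_buckets):
        lo, hi = bounds[i], bounds[i + 1]
        if lo <= value < hi:
            part = (value - lo) / (hi - lo)   # linear interpolation
            return (i + part) / num_buckets

bounds = [0, 993, 1997, 3050, 4040, 5036, 5957, 7057, 8029, 9016, 9995]
sel = histogram_selectivity(bounds, 1000)
print(round(sel, 6))        # 0.100697, matching the worked example
print(round(10000 * sel))   # 1007 estimated rows
```

Value 1000 lands in the second bucket (993-1997), so one whole bucket plus the fraction 7/1004 is counted, then divided by the ten buckets.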
<para>
-Next let's consider an example with equality condition in its
+Next let's consider an example with an equality condition in its
 <literal>WHERE</literal> clause:
<programlisting>
-EXPLAIN SELECT * FROM tenk1 WHERE stringu1 = 'ATAAAA';
+EXPLAIN SELECT * FROM tenk1 WHERE stringu1 = 'CRAAAA';

                        QUERY PLAN
----------------------------------------------------------
- Seq Scan on tenk1  (cost=0.00..470.00 rows=31 width=244)
-   Filter: (stringu1 = 'ATAAAA'::name)
+ Seq Scan on tenk1  (cost=0.00..483.00 rows=30 width=244)
+   Filter: (stringu1 = 'CRAAAA'::name)
</programlisting>
-Again the planner examines the <literal>WHERE</literal> clause condition:
-<programlisting>
-stringu1 = 'ATAAAA'
-</programlisting>
-and looks up the restriction function for <literal>=</literal>, which is
-<function>eqsel</function>.  This case is a bit different, as the most
-common values — <acronym>MCV</acronym>s, are used to determine the
-selectivity.  Let's have a look at these, with some extra columns that will
-be useful later:
+Again the planner examines the <literal>WHERE</literal> clause condition
+and looks up the selectivity function for <literal>=</literal>, which is
+<function>eqsel</function>.  For equality estimation the histogram is
+not useful; instead the list of <firstterm>most
+common values</> (<acronym>MCV</acronym>s) is used to determine the
+selectivity.  Let's have a look at the MCVs, with some additional columns
+that will be useful later:

<programlisting>
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM pg_stats
WHERE tablename='tenk1' AND attname='stringu1';

null_frac         | 0
-n_distinct        | 672
-most_common_vals  | {FDAAAA,NHAAAA,ATAAAA,BGAAAA,EBAAAA,MOAAAA,NDAAAA,OWAAAA,BHAAAA,BJAAAA}
-most_common_freqs | {0.00333333,0.00333333,0.003,0.003,0.003,0.003,0.003,0.003,0.00266667,0.00266667}
+n_distinct        | 676
+most_common_vals  | {EJAAAA,BBAAAA,CRAAAA,FCAAAA,FEAAAA,GSAAAA,JOAAAA,MCAAAA,NAAAAA,WGAAAA}
+most_common_freqs | {0.00333333,0.003,0.003,0.003,0.003,0.003,0.003,0.003,0.003,0.003}
</programlisting>
-The selectivity is merely the most common frequency (<acronym>MCF</acronym>)
-corresponding to the third <acronym>MCV</acronym> — 'ATAAAA':
+Since <literal>CRAAAA</> appears in the list of MCVs, the selectivity is
+merely the corresponding entry in the list of most common frequencies
+(<acronym>MCF</acronym>s):

<programlisting>
selectivity = mcf[3]
            = 0.003
</programlisting>

-The estimated number of rows is just the product of this with the
-cardinality of <classname>tenk1</classname> as before:
+As before, the estimated number of rows is just the product of this with the
+cardinality of <structname>tenk1</structname>:

<programlisting>
rows = 10000 * 0.003
     = 30
</programlisting>
+
+The number displayed by <command>EXPLAIN</command> is one more than this,
+due to some post estimation checks.
</para>
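The MCV lookup described in the new text can be sketched as follows (a simplified illustration of `eqsel`'s MCV branch, not the real implementation; `eq_selectivity_mcv` is a hypothetical helper and the data comes from the `pg_stats` output above):

```python
# Sketch: if the comparison constant is one of the most common values,
# its recorded frequency is used directly as the selectivity.

mcv = ['EJAAAA', 'BBAAAA', 'CRAAAA', 'FCAAAA', 'FEAAAA',
       'GSAAAA', 'JOAAAA', 'MCAAAA', 'NAAAAA', 'WGAAAA']
mcf = [0.00333333, 0.003, 0.003, 0.003, 0.003,
       0.003, 0.003, 0.003, 0.003, 0.003]

def eq_selectivity_mcv(value):
    """Selectivity of an equality condition on an MCV constant."""
    return mcf[mcv.index(value)]

sel = eq_selectivity_mcv('CRAAAA')
print(sel)                  # 0.003
print(round(10000 * sel))   # 30 rows (EXPLAIN displays one more)
```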
<para>
 Now consider the same query, but with a constant that is not in the
 <acronym>MCV</acronym> list:
<programlisting>
...
@@ -194,116 +192,197 @@ EXPLAIN SELECT * FROM tenk1 WHERE stringu1 = 'xxx';

                        QUERY PLAN
----------------------------------------------------------
- Seq Scan on tenk1  (cost=0.00..470.00 rows=15 width=244)
+ Seq Scan on tenk1  (cost=0.00..483.00 rows=15 width=244)
   Filter: (stringu1 = 'xxx'::name)
</programlisting>
-This is quite a different problem, how to estimate the selectivity when the
+This is quite a different problem: how to estimate the selectivity when the
 value is <emphasis>not</emphasis> in the <acronym>MCV</acronym> list.
 The approach is to use the fact that the value is not in the list,
 combined with the knowledge of the frequencies for all of the
 <acronym>MCV</acronym>s:
<programlisting>
selectivity = (1 - sum(mvf))/(num_distinct - num_mcv)
-            = (1 - (0.00333333 + 0.00333333 + 0.003 + 0.003 + 0.003
-               + 0.003 + 0.003 + 0.003 + 0.00266667 + 0.00266667))/(672 - 10)
-            = 0.001465
+            = (1 - (0.00333333 + 0.003 + 0.003 + 0.003 + 0.003 + 0.003 +
+               0.003 + 0.003 + 0.003 + 0.003))/(676 - 10)
+            = 0.0014559
</programlisting>

 That is, add up all the frequencies for the <acronym>MCV</acronym>s and
-subtract them from one — because it is <emphasis>not</emphasis> one
-of these, and divide by the <emphasis>remaining</emphasis> distinct values.
-Notice that there are no null values so we don't have to worry about those.
-The estimated number of rows is calculated as usual:
+subtract them from one, then
+divide by the number of <emphasis>other</emphasis> distinct values.
+This amounts to assuming that the fraction of the column that is not any
+of the MCVs is evenly distributed among all the other distinct values.
+Notice that there are no null values so we don't have to worry about those
+(otherwise we'd subtract the null fraction from the numerator as well).
+The estimated number of rows is then calculated as usual:

<programlisting>
-rows = 10000 * 0.001465
-     = 15
+rows = 10000 * 0.0014559
+     = 15  (rounding off)
</programlisting>
</para>
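The non-MCV branch works out in Python as below (a simplified sketch of the rule the new text describes, not the planner's code; `eq_selectivity_non_mcv` is a hypothetical helper, and the frequencies are the `pg_stats` values shown earlier):

```python
# Sketch: the fraction of the population not covered by MCVs (or nulls)
# is assumed to be spread evenly over the remaining distinct values.

mcf = [0.00333333, 0.003, 0.003, 0.003, 0.003,
       0.003, 0.003, 0.003, 0.003, 0.003]
null_frac = 0.0
n_distinct = 676

def eq_selectivity_non_mcv():
    remaining_freq = 1.0 - sum(mcf) - null_frac
    remaining_distinct = n_distinct - len(mcf)
    return remaining_freq / remaining_distinct

sel = eq_selectivity_non_mcv()
print(sel)                  # close to the 0.0014559 figure in the text
print(round(10000 * sel))   # 15 rows
```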
<para>
-Let's increase the complexity to consider a case with more than one
-condition in the <literal>WHERE</literal> clause:
+The previous example with <literal>unique1 < 1000</> was an
+oversimplification of what <function>scalarltsel</function> really does;
+now that we have seen an example of the use of MCVs, we can fill in some
+more detail.  The example was correct as far as it went, because since
+<structfield>unique1</> is a unique column it has no MCVs (obviously, no
+value is any more common than any other value).  For a non-unique
+column, there will normally be both a histogram and an MCV list, and
+<emphasis>the histogram does not include the portion of the column
+population represented by the MCVs</>.  We do things this way because
+it allows more precise estimation.  In this situation
+<function>scalarltsel</function> directly applies the condition (e.g.,
+<quote>< 1000</>) to each value of the MCV list, and adds up the
+frequencies of the MCVs for which the condition is true.  This gives
+an exact estimate of the selectivity within the portion of the table
+that is MCVs.  The histogram is then used in the same way as above
+to estimate the selectivity in the portion of the table that is not
+MCVs, and then the two numbers are combined to estimate the overall
+selectivity.  For example, consider
<programlisting>
-EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000 AND stringu1 = 'xxx';
+EXPLAIN SELECT * FROM tenk1 WHERE stringu1 < 'IAAAAA';

                         QUERY PLAN
------------------------------------------------------------
- Seq Scan on tenk1  (cost=0.00..495.00 rows=2 width=244)
-   Filter: ((unique1 < 1000) AND (stringu1 = 'xxx'::name))
+ Seq Scan on tenk1  (cost=0.00..483.00 rows=3077 width=244)
+   Filter: (stringu1 < 'IAAAAA'::name)
</programlisting>
-An assumption of independence is made and the selectivities of the
-individual restrictions are multiplied together:
+We already saw the MCV information for <structfield>stringu1</>,
+and here is its histogram:

<programlisting>
-selectivity = selectivity(unique1 < 1000) * selectivity(stringu1 = 'xxx')
-            = 0.1031 * 0.001465
-            = 0.00015104
+SELECT histogram_bounds FROM pg_stats
+WHERE tablename='tenk1' AND attname='stringu1';
+
+                                histogram_bounds
+--------------------------------------------------------------------------------
+ {AAAAAA,CQAAAA,FRAAAA,IBAAAA,KRAAAA,NFAAAA,PSAAAA,SGAAAA,VAAAAA,XLAAAA,ZZAAAA}
</programlisting>

-The row estimates are calculated as before:
+Checking the MCV list, we find that the condition <literal>stringu1 <
+'IAAAAA'</> is satisfied by the first six entries and not the last four,
+so the selectivity within the MCV part of the population is

<programlisting>
-rows = 10000 * 0.00015104
-     = 2
+selectivity = sum(relevant mvfs)
+            = 0.00333333 + 0.003 + 0.003 + 0.003 + 0.003 + 0.003
+            = 0.01833333
</programlisting>
+
+Summing all the MCFs also tells us that the total fraction of the
+population represented by MCVs is 0.03033333, and therefore the
+fraction represented by the histogram is 0.96966667 (again, there
+are no nulls, else we'd have to exclude them here).  We can see
+that the value <literal>IAAAAA</> falls nearly at the end of the
+third histogram bucket.  Using some rather cheesy assumptions
+about the frequency of different characters, the planner arrives
+at the estimate 0.298387 for the portion of the histogram population
+that is less than <literal>IAAAAA</>.  We then combine the estimates
+for the MCV and non-MCV populations:
+
+<programlisting>
+selectivity = mcv_selectivity + histogram_selectivity * histogram_fraction
+            = 0.01833333 + 0.298387 * 0.96966667
+            = 0.307669
+
+rows = 10000 * 0.307669
+     = 3077  (rounding off)
+</programlisting>
+
+In this particular example, the correction from the MCV list is fairly
+small, because the column distribution is actually quite flat (the
+statistics showing these particular values as being more common than
+others are mostly due to sampling error).  In a more typical case where
+some values are significantly more common than others, this complicated
+process gives a useful improvement in accuracy because the selectivity
+for the most common values is found exactly.
</para>
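The MCV/histogram combination described in the new text can be sketched as follows (a simplified Python illustration, not the actual `scalarltsel` code; the per-character string estimate 0.298387 is taken as given from the text rather than recomputed):

```python
# Sketch: exact selectivity within the MCV population, plus the
# histogram estimate scaled by the non-MCV fraction of the population.

mcf = [0.00333333, 0.003, 0.003, 0.003, 0.003,
       0.003, 0.003, 0.003, 0.003, 0.003]
mcv_matches = 6                 # MCVs satisfying stringu1 < 'IAAAAA'

mcv_selectivity = sum(mcf[:mcv_matches])      # exact, within MCVs
mcv_fraction = sum(mcf)                       # population that is MCVs
histogram_fraction = 1.0 - mcv_fraction       # no nulls in this column
histogram_selectivity = 0.298387              # figure quoted in the text

selectivity = mcv_selectivity + histogram_selectivity * histogram_fraction
print(round(selectivity, 6))        # 0.307669
print(round(10000 * selectivity))   # 3077 rows
```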
<para>
-Finally we will examine a query that includes a <literal>JOIN</literal>
-together with a <literal>WHERE</literal> clause:
+Now let's consider a case with more than one
+condition in the <literal>WHERE</literal> clause:
<programlisting>
-EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
-WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;
+EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000 AND stringu1 = 'xxx';

                                      QUERY PLAN
------------------------------------------------------------------------------------------
- Nested Loop  (cost=0.00..346.90 rows=51 width=488)
-   ->  Index Scan using tenk1_unique1 on tenk1 t1  (cost=0.00..192.57 rows=51 width=244)
-         Index Cond: (unique1 < 50)
-   ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..3.01 rows=1 width=244)
-         Index Cond: ("outer".unique2 = t2.unique2)
+ Bitmap Heap Scan on tenk1  (cost=23.80..396.91 rows=1 width=244)
+   Recheck Cond: (unique1 < 1000)
+   Filter: (stringu1 = 'xxx'::name)
+   ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..23.80 rows=1007 width=0)
+         Index Cond: (unique1 < 1000)
</programlisting>
-The restriction on <classname>tenk1</classname>
-<quote>unique1 < 50</quote> is evaluated before the nested-loop join.
-This is handled analogously to the previous range example.  The restriction
-operator for <literal><</literal> is <function>scalarlteqsel</function>
-as before, but this time the value 50 is in the first bucket of the
-<structfield>unique1</structfield> histogram:
+The planner assumes that the two conditions are independent, so that
+the individual selectivities of the clauses can be multiplied together:
+<programlisting>
+selectivity = selectivity(unique1 < 1000) * selectivity(stringu1 = 'xxx')
+            = 0.100697 * 0.0014559
+            = 0.0001466
+
+rows = 10000 * 0.0001466
+     = 1  (rounding off)
+</programlisting>
+Notice that the number of rows estimated to be returned from the bitmap
+index scan reflects only the condition used with the index; this is
+important since it affects the cost estimate for the subsequent heap
+fetches.
</para>
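The independence assumption is a one-line calculation; a hedged Python sketch using the per-clause figures derived earlier in the text (illustration only, not the planner's code):

```python
# Sketch: with no cross-column statistics, per-clause selectivities
# are simply multiplied, then scaled by the table cardinality.

sel_range = 0.100697     # selectivity(unique1 < 1000), from earlier
sel_equal = 0.0014559    # selectivity(stringu1 = 'xxx'), from earlier

combined = sel_range * sel_equal
rows = round(10000 * combined)

print(round(combined, 7))   # 0.0001466
print(rows)                 # 1
```

If the two columns were correlated, this estimate could be badly off; the multiplication is only as good as the independence assumption behind it.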
<para>
Finally we will examine a query that involves a join:
<programlisting>
<programlisting>
selectivity = (0 + (50 - bckt[1].min)/(bckt[1].max - bckt[1].min))/num_bckts
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
= (0 + (50 - 1)/(970 - 1))/10
WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;
= 0.005057
rows = 10000 * 0.005057
QUERY PLAN
= 51
--------------------------------------------------------------------------------------
Nested Loop (cost=4.64..456.23 rows=50 width=488)
-> Bitmap Heap Scan on tenk1 t1 (cost=4.64..142.17 rows=50 width=244)
Recheck Cond: (unique1 < 50)
-> Bitmap Index Scan on tenk1_unique1 (cost=0.00..4.63 rows=50 width=0)
Index Cond: (unique1 < 50)
-> Index Scan using tenk2_unique2 on tenk2 t2 (cost=0.00..6.27 rows=1 width=244)
Index Cond: (t2.unique2 = t1.unique2)
</programlisting>
</programlisting>
The restriction on <structname>tenk1</structname>,
<literal>unique1 < 50</literal>,
is evaluated before the nested-loop join.
This is handled analogously to the previous range example. This time the
value 50 falls into the first bucket of the
<structfield>unique1</structfield> histogram:

<programlisting>
selectivity = (0 + (50 - bucket[1].min)/(bucket[1].max - bucket[1].min))/num_buckets
            = (0 + (50 - 0)/(993 - 0))/10
            = 0.005035

rows        = 10000 * 0.005035
            = 50  (rounding off)
</programlisting>
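The bucket arithmetic above can be sketched as follows (illustrative Python, not the actual code in <filename>src/backend/utils/adt/selfuncs.c</filename>; only the first histogram bound pair, 0 and 993, comes from the example — the remaining bounds here are hypothetical):

```python
# Sketch of linear interpolation inside an equi-depth histogram.  Each of
# the len(bounds)-1 buckets is assumed to hold the same fraction of rows.
def histogram_selectivity_lt(value, bounds):
    num_buckets = len(bounds) - 1
    for i in range(num_buckets):
        lo, hi = bounds[i], bounds[i + 1]
        if value <= hi:
            # i full buckets below, plus a linear fraction of this bucket
            return (i + (value - lo) / (hi - lo)) / num_buckets
    return 1.0  # value beyond the last bound

# First bucket of the unique1 histogram runs from 0 to 993; the other
# bounds below are made up for illustration.
bounds = [0, 993, 1997, 3050, 4040, 5036, 5957, 7057, 8029, 9016, 9995]
sel = histogram_selectivity_lt(50, bounds)
rows = round(10000 * sel)
```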
The restriction for the join is <literal>t2.unique2 = t1.unique2</>.
The operator is just our familiar <literal>=</literal>, however the
selectivity function is obtained from the
<structfield>oprjoin</structfield> column of
<structname>pg_operator</structname>, and is <function>eqjoinsel</function>.
<function>eqjoinsel</function> looks up the statistical information for both
<structname>tenk2</structname> and <structname>tenk1</structname>:

<programlisting>
SELECT tablename, null_frac,n_distinct, most_common_vals FROM pg_stats
WHERE tablename IN ('tenk1', 'tenk2') AND attname='unique2';

tablename  | null_frac | n_distinct | most_common_vals
-----------+-----------+------------+------------------
...
 tenk2     |         0 |         -1 |
</programlisting>

In this case there is no <acronym>MCV</acronym> information for
<structfield>unique2</structfield> because all the values appear to be
unique, so we use an algorithm that relies only on the number of
distinct values for both relations together with their null fractions:

<programlisting>
selectivity = (1 - null_frac1) * (1 - null_frac2) * min(1/num_distinct1, 1/num_distinct2)
            = (1 - 0) * (1 - 0) / max(10000, 10000)
            = 0.0001
</programlisting>
That is, subtract the null fraction from one for each of the relations,
and divide by the maximum of the numbers of distinct values.
The number of rows that the join is likely to emit is calculated as the
cardinality of the Cartesian product of the two inputs, multiplied by the
selectivity:

<programlisting>
rows = (outer_cardinality * inner_cardinality) * selectivity
     = (50 * 10000) * 0.0001
     = 50
</programlisting>
</para>
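Putting the no-MCV rule and the Cartesian-product step together, a hedged sketch in Python (the function name is illustrative; the real logic is in <function>eqjoinsel</function>):

```python
# Sketch of the no-MCV case of eqjoinsel: the selectivity of var1 = var2
# is the product of the non-null fractions divided by the larger number
# of distinct values (equivalently, times min(1/nd1, 1/nd2)).
def eq_join_selectivity(null_frac1, n_distinct1, null_frac2, n_distinct2):
    return (1 - null_frac1) * (1 - null_frac2) / max(n_distinct1, n_distinct2)

sel = eq_join_selectivity(0, 10000, 0, 10000)  # both unique2 columns unique
rows = round(50 * 10000 * sel)                 # outer * inner cardinality, times selectivity
```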
<para>
Had there been MCV lists for the two columns,
<function>eqjoinsel</function> would have used direct comparison of the MCV
lists to determine the join selectivity within the part of the column
populations represented by the MCVs. The estimate for the remainder of the
populations follows the same approach shown here.
</para>
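A much-simplified sketch of the MCV-matching idea (the data and function name here are hypothetical, and the actual <function>eqjoinsel</function> algorithm additionally scales the non-MCV remainder by the leftover null fractions and distinct-value counts):

```python
# Matched MCV pairs contribute the product of their frequencies to the
# join selectivity; the remainder of both populations falls back to the
# distinct-values rule shown earlier.
def mcv_matched_selectivity(mcv1, mcv2):
    """mcv1 and mcv2 map a common value to its fraction of the rows."""
    return sum(freq1 * mcv2[value]
               for value, freq1 in mcv1.items() if value in mcv2)

# Hypothetical MCV lists: only 'a' appears in both, contributing 0.2 * 0.3.
part = mcv_matched_selectivity({'a': 0.2, 'b': 0.1}, {'a': 0.3, 'c': 0.4})
```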
<para>
Notice that we showed <literal>inner_cardinality</> as 10000, that is,
the unmodified size of <structname>tenk2</>. It might appear from
inspection of the <command>EXPLAIN</> output that the estimate of
join rows comes from 50 * 1, that is, the number of outer rows times
the estimated number of rows obtained by each inner indexscan on
<structname>tenk2</>. But this is not the case: the join relation size
is estimated before any particular join plan has been considered. If
everything is working well then the two ways of estimating the join
size will produce about the same answer, but due to roundoff error and
other factors they sometimes diverge significantly.
</para>
<para>
For those interested in further details, estimation of the size of
a table (before any <literal>WHERE</> clauses) is done in
<filename>src/backend/optimizer/util/plancat.c</filename>. The generic
logic for clause selectivities is in
<filename>src/backend/optimizer/path/clausesel.c</filename>. The
operator-specific selectivity functions are mostly found
in <filename>src/backend/utils/adt/selfuncs.c</filename>.
</para>
</sect1>
</chapter>