Abuhujair Javed / Postgres FD Implementation / Commits

Commit cdc84adb
Authored Oct 07, 2004 by Bruce Momjian

    Indent comment pushed to new line by else so it is indented by BSD
    indent.

Parent: 914ff1ea

Showing 2 changed files with 329 additions and 389 deletions:

    src/backend/commands/vacuum.c    +326  -385
    src/tools/pgindent/pgindent      +3    -4
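The pgindent behavior being changed is easiest to see on a toy fragment. The sketch below is illustrative only and is not taken from the commit; the helper functions are hypothetical (much like the func() test snippet that appears in the first vacuum.c hunk):

    static void do_one_thing(void) {}
    static void do_other_thing(void) {}

    static void
    example(int condition)
    {
        /* Before this commit: a comment pushed onto its own line by "else"
         * was left at the margin of the "else" keyword itself. */
        if (condition)
            do_one_thing();
        else
        /* explain the fallback */
            do_other_thing();

        /* After this commit: BSD indent indents the comment one level,
         * aligning it with the statement it describes. */
        if (condition)
            do_one_thing();
        else
            /* explain the fallback */
            do_other_thing();
    }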
src/backend/commands/vacuum.c

@@ -13,7 +13,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/vacuum.c,v 1.292 2004/09/30 23:21:19 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/vacuum.c,v 1.293 2004/10/07 14:15:50 momjian Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -240,20 +240,27 @@ vacuum(VacuumStmt *vacstmt)
 	if (vacstmt->verbose)
 		elevel = INFO;
 	else
+		/* bjm comment */
 		elevel = DEBUG2;
+	if (1 == 0)
+		func();
+	else
+		/* bjm comment */
+	{
+		elevel = DEBUG2;
+	}
 
 	/*
-	 * We cannot run VACUUM inside a user transaction block; if we were
-	 * inside a transaction, then our commit- and
-	 * start-transaction-command calls would not have the intended effect!
-	 * Furthermore, the forced commit that occurs before truncating the
-	 * relation's file would have the effect of committing the rest of the
-	 * user's transaction too, which would certainly not be the desired
-	 * behavior. (This only applies to VACUUM FULL, though. We could in
-	 * theory run lazy VACUUM inside a transaction block, but we choose to
-	 * disallow that case because we'd rather commit as soon as possible
-	 * after finishing the vacuum. This is mainly so that we can let go
-	 * the AccessExclusiveLock that we may be holding.)
+	 * We cannot run VACUUM inside a user transaction block; if we were inside
+	 * a transaction, then our commit- and start-transaction-command calls
+	 * would not have the intended effect! Furthermore, the forced commit that
+	 * occurs before truncating the relation's file would have the effect of
+	 * committing the rest of the user's transaction too, which would
+	 * certainly not be the desired behavior. (This only applies to VACUUM
+	 * FULL, though. We could in theory run lazy VACUUM inside a transaction
+	 * block, but we choose to disallow that case because we'd rather commit
+	 * as soon as possible after finishing the vacuum. This is mainly so
+	 * that we can let go the AccessExclusiveLock that we may be holding.)
 	 *
 	 * ANALYZE (without VACUUM) can run either way.
 	 */
@@ -265,18 +272,15 @@ vacuum(VacuumStmt *vacstmt)
 	else
 		in_outer_xact = IsInTransactionChain((void *) vacstmt);
 
-	/*
-	 * Send info about dead objects to the statistics collector
-	 */
+	/* Send info about dead objects to the statistics collector */
 	if (vacstmt->vacuum)
 		pgstat_vacuum_tabstat();
 
 	/*
 	 * Create special memory context for cross-transaction storage.
 	 *
-	 * Since it is a child of PortalContext, it will go away eventually even
-	 * if we suffer an error; there's no need for special abort cleanup
-	 * logic.
+	 * Since it is a child of PortalContext, it will go away eventually even if
+	 * we suffer an error; there's no need for special abort cleanup logic.
 	 */
 	vac_context = AllocSetContextCreate(PortalContext,
 										"Vacuum",
@@ -295,21 +299,21 @@ vacuum(VacuumStmt *vacstmt)
 		/*
 		 * It's a database-wide VACUUM.
 		 *
-		 * Compute the initially applicable OldestXmin and FreezeLimit XIDs,
-		 * so that we can record these values at the end of the VACUUM.
-		 * Note that individual tables may well be processed with newer
-		 * values, but we can guarantee that no (non-shared) relations are
-		 * processed with older ones.
+		 * Compute the initially applicable OldestXmin and FreezeLimit XIDs, so
+		 * that we can record these values at the end of the VACUUM. Note that
+		 * individual tables may well be processed with newer values, but we
+		 * can guarantee that no (non-shared) relations are processed with
+		 * older ones.
 		 *
-		 * It is okay to record non-shared values in pg_database, even though
-		 * we may vacuum shared relations with older cutoffs, because only
-		 * the minimum of the values present in pg_database matters. We
-		 * can be sure that shared relations have at some time been
-		 * vacuumed with cutoffs no worse than the global minimum; for, if
-		 * there is a backend in some other DB with xmin = OLDXMIN that's
-		 * determining the cutoff with which we vacuum shared relations,
-		 * it is not possible for that database to have a cutoff newer
-		 * than OLDXMIN recorded in pg_database.
+		 * It is okay to record non-shared values in pg_database, even though we
+		 * may vacuum shared relations with older cutoffs, because only the
+		 * minimum of the values present in pg_database matters. We can be
+		 * sure that shared relations have at some time been vacuumed with
+		 * cutoffs no worse than the global minimum; for, if there is a
+		 * backend in some other DB with xmin = OLDXMIN that's determining the
+		 * cutoff with which we vacuum shared relations, it is not possible
+		 * for that database to have a cutoff newer than OLDXMIN recorded in
+		 * pg_database.
 		 */
 		vacuum_set_xid_limits(vacstmt, false,
 							  &initialOldestXmin,
@@ -319,16 +323,15 @@ vacuum(VacuumStmt *vacstmt)
...
@@ -319,16 +323,15 @@ vacuum(VacuumStmt *vacstmt)
/*
/*
* Decide whether we need to start/commit our own transactions.
* Decide whether we need to start/commit our own transactions.
*
*
* For VACUUM (with or without ANALYZE): always do so, so that we can
* For VACUUM (with or without ANALYZE): always do so, so that we can
release
*
release locks as soon as possible. (We could possibly use the
*
locks as soon as possible. (We could possibly use the outer
*
outer transaction for a one-table VACUUM, but handling TOAST tables
*
transaction for a one-table VACUUM, but handling TOAST tables would be
*
would be
problematic.)
* problematic.)
*
*
* For ANALYZE (no VACUUM): if inside a transaction block, we cannot
* For ANALYZE (no VACUUM): if inside a transaction block, we cannot
* start/commit our own transactions. Also, there's no need to do so
* start/commit our own transactions. Also, there's no need to do so if
* if only processing one relation. For multiple relations when not
* only processing one relation. For multiple relations when not within a
* within a transaction block, use own transactions so we can release
* transaction block, use own transactions so we can release locks sooner.
* locks sooner.
*/
*/
if
(
vacstmt
->
vacuum
)
if
(
vacstmt
->
vacuum
)
use_own_xacts
=
true
;
use_own_xacts
=
true
;
@@ -344,8 +347,8 @@ vacuum(VacuumStmt *vacstmt)
 	}
 
 	/*
-	 * If we are running ANALYZE without per-table transactions, we'll
-	 * need a memory context with table lifetime.
+	 * If we are running ANALYZE without per-table transactions, we'll need a
+	 * memory context with table lifetime.
 	 */
 	if (!use_own_xacts)
 		anl_context = AllocSetContextCreate(PortalContext,
@@ -355,12 +358,12 @@ vacuum(VacuumStmt *vacstmt)
 											ALLOCSET_DEFAULT_MAXSIZE);
 
 	/*
-	 * vacuum_rel expects to be entered with no transaction active; it
-	 * will start and commit its own transaction. But we are called by an
-	 * SQL command, and so we are executing inside a transaction already.
-	 * We commit the transaction started in PostgresMain() here, and start
-	 * another one before exiting to match the commit waiting for us back
-	 * in PostgresMain().
+	 * vacuum_rel expects to be entered with no transaction active; it will
+	 * start and commit its own transaction. But we are called by an SQL
+	 * command, and so we are executing inside a transaction already. We
+	 * commit the transaction started in PostgresMain() here, and start
+	 * another one before exiting to match the commit waiting for us back in
+	 * PostgresMain().
 	 */
 	if (use_own_xacts)
 	{
@@ -376,9 +379,7 @@ vacuum(VacuumStmt *vacstmt)
 		VacuumCostActive = (VacuumCostDelay > 0);
 		VacuumCostBalance = 0;
 
-		/*
-		 * Loop to process each selected relation.
-		 */
+		/* Loop to process each selected relation. */
 		foreach(cur, relations)
 		{
 			Oid			relid = lfirst_oid(cur);
@@ -393,11 +394,11 @@ vacuum(VacuumStmt *vacstmt)
 			MemoryContext old_context = NULL;
 
 			/*
-			 * If using separate xacts, start one for analyze.
-			 * Otherwise, we can use the outer transaction, but we
-			 * still need to call analyze_rel in a memory context that
-			 * will be cleaned up on return (else we leak memory while
-			 * processing multiple tables).
+			 * If using separate xacts, start one for analyze. Otherwise,
+			 * we can use the outer transaction, but we still need to call
+			 * analyze_rel in a memory context that will be cleaned up on
+			 * return (else we leak memory while processing multiple
+			 * tables).
 			 */
 			if (use_own_xacts)
 			{
@@ -409,8 +410,8 @@ vacuum(VacuumStmt *vacstmt)
 				old_context = MemoryContextSwitchTo(anl_context);
 
 			/*
-			 * Tell the buffer replacement strategy that vacuum is
-			 * causing the IO
+			 * Tell the buffer replacement strategy that vacuum is causing
+			 * the IO
 			 */
 			StrategyHintVacuum(true);
@@ -439,9 +440,7 @@ vacuum(VacuumStmt *vacstmt)
 	/* Turn off vacuum cost accounting */
 	VacuumCostActive = false;
 
-	/*
-	 * Finish up processing.
-	 */
+	/* Finish up processing. */
 	if (use_own_xacts)
 	{
 		/* here, we are not in a transaction */
@@ -456,16 +455,16 @@ vacuum(VacuumStmt *vacstmt)
 	if (vacstmt->vacuum)
 	{
 		/*
-		 * If it was a database-wide VACUUM, print FSM usage statistics
-		 * (we don't make you be superuser to see these).
+		 * If it was a database-wide VACUUM, print FSM usage statistics (we
+		 * don't make you be superuser to see these).
 		 */
 		if (vacstmt->relation == NULL)
 			PrintFreeSpaceMapStatistics(elevel);
 
 		/*
 		 * If we completed a database-wide VACUUM without skipping any
-		 * relations, update the database's pg_database row with info
-		 * about the transaction IDs used, and try to truncate pg_clog.
+		 * relations, update the database's pg_database row with info about
+		 * the transaction IDs used, and try to truncate pg_clog.
 		 */
 		if (all_rels)
 		{
@@ -477,8 +476,8 @@ vacuum(VacuumStmt *vacstmt)
 	/*
 	 * Clean up working storage --- note we must do this after
-	 * StartTransactionCommand, else we might be trying to delete the
-	 * active context!
+	 * StartTransactionCommand, else we might be trying to delete the active
+	 * context!
 	 */
 	MemoryContextDelete(vac_context);
 	vac_context = NULL;
@@ -571,15 +570,11 @@ vacuum_set_xid_limits(VacuumStmt *vacstmt, bool sharedRel,
 		limit = GetCurrentTransactionId() - (MaxTransactionId >> 2);
 	}
 
-	/*
-	 * Be careful not to generate a "permanent" XID
-	 */
+	/* Be careful not to generate a "permanent" XID */
 	if (!TransactionIdIsNormal(limit))
 		limit = FirstNormalTransactionId;
 
-	/*
-	 * Ensure sane relationship of limits
-	 */
+	/* Ensure sane relationship of limits */
 	if (TransactionIdFollows(limit, *oldestXmin))
 	{
 		ereport(WARNING,
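The clamping in this hunk guards against unsigned XID wraparound producing a reserved ("permanent") transaction ID. A minimal standalone sketch of the same arithmetic, using the XID constants as defined in PostgreSQL's transam.h of that era (treat the exact values as an assumption to verify against the tree in question):

    typedef unsigned int TransactionId;

    #define FirstNormalTransactionId ((TransactionId) 3)   /* XIDs 0-2 are reserved */
    #define MaxTransactionId         ((TransactionId) 0xFFFFFFFF)
    #define TransactionIdIsNormal(x) ((x) >= FirstNormalTransactionId)

    /* Compute a freeze limit one quarter of the XID space behind "current";
     * if unsigned wraparound lands it in the reserved range, push it up to
     * the first normal XID, as vacuum_set_xid_limits() does above. */
    static TransactionId
    clamp_freeze_limit(TransactionId current)
    {
        TransactionId limit = current - (MaxTransactionId >> 2);

        if (!TransactionIdIsNormal(limit))
            limit = FirstNormalTransactionId;
        return limit;
    }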
@@ -621,9 +616,7 @@ vac_update_relstats(Oid relid, BlockNumber num_pages, double num_tuples,
 	Form_pg_class pgcform;
 	Buffer		buffer;
 
-	/*
-	 * update number of tuples and number of pages in pg_class
-	 */
+	/* update number of tuples and number of pages in pg_class */
 	rd = heap_openr(RelationRelationName, RowExclusiveLock);
 
 	ctup = SearchSysCache(RELOID,
@@ -659,10 +652,10 @@ vac_update_relstats(Oid relid, BlockNumber num_pages, double num_tuples,
 	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 
 	/*
-	 * Invalidate the tuple in the catcaches; this also arranges to flush
-	 * the relation's relcache entry. (If we fail to commit for some
-	 * reason, no flush will occur, but no great harm is done since there
-	 * are no noncritical state updates here.)
+	 * Invalidate the tuple in the catcaches; this also arranges to flush the
+	 * relation's relcache entry. (If we fail to commit for some reason, no
+	 * flush will occur, but no great harm is done since there are no
+	 * noncritical state updates here.)
 	 */
 	CacheInvalidateHeapTuple(rd, &rtup);
@@ -795,8 +788,8 @@ vac_truncate_clog(TransactionId vacuumXID, TransactionId frozenXID)
 	heap_close(relation, AccessShareLock);
 
 	/*
-	 * Do not truncate CLOG if we seem to have suffered wraparound
-	 * already; the computed minimum XID might be bogus.
+	 * Do not truncate CLOG if we seem to have suffered wraparound already;
+	 * the computed minimum XID might be bogus.
 	 */
 	if (vacuumAlreadyWrapped)
 	{
@@ -881,8 +874,8 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 	CHECK_FOR_INTERRUPTS();
 
 	/*
-	 * Race condition -- if the pg_class tuple has gone away since the
-	 * last time we saw it, we don't need to vacuum it.
+	 * Race condition -- if the pg_class tuple has gone away since the last
+	 * time we saw it, we don't need to vacuum it.
 	 */
 	if (!SearchSysCacheExists(RELOID,
 							  ObjectIdGetDatum(relid),
@@ -894,24 +887,21 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 	}
 
 	/*
-	 * Determine the type of lock we want --- hard exclusive lock for a
-	 * FULL vacuum, but just ShareUpdateExclusiveLock for concurrent
-	 * vacuum. Either way, we can be sure that no other backend is
-	 * vacuuming the same table.
+	 * Determine the type of lock we want --- hard exclusive lock for a FULL
+	 * vacuum, but just ShareUpdateExclusiveLock for concurrent vacuum. Either
+	 * way, we can be sure that no other backend is vacuuming the same table.
 	 */
 	lmode = vacstmt->full ? AccessExclusiveLock : ShareUpdateExclusiveLock;
 
 	/*
-	 * Open the class, get an appropriate lock on it, and check
-	 * permissions.
+	 * Open the class, get an appropriate lock on it, and check permissions.
 	 *
-	 * We allow the user to vacuum a table if he is superuser, the table
-	 * owner, or the database owner (but in the latter case, only if it's
-	 * not a shared relation). pg_class_ownercheck includes the superuser
-	 * case.
+	 * We allow the user to vacuum a table if he is superuser, the table owner,
+	 * or the database owner (but in the latter case, only if it's not a
+	 * shared relation). pg_class_ownercheck includes the superuser case.
 	 *
-	 * Note we choose to treat permissions failure as a WARNING and keep
-	 * trying to vacuum the rest of the DB --- is this appropriate?
+	 * Note we choose to treat permissions failure as a WARNING and keep trying
+	 * to vacuum the rest of the DB --- is this appropriate?
 	 */
 	onerel = relation_open(relid, lmode);
@@ -928,8 +918,8 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 	}
 
 	/*
-	 * Check that it's a plain table; we used to do this in get_rel_oids()
-	 * but seems safer to check after we've locked the relation.
+	 * Check that it's a plain table; we used to do this in get_rel_oids() but
+	 * seems safer to check after we've locked the relation.
 	 */
 	if (onerel->rd_rel->relkind != expected_relkind)
 	{
@@ -954,15 +944,14 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 		relation_close(onerel, lmode);
 		StrategyHintVacuum(false);
 		CommitTransactionCommand();
-		return true;			/* assume no long-lived data in temp
-								 * tables */
+		return true;			/* assume no long-lived data in temp tables */
 	}
 
 	/*
 	 * Get a session-level lock too. This will protect our access to the
 	 * relation across multiple transactions, so that we can vacuum the
-	 * relation's TOAST table (if any) secure in the knowledge that no one
-	 * is deleting the parent relation.
+	 * relation's TOAST table (if any) secure in the knowledge that no one is
+	 * deleting the parent relation.
 	 *
 	 * NOTE: this cannot block, even if someone else is waiting for access,
 	 * because the lock manager knows that both lock requests are from the
@@ -971,14 +960,10 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 	onerelid = onerel->rd_lockInfo.lockRelId;
 	LockRelationForSession(&onerelid, lmode);
 
-	/*
-	 * Remember the relation's TOAST relation for later
-	 */
+	/* Remember the relation's TOAST relation for later */
 	toast_relid = onerel->rd_rel->reltoastrelid;
 
-	/*
-	 * Do the actual work --- either FULL or "lazy" vacuum
-	 */
+	/* Do the actual work --- either FULL or "lazy" vacuum */
 	if (vacstmt->full)
 		full_vacuum_rel(onerel, vacstmt);
 	else
@@ -989,18 +974,16 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 	/* all done with this class, but hold lock until commit */
 	relation_close(onerel, NoLock);
 
-	/*
-	 * Complete the transaction and free all temporary memory used.
-	 */
+	/* Complete the transaction and free all temporary memory used. */
 	StrategyHintVacuum(false);
 	CommitTransactionCommand();
 
 	/*
 	 * If the relation has a secondary toast rel, vacuum that too while we
 	 * still hold the session lock on the master table. Note however that
-	 * "analyze" will not get done on the toast table. This is good,
-	 * because the toaster always uses hardcoded index access and
-	 * statistics are totally unimportant for toast relations.
+	 * "analyze" will not get done on the toast table. This is good, because
+	 * the toaster always uses hardcoded index access and statistics are
+	 * totally unimportant for toast relations.
 	 */
 	if (toast_relid != InvalidOid)
 	{
@@ -1008,9 +991,7 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, char expected_relkind)
 			result = false;		/* failed to vacuum the TOAST table? */
 	}
 
-	/*
-	 * Now release the session-level lock on the master table.
-	 */
+	/* Now release the session-level lock on the master table. */
 	UnlockRelationForSession(&onerelid, lmode);
 
 	return result;
@@ -1039,8 +1020,8 @@ full_vacuum_rel(Relation onerel, VacuumStmt *vacstmt)
 {
 	VacPageListData vacuum_pages;		/* List of pages to vacuum and/or
 										 * clean indexes */
-	VacPageListData fraged_pages;		/* List of pages with space enough
-										 * for re-using */
+	VacPageListData fraged_pages;		/* List of pages with space enough for
+										 * re-using */
 	Relation   *Irel;
 	int			nindexes,
 				i;
@@ -1049,9 +1030,7 @@ full_vacuum_rel(Relation onerel, VacuumStmt *vacstmt)
 	vacuum_set_xid_limits(vacstmt, onerel->rd_rel->relisshared,
 						  &OldestXmin, &FreezeLimit);
 
-	/*
-	 * Set up statistics-gathering machinery.
-	 */
+	/* Set up statistics-gathering machinery. */
 	vacrelstats = (VRelStats *) palloc(sizeof(VRelStats));
 	vacrelstats->rel_pages = 0;
 	vacrelstats->rel_tuples = 0;
@@ -1199,8 +1178,8 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 			VacPage		vacpagecopy;
 
 			ereport(WARNING,
					(errmsg("relation \"%s\" page %u is uninitialized --- fixing",
							relname, blkno)));
 			PageInit(page, BufferGetPageSize(buf), 0);
 			vacpage->free = ((PageHeader) page)->pd_upper - ((PageHeader) page)->pd_lower;
 			free_space += vacpage->free;
@@ -1265,8 +1244,8 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 			case HEAPTUPLE_LIVE:
 
 				/*
-				 * Tuple is good. Consider whether to replace its
-				 * xmin value with FrozenTransactionId.
+				 * Tuple is good. Consider whether to replace its xmin
+				 * value with FrozenTransactionId.
 				 */
 				if (TransactionIdIsNormal(HeapTupleHeaderGetXmin(tuple.t_data)) &&
 					TransactionIdPrecedes(HeapTupleHeaderGetXmin(tuple.t_data),
@@ -1278,9 +1257,7 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 					pgchanged = true;
 				}
 
-				/*
-				 * Other checks...
-				 */
+				/* Other checks... */
 				if (onerel->rd_rel->relhasoids &&
 					!OidIsValid(HeapTupleGetOid(&tuple)))
 					elog(WARNING, "relation \"%s\" TID %u/%u: OID is invalid",
@@ -1289,15 +1266,14 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 			case HEAPTUPLE_RECENTLY_DEAD:
 
 				/*
-				 * If tuple is recently deleted then we must not
-				 * remove it from relation.
+				 * If tuple is recently deleted then we must not remove it
+				 * from relation.
 				 */
 				nkeep += 1;
 
 				/*
-				 * If we do shrinking and this tuple is updated one
-				 * then remember it to construct updated tuple
-				 * dependencies.
+				 * If we do shrinking and this tuple is updated one then
+				 * remember it to construct updated tuple dependencies.
 				 */
 				if (do_shrinking &&
 					!(ItemPointerEquals(&(tuple.t_self),
@@ -1307,8 +1283,8 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 					{
 						free_vtlinks = 1000;
 						vtlinks = (VTupleLink) repalloc(vtlinks,
										   (free_vtlinks + num_vtlinks) *
												 sizeof(VTupleLinkData));
 					}
 					vtlinks[num_vtlinks].new_tid = tuple.t_data->t_ctid;
 					vtlinks[num_vtlinks].this_tid = tuple.t_self;
@@ -1319,10 +1295,10 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 			case HEAPTUPLE_INSERT_IN_PROGRESS:
 
 				/*
-				 * This should not happen, since we hold exclusive
-				 * lock on the relation; shouldn't we raise an error?
-				 * (Actually, it can happen in system catalogs, since
-				 * we tend to release write lock before commit there.)
+				 * This should not happen, since we hold exclusive lock on
+				 * the relation; shouldn't we raise an error? (Actually,
+				 * it can happen in system catalogs, since we tend to
+				 * release write lock before commit there.)
 				 */
 				ereport(NOTICE,
						(errmsg("relation \"%s\" TID %u/%u: InsertTransactionInProgress %u --- can't shrink relation",
@@ -1332,10 +1308,10 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 			case HEAPTUPLE_DELETE_IN_PROGRESS:
 
 				/*
-				 * This should not happen, since we hold exclusive
-				 * lock on the relation; shouldn't we raise an error?
-				 * (Actually, it can happen in system catalogs, since
-				 * we tend to release write lock before commit there.)
+				 * This should not happen, since we hold exclusive lock on
+				 * the relation; shouldn't we raise an error? (Actually,
+				 * it can happen in system catalogs, since we tend to
+				 * release write lock before commit there.)
 				 */
 				ereport(NOTICE,
						(errmsg("relation \"%s\" TID %u/%u: DeleteTransactionInProgress %u --- can't shrink relation",
@@ -1357,12 +1333,11 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 				ItemId		lpp;
 
 				/*
-				 * Here we are building a temporary copy of the page with
-				 * dead tuples removed. Below we will apply
-				 * PageRepairFragmentation to the copy, so that we can
-				 * determine how much space will be available after
-				 * removal of dead tuples. But note we are NOT changing
-				 * the real page yet...
+				 * Here we are building a temporary copy of the page with dead
+				 * tuples removed. Below we will apply PageRepairFragmentation
+				 * to the copy, so that we can determine how much space will
+				 * be available after removal of dead tuples. But note we are
+				 * NOT changing the real page yet...
 				 */
 				if (tempPage == NULL)
 				{
@@ -1412,8 +1387,8 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 		/*
 		 * Add the page to fraged_pages if it has a useful amount of free
 		 * space. "Useful" means enough for a minimal-sized tuple. But we
-		 * don't know that accurately near the start of the relation, so
-		 * add pages unconditionally if they have >= BLCKSZ/10 free space.
+		 * don't know that accurately near the start of the relation, so add
+		 * pages unconditionally if they have >= BLCKSZ/10 free space.
 		 */
 		do_frag = (vacpage->free >= min_tlen || vacpage->free >= BLCKSZ / 10);
@@ -1429,8 +1404,7 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 		/*
 		 * Include the page in empty_end_pages if it will be empty after
-		 * vacuuming; this is to keep us from using it as a move
-		 * destination.
+		 * vacuuming; this is to keep us from using it as a move destination.
 		 */
 		if (notup)
 		{
@@ -1500,11 +1474,11 @@ scan_heap(VRelStats *vacrelstats, Relation onerel,
 					RelationGetRelationName(onerel),
 					tups_vacuumed, num_tuples, nblocks),
 			 errdetail("%.0f dead row versions cannot be removed yet.\n"
					   "Nonremovable row versions range from %lu to %lu bytes long.\n"
					   "There were %.0f unused item pointers.\n"
					   "Total free space (including removable row versions) is %.0f bytes.\n"
					   "%u pages are or will become empty, including %u at the end of the table.\n"
					   "%u pages containing %.0f free bytes are potential move destinations.\n"
					   "%s",
					   nkeep,
					   (unsigned long) min_tlen, (unsigned long) max_tlen,
@@ -1577,14 +1551,14 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 	vacpage->offsets_used = vacpage->offsets_free = 0;
 
 	/*
-	 * Scan pages backwards from the last nonempty page, trying to move
-	 * tuples down to lower pages. Quit when we reach a page that we have
-	 * moved any tuples onto, or the first page if we haven't moved
-	 * anything, or when we find a page we cannot completely empty (this
-	 * last condition is handled by "break" statements within the loop).
+	 * Scan pages backwards from the last nonempty page, trying to move tuples
+	 * down to lower pages. Quit when we reach a page that we have moved any
+	 * tuples onto, or the first page if we haven't moved anything, or when we
+	 * find a page we cannot completely empty (this last condition is handled
+	 * by "break" statements within the loop).
 	 *
-	 * NB: this code depends on the vacuum_pages and fraged_pages lists being
-	 * in order by blkno.
+	 * NB: this code depends on the vacuum_pages and fraged_pages lists being in
+	 * order by blkno.
 	 */
 	nblocks = vacrelstats->rel_pages;
 	for (blkno = nblocks - vacuum_pages->empty_end_pages - 1;
@@ -1602,26 +1576,23 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 		vacuum_delay_point();
 
 		/*
-		 * Forget fraged_pages pages at or after this one; they're no
-		 * longer useful as move targets, since we only want to move down.
-		 * Note that since we stop the outer loop at last_move_dest_block,
-		 * pages removed here cannot have had anything moved onto them
-		 * already.
+		 * Forget fraged_pages pages at or after this one; they're no longer
+		 * useful as move targets, since we only want to move down. Note that
+		 * since we stop the outer loop at last_move_dest_block, pages removed
+		 * here cannot have had anything moved onto them already.
 		 *
-		 * Also note that we don't change the stored fraged_pages list, only
-		 * our local variable num_fraged_pages; so the forgotten pages are
-		 * still available to be loaded into the free space map later.
+		 * Also note that we don't change the stored fraged_pages list, only our
+		 * local variable num_fraged_pages; so the forgotten pages are still
+		 * available to be loaded into the free space map later.
 		 */
 		while (num_fraged_pages > 0 &&
			   fraged_pages->pagedesc[num_fraged_pages - 1]->blkno >= blkno)
 		{
 			Assert(fraged_pages->pagedesc[num_fraged_pages - 1]->offsets_used == 0);
 			--num_fraged_pages;
 		}
 
-		/*
-		 * Process this page of relation.
-		 */
+		/* Process this page of relation. */
 		buf = ReadBuffer(onerel, blkno);
 		page = BufferGetPage(buf);
@@ -1666,8 +1637,8 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 		else
 			Assert(!isempty);
 
-		chain_tuple_moved = false;		/* no one chain-tuple was moved
-										 * off this page, yet */
+		chain_tuple_moved = false;		/* no one chain-tuple was moved off
+										 * this page, yet */
 		vacpage->blkno = blkno;
 		maxoff = PageGetMaxOffsetNumber(page);
 		for (offnum = FirstOffsetNumber;
@@ -1687,38 +1658,36 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 			ItemPointerSet(&(tuple.t_self), blkno, offnum);
 
 			/*
-			 * VACUUM FULL has an exclusive lock on the relation. So
-			 * normally no other transaction can have pending INSERTs or
-			 * DELETEs in this relation. A tuple is either (a) a tuple in
-			 * a system catalog, inserted or deleted by a not yet
-			 * committed transaction or (b) dead (XMIN_INVALID or
-			 * XMAX_COMMITTED) or (c) inserted by a committed xact
-			 * (XMIN_COMMITTED) or (d) moved by the currently running
-			 * VACUUM. In case (a) we wouldn't be in repair_frag() at all.
-			 * In case (b) we cannot be here, because scan_heap() has
-			 * already marked the item as unused, see continue above. Case
-			 * (c) is what normally is to be expected. Case (d) is only
-			 * possible, if a whole tuple chain has been moved while
-			 * processing this or a higher numbered block.
+			 * VACUUM FULL has an exclusive lock on the relation. So normally
+			 * no other transaction can have pending INSERTs or DELETEs in
+			 * this relation. A tuple is either (a) a tuple in a system
+			 * catalog, inserted or deleted by a not yet committed transaction
+			 * or (b) dead (XMIN_INVALID or XMAX_COMMITTED) or (c) inserted by
+			 * a committed xact (XMIN_COMMITTED) or (d) moved by the currently
+			 * running VACUUM. In case (a) we wouldn't be in repair_frag() at
+			 * all. In case (b) we cannot be here, because scan_heap() has
+			 * already marked the item as unused, see continue above. Case (c)
+			 * is what normally is to be expected. Case (d) is only possible,
+			 * if a whole tuple chain has been moved while processing this or
+			 * a higher numbered block.
 			 */
 			if (!(tuple.t_data->t_infomask & HEAP_XMIN_COMMITTED))
 			{
 				/*
-				 * There cannot be another concurrently running VACUUM. If
-				 * the tuple had been moved in by a previous VACUUM, the
-				 * visibility check would have set XMIN_COMMITTED. If the
-				 * tuple had been moved in by the currently running
-				 * VACUUM, the loop would have been terminated. We had
-				 * elog(ERROR, ...) here, but as we are testing for a
-				 * can't-happen condition, Assert() seems more
-				 * appropriate.
+				 * There cannot be another concurrently running VACUUM. If the
+				 * tuple had been moved in by a previous VACUUM, the
+				 * visibility check would have set XMIN_COMMITTED. If the
+				 * tuple had been moved in by the currently running VACUUM,
+				 * the loop would have been terminated. We had elog(ERROR,
+				 * ...) here, but as we are testing for a can't-happen
+				 * condition, Assert() seems more appropriate.
 				 */
 				Assert(!(tuple.t_data->t_infomask & HEAP_MOVED_IN));
 
 				/*
-				 * If this (chain) tuple is moved by me already then I
-				 * have to check is it in vacpage or not - i.e. is it
-				 * moved while cleaning this page or some previous one.
+				 * If this (chain) tuple is moved by me already then I have to
+				 * check is it in vacpage or not - i.e. is it moved while
+				 * cleaning this page or some previous one.
 				 */
 				Assert(tuple.t_data->t_infomask & HEAP_MOVED_OFF);
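The (a)-(d) taxonomy in the comment above maps onto the tuple's t_infomask hint bits. A compact sketch of that decision follows; the bit values are quoted from the historical src/include/access/htup.h and should be checked against the tree in question, and the enum and function are purely illustrative:

    #define HEAP_XMIN_COMMITTED  0x0100
    #define HEAP_XMIN_INVALID    0x0200
    #define HEAP_XMAX_COMMITTED  0x0400
    #define HEAP_MOVED_OFF       0x4000	/* moved off by a VACUUM FULL */
    #define HEAP_MOVED_IN        0x8000	/* moved in by a VACUUM FULL */

    enum tuple_case { CASE_A_IN_PROGRESS, CASE_B_DEAD, CASE_C_COMMITTED, CASE_D_MOVED };

    /* Classify a tuple the way the repair_frag() comment does. */
    static enum tuple_case
    classify(unsigned short infomask)
    {
        if (infomask & (HEAP_MOVED_OFF | HEAP_MOVED_IN))
            return CASE_D_MOVED;			/* (d) moved by this VACUUM */
        if (infomask & (HEAP_XMIN_INVALID | HEAP_XMAX_COMMITTED))
            return CASE_B_DEAD;				/* (b) dead */
        if (infomask & HEAP_XMIN_COMMITTED)
            return CASE_C_COMMITTED;		/* (c) committed insert */
        return CASE_A_IN_PROGRESS;			/* (a) uncommitted catalog change */
    }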
@@ -1754,27 +1723,25 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 			}
 
 			/*
-			 * If this tuple is in the chain of tuples created in updates
-			 * by "recent" transactions then we have to move all chain of
-			 * tuples to another places.
+			 * If this tuple is in the chain of tuples created in updates by
+			 * "recent" transactions then we have to move all chain of tuples
+			 * to another places.
 			 *
-			 * NOTE: this test is not 100% accurate: it is possible for a
-			 * tuple to be an updated one with recent xmin, and yet not
-			 * have a corresponding tuple in the vtlinks list. Presumably
-			 * there was once a parent tuple with xmax matching the xmin,
-			 * but it's possible that that tuple has been removed --- for
-			 * example, if it had xmin = xmax then
-			 * HeapTupleSatisfiesVacuum would deem it removable as soon as
-			 * the xmin xact completes.
+			 * NOTE: this test is not 100% accurate: it is possible for a tuple
+			 * to be an updated one with recent xmin, and yet not have a
+			 * corresponding tuple in the vtlinks list. Presumably there was
+			 * once a parent tuple with xmax matching the xmin, but it's
+			 * possible that that tuple has been removed --- for example, if
+			 * it had xmin = xmax then HeapTupleSatisfiesVacuum would deem it
+			 * removable as soon as the xmin xact completes.
 			 *
-			 * To be on the safe side, we abandon the repair_frag process if
-			 * we cannot find the parent tuple in vtlinks. This may be
-			 * overly conservative; AFAICS it would be safe to move the
-			 * chain.
+			 * To be on the safe side, we abandon the repair_frag process if we
+			 * cannot find the parent tuple in vtlinks. This may be overly
+			 * conservative; AFAICS it would be safe to move the chain.
 			 */
 			if (((tuple.t_data->t_infomask & HEAP_UPDATED) &&
				 !TransactionIdPrecedes(HeapTupleHeaderGetXmin(tuple.t_data),
										OldestXmin)) ||
				(!(tuple.t_data->t_infomask & (HEAP_XMAX_INVALID |
											   HEAP_MARKED_FOR_UPDATE)) &&
				 !(ItemPointerEquals(&(tuple.t_self),
@@ -1811,11 +1778,11 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 					free_vtmove = 100;
 
 				/*
-				 * If this tuple is in the begin/middle of the chain then
-				 * we have to move to the end of chain.
+				 * If this tuple is in the begin/middle of the chain then we
+				 * have to move to the end of chain.
 				 */
 				while (!(tp.t_data->t_infomask & (HEAP_XMAX_INVALID |
											  HEAP_MARKED_FOR_UPDATE)) &&
					   !(ItemPointerEquals(&(tp.t_self),
										   &(tp.t_data->t_ctid))))
 				{
@@ -1831,17 +1798,17 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
								 ItemPointerGetBlockNumber(&Ctid));
 					Cpage = BufferGetPage(Cbuf);
 					Citemid = PageGetItemId(Cpage,
								 ItemPointerGetOffsetNumber(&Ctid));
 					if (!ItemIdIsUsed(Citemid))
 					{
 						/*
-						 * This means that in the middle of chain there
-						 * was tuple updated by older (than OldestXmin)
-						 * xaction and this tuple is already deleted by
-						 * me. Actually, upper part of chain should be
-						 * removed and seems that this should be handled
-						 * in scan_heap(), but it's not implemented at the
-						 * moment and so we just stop shrinking here.
+						 * This means that in the middle of chain there was
+						 * tuple updated by older (than OldestXmin) xaction
+						 * and this tuple is already deleted by me. Actually,
+						 * upper part of chain should be removed and seems
+						 * that this should be handled in scan_heap(), but
+						 * it's not implemented at the moment and so we just
+						 * stop shrinking here.
 						 */
 						elog(DEBUG2, "child itemid in update-chain marked as unused --- can't continue repair_frag");
 						chain_move_failed = true;
@@ -1860,9 +1827,7 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 						break;	/* out of walk-along-page loop */
 					}
 
-					/*
-					 * Check if all items in chain can be moved
-					 */
+					/* Check if all items in chain can be moved */
 					for (;;)
 					{
 						Buffer		Pbuf;
@@ -1913,8 +1878,8 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 						/* At beginning of chain? */
 						if (!(tp.t_data->t_infomask & HEAP_UPDATED) ||
							TransactionIdPrecedes(HeapTupleHeaderGetXmin(tp.t_data),
												  OldestXmin))
 							break;
 
 						/* No, move to tuple with prior row version */
@@ -1934,10 +1899,10 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 						}
 						tp.t_self = vtlp->this_tid;
 						Pbuf = ReadBuffer(onerel,
								ItemPointerGetBlockNumber(&(tp.t_self)));
 						Ppage = BufferGetPage(Pbuf);
 						Pitemid = PageGetItemId(Ppage,
							   ItemPointerGetOffsetNumber(&(tp.t_self)));
 						/* this can't happen since we saw tuple earlier: */
 						if (!ItemIdIsUsed(Pitemid))
 							elog(ERROR, "parent itemid marked as unused");
@@ -1950,18 +1915,17 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 						/*
 						 * Read above about cases when !ItemIdIsUsed(Citemid)
-						 * (child item is removed)... Due to the fact that at
-						 * the moment we don't remove unuseful part of
-						 * update-chain, it's possible to get too old parent
-						 * row here. Like as in the case which caused this
-						 * problem, we stop shrinking here. I could try to
-						 * find real parent row but want not to do it because
-						 * of real solution will be implemented anyway, later,
-						 * and we are too close to 6.5 release. - vadim
-						 * 06/11/99
+						 * (child item is removed)... Due to the fact that at the
+						 * moment we don't remove unuseful part of update-chain,
+						 * it's possible to get too old parent row here. Like as
+						 * in the case which caused this problem, we stop
+						 * shrinking here. I could try to find real parent row but
+						 * want not to do it because of real solution will be
+						 * implemented anyway, later, and we are too close to 6.5
+						 * release. - vadim 06/11/99
 						 */
 						if (!(TransactionIdEquals(HeapTupleHeaderGetXmax(Ptp.t_data),
										 HeapTupleHeaderGetXmin(tp.t_data))))
 						{
 							ReleaseBuffer(Pbuf);
 							elog(DEBUG2, "too old parent tuple found --- can't continue repair_frag");
@@ -1984,9 +1948,9 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 				if (chain_move_failed)
 				{
 					/*
-					 * Undo changes to offsets_used state. We don't
-					 * bother cleaning up the amount-free state, since
-					 * we're not going to do any further tuple motion.
+					 * Undo changes to offsets_used state. We don't bother
+					 * cleaning up the amount-free state, since we're not
+					 * going to do any further tuple motion.
 					 */
 					for (i = 0; i < num_vtmove; i++)
 					{
@@ -1997,9 +1961,7 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 					break;		/* out of walk-along-page loop */
 				}
 
-				/*
-				 * Okay, move the whole tuple chain
-				 */
+				/* Okay, move the whole tuple chain */
 				ItemPointerSetInvalid(&Ctid);
 				for (ti = 0; ti < num_vtmove; ti++)
 				{
@@ -2010,7 +1972,7 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 					/* Get page to move from */
 					tuple.t_self = vtmove[ti].tid;
 					Cbuf = ReadBuffer(onerel,
								 ItemPointerGetBlockNumber(&(tuple.t_self)));
 
 					/* Get page to move to */
 					dst_buffer = ReadBuffer(onerel, destvacpage->blkno);
@@ -2023,7 +1985,7 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 					Cpage = BufferGetPage(Cbuf);
 					Citemid = PageGetItemId(Cpage,
								ItemPointerGetOffsetNumber(&(tuple.t_self)));
 					tuple.t_datamcxt = NULL;
 					tuple.t_data = (HeapTupleHeader) PageGetItem(Cpage, Citemid);
 					tuple_len = tuple.t_len = ItemIdGetLength(Citemid);
@@ -2107,19 +2069,16 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 		}						/* walk along page */
 
 		/*
-		 * If we broke out of the walk-along-page loop early (ie, still
-		 * have offnum <= maxoff), then we failed to move some tuple off
-		 * this page. No point in shrinking any more, so clean up and
-		 * exit the per-page loop.
+		 * If we broke out of the walk-along-page loop early (ie, still have
+		 * offnum <= maxoff), then we failed to move some tuple off this page.
+		 * No point in shrinking any more, so clean up and exit the per-page
+		 * loop.
 		 */
 		if (offnum < maxoff && keep_tuples > 0)
 		{
 			OffsetNumber off;
 
-			/*
-			 * Fix vacpage state for any unvisited tuples remaining on
-			 * page
-			 */
+			/* Fix vacpage state for any unvisited tuples remaining on page */
 			for (off = OffsetNumberNext(offnum);
				 off <= maxoff;
				 off = OffsetNumberNext(off))
@@ -2134,8 +2093,8 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 					continue;
 
 				/*
-				 * * See comments in the walk-along-page loop above, why
-				 * we * have Asserts here instead of if (...) elog(ERROR).
+				 * * See comments in the walk-along-page loop above, why we *
+				 * have Asserts here instead of if (...) elog(ERROR).
 				 */
 				Assert(!(htup->t_infomask & HEAP_MOVED_IN));
 				Assert(htup->t_infomask & HEAP_MOVED_OFF);
@@ -2199,20 +2158,20 @@
 		 * We have to commit our tuple movings before we truncate the
 		 * relation. Ideally we should do Commit/StartTransactionCommand
 		 * here, relying on the session-level table lock to protect our
-		 * exclusive access to the relation. However, that would require
-		 * a lot of extra code to close and re-open the relation, indexes,
-		 * etc. For now, a quick hack: record status of current
-		 * transaction as committed, and continue.
+		 * exclusive access to the relation. However, that would require a
+		 * lot of extra code to close and re-open the relation, indexes, etc.
+		 * For now, a quick hack: record status of current transaction as
+		 * committed, and continue.
 		 */
 		RecordTransactionCommit();
 	}
 
 	/*
 	 * We are not going to move any more tuples across pages, but we still
-	 * need to apply vacuum_page to compact free space in the remaining
-	 * pages in vacuum_pages list. Note that some of these pages may also
-	 * be in the fraged_pages list, and may have had tuples moved onto
-	 * them; if so, we already did vacuum_page and needn't do it again.
+	 * need to apply vacuum_page to compact free space in the remaining pages
+	 * in vacuum_pages list. Note that some of these pages may also be in the
+	 * fraged_pages list, and may have had tuples moved onto them; if so, we
+	 * already did vacuum_page and needn't do it again.
 	 */
 	for (i = 0, curpage = vacuum_pages->pagedesc;
		 i < vacuumed_pages;
@@ -2246,21 +2205,19 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
							last_move_dest_block, num_moved);
 
 	/*
-	 * It'd be cleaner to make this report at the bottom of this routine,
-	 * but then the rusage would double-count the second pass of index
-	 * vacuuming. So do it here and ignore the relatively small amount of
-	 * processing that occurs below.
+	 * It'd be cleaner to make this report at the bottom of this routine, but
+	 * then the rusage would double-count the second pass of index vacuuming.
+	 * So do it here and ignore the relatively small amount of processing that
+	 * occurs below.
 	 */
 	ereport(elevel,
			(errmsg("\"%s\": moved %u row versions, truncated %u to %u pages",
					RelationGetRelationName(onerel),
					num_moved, nblocks, blkno),
			 errdetail("%s", vac_show_rusage(&ru0))));
 
-	/*
-	 * Reflect the motion of system tuples to catalog cache here.
-	 */
+	/* Reflect the motion of system tuples to catalog cache here. */
 	CommandCounterIncrement();
 
 	if (Nvacpagelist.num_pages > 0)
@@ -2274,7 +2231,7 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 		/* re-sort Nvacpagelist.pagedesc */
 		for (vpleft = Nvacpagelist.pagedesc,
			 vpright = Nvacpagelist.pagedesc + Nvacpagelist.num_pages - 1;
			 vpleft < vpright; vpleft++, vpright--)
 		{
 			vpsave = *vpleft;
@@ -2283,11 +2240,10 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 		}
 
 		/*
-		 * keep_tuples is the number of tuples that have been moved
-		 * off a page during chain moves but not been scanned over
-		 * subsequently. The tuple ids of these tuples are not
-		 * recorded as free offsets for any VacPage, so they will not
-		 * be cleared from the indexes.
+		 * keep_tuples is the number of tuples that have been moved off a
+		 * page during chain moves but not been scanned over subsequently.
+		 * The tuple ids of these tuples are not recorded as free offsets
+		 * for any VacPage, so they will not be cleared from the indexes.
 		 */
 		Assert(keep_tuples >= 0);
 		for (i = 0; i < nindexes; i++)
@@ -2325,8 +2281,8 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 				continue;
 
 			/*
-			 * * See comments in the walk-along-page loop above, why
-			 * we * have Asserts here instead of if (...) elog(ERROR).
+			 * * See comments in the walk-along-page loop above, why we *
+			 * have Asserts here instead of if (...) elog(ERROR).
 			 */
 			Assert(!(htup->t_infomask & HEAP_MOVED_IN));
 			Assert(htup->t_infomask & HEAP_MOVED_OFF);
@@ -2354,8 +2310,8 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 			else
 			{
 				/*
-				 * No XLOG record, but still need to flag that XID exists
-				 * on disk
+				 * No XLOG record, but still need to flag that XID exists on
+				 * disk
 				 */
 				MyXactMadeTempRelUpdate = true;
 			}
...
@@ -2374,10 +2330,10 @@ repair_frag(VRelStats *vacrelstats, Relation onerel,
 	}

 	/*
-	 * Flush dirty pages out to disk. We do this unconditionally, even if
-	 * we don't need to truncate, because we want to ensure that all
-	 * tuples have correct on-row commit status on disk (see bufmgr.c's
-	 * comments for FlushRelationBuffers()).
+	 * Flush dirty pages out to disk. We do this unconditionally, even if we
+	 * don't need to truncate, because we want to ensure that all tuples have
+	 * correct on-row commit status on disk (see bufmgr.c's comments for
+	 * FlushRelationBuffers()).
 	 */
 	FlushRelationBuffers(onerel, blkno);
...
@@ -2423,9 +2379,7 @@ move_chain_tuple(Relation rel,
 	heap_copytuple_with_tuple(old_tup, &newtup);

-	/*
-	 * register invalidation of source tuple in catcaches.
-	 */
+	/* register invalidation of source tuple in catcaches. */
 	CacheInvalidateHeapTuple(rel, old_tup);

 	/* NO EREPORT(ERROR) TILL CHANGES ARE LOGGED */
...
@@ -2440,20 +2394,20 @@ move_chain_tuple(Relation rel,
 	/*
 	 * If this page was not used before - clean it.
 	 *
-	 * NOTE: a nasty bug used to lurk here. It is possible for the source
-	 * and destination pages to be the same (since this tuple-chain member
-	 * can be on a page lower than the one we're currently processing in
-	 * the outer loop). If that's true, then after vacuum_page() the
-	 * source tuple will have been moved, and tuple.t_data will be
-	 * pointing at garbage. Therefore we must do everything that uses
-	 * old_tup->t_data BEFORE this step!!
+	 * NOTE: a nasty bug used to lurk here. It is possible for the source and
+	 * destination pages to be the same (since this tuple-chain member can be
+	 * on a page lower than the one we're currently processing in the outer
+	 * loop). If that's true, then after vacuum_page() the source tuple will
+	 * have been moved, and tuple.t_data will be pointing at garbage.
+	 * Therefore we must do everything that uses old_tup->t_data BEFORE this
+	 * step!!
 	 *
-	 * This path is different from the other callers of vacuum_page, because
-	 * we have already incremented the vacpage's offsets_used field to
-	 * account for the tuple(s) we expect to move onto the page. Therefore
-	 * vacuum_page's check for offsets_used == 0 is wrong. But since
-	 * that's a good debugging check for all other callers, we work around
-	 * it here rather than remove it.
+	 * This path is different from the other callers of vacuum_page, because we
+	 * have already incremented the vacpage's offsets_used field to account
+	 * for the tuple(s) we expect to move onto the page. Therefore
+	 * vacuum_page's check for offsets_used == 0 is wrong. But since that's a
+	 * good debugging check for all other callers, we work around it here
+	 * rather than remove it.
 	 */
 	if (!PageIsEmpty(dst_page) && cleanVpd)
 	{
...
@@ -2465,8 +2419,8 @@ move_chain_tuple(Relation rel,
 	}

 	/*
-	 * Update the state of the copied tuple, and store it on the
-	 * destination page.
+	 * Update the state of the copied tuple, and store it on the destination
+	 * page.
 	 */
 	newtup.t_data->t_infomask &= ~(HEAP_XMIN_COMMITTED |
 								   HEAP_XMIN_INVALID |
...
@@ -2502,17 +2456,15 @@ move_chain_tuple(Relation rel,
 	}
 	else
 	{
-		/*
-		 * No XLOG record, but still need to flag that XID exists on disk
-		 */
+		/* No XLOG record, but still need to flag that XID exists on disk */
 		MyXactMadeTempRelUpdate = true;
 	}

 	END_CRIT_SECTION();

 	/*
-	 * Set new tuple's t_ctid pointing to itself for last tuple in chain,
-	 * and to next tuple in chain otherwise.
+	 * Set new tuple's t_ctid pointing to itself for last tuple in chain, and
+	 * to next tuple in chain otherwise.
 	 */
 	/* Is this ok after log_heap_move() and END_CRIT_SECTION()? */
 	if (!ItemPointerIsValid(ctid))
...
@@ -2563,17 +2515,15 @@ move_plain_tuple(Relation rel,
 	 * register invalidation of source tuple in catcaches.
 	 *
 	 * (Note: we do not need to register the copied tuple, because we are not
-	 * changing the tuple contents and so there cannot be any need to
-	 * flush negative catcache entries.)
+	 * changing the tuple contents and so there cannot be any need to flush
+	 * negative catcache entries.)
 	 */
 	CacheInvalidateHeapTuple(rel, old_tup);

 	/* NO EREPORT(ERROR) TILL CHANGES ARE LOGGED */
 	START_CRIT_SECTION();

-	/*
-	 * Mark new tuple as MOVED_IN by me.
-	 */
+	/* Mark new tuple as MOVED_IN by me. */
 	newtup.t_data->t_infomask &= ~(HEAP_XMIN_COMMITTED |
 								   HEAP_XMIN_INVALID |
 								   HEAP_MOVED_OFF);
...
@@ -2597,9 +2547,7 @@ move_plain_tuple(Relation rel,
 	ItemPointerSet(&(newtup.t_data->t_ctid), dst_vacpage->blkno, newoff);
 	newtup.t_self = newtup.t_data->t_ctid;

-	/*
-	 * Mark old tuple as MOVED_OFF by me.
-	 */
+	/* Mark old tuple as MOVED_OFF by me. */
 	old_tup->t_data->t_infomask &= ~(HEAP_XMIN_COMMITTED |
 									 HEAP_XMIN_INVALID |
 									 HEAP_MOVED_IN);
...
@@ -2619,9 +2567,7 @@ move_plain_tuple(Relation rel,
 	}
 	else
 	{
-		/*
-		 * No XLOG record, but still need to flag that XID exists on disk
-		 */
+		/* No XLOG record, but still need to flag that XID exists on disk */
 		MyXactMadeTempRelUpdate = true;
 	}
...
@@ -2698,8 +2644,8 @@ update_hint_bits(Relation rel, VacPageList fraged_pages, int num_fraged_pages,
 			/*
 			 * See comments in the walk-along-page loop above, why we have
-			 * Asserts here instead of if (...) elog(ERROR). The
-			 * difference here is that we may see MOVED_IN.
+			 * Asserts here instead of if (...) elog(ERROR). The difference
+			 * here is that we may see MOVED_IN.
 			 */
 			Assert(htup->t_infomask & HEAP_MOVED);
 			Assert(HeapTupleHeaderGetXvac(htup) == GetCurrentTransactionId());
...
@@ -2753,10 +2699,10 @@ vacuum_heap(VRelStats *vacrelstats, Relation onerel, VacPageList vacuum_pages)
 	}

 	/*
-	 * Flush dirty pages out to disk. We do this unconditionally, even if
-	 * we don't need to truncate, because we want to ensure that all
-	 * tuples have correct on-row commit status on disk (see bufmgr.c's
-	 * comments for FlushRelationBuffers()).
+	 * Flush dirty pages out to disk. We do this unconditionally, even if we
+	 * don't need to truncate, because we want to ensure that all tuples have
+	 * correct on-row commit status on disk (see bufmgr.c's comments for
+	 * FlushRelationBuffers()).
 	 */
 	Assert(vacrelstats->rel_pages >= vacuum_pages->empty_end_pages);
 	relblocks = vacrelstats->rel_pages - vacuum_pages->empty_end_pages;
...
@@ -2771,8 +2717,7 @@ vacuum_heap(VRelStats *vacrelstats, Relation onerel, VacPageList vacuum_pages)
 						RelationGetRelationName(onerel),
 						vacrelstats->rel_pages, relblocks)));
 		RelationTruncate(onerel, relblocks);
-		vacrelstats->rel_pages = relblocks;		/* set new number of
-												 * blocks */
+		vacrelstats->rel_pages = relblocks;		/* set new number of blocks */
 	}
 }
...
@@ -2836,9 +2781,9 @@ scan_index(Relation indrel, double num_tuples)
 	/*
 	 * Even though we're not planning to delete anything, we use the
-	 * ambulkdelete call, because (a) the scan happens within the index AM
-	 * for more speed, and (b) it may want to pass private statistics to
-	 * the amvacuumcleanup call.
+	 * ambulkdelete call, because (a) the scan happens within the index AM for
+	 * more speed, and (b) it may want to pass private statistics to the
+	 * amvacuumcleanup call.
 	 */
 	stats = index_bulk_delete(indrel, dummy_tid_reaped, NULL);
...
@@ -2857,18 +2802,18 @@ scan_index(Relation indrel, double num_tuples)
 				   false);

 	ereport(elevel,
 			(errmsg("index \"%s\" now contains %.0f row versions in %u pages",
 					RelationGetRelationName(indrel),
 					stats->num_index_tuples,
 					stats->num_pages),
 			 errdetail("%u index pages have been deleted, %u are currently reusable.\n"
 					   "%s",
 					   stats->pages_deleted, stats->pages_free,
 					   vac_show_rusage(&ru0))));

 	/*
-	 * Check for tuple count mismatch. If the index is partial, then it's
-	 * OK for it to have fewer tuples than the heap; else we got trouble.
+	 * Check for tuple count mismatch. If the index is partial, then it's OK
+	 * for it to have fewer tuples than the heap; else we got trouble.
 	 */
 	if (stats->num_index_tuples != num_tuples)
 	{
...
@@ -2924,20 +2869,20 @@ vacuum_index(VacPageList vacpagelist, Relation indrel,
 				   false);

 	ereport(elevel,
 			(errmsg("index \"%s\" now contains %.0f row versions in %u pages",
 					RelationGetRelationName(indrel),
 					stats->num_index_tuples,
 					stats->num_pages),
 			 errdetail("%.0f index row versions were removed.\n"
 					   "%u index pages have been deleted, %u are currently reusable.\n"
 					   "%s",
 					   stats->tuples_removed,
 					   stats->pages_deleted, stats->pages_free,
 					   vac_show_rusage(&ru0))));

 	/*
-	 * Check for tuple count mismatch. If the index is partial, then it's
-	 * OK for it to have fewer tuples than the heap; else we got trouble.
+	 * Check for tuple count mismatch. If the index is partial, then it's OK
+	 * for it to have fewer tuples than the heap; else we got trouble.
 	 */
 	if (stats->num_index_tuples != num_tuples + keep_tuples)
 	{
...
@@ -2946,7 +2891,7 @@ vacuum_index(VacPageList vacpagelist, Relation indrel,
 		ereport(WARNING,
 				(errmsg("index \"%s\" contains %.0f row versions, but table contains %.0f row versions",
 						RelationGetRelationName(indrel),
 						stats->num_index_tuples, num_tuples + keep_tuples),
 				 errhint("Rebuild the index with REINDEX.")));
 	}
...
@@ -3031,14 +2976,13 @@ vac_update_fsm(Relation onerel, VacPageList fraged_pages,
 	/*
-	 * We only report pages with free space at least equal to the average
-	 * request size --- this avoids cluttering FSM with uselessly-small
-	 * bits of space. Although FSM would discard pages with little free
-	 * space anyway, it's important to do this prefiltering because (a) it
-	 * reduces the time spent holding the FSM lock in
-	 * RecordRelationFreeSpace, and (b) FSM uses the number of pages
-	 * reported as a statistic for guiding space management. If we didn't
-	 * threshold our reports the same way vacuumlazy.c does, we'd be
-	 * skewing that statistic.
+	 * We only report pages with free space at least equal to the average
+	 * request size --- this avoids cluttering FSM with uselessly-small bits
+	 * of space. Although FSM would discard pages with little free space
+	 * anyway, it's important to do this prefiltering because (a) it reduces
+	 * the time spent holding the FSM lock in RecordRelationFreeSpace, and (b)
+	 * FSM uses the number of pages reported as a statistic for guiding space
+	 * management. If we didn't threshold our reports the same way
+	 * vacuumlazy.c does, we'd be skewing that statistic.
 	 */
 	threshold = GetAvgFSMRequestSize(&onerel->rd_node);
...
@@ -3049,9 +2993,9 @@ vac_update_fsm(Relation onerel, VacPageList fraged_pages,
 	for (i = 0; i < nPages; i++)
 	{
 		/*
-		 * fraged_pages may contain entries for pages that we later
-		 * decided to truncate from the relation; don't enter them into
-		 * the free space map!
+		 * fraged_pages may contain entries for pages that we later decided to
+		 * truncate from the relation; don't enter them into the free space
+		 * map!
 		 */
 		if (pagedesc[i]->blkno >= rel_pages)
 			break;
...
@@ -3077,7 +3021,7 @@ copy_vac_page(VacPage vacpage)
 	/* allocate a VacPageData entry */
 	newvacpage = (VacPage) palloc(sizeof(VacPageData) +
 								  vacpage->offsets_free * sizeof(OffsetNumber));

 	/* fill it in */
 	if (vacpage->offsets_free > 0)
...
@@ -3247,7 +3191,7 @@ vac_open_indexes(Relation relation, LOCKMODE lockmode,
 }

 /*
  * Release the resources acquired by vac_open_indexes.  Optionally release
  * the locks (say NoLock to keep 'em).
  */
 void
...
@@ -3274,10 +3218,7 @@ vac_close_indexes(int nindexes, Relation *Irel, LOCKMODE lockmode)
 bool
 vac_is_partial_index(Relation indrel)
 {
-	/*
-	 * If the index's AM doesn't support nulls, it's partial for our
-	 * purposes
-	 */
+	/* If the index's AM doesn't support nulls, it's partial for our purposes */
 	if (!indrel->rd_am->amindexnulls)
 		return true;
...
@@ -3354,9 +3295,9 @@ vac_show_rusage(VacRUsage *ru0)
 	snprintf(result, sizeof(result),
 			 "CPU %d.%02ds/%d.%02du sec elapsed %d.%02d sec.",
 			 (int) (ru1.ru.ru_stime.tv_sec - ru0->ru.ru_stime.tv_sec),
 			 (int) (ru1.ru.ru_stime.tv_usec - ru0->ru.ru_stime.tv_usec) / 10000,
 			 (int) (ru1.ru.ru_utime.tv_sec - ru0->ru.ru_utime.tv_sec),
 			 (int) (ru1.ru.ru_utime.tv_usec - ru0->ru.ru_utime.tv_usec) / 10000,
 			 (int) (ru1.tv.tv_sec - ru0->tv.tv_sec),
 			 (int) (ru1.tv.tv_usec - ru0->tv.tv_usec) / 10000);
...
src/tools/pgindent/pgindent
View file @ cdc84adb
@@ -38,10 +38,9 @@ do
 # mark some comments for special treatment later
 	sed 's;/\* *---;/*---X_X;g' |
 # workaround for indent bug with 'else' handling
-	sed 's;\([ ]*\)else[ ]*\(/\*.*\)$;\1else\
-\1\2;g' |
-	sed 's;\([ ]*\)\(}[ ]\)else[ ]*\(/\*.*\)$;\1\2else\
-\1\3;g' |
+# indent comment so BSD indent will move it
+	sed 's;\([} ]\)else[ ]*\(/\*.*\)$;\1else\
+\2;g' |
 	detab -t4 -qc |
 # work around bug where function that defines no local variables misindents
 # switch() case lines and line after #else.  Do not do for struct/enum.
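
For reference, here is a minimal sketch of what the reworked workaround does, assuming the post-commit sed expression reconstructed above; the sample input line and expected output are illustrative, not part of the commit. The comment trailing an else is pushed onto its own line so that the subsequent BSD indent pass can re-indent it with the surrounding code.

# Hypothetical demo: feed one 'else /* comment */' line through the new sed;
# the trailing comment is split onto a new line at column 0.
printf '\t} else /* fall through */\n' |
sed 's;\([} ]\)else[ ]*\(/\*.*\)$;\1else\
\2;g'
# Expected output:
#	} else
#/* fall through */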