Project: Postgres FD Implementation (Abuhujair Javed)

Commit db6e2b4c, authored May 22, 2019 by Tom Lane
Parent: 8255c7a5

    Initial pgperltidy run for v12.

    Make all the perl code look nice, too (for some value of "nice").

Showing 34 changed files, with 480 additions and 377 deletions.
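The commit message describes a tree-wide perltidy pass. As a rough sketch of how such a run is driven (the checkout path is an assumption; src/tools/pgindent/pgperltidy is the in-tree wrapper, but the exact invocation may differ by branch):

```shell
# Hypothetical sketch: apply the project's perltidy settings to every
# .pl/.pm file, then review the formatting-only changes.
# Assumes a PostgreSQL source checkout and perltidy on PATH.
cd postgresql                  # assumed checkout location
src/tools/pgindent/pgperltidy  # wrapper applying the in-tree perltidyrc
git diff --stat                # whitespace-only churn, as in this commit
```

Because the pass changes only layout, reviewing with `git diff -w` should show an (almost) empty diff.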
src/backend/catalog/genbki.pl                    +4   -4
src/backend/utils/Gen_fmgrtab.pl                 +1   -1
src/bin/initdb/t/001_initdb.pl                   +5   -5
src/bin/pg_basebackup/t/010_pg_basebackup.pl     +1   -1
src/bin/pg_checksums/t/002_actions.pl            +94  -72
src/bin/pg_ctl/t/001_start_stop.pl               +1   -0
src/bin/pg_ctl/t/004_logrotate.pl                +13  -8
src/bin/pg_dump/t/001_basic.pl                   +6   -5
src/bin/pg_dump/t/002_pg_dump.pl                 +18  -23
src/bin/pg_rewind/t/002_databases.pl             +3   -3
src/bin/pg_rewind/t/RewindTest.pm                +9   -7
src/bin/pgbench/t/001_pgbench_with_server.pl     +84  -60
src/bin/scripts/t/090_reindexdb.pl               +1   -2
src/bin/scripts/t/100_vacuumdb.pl                +6   -6
src/include/catalog/unused_oids                  +1   -2
src/interfaces/ecpg/preproc/check_rules.pl       +4   -4
src/interfaces/ecpg/preproc/parse.pl             +4   -4
src/test/modules/commit_ts/t/004_restart.pl      +3   -2
src/test/perl/TestLib.pm                         +1   -1
src/test/recovery/t/001_stream_rep.pl            +45  -35
src/test/recovery/t/003_recovery_targets.pl      +15  -10
src/test/recovery/t/004_timeline_switch.pl       +3   -1
src/test/recovery/t/013_crash_restart.pl         +4   -2
src/test/recovery/t/015_promotion_pages.pl       +9   -11
src/test/recovery/t/016_min_consistency.pl       +14  -15
src/test/ssl/t/001_ssltests.pl                   +44  -25
src/test/ssl/t/002_scram.pl                      +1   -2
src/test/subscription/t/002_types.pl             +4   -2
src/test/subscription/t/011_generated.pl         +11  -13
src/test/subscription/t/012_collation.pl         +14  -10
src/test/subscription/t/100_bugs.pl              +7   -5
src/tools/gen_keywordlist.pl                     +18  -16
src/tools/msvc/Install.pm                        +10  -9
src/tools/msvc/Solution.pm                       +22  -11
src/backend/catalog/genbki.pl
src/backend/utils/Gen_fmgrtab.pl
src/bin/initdb/t/001_initdb.pl

...
@@ -60,14 +60,14 @@ mkdir $datadir;
}

# Control file should tell that data checksums are disabled by default.
command_like(
	['pg_controldata', $datadir],
	qr/Data page checksum version:.*0/,
	'checksums are disabled in control file');

# pg_checksums fails with checksums disabled by default.  This is
# not part of the tests included in pg_checksums to save from
# the creation of an extra instance.
command_fails(
	['pg_checksums', '-D', $datadir],
	"pg_checksums fails with data checksum disabled");

command_ok(['initdb', '-S', $datadir], 'sync only');
...
src/bin/pg_basebackup/t/010_pg_basebackup.pl
src/bin/pg_checksums/t/002_actions.pl

...
@@ -19,15 +19,16 @@ sub check_relation_corruption
	my $tablespace = shift;
	my $pgdata     = $node->data_dir;

	$node->safe_psql(
		'postgres',
		"SELECT a INTO $table FROM generate_series(1,10000) AS a;
		ALTER TABLE $table SET (autovacuum_enabled=false);");

	$node->safe_psql('postgres',
		"ALTER TABLE " . $table . " SET TABLESPACE " . $tablespace . ";");

	my $file_corrupted =
	  $node->safe_psql('postgres', "SELECT pg_relation_filepath('$table');");
	my $relfilenode_corrupted = $node->safe_psql('postgres',
		"SELECT relfilenode FROM pg_class WHERE relname = '$table';");
...
@@ -38,9 +39,14 @@ sub check_relation_corruption
	# Checksums are correct for single relfilenode as the table is not
	# corrupted yet.
	command_ok(
		[
			'pg_checksums', '--check', '-D', $pgdata, '-r',
			$relfilenode_corrupted
		],
		"succeeds for single relfilenode on tablespace $tablespace with offline cluster"
	);

	# Time to create some corruption
	open my $file, '+<', "$pgdata/$file_corrupted";
...
@@ -49,15 +55,21 @@ sub check_relation_corruption
	close $file;

	# Checksum checks on single relfilenode fail
	$node->command_checks_all(
		[
			'pg_checksums', '--check', '-D', $pgdata, '-r',
			$relfilenode_corrupted
		],
		1,
		[qr/Bad checksums:.*1/],
		[qr/checksum verification failed/],
		"fails with corrupted data for single relfilenode on tablespace $tablespace"
	);

	# Global checksum checks fail as well
	$node->command_checks_all(
		[ 'pg_checksums', '--check', '-D', $pgdata ],
		1,
		[qr/Bad checksums:.*1/],
		[qr/checksum verification failed/],
...
@@ -67,7 +79,7 @@ sub check_relation_corruption
	$node->start;
	$node->safe_psql('postgres', "DROP TABLE $table;");
	$node->stop;
	$node->command_ok([ 'pg_checksums', '--check', '-D', $pgdata ],
		"succeeds again after table drop on tablespace $tablespace");

	$node->start;
...
@@ -80,7 +92,8 @@ $node->init();
my $pgdata = $node->data_dir;

# Control file should know that checksums are disabled.
command_like(
	['pg_controldata', $pgdata],
	qr/Data page checksum version:.*0/,
	'checksums disabled in control file');
...
@@ -101,58 +114,66 @@ mkdir "$pgdata/global/pgsql_tmp";
append_to_file "$pgdata/global/pgsql_tmp/1.1", "foo";

# Enable checksums.
command_ok([ 'pg_checksums', '--enable', '--no-sync', '-D', $pgdata ],
	"checksums successfully enabled in cluster");

# Successive attempt to enable checksums fails.
command_fails([ 'pg_checksums', '--enable', '--no-sync', '-D', $pgdata ],
	"enabling checksums fails if already enabled");

# Control file should know that checksums are enabled.
command_like(
	['pg_controldata', $pgdata],
	qr/Data page checksum version:.*1/,
	'checksums enabled in control file');

# Disable checksums again.  Flush result here as that should be cheap.
command_ok(
	[ 'pg_checksums', '--disable', '-D', $pgdata ],
	"checksums successfully disabled in cluster");

# Successive attempt to disable checksums fails.
command_fails(
	[ 'pg_checksums', '--disable', '--no-sync', '-D', $pgdata ],
	"disabling checksums fails if already disabled");

# Control file should know that checksums are disabled.
command_like(
	['pg_controldata', $pgdata],
	qr/Data page checksum version:.*0/,
	'checksums disabled in control file');

# Enable checksums again for follow-up tests.
command_ok([ 'pg_checksums', '--enable', '--no-sync', '-D', $pgdata ],
	"checksums successfully enabled in cluster");

# Control file should know that checksums are enabled.
command_like(
	['pg_controldata', $pgdata],
	qr/Data page checksum version:.*1/,
	'checksums enabled in control file');

# Checksums pass on a newly-created cluster
command_ok([ 'pg_checksums', '--check', '-D', $pgdata ],
	"succeeds with offline cluster");

# Checksums are verified if no other arguments are specified
command_ok(
	[ 'pg_checksums', '-D', $pgdata ],
	"verifies checksums as default action");

# Specific relation files cannot be requested when action is --disable
# or --enable.
command_fails(
	[ 'pg_checksums', '--disable', '-r', '1234', '-D', $pgdata ],
	"fails when relfilenodes are requested and action is --disable");
command_fails(
	[ 'pg_checksums', '--enable', '-r', '1234', '-D', $pgdata ],
	"fails when relfilenodes are requested and action is --enable");

# Checks cannot happen with an online cluster
$node->start;
command_fails([ 'pg_checksums', '--check', '-D', $pgdata ],
	"fails with online cluster");

# Check corruption of table on default tablespace.
...
@@ -161,7 +182,7 @@ check_relation_corruption($node, 'corrupt1', 'pg_default');
# Create tablespace to check corruptions in a non-default tablespace.
my $basedir        = $node->basedir;
my $tablespace_dir = "$basedir/ts_corrupt_dir";
mkdir($tablespace_dir);
$tablespace_dir = TestLib::real_dir($tablespace_dir);
$node->safe_psql('postgres',
	"CREATE TABLESPACE ts_corrupt LOCATION '$tablespace_dir';");
...
@@ -179,7 +200,8 @@ sub fail_corrupt
	my $file_name = "$pgdata/global/$file";
	append_to_file $file_name, "foo";

	$node->command_checks_all(
		[ 'pg_checksums', '--check', '-D', $pgdata ],
		1,
		[qr/^$/],
		[qr/could not read block 0 in file.*$file\":/],
...
src/bin/pg_ctl/t/001_start_stop.pl

...
@@ -26,6 +26,7 @@ open my $conf, '>>', "$tempdir/data/postgresql.conf";
print $conf "fsync = off\n";
print $conf TestLib::slurp_file($ENV{TEMP_CONFIG})
  if defined $ENV{TEMP_CONFIG};

if (!$windows_os)
{
	print $conf "listen_addresses = ''\n";
...
src/bin/pg_ctl/t/004_logrotate.pl

...
@@ -25,7 +25,9 @@ my $current_logfiles = slurp_file($node->data_dir . '/current_logfiles');
note "current_logfiles = $current_logfiles";
like(
	$current_logfiles,
	qr|^stderr log/postgresql-.*log$|,
	'current_logfiles is sane');

my $lfname = $current_logfiles;
...
@@ -43,8 +45,7 @@ for (my $attempts = 0; $attempts < $max_attempts; $attempts++)
	usleep(100_000);
}

like($first_logfile, qr/division by zero/,
	'found expected log file content');

# Sleep 2 seconds and ask for log rotation; this should result in
# output into a different log file name.
...
@@ -63,7 +64,9 @@ for (my $attempts = 0; $attempts < $max_attempts; $attempts++)
note "now current_logfiles = $new_current_logfiles";
like(
	$new_current_logfiles,
	qr|^stderr log/postgresql-.*log$|,
	'new current_logfiles is sane');

$lfname = $new_current_logfiles;
...
@@ -82,7 +85,9 @@ for (my $attempts = 0; $attempts < $max_attempts; $attempts++)
	usleep(100_000);
}

like($second_logfile, qr/syntax error/,
	'found expected log file content in new log file');

$node->stop();
src/bin/pg_dump/t/001_basic.pl

...
@@ -50,10 +50,9 @@ command_fails_like(
);

command_fails_like(
	['pg_restore'],
	qr{\Qpg_restore: error: one of -d/--dbname and -f/--file must be specified\E},
	'pg_restore: error: one of -d/--dbname and -f/--file must be specified');

command_fails_like(
	[ 'pg_restore', '-s', '-a', '-f -' ],
...
@@ -125,7 +124,8 @@ command_fails_like(
command_fails_like(
	[ 'pg_dump', '--on-conflict-do-nothing' ],
	qr/pg_dump: error: option --on-conflict-do-nothing requires option --inserts, --rows-per-insert or --column-inserts/,
	'pg_dump: --on-conflict-do-nothing requires --inserts, --rows-per-insert, --column-inserts'
);

# pg_dumpall command-line argument checks
command_fails_like(
...
@@ -161,4 +161,5 @@ command_fails_like(
command_fails_like(
	[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
	qr/\Qpg_dumpall: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
	'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
src/bin/pg_dump/t/002_pg_dump.pl

...
@@ -810,7 +810,8 @@ my %tests = (
	},

	'ALTER TABLE test_second_table OWNER TO' => {
		regexp => qr/^\QALTER TABLE dump_test.test_second_table OWNER TO \E.+;/m,
		like =>
		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
		unlike => {
...
@@ -3116,13 +3117,13 @@ my %tests = (
	'CREATE ACCESS METHOD regress_test_table_am' => {
		create_order => 11,
		create_sql =>
		  'CREATE ACCESS METHOD regress_table_am TYPE TABLE HANDLER heap_tableam_handler;',
		regexp => qr/^
			\QCREATE ACCESS METHOD regress_table_am TYPE TABLE HANDLER heap_tableam_handler;\E
			\n/xm,
		like => {
			%full_runs,
			section_pre_data => 1,
		},
	},
...
@@ -3145,11 +3146,9 @@ my %tests = (
			\n\s+\Qcol1 integer\E
			\n\);/xm,
		like => {
			%full_runs,
			%dump_test_schema_runs,
			section_pre_data => 1,
		},
		unlike => { exclude_dump_test_schema => 1 },
	},

	'CREATE MATERIALIZED VIEW regress_pg_dump_matview_am' => {
...
@@ -3167,13 +3166,10 @@ my %tests = (
			\n\s+\QFROM pg_class\E
			\n\s+\QWITH NO DATA;\E\n/xm,
		like => {
			%full_runs,
			%dump_test_schema_runs,
			section_pre_data => 1,
		},
		unlike => { exclude_dump_test_schema => 1 },
	});

#########################################
# Create a PG instance to test actually dumping from
...
@@ -3330,8 +3326,7 @@ foreach my $db (sort keys %create_sql)
command_fails_like(
	[ 'pg_dump', '-p', "$port", 'qqq' ],
	qr/\Qpg_dump: error: connection to database "qqq" failed: FATAL: database "qqq" does not exist\E/,
	'connecting to a non-existent database');

#########################################
# Test connecting with an unprivileged user
...
src/bin/pg_rewind/t/002_databases.pl
src/bin/pg_rewind/t/RewindTest.pm

...
@@ -133,8 +133,10 @@ sub setup_cluster
	# Set up pg_hba.conf and pg_ident.conf for the role running
	# pg_rewind.  This role is used for all the tests, and has
	# minimal permissions enough to rewind from an online source.
	$node_master->init(
		allows_streaming => 1,
		extra => $extra,
		auth_extra => [ '--create-role', 'rewind_user' ]);

	# Set wal_keep_segments to prevent WAL segment recycling after enforced
	# checkpoints in the tests.
...
@@ -151,7 +153,8 @@ sub start_master
	# Create custom role which is used to run pg_rewind, and adjust its
	# permissions to the minimum necessary.
	$node_master->psql(
		'postgres', "
CREATE ROLE rewind_user LOGIN;
GRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean)
  TO rewind_user;
...
@@ -267,8 +270,7 @@ sub run_pg_rewind
		[
			'pg_rewind', "--debug",
			"--source-server", $standby_connstr,
			"--target-pgdata=$master_pgdata", "--no-sync"
		],
		'pg_rewind remote');
	}
...
src/bin/pgbench/t/001_pgbench_with_server.pl
View file @
db6e2b4c
...
@@ -542,14 +542,17 @@ pgbench(
...
@@ -542,14 +542,17 @@ pgbench(
pgbench
(
pgbench
(
'
-t 1
',
0
,
'
-t 1
',
0
,
[
qr{type: .*/001_pgbench_gset}
,
qr{processed: 1/1}
],
[
qr{type: .*/001_pgbench_gset}
,
qr{processed: 1/1}
],
[
qr{command=3.: int 0\b}
,
[
qr{command=3.: int 0\b}
,
qr{command=5.: int 1\b}
,
qr{command=5.: int 1\b}
,
qr{command=6.: int 2\b}
,
qr{command=6.: int 2\b}
,
qr{command=8.: int 3\b}
,
qr{command=8.: int 3\b}
,
qr{command=10.: int 4\b}
,
qr{command=10.: int 4\b}
,
qr{command=12.: int 5\b}
],
qr{command=12.: int 5\b}
],
'
pgbench gset command
',
'
pgbench gset command
',
{
'
001_pgbench_gset
'
=>
q{-- test gset
{
'
001_pgbench_gset
'
=>
q{-- test gset
-- no columns
-- no columns
SELECT \gset
SELECT \gset
-- one value
-- one value
...
@@ -568,7 +571,8 @@ SELECT 0 AS i4, 4 AS i4 \gset
...
@@ -568,7 +571,8 @@ SELECT 0 AS i4, 4 AS i4 \gset
-- work on the last SQL command under \;
-- work on the last SQL command under \;
\; \; SELECT 0 AS i5 \; SELECT 5 AS i5 \; \; \gset
\; \; SELECT 0 AS i5 \; SELECT 5 AS i5 \; \; \gset
\set i debug(:i5)
\set i debug(:i5)
}
});
}
});
# trigger many expression errors
# trigger many expression errors
my
@errors
=
(
my
@errors
=
(
...
@@ -587,10 +591,11 @@ my @errors = (
...
@@ -587,10 +591,11 @@ my @errors = (
}
}
],
],
[
[
'
sql too many args
',
1
,
[
qr{statement has too many arguments.*\b255\b}
],
'
sql too many args
',
1
,
[
qr{statement has too many arguments.*\b255\b}
],
q{-- MAX_ARGS=256 for prepared
q{-- MAX_ARGS=256 for prepared
\set i 0
\set i 0
SELECT LEAST(}
.
join
('
,
',
('
:i
')
x
256
)
.
q{)}
SELECT LEAST(}
.
join
('
,
',
('
:i
')
x
256
)
.
q{)}
],
],
# SHELL
# SHELL
...
@@ -609,7 +614,7 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
...
@@ -609,7 +614,7 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
[
[
'
shell too many args
',
1
,
[
qr{too many arguments in command "shell"}
],
'
shell too many args
',
1
,
[
qr{too many arguments in command "shell"}
],
q{-- 256 arguments to \shell
q{-- 256 arguments to \shell
\shell echo }
.
join
('
',
('
arg
')
x
255
)
\shell echo }
.
join
('
',
('
arg
')
x
255
)
],
],
# SET
# SET
...
@@ -625,11 +630,9 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
...
@@ -625,11 +630,9 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
'
set invalid variable name
',
2
,
'
set invalid variable name
',
2
,
[
qr{invalid variable name}
],
q{\set . 1}
[
qr{invalid variable name}
],
q{\set . 1}
],
],
[
'
set division by zero
',
2
,
[
qr{division by zero}
],
q{\set i 1/0}
],
[
[
'
set division by zero
',
2
,
'
set undefined variable
',
[
qr{division by zero}
],
q{\set i 1/0}
],
[
'
set undefined variable
',
2
,
2
,
[
qr{undefined variable "nosuchvariable"}
],
[
qr{undefined variable "nosuchvariable"}
],
q{\set i :nosuchvariable}
q{\set i :nosuchvariable}
...
@@ -646,10 +649,8 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
...
@@ -646,10 +649,8 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
[
qr{empty range given to random}
],
q{\set i random(5,3)}
[
qr{empty range given to random}
],
q{\set i random(5,3)}
],
],
[
[
'
set random range too large
',
'
set random range too large
',
2
,
2
,
[
qr{random range is too large}
],
q{\set i random(:minint, :maxint)}
[
qr{random range is too large}
],
q{\set i random(:minint, :maxint)}
],
],
[
[
'
set gaussian param too small
',
'
set gaussian param too small
',
...
@@ -713,16 +714,26 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
...
@@ -713,16 +714,26 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
],
],
# SET: ARITHMETIC OVERFLOW DETECTION
# SET: ARITHMETIC OVERFLOW DETECTION
[
'
set double to int overflow
',
2
,
[
[
qr{double to int overflow for 100}
],
q{\set i int(1E32)}
],
'
set double to int overflow
',
2
,
[
'
set bigint add overflow
',
2
,
[
qr{double to int overflow for 100}
],
q{\set i int(1E32)}
[
qr{int add out}
],
q{\set i (1<<62) + (1<<62)}
],
],
[
'
set bigint sub overflow
',
2
,
[
[
qr{int sub out}
],
q{\set i 0 - (1<<62) - (1<<62) - (1<<62)}
],
'
set bigint add overflow
',
2
,
[
'
set bigint mul overflow
',
2
,
[
qr{int add out}
],
q{\set i (1<<62) + (1<<62)}
[
qr{int mul out}
],
q{\set i 2 * (1<<62)}
],
],
[
'
set bigint div out of range
',
2
,
[
[
qr{bigint div out of range}
],
q{\set i :minint / -1}
],
'
set bigint sub overflow
',
2
,
[
qr{int sub out}
],
q{\set i 0 - (1<<62) - (1<<62) - (1<<62)}
],
[
'
set bigint mul overflow
',
2
,
[
qr{int mul out}
],
q{\set i 2 * (1<<62)}
],
[
'
set bigint div out of range
',
2
,
[
qr{bigint div out of range}
],
q{\set i :minint / -1}
],
# SETSHELL
# SETSHELL
[
[
...
@@ -759,31 +770,47 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
...
@@ -759,31 +770,47 @@ SELECT LEAST(}.join(', ', (':i') x 256).q{)}
[
qr{invalid command .* "nosuchcommand"}
],
q{\nosuchcommand}
[
qr{invalid command .* "nosuchcommand"}
],
q{\nosuchcommand}
],
],
[
'
misc empty script
',
1
,
[
qr{empty command list for script}
],
q{}
],
[
'
misc empty script
',
1
,
[
qr{empty command list for script}
],
q{}
],
[
'
bad boolean
',
2
,
[
[
qr{malformed variable.*trueXXX}
],
q{\set b :badtrue or true}
],
'
bad boolean
',
2
,
[
qr{malformed variable.*trueXXX}
],
q{\set b :badtrue or true}
],
# GSET
# GSET
[
'
gset no row
',
2
,
[
[
qr{expected one row, got 0\b}
],
q{SELECT WHERE FALSE \gset}
],
'
gset no row
',
2
,
[
qr{expected one row, got 0\b}
],
q{SELECT WHERE FALSE \gset}
],
[
'
gset alone
',
1
,
[
qr{gset must follow a SQL command}
],
q{\gset}
],
[
'
gset alone
',
1
,
[
qr{gset must follow a SQL command}
],
q{\gset}
],
	[
		'gset no SQL', 1,
		[qr{gset must follow a SQL command}],
		q{\set i +1
\gset}
	],
	[
		'gset too many arguments', 1,
		[qr{too many arguments}], q{SELECT 1 \gset a b}
	],
	[
		'gset after gset', 1,
		[qr{gset must follow a SQL command}],
		q{SELECT 1 AS i \gset
\gset}
	],
	[
		'gset non SELECT', 2,
		[qr{expected one row, got 0}],
		q{DROP TABLE IF EXISTS no_such_table \gset}
	],
	[
		'gset bad default name', 2,
		[qr{error storing into variable \?column\?}], q{SELECT 1 \gset}
	],
	[
		'gset bad name', 2,
		[qr{error storing into variable bad name!}],
		q{SELECT 1 AS "bad name!" \gset}
	],
);
for my $e (@errors)
{
...
@@ -792,9 +819,9 @@ for my $e (@errors)
	my $n = '001_pgbench_error_' . $name;
	$n =~ s/ /_/g;
	pgbench(
		'-n -t 1 -Dfoo=bla -Dnull=null -Dtrue=true -Done=1 -Dzero=0.0 -Dbadtrue=trueXXX'
		  . ' -Dmaxint=9223372036854775807 -Dminint=-9223372036854775808'
		  . ($no_prepare ? '' : ' -M prepared'),
		$status,
		[ $status == 1 ? qr{^$} : qr{processed: 0/1} ],
		$re,
...
@@ -869,12 +896,9 @@ my $bdir = $node->basedir;
# with sampling rate
pgbench(
	"-n -S -t 50 -c 2 --log --sampling-rate=0.5", 0,
	[ qr{select only}, qr{processed: 100/100} ],
	[qr{^$}], 'pgbench logs', undef,
	"--log-prefix=$bdir/001_pgbench_log_2");
check_pgbench_logs($bdir, '001_pgbench_log_2', 1, 8, 92,
...
@@ -882,8 +906,8 @@ check_pgbench_logs($bdir, '001_pgbench_log_2', 1, 8, 92,
# check log file in some detail
pgbench(
	"-n -b se -t 10 -l", 0,
	[ qr{select only}, qr{processed: 10/10} ],
	[qr{^$}], 'pgbench logs contents', undef,
	"--log-prefix=$bdir/001_pgbench_log_3");
...
src/bin/scripts/t/090_reindexdb.pl
...
@@ -61,8 +61,7 @@ $node->issues_sql_like(
	[ 'reindexdb', '--concurrently', '-S', 'public', 'postgres' ],
	qr/statement: REINDEX SCHEMA CONCURRENTLY public;/,
	'reindex specific schema concurrently');
$node->command_fails(
	[ 'reindexdb', '--concurrently', '-s', 'postgres' ],
	'reindex system tables concurrently');
$node->issues_sql_like(
	[ 'reindexdb', '-v', '-t', 'test1', 'postgres' ],
...
src/bin/scripts/t/100_vacuumdb.pl
...
@@ -96,16 +96,16 @@ $node->command_checks_all(
	[qr/^WARNING.*cannot vacuum non-tables or special system tables/s],
	'vacuumdb with view');
$node->command_fails(
	[ 'vacuumdb', '--table', 'vactable', '--min-mxid-age', '0', 'postgres' ],
	'vacuumdb --min-mxid-age with incorrect value');
$node->command_fails(
	[ 'vacuumdb', '--table', 'vactable', '--min-xid-age', '0', 'postgres' ],
	'vacuumdb --min-xid-age with incorrect value');
$node->issues_sql_like(
	[
		'vacuumdb', '--table', 'vactable', '--min-mxid-age',
		'2147483000', 'postgres'
	],
	qr/GREATEST.*relminmxid.*2147483000/,
	'vacuumdb --table --min-mxid-age');
$node->issues_sql_like(
...
src/include/catalog/unused_oids
...
@@ -34,8 +34,7 @@ my $oids = Catalog::FindAllOidsFromHeaders(@input_files);
# Also push FirstGenbkiObjectId to serve as a terminator for the last gap.
my $FirstGenbkiObjectId =
  Catalog::FindDefinedSymbol('access/transam.h', '..', 'FirstGenbkiObjectId');
push @{$oids}, $FirstGenbkiObjectId;
my $prev_oid = 0;
...
src/interfaces/ecpg/preproc/check_rules.pl
...
@@ -39,11 +39,11 @@ my %replace_line = (
	'ExecuteStmtEXECUTEnameexecute_param_clause' =>
	  'EXECUTE prepared_name execute_param_clause execute_rest',
	'ExecuteStmtCREATEOptTempTABLEcreate_as_targetASEXECUTEnameexecute_param_clauseopt_with_data'
	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause opt_with_data execute_rest',
	'ExecuteStmtCREATEOptTempTABLEIF_PNOTEXISTScreate_as_targetASEXECUTEnameexecute_param_clauseopt_with_data'
	  => 'CREATE OptTemp TABLE IF_P NOT EXISTS create_as_target AS EXECUTE prepared_name execute_param_clause opt_with_data execute_rest',
	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
	  'PREPARE prepared_name prep_type_clause AS PreparableStmt');
...
src/interfaces/ecpg/preproc/parse.pl
...
@@ -103,10 +103,10 @@ my %replace_line = (
	  'RETURNING target_list opt_ecpg_into',
	'ExecuteStmtEXECUTEnameexecute_param_clause' =>
	  'EXECUTE prepared_name execute_param_clause execute_rest',
	'ExecuteStmtCREATEOptTempTABLEcreate_as_targetASEXECUTEnameexecute_param_clauseopt_with_data'
	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause opt_with_data execute_rest',
	'ExecuteStmtCREATEOptTempTABLEIF_PNOTEXISTScreate_as_targetASEXECUTEnameexecute_param_clauseopt_with_data'
	  => 'CREATE OptTemp TABLE IF_P NOT EXISTS create_as_target AS EXECUTE prepared_name execute_param_clause opt_with_data execute_rest',
	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
	  'PREPARE prepared_name prep_type_clause AS PreparableStmt',
	'var_nameColId' => 'ECPGColId');
...
src/test/modules/commit_ts/t/004_restart.pl
...
@@ -85,8 +85,9 @@ $node_master->restart;
# Move commit timestamps across page boundaries. Things should still
# be able to work across restarts with those transactions committed while
# track_commit_timestamp is disabled.
$node_master->safe_psql(
	'postgres',
	qq(CREATE PROCEDURE consume_xid(cnt int)
AS \$\$
DECLARE
    i int;
...
src/test/perl/TestLib.pm
src/test/recovery/t/001_stream_rep.pl
...
@@ -9,8 +9,9 @@ use Test::More tests => 32;
my $node_master = get_new_node('master');
# A specific role is created to perform some tests related to replication,
# and it needs proper authentication configuration.
$node_master->init(
	allows_streaming => 1,
	auth_extra       => [ '--create-role', 'repl_role' ]);
$node_master->start;
my $backup_name = 'my_backup';
...
@@ -124,7 +125,8 @@ test_target_session_attrs($node_standby_1, $node_master, $node_standby_1,
# role.
note "testing SHOW commands for replication connection";
$node_master->psql(
	'postgres', "
CREATE ROLE repl_role REPLICATION LOGIN;
GRANT pg_read_all_settings TO repl_role;");
my $master_host = $node_master->host;
...
@@ -134,40 +136,48 @@ my $connstr_rep = "$connstr_common replication=1";
my $connstr_db = "$connstr_common replication=database dbname=postgres";
# Test SHOW ALL
my ($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW ALL;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_rep ]);
ok($ret == 0, "SHOW ALL with replication role and physical replication");
($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW ALL;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_db ]);
ok($ret == 0, "SHOW ALL with replication role and logical replication");
# Test SHOW with a user-settable parameter
($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW work_mem;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_rep ]);
ok($ret == 0,
	"SHOW with user-settable parameter, replication role and physical replication"
);
($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW work_mem;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_db ]);
ok($ret == 0,
	"SHOW with user-settable parameter, replication role and logical replication"
);
# Test SHOW with a superuser-settable parameter
($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW primary_conninfo;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_rep ]);
ok($ret == 0,
	"SHOW with superuser-settable parameter, replication role and physical replication"
);
($ret, $stdout, $stderr) =
  $node_master->psql(
	'postgres', 'SHOW primary_conninfo;',
	on_error_die => 1,
	extra_params => [ '-d', $connstr_db ]);
ok($ret == 0,
	"SHOW with superuser-settable parameter, replication role and logical replication"
);
note "switching to physical replication slot";
...
src/test/recovery/t/003_recovery_targets.pl
...
@@ -129,14 +129,19 @@ test_recovery_standby('multiple overriding settings',
	'standby_6', $node_master, \@recovery_params, "3000", $lsn3);
my $node_standby = get_new_node('standby_7');
$node_standby->init_from_backup($node_master, 'my_backup',
	has_restoring => 1);
$node_standby->append_conf(
	'postgresql.conf', "recovery_target_name = '$recovery_name'
recovery_target_time = '$recovery_time'");
my $res = run_log(
	[
		'pg_ctl', '-D', $node_standby->data_dir,
		'-l', $node_standby->logfile, 'start'
	]);
ok(!$res, 'invalid recovery startup fails');
my $logfile = slurp_file($node_standby->logfile());
ok($logfile =~ qr/multiple recovery targets specified/,
	'multiple conflicting settings');
src/test/recovery/t/004_timeline_switch.pl
...
@@ -42,7 +42,9 @@ $node_master->teardown_node;
# promote standby 1 using "pg_promote", switching it to a new timeline
my $psql_out = '';
$node_standby_1->psql(
	'postgres',
	"SELECT pg_promote(wait_seconds => 300)",
	stdout => \$psql_out);
is($psql_out, 't', "promotion of standby with pg_promote");
...
src/test/recovery/t/013_crash_restart.pl
...
@@ -196,8 +196,10 @@ $killme_stdin .= q[
SELECT 1;
];
ok( pump_until(
		$killme,
		\$killme_stderr,
		qr/server closed the connection unexpectedly|connection to server was lost/m
	),
	"psql query died successfully after SIGKILL");
$killme->finish;
...
src/test/recovery/t/015_promotion_pages.pl
...
@@ -32,7 +32,8 @@ $bravo->start;
# Dummy table for the upcoming tests.
$alpha->safe_psql('postgres', 'create table test1 (a int)');
$alpha->safe_psql('postgres',
	'insert into test1 select generate_series(1, 10000)');
# take a checkpoint
$alpha->safe_psql('postgres', 'checkpoint');
...
@@ -41,8 +42,7 @@ $alpha->safe_psql('postgres', 'checkpoint');
# problematic WAL records.
$alpha->safe_psql('postgres', 'vacuum verbose test1');
# Wait for last record to have been replayed on the standby.
$alpha->wait_for_catchup($bravo, 'replay', $alpha->lsn('insert'));
# Now force a checkpoint on the standby. This seems unnecessary but for "some"
# reason, the previous checkpoint on the primary does not reflect on the standby
...
@@ -53,12 +53,12 @@ $bravo->safe_psql('postgres', 'checkpoint');
# Now just use a dummy table and run some operations to move minRecoveryPoint
# beyond the previous vacuum.
$alpha->safe_psql('postgres', 'create table test2 (a int, b text)');
$alpha->safe_psql('postgres',
	'insert into test2 select generate_series(1,10000), md5(random()::text)');
$alpha->safe_psql('postgres', 'truncate test2');
# Wait again for all records to be replayed.
$alpha->wait_for_catchup($bravo, 'replay', $alpha->lsn('insert'));
# Do the promotion, which reinitializes minRecoveryPoint in the control
# file so as WAL is replayed up to the end.
...
@@ -69,7 +69,8 @@ $bravo->promote;
# has not happened yet.
$bravo->safe_psql('postgres', 'truncate test1');
$bravo->safe_psql('postgres', 'vacuum verbose test1');
$bravo->safe_psql('postgres',
	'insert into test1 select generate_series(1,1000)');
# Now crash-stop the promoted standby and restart. This makes sure that
# replay does not see invalid page references because of an invalid
...
@@ -80,8 +81,5 @@ $bravo->start;
# Check state of the table after full crash recovery. All its data should
# be here.
my $psql_out;
$bravo->psql(
	'postgres',
	"SELECT count(*) FROM test1",
	stdout => \$psql_out);
is($psql_out, '1000', "Check that table state is correct");
src/test/recovery/t/016_min_consistency.pl
...
@@ -18,19 +18,19 @@ sub find_largest_lsn
{
	my $blocksize = int(shift);
	my $filename  = shift;
	my ($max_hi, $max_lo) = (0, 0);
	open(my $fh, "<:raw", $filename)
	  or die "failed to open $filename: $!";
	my ($buf, $len);
	while ($len = read($fh, $buf, $blocksize))
	{
		$len == $blocksize
		  or die "read only $len of $blocksize bytes from $filename";
		my ($hi, $lo) = unpack("LL", $buf);
		if ($hi > $max_hi or ($hi == $max_hi and $lo > $max_lo))
		{
			($max_hi, $max_lo) = ($hi, $lo);
		}
	}
	defined($len) or die "read error on $filename: $!";
...
@@ -63,7 +63,8 @@ $standby->init_from_backup($primary, 'bkp', has_streaming => 1);
$standby->start;
# Create base table whose data consistency is checked.
$primary->safe_psql(
	'postgres', "
CREATE TABLE test1 (a int) WITH (fillfactor = 10);
INSERT INTO test1 SELECT generate_series(1, 10000);");
...
@@ -74,8 +75,7 @@ $primary->safe_psql('postgres', 'CHECKPOINT;');
$primary->safe_psql('postgres', 'UPDATE test1 SET a = a + 1;');
# Wait for last record to have been replayed on the standby.
$primary->wait_for_catchup($standby, 'replay', $primary->lsn('insert'));
# Fill in the standby's shared buffers with the data filled in
# previously.
...
@@ -96,8 +96,7 @@ my $relfilenode = $primary->safe_psql('postgres',
	"SELECT pg_relation_filepath('test1'::regclass);");
# Wait for last record to have been replayed on the standby.
$primary->wait_for_catchup($standby, 'replay', $primary->lsn('insert'));
# Issue a restart point on the standby now, which makes the checkpointer
# update minRecoveryPoint.
...
@@ -115,11 +114,11 @@ $standby->stop('fast');
# done by directly scanning the on-disk relation blocks and what
# pg_controldata lets know.
my $standby_data = $standby->data_dir;
my $offline_max_lsn =
  find_largest_lsn($blocksize, "$standby_data/$relfilenode");
# Fetch minRecoveryPoint from the control file itself
my ($stdout, $stderr) = run_command([ 'pg_controldata', $standby_data ]);
my @control_data = split("\n", $stdout);
my $offline_recovery_lsn = undef;
foreach (@control_data)
...
src/test/ssl/t/001_ssltests.pl
...
@@ -315,10 +315,14 @@ test_connect_fails(
	"does not connect with client-side CRL");
# pg_stat_ssl
command_like(
	[
		'psql', '-X', '-A', '-F', ',', '-P', 'null=_null_',
		'-d', "$common_connstr sslrootcert=invalid",
		'-c', "SELECT * FROM pg_stat_ssl WHERE pid = pg_backend_pid()"
	],
	qr{^pid,ssl,version,cipher,bits,compression,client_dn,client_serial,issuer_dn\n
^\d+,t,TLSv[\d.]+,[\w-]+,\d+,f,_null_,_null_,_null_$}mx,
...
@@ -347,10 +351,19 @@ test_connect_ok(
	"certificate authorization succeeds with correct client cert");
# pg_stat_ssl
command_like(
	[
		'psql', '-X', '-A', '-F', ',', '-P', 'null=_null_',
		'-d',
		"$common_connstr user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key",
		'-c', "SELECT * FROM pg_stat_ssl WHERE pid = pg_backend_pid()"
	],
	qr{^pid,ssl,version,cipher,bits,compression,client_dn,client_serial,issuer_dn\n
^\d+,t,TLSv[\d.]+,[\w-]+,\d+,f,/CN=ssltestuser,1,\Q/CN=Test CA for PostgreSQL SSL regression test client certs\E$}mx,
...
@@ -382,22 +395,28 @@ test_connect_fails(
# works, iff username matches Common Name
# fails, iff username doesn't match Common Name.
$common_connstr =
  "sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=verifydb hostaddr=$SERVERHOSTADDR";
test_connect_ok(
	$common_connstr,
	"user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key",
	"auth_option clientcert=verify-full succeeds with matching username and Common Name"
);
test_connect_fails(
	$common_connstr,
	"user=anotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key",
	qr/FATAL/,
	"auth_option clientcert=verify-full fails with mismatching username and Common Name"
);
# Check that connecting with auth-option verify-ca in pg_hba :
# works, when username doesn't match Common Name
test_connect_ok(
	$common_connstr,
	"user=yetanotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key",
	"auth_option clientcert=verify-ca succeeds with mismatching username and Common Name"
);
# intermediate client_ca.crt is provided by client, and isn't in server's ssl_ca_file
switch_server_cert($node, 'server-cn-only', 'root_ca');
...
src/test/ssl/t/002_scram.pl
...
@@ -47,7 +47,6 @@ $common_connstr =
  "user=ssltestuser dbname=trustdb sslmode=require sslcert=invalid sslrootcert=invalid hostaddr=$SERVERHOSTADDR";
# Default settings
test_connect_ok($common_connstr, '',
	"Basic SCRAM authentication with SSL");
done_testing($number_of_tests);
src/test/subscription/t/002_types.pl
...
@@ -551,12 +551,14 @@ e|{e,d}
# Test a domain with a constraint backed by a SQL-language function,
# which needs an active snapshot in order to operate.
$node_publisher->safe_psql('postgres',
	"INSERT INTO tst_dom_constr VALUES (11)");
$node_publisher->wait_for_catchup('tap_sub');
$result = $node_subscriber->safe_psql('postgres',
	"SELECT sum(a) FROM tst_dom_constr");
is($result, '21', 'sql-function constraint on domain');
$node_subscriber->stop('fast');
...
...
src/test/subscription/t/011_generated.pl
...
@@ -18,10 +18,12 @@ $node_subscriber->start;
my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';

$node_publisher->safe_psql('postgres',
	"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED)"
);
$node_subscriber->safe_psql('postgres',
	"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 22) STORED)"
);

# data for initial sync
...
@@ -40,25 +42,21 @@ my $synced_query =
$node_subscriber->poll_query_until('postgres', $synced_query)
  or die "Timed out while waiting for subscriber to synchronize data";

my $result = $node_subscriber->safe_psql('postgres', "SELECT a, b FROM tab1");
is( $result, qq(1|22
2|44
3|66), 'generated columns initial sync');

# data to replicate
$node_publisher->safe_psql('postgres', "INSERT INTO tab1 VALUES (4), (5)");
$node_publisher->safe_psql('postgres', "UPDATE tab1 SET a = 6 WHERE a = 5");

$node_publisher->wait_for_catchup('sub1');

$result = $node_subscriber->safe_psql('postgres', "SELECT a, b FROM tab1");
is( $result, qq(1|22
2|44
3|66
4|88
...
src/test/subscription/t/012_collation.pl
...
@@ -16,11 +16,15 @@ else
}

my $node_publisher = get_new_node('publisher');
$node_publisher->init(
	allows_streaming => 'logical',
	extra => [ '--locale=C', '--encoding=UTF8' ]);
$node_publisher->start;

my $node_subscriber = get_new_node('subscriber');
$node_subscriber->init(
	allows_streaming => 'logical',
	extra => [ '--locale=C', '--encoding=UTF8' ]);
$node_subscriber->start;

my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
...
@@ -36,7 +40,8 @@ my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
# full, since those have different code paths internally.
$node_subscriber->safe_psql('postgres',
	q{CREATE COLLATION ctest_nondet (provider = icu, locale = 'und', deterministic = false)}
);

# table with replica identity index
...
@@ -54,8 +59,7 @@ $node_subscriber->safe_psql('postgres',
# table with replica identity full
$node_publisher->safe_psql('postgres', q{CREATE TABLE tab2 (a text, b text)});
$node_publisher->safe_psql('postgres', q{ALTER TABLE tab2 REPLICA IDENTITY FULL});
...
@@ -76,7 +80,8 @@ $node_publisher->safe_psql('postgres',
	q{CREATE PUBLICATION pub1 FOR ALL TABLES});
$node_subscriber->safe_psql('postgres',
	qq{CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1 WITH (copy_data = false)}
);

$node_publisher->wait_for_catchup('sub1');
...
@@ -88,8 +93,7 @@ $node_publisher->safe_psql('postgres',
$node_publisher->wait_for_catchup('sub1');

is($node_subscriber->safe_psql('postgres', q{SELECT b FROM tab1}),
	qq(bar),
	'update with primary key with nondeterministic collation');

# test with replica identity full
...
src/test/subscription/t/100_bugs.pl
...
@@ -30,7 +30,8 @@ $node_publisher->safe_psql('postgres',
	"CREATE TABLE tab1 (a int PRIMARY KEY, b int)");
$node_publisher->safe_psql('postgres',
	"CREATE FUNCTION double(x int) RETURNS int IMMUTABLE LANGUAGE SQL AS 'select x * 2'"
);

# an index with a predicate that lends itself to constant expressions
# evaluation
...
@@ -42,7 +43,8 @@ $node_subscriber->safe_psql('postgres',
	"CREATE TABLE tab1 (a int PRIMARY KEY, b int)");
$node_subscriber->safe_psql('postgres',
	"CREATE FUNCTION double(x int) RETURNS int IMMUTABLE LANGUAGE SQL AS 'select x * 2'"
);
$node_subscriber->safe_psql('postgres',
	"CREATE INDEX ON tab1 (b) WHERE a > double(1)");
...
@@ -51,14 +53,14 @@ $node_publisher->safe_psql('postgres',
	"CREATE PUBLICATION pub1 FOR ALL TABLES");
$node_subscriber->safe_psql('postgres',
	"CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1"
);

$node_publisher->wait_for_catchup('sub1');

# This would crash, first on the publisher, and then (if the publisher
# is fixed) on the subscriber.
$node_publisher->safe_psql('postgres', "INSERT INTO tab1 VALUES (1, 2)");

$node_publisher->wait_for_catchup('sub1');
...
src/tools/gen_keywordlist.pl
...
@@ -56,7 +56,8 @@ if ($output_path ne '' && substr($output_path, -1) ne '/')
	$output_path .= '/';
}

$kw_input_file =~ /(\w+)\.h$/
  || die "Input file must be named something.h.\n";
my $base_filename = $1 . '_d';
my $kw_def_file = $output_path . $base_filename . '.h';
...
@@ -116,10 +117,11 @@ if ($case_fold)
# helpful because it provides a cheap way to reject duplicate keywords.
# Also, insisting on sorted order ensures that code that scans the keyword
# table linearly will see the keywords in a canonical order.
for my $i (0 .. $#keywords - 1)
{
	die qq|The keyword "$keywords[$i + 1]" is out of order in $kw_input_file\n|
	  if ($keywords[$i] cmp $keywords[$i + 1]) >= 0;
}

# Emit the string containing all the keywords.
...
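The gen_keywordlist.pl check above works because Perl's `cmp` operator returns a value >= 0 whenever the left string sorts at or after the right one, so a single scan over adjacent pairs rejects both unsorted input and duplicate keywords at once. A minimal standalone sketch of the same idea, using a hypothetical keyword list instead of one parsed from a kwlist `.h` file:

```perl
use strict;
use warnings;

# Hypothetical keyword list; the real script parses these out of a
# kwlist header file before running this check.
my @keywords = qw(abort absolute access action);

# Adjacent-pair scan: any pair that is equal or out of order dies,
# rejecting duplicates and unsorted input in one pass.
for my $i (0 .. $#keywords - 1)
{
	die qq|The keyword "$keywords[$i + 1]" is out of order\n|
	  if ($keywords[$i] cmp $keywords[$i + 1]) >= 0;
}
print "keywords are sorted and unique\n";
```

Because the list is checked rather than sorted, the canonical order lives in the source header itself, and any hand-edit that breaks it fails the build immediately.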
src/tools/msvc/Install.pm
...
@@ -520,7 +520,10 @@ sub CopySubdirFiles
	if ($mf =~ /^HEADERS\s*=\s*(.*)$/m) { $flist .= $1 }
	my @modlist = ();
	my %fmodlist = ();
	while ($mf =~ /^HEADERS_([^\s=]+)\s*=\s*(.*)$/mg)
	{
		$fmodlist{$1} .= $2;
	}

	if ($mf =~ /^MODULE_big\s*=\s*(.*)$/m)
	{
...
@@ -544,8 +547,7 @@ sub CopySubdirFiles
		croak "HEADERS_$mod for unknown module in $subdir $module"
		  unless grep { $_ eq $mod } @modlist;
		$flist = ParseAndCleanRule($fmodlist{$mod}, $mf);
		EnsureDirectories($target, "include", "include/server",
			"include/server/$moduledir",
			"include/server/$moduledir/$mod");
		foreach my $f (split /\s+/, $flist)
...
@@ -615,8 +617,7 @@ sub CopyIncludeFiles
		'Public headers', $target . '/include/',
		'src/include/', 'postgres_ext.h',
		'pg_config.h', 'pg_config_ext.h',
		'pg_config_os.h', 'pg_config_manual.h');
	lcopy('src/include/libpq/libpq-fs.h', $target . '/include/libpq/')
	  || croak 'Could not copy libpq-fs.h';
...
src/tools/msvc/Solution.pm
...
@@ -409,12 +409,12 @@ sub GenerateFiles
		chdir('../../..');
	}
	if (IsNewer('src/common/kwlist_d.h', 'src/include/parser/kwlist.h'))
	{
		print "Generating kwlist_d.h...\n";
		system(
			'perl -I src/tools src/tools/gen_keywordlist.pl --extern -o src/common src/include/parser/kwlist.h'
		);
	}
	if (IsNewer(
...
@@ -424,10 +424,15 @@ sub GenerateFiles
			'src/pl/plpgsql/src/pl_unreserved_kwlist_d.h',
			'src/pl/plpgsql/src/pl_unreserved_kwlist.h'))
	{
		print "Generating pl_reserved_kwlist_d.h and pl_unreserved_kwlist_d.h...\n";
		chdir('src/pl/plpgsql/src');
		system(
			'perl -I ../../../tools ../../../tools/gen_keywordlist.pl --varname ReservedPLKeywords pl_reserved_kwlist.h'
		);
		system(
			'perl -I ../../../tools ../../../tools/gen_keywordlist.pl --varname UnreservedPLKeywords pl_unreserved_kwlist.h'
		);
		chdir('../../../..');
	}
...
@@ -440,8 +445,12 @@ sub GenerateFiles
	{
		print "Generating c_kwlist_d.h and ecpg_kwlist_d.h...\n";
		chdir('src/interfaces/ecpg/preproc');
		system(
			'perl -I ../../../tools ../../../tools/gen_keywordlist.pl --varname ScanCKeywords --no-case-fold c_kwlist.h'
		);
		system(
			'perl -I ../../../tools ../../../tools/gen_keywordlist.pl --varname ScanECPGKeywords ecpg_kwlist.h'
		);
		chdir('../../../..');
	}
...
@@ -527,7 +536,9 @@ EOF
	{
		chdir('src/backend/catalog');
		my $bki_srcs = join(' ../../../src/include/catalog/', @bki_srcs);
		system(
			"perl genbki.pl --include-path ../../../src/include/ --set-version=$self->{majorver} $bki_srcs"
		);
		open(my $f, '>', 'bki-stamp')
		  || confess "Could not touch bki-stamp";
		close($f);
...