Abuhujair Javed
Postgres FD Implementation
Commit 621e14dc
authored Nov 08, 2007 by Bruce Momjian
Add "High Availability, Load Balancing, and Replication Feature Matrix"
table to docs.
parent 5db1c58a

Showing 1 changed file with 166 additions and 41 deletions:
doc/src/sgml/high-availability.sgml (+166 −41)
doc/src/sgml/high-availability.sgml @ 621e14dc
-<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.17 2007/11/04 19:23:24 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.18 2007/11/08 19:16:30 momjian Exp $ -->

 <chapter id="high-availability">
  <title>High Availability, Load Balancing, and Replication</title>
...
@@ -92,16 +92,23 @@
     </para>

     <para>
      Shared hardware functionality is common in network storage
-     devices.  Using a network file system is also possible, though
-     care must be taken that the file system has full POSIX behavior.
-     One significant limitation of this method is that if the shared
-     disk array fails or becomes corrupt, the primary and standby
-     servers are both nonfunctional.  Another issue is that the
+     devices.  Using a network file system is also possible, though care must be
+     taken that the file system has full POSIX behavior (see <xref
+     linkend="creating-cluster-nfs">).  One significant limitation of this
+     method is that if the shared disk array fails or becomes corrupt, the
+     primary and standby servers are both nonfunctional.  Another issue is
+     that the
      standby server should never access the shared storage while
      the primary server is running.
     </para>
    </listitem>
   </varlistentry>

   <varlistentry>
    <term>File System Replication</term>
    <listitem>
     <para>
      A modified version of shared hardware functionality is file system
      replication, where all changes to a file system are mirrored to a file
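As an illustrative aside (not part of the commit), the "full POSIX behavior" requirement in the hunk above usually translates into conservative NFS mount options. A hypothetical fstab entry for a data directory on NFS might look like:

```
# /etc/fstab -- hypothetical NFS mount for a PostgreSQL data directory.
# "hard" retries an unreachable server indefinitely instead of returning
# I/O errors to PostgreSQL, and "sync" avoids asynchronous write caching
# that can weaken the POSIX write guarantees the documentation requires.
nfs-server:/export/pgdata  /var/lib/pgsql/data  nfs  rw,hard,sync,noatime  0 0
```

The server and path names are invented; the mount options are standard NFS options.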
...
@@ -125,7 +132,7 @@ protocol to make nodes agree on a serializable transactional order.
  </varlistentry>

  <varlistentry>
-  <term>Warm Standby Using Point-In-Time Recovery</term>
+  <term>Warm Standby Using Point-In-Time Recovery (<acronym>PITR</>)</term>
   <listitem>
    <para>
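For orientation (not part of the commit), the warm-standby approach named in this hunk is built from continuous WAL archiving. A minimal sketch, assuming hypothetical paths, using the real `archive_command`/`restore_command` parameters and the contrib `pg_standby` tool of this era:

```
# Primary's postgresql.conf: ship each completed WAL segment to the
# standby's archive directory (%p = segment path, %f = file name).
archive_command = 'cp %p /mnt/standby/archive/%f'

# Standby's recovery.conf: stay in continuous recovery; pg_standby
# blocks until the next segment appears, keeping the standby "warm".
restore_command = 'pg_standby /mnt/standby/archive %f %p'
```

Failover then consists of telling the standby to stop waiting and finish recovery.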
...
@@ -190,6 +197,21 @@ protocol to make nodes agree on a serializable transactional order.
   </listitem>
  </varlistentry>

+ <varlistentry>
+  <term>Asynchronous Multi-Master Replication</term>
+  <listitem>
+   <para>
+    For servers that are not regularly connected, like laptops or
+    remote servers, keeping data consistent among servers is a
+    challenge.  Using asynchronous multi-master replication, each
+    server works independently, and periodically communicates with
+    the other servers to identify conflicting transactions.  The
+    conflicts can be resolved by users or conflict resolution rules.
+   </para>
+  </listitem>
+ </varlistentry>
+
  <varlistentry>
   <term>Synchronous Multi-Master Replication</term>
   <listitem>
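The conflict-detection step that the added paragraph describes can be sketched as a toy merge of two replicas' states. This is purely illustrative (real systems track per-row timestamps or version vectors, and the function name is invented); it shows one common policy, last-writer-wins, while reporting rows both sides changed:

```python
def resolve(rows_a, rows_b):
    """Merge two replicas' {key: (timestamp, value)} maps.

    Returns the merged state (last-writer-wins by timestamp) and the
    list of keys where the replicas held conflicting versions.
    """
    merged = dict(rows_a)
    conflicts = []
    for key, (ts, val) in rows_b.items():
        if key in merged and merged[key] != (ts, val):
            conflicts.append(key)          # both replicas changed this row
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)        # the newer write wins
    return merged, conflicts
```

The returned conflict list corresponds to the rows a user or a conflict-resolution rule would have to settle.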
...
@@ -222,21 +244,6 @@ protocol to make nodes agree on a serializable transactional order.
   </listitem>
  </varlistentry>

- <varlistentry>
-  <term>Asynchronous Multi-Master Replication</term>
-  <listitem>
-   <para>
-    For servers that are not regularly connected, like laptops or
-    remote servers, keeping data consistent among servers is a
-    challenge.  Using asynchronous multi-master replication, each
-    server works independently, and periodically communicates with
-    the other servers to identify conflicting transactions.  The
-    conflicts can be resolved by users or conflict resolution rules.
-   </para>
-  </listitem>
- </varlistentry>
-
  <varlistentry>
   <term>Data Partitioning</term>
   <listitem>
...
@@ -253,23 +260,6 @@ protocol to make nodes agree on a serializable transactional order.
   </listitem>
  </varlistentry>

- <varlistentry>
-  <term>Multi-Server Parallel Query Execution</term>
-  <listitem>
-   <para>
-    Many of the above solutions allow multiple servers to handle
-    multiple queries, but none allow a single query to use multiple
-    servers to complete faster.  This solution allows multiple
-    servers to work concurrently on a single query.  This is usually
-    accomplished by splitting the data among servers and having
-    each server execute its part of the query and return results
-    to a central server where they are combined and returned to
-    the user.  Pgpool-II has this capability.
-   </para>
-  </listitem>
- </varlistentry>
-
  <varlistentry>
   <term>Commercial Solutions</term>
   <listitem>
...
@@ -285,4 +275,139 @@ protocol to make nodes agree on a serializable transactional order.

 </variablelist>
+ <para>
+  The table below (<xref linkend="high-availability-matrix">) summarizes
+  the capabilities of the various solutions listed above.
+ </para>
+
+ <table id="high-availability-matrix">
+  <title>High Availability, Load Balancing, and Replication Feature Matrix</title>
+  <tgroup cols="9">
+   <thead>
+    <row>
+     <entry>Feature</entry>
+     <entry>Shared Disk Failover</entry>
+     <entry>File System Replication</entry>
+     <entry>Warm Standby Using PITR</entry>
+     <entry>Master-Slave Replication</entry>
+     <entry>Statement-Based Replication Middleware</entry>
+     <entry>Asynchronous Multi-Master Replication</entry>
+     <entry>Synchronous Multi-Master Replication</entry>
+     <entry>Data Partitioning</entry>
+    </row>
+   </thead>
+   <tbody>
+    <row>
+     <entry>No special hardware required</entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+    </row>
+    <row>
+     <entry>Allows multiple master servers</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+    </row>
+    <row>
+     <entry>No master server overhead</entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+    </row>
+    <row>
+     <entry>Master server never locks others</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+    </row>
+    <row>
+     <entry>Master failure will never lose data</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+    </row>
+    <row>
+     <entry>Slaves accept read-only queries</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+    </row>
+    <row>
+     <entry>Per-table granularity</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+    </row>
+    <row>
+     <entry>No conflict resolution necessary</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+     <entry align="center"></entry>
+     <entry align="center"></entry>
+     <entry align="center">•</entry>
+     <entry align="center">•</entry>
+    </row>
+   </tbody>
+  </tgroup>
+ </table>
+
+ <para>
+  Many of the above solutions allow multiple servers to handle multiple
+  queries, but none allow a single query to use multiple servers to
+  complete faster.  Multi-server parallel query execution allows multiple
+  servers to work concurrently on a single query.  This is usually
+  accomplished by splitting the data among servers and having each server
+  execute its part of the query and return results to a central server
+  where they are combined and returned to the user.  Pgpool-II has this
+  capability.
+ </para>
+
 </chapter>
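The scatter-gather pattern behind the multi-server parallel query paragraph can be sketched in a few lines. This is a toy model (function names invented, threads standing in for servers), not how Pgpool-II is implemented: each "server" scans its own data partition concurrently, and a coordinator combines the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

def scan_partition(partition, predicate):
    # One "server" executes its part of the query on its own data slice.
    return [row for row in partition if predicate(row)]

def parallel_query(partitions, predicate):
    # Coordinator: fan the query out to all partitions concurrently,
    # then combine the partial results into one answer for the user.
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        parts = list(pool.map(lambda p: scan_partition(p, predicate), partitions))
    return sorted(r for part in parts for r in part)
```

The key design point the paragraph makes is visible here: the data must already be split among servers for a single query to gain any speedup.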