Commit e0de8d98 authored by Bruce Momjian

Update FAQ_DEV.

parent 19815273
@@ -9,31 +9,146 @@
PostgreSQL Web site, http://www.PostgreSQL.org.
_________________________________________________________________
Questions
General Questions
1) What tools are available for developers?
2) What books are good for developers?
3) Why do we use palloc() and pfree() to allocate memory?
4) Why do we use Node and List to make data structures?
5) How do I add a feature or fix a bug?
6) How do I download/update the current source tree?
7) How do I test my changes?
7) I just added a field to a structure. What else should I do?
8) Why are table, column, type, function, view names sometimes
1.1) How do I get involved in PostgreSQL development?
1.2) How do I add a feature or fix a bug?
1.3) How do I download/update the current source tree?
1.4) How do I test my changes?
1.5) What tools are available for developers?
1.6) What books are good for developers?
1.7) What is configure all about?
1.8) How do I add a new port?
1.9) Why don't we use threads in the backend?
1.10) How are RPM's packaged?
1.11) How are CVS branches handled?
Technical Questions
2.1) How do I efficiently access information in tables from the
backend code?
2.2) Why are table, column, type, function, view names sometimes
referenced as Name or NameData, and sometimes as char *?
9) How do I efficiently access information in tables from the backend
code?
10) What is elog()?
11) What is configure all about?
12) How do I add a new port?
13) What is CommandCounterIncrement()?
14) Why don't we use threads in the backend?
15) How are RPM's packaged?
16) How are CVS branches handled?
17) How do I get involved in PostgreSQL development?
2.3) Why do we use Node and List to make data structures?
2.4) I just added a field to a structure. What else should I do?
2.5) Why do we use palloc() and pfree() to allocate memory?
2.6) What is elog()?
2.7) What is CommandCounterIncrement()?
_________________________________________________________________
1) What tools are available for developers?
General Questions
1.1) How do I get involved in PostgreSQL development?
This was written by Lamar Owen:
2001-06-22
What open source development process is used by the PostgreSQL team?
Read HACKERS for six months (or a full release cycle, whichever is
longer). Really. HACKERS _is_ the process. The process is not well
documented (AFAIK -- it may be somewhere that I am not aware of) --
and it changes continually.
What development environment (OS, system, compilers, etc) is required
to develop code?
Developers Corner on the website has links to this information. The
distribution tarball itself includes all the extra tools and documents
that go beyond a good Unix-like development environment. In general, a
modern unix with a modern gcc, GNU make or equivalent, autoconf (of a
particular version), and good working knowledge of those tools are
required.
What areas need support?
The TODO list.
You've made the first step, by finding and subscribing to HACKERS.
Once you find an area to look at in the TODO, and have read the
documentation on the internals, etc, then you check out a current
CVS, write what you are going to write (keeping your CVS checkout up
to date in the process), and make up a patch (as a context diff only)
and send it to the PATCHES list, preferably.
Discussion on the patch typically happens here. If the patch adds a
major feature, it would be a good idea to talk about it first on the
HACKERS list, in order to increase the chances of it being accepted,
as well as to avoid duplication of effort. Note that experienced
developers with a proven track record usually get the big jobs -- for
more than one reason. Also note that PostgreSQL is highly portable --
nonportable code will likely be dismissed out of hand.
Once your contributions get accepted, things move from there.
Typically, you would be added as a developer on the list on the
website when one of the other developers recommends it. Membership on
the steering committee is by invitation only, by the other steering
committee members, from what I have gathered watching from a distance.
I make these statements from having watched the process for over two
years.
To see a good example of how one goes about this, search the archives
for the name 'Tom Lane' and see what his first post consisted of, and
where he took things. In particular, note that this hasn't been _that_
long ago -- and his bugfixing and general deep knowledge with this
codebase is legendary. Take a few days to read after him. And pay
special attention to both the sheer quantity as well as the
painstaking quality of his work. Both are in high demand.
1.2) How do I add a feature or fix a bug?
The source code is over 250,000 lines. Many problems/features are
isolated to one specific area of the code. Others require knowledge of
much of the source. If you are confused about where to start, ask the
hackers list, and they will be glad to assess the complexity and give
pointers on where to start.
Another thing to keep in mind is that many fixes and features can be
added with surprisingly little code. I often start by adding code,
then looking at other areas in the code where similar things are done,
and by the time I am finished, the patch is quite small and compact.
When adding code, keep in mind that it should use the existing
facilities in the source, for performance reasons and for simplicity.
Often a review of existing code doing similar things is helpful.
1.3) How do I download/update the current source tree?
There are several ways to obtain the source tree. Occasional
developers can just get the most recent source tree snapshot from
ftp.postgresql.org. For regular developers, you can use CVS. CVS
allows you to download the source tree, then occasionally update your
copy of the source tree with any new changes. Using CVS, you don't
have to download the entire source each time, only the changed files.
Anonymous CVS does not allow developers to update the remote source
tree, though privileged developers can do this. There is a CVS FAQ on
our web site that describes how to use remote CVS. You can also use
CVSup, which has similar functionality, and is available from
ftp.postgresql.org.
To update the source tree, there are two ways. You can generate a
patch against your current source tree, perhaps using the make_diff
tools mentioned above, and send them to the patches list. They will be
reviewed, and applied in a timely manner. If the patch is major, and
we are in beta testing, the developers may wait for the final release
before applying your patches.
For hard-core developers, Marc(scrappy@postgresql.org) will give you a
Unix shell account on postgresql.org, so you can use CVS to update the
main source tree, or you can ftp your files into your account, patch,
and cvs install the changes directly into the source tree.
1.4) How do I test my changes?
First, use psql to make sure it is working as you expect. Then run
src/test/regress and get the output of src/test/regress/checkresults
with and without your changes, to see that your patch does not change
the regression test in unexpected ways. This practice has saved me
many times. The regression tests test the code in ways I would never
do, and have caught many bugs in my patches. By finding the problems
now, you save yourself a lot of debugging later when things are
broken, and you can't figure out when it happened.
1.5) What tools are available for developers?
Aside from the User documentation mentioned in the regular FAQ, there
are several development tools available. First, all the files in the
@@ -126,264 +241,32 @@
*/
pgindent will format the code by specifying flags to your operating
system's utility indent.
pgindent is run on all source files just before each beta test period.
It auto-formats all source files to make them consistent. Comment
blocks that need specific line breaks should be formatted as block
comments, where the comment starts as /*------. These comments will
not be reformatted in any way.
pginclude contains scripts used to add needed #include's to include
files, and remove unneeded #include's.
When adding system types, you will need to assign oids to them. There
is also a script called unused_oids in pgsql/src/include/catalog that
shows the unused oids.
2) What books are good for developers?
I have four good books: An Introduction to Database Systems, by C.J.
Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et
al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and
Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.
There is also a database performance site, with a handbook on-line
written by Jim Gray at http://www.benchmarkresources.com.
3) Why do we use palloc() and pfree() to allocate memory?
palloc() and pfree() are used in place of malloc() and free() because
we automatically free all memory allocated when a transaction
completes. This makes it easier to make sure we free memory that gets
allocated in one place, but only freed much later. There are several
contexts that memory can be allocated in, and this controls when the
allocated memory is automatically freed by the backend.
4) Why do we use Node and List to make data structures?
We do this because it allows a consistent and flexible way to pass
data around inside the backend. Every node has a NodeTag which
specifies what type of data is inside the Node. Lists are groups of
Nodes chained together as a forward-linked list.
Here are some of the List manipulation commands:
lfirst(i)
return the data at list element i.
lnext(i)
return the next list element after i.
foreach(i, list)
loop through list, assigning each list element to i. It is
important to note that i is a List *, not the data in the List
element. You need to use lfirst(i) to get at the data. Here is
a typical code snippet that loops through a List containing Var
*'s and processes each one:
List *i, *list;
foreach(i, list)
{
Var *var = lfirst(i);
/* process var here */
}
lcons(node, list)
add node to the front of list, or create a new list with node
if list is NIL.
lappend(list, node)
add node to the end of list. This is more expensive than lcons.
nconc(list1, list2)
Concat list2 on to the end of list1.
length(list)
return the length of the list.
nth(i, list)
return the i'th element in list.
lconsi, ...
There are integer versions of these: lconsi, lappendi, nthi.
List's containing integers instead of Node pointers are used to
hold lists of relation object ids and other integer quantities.
You can print nodes easily inside gdb. First, to disable output
truncation when you use the gdb print command:
(gdb) set print elements 0
Instead of printing values in gdb format, you can use the next two
commands to print out List, Node, and structure contents in a verbose
format that is easier to understand. List's are unrolled into nodes,
and nodes are printed in detail. The first prints in a short format,
and the second in a long format:
(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
5) How do I add a feature or fix a bug?
The source code is over 250,000 lines. Many problems/features are
isolated to one specific area of the code. Others require knowledge of
much of the source. If you are confused about where to start, ask the
hackers list, and they will be glad to assess the complexity and give
pointers on where to start.
Another thing to keep in mind is that many fixes and features can be
added with surprisingly little code. I often start by adding code,
then looking at other areas in the code where similar things are done,
and by the time I am finished, the patch is quite small and compact.
When adding code, keep in mind that it should use the existing
facilities in the source, for performance reasons and for simplicity.
Often a review of existing code doing similar things is helpful.
6) How do I download/update the current source tree?
There are several ways to obtain the source tree. Occasional
developers can just get the most recent source tree snapshot from
ftp.postgresql.org. For regular developers, you can use CVS. CVS
allows you to download the source tree, then occasionally update your
copy of the source tree with any new changes. Using CVS, you don't
have to download the entire source each time, only the changed files.
Anonymous CVS does not allow developers to update the remote source
tree, though privileged developers can do this. There is a CVS FAQ on
our web site that describes how to use remote CVS. You can also use
CVSup, which has similar functionality, and is available from
ftp.postgresql.org.
To update the source tree, there are two ways. You can generate a
patch against your current source tree, perhaps using the make_diff
tools mentioned above, and send them to the patches list. They will be
reviewed, and applied in a timely manner. If the patch is major, and
we are in beta testing, the developers may wait for the final release
before applying your patches.
For hard-core developers, Marc(scrappy@postgresql.org) will give you a
Unix shell account on postgresql.org, so you can use CVS to update the
main source tree, or you can ftp your files into your account, patch,
and cvs install the changes directly into the source tree.
6) How do I test my changes?
First, use psql to make sure it is working as you expect. Then run
src/test/regress and get the output of src/test/regress/checkresults
with and without your changes, to see that your patch does not change
the regression test in unexpected ways. This practice has saved me
many times. The regression tests test the code in ways I would never
do, and have caught many bugs in my patches. By finding the problems
now, you save yourself a lot of debugging later when things are
broken, and you can't figure out when it happened.
7) I just added a field to a structure. What else should I do?
The structures passed around by the parser, rewrite, optimizer, and
executor require quite a bit of support. Most structures have support
routines in src/backend/nodes used to create, copy, read, and output
those structures. Make sure you add support for your new field to
these files. Find any other places the structure may need code for
your new field. mkid is helpful with this (see above).
8) Why are table, column, type, function, view names sometimes referenced as
Name or NameData, and sometimes as char *?
Table, column, type, function, and view names are stored in system
tables in columns of type Name. Name is a fixed-length,
null-terminated type of NAMEDATALEN bytes. (The default value for
NAMEDATALEN is 32 bytes.)
typedef struct nameData
{
char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
Table, column, type, function, and view names that come into the
backend via user queries are stored as variable-length,
null-terminated character strings.
Many functions are called with both types of names, i.e., heap_open().
Because the Name type is null-terminated, it is safe to pass it to a
function expecting a char *. Because there are many cases where
on-disk names (Name) are compared to user-supplied names (char *),
there are many cases where Name and char * are used interchangeably.
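As a small illustration (the variable names below are made up; the
point is that the comparison is bounded by NAMEDATALEN), comparing an
on-disk Name to a user-supplied string can be sketched as:

    Name  attname;        /* e.g., points into a catalog tuple */
    char *username;       /* null-terminated string from the user */

    if (strncmp(attname->data, username, NAMEDATALEN) == 0)
    {
        /* the names match */
    }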
9) How do I efficiently access information in tables from the backend code?
You first need to find the tuples(rows) you are interested in. There
are two ways. First, SearchSysCache() and related functions allow you
to query the system catalogs. This is the preferred way to access
system tables, because the first call to the cache loads the needed
rows, and future requests can return the results without accessing the
base table. The caches use system table indexes to look up tuples. A
list of available caches is located in
src/backend/utils/cache/syscache.c.
src/backend/utils/cache/lsyscache.c contains many column-specific
cache lookup functions.
The rows returned are cache-owned versions of the heap rows.
Therefore, you must not modify or delete the tuple returned by
SearchSysCache(). What you should do is release it with
ReleaseSysCache() when you are done using it; this informs the cache
that it can discard that tuple if necessary. If you neglect to call
ReleaseSysCache(), then the cache entry will remain locked in the
cache until end of transaction, which is tolerable but not very
desirable.
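For illustration only, a lookup of a pg_type row by oid might look
roughly like the sketch below. The cache identifier and the number of
key arguments are assumptions -- check syscache.c for the exact cache
names and key counts in your source tree -- and typeoid is a
hypothetical variable:

    HeapTuple    tuple;
    Form_pg_type typeForm;

    tuple = SearchSysCache(TYPEOID,                  /* cache id */
                           ObjectIdGetDatum(typeoid),
                           0, 0, 0);                 /* unused keys */
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for type %u", typeoid);

    typeForm = (Form_pg_type) GETSTRUCT(tuple);
    /* ... examine fields such as typeForm->typlen here ... */

    ReleaseSysCache(tuple);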
If you can't use the system cache, you will need to retrieve the data
directly from the heap table, using the buffer cache that is shared by
all backends. The backend automatically takes care of loading the rows
into the buffer cache.
Open the table with heap_open(). You can then start a table scan with
heap_beginscan(), then use heap_getnext() and continue as long as
HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
assigned to the scan. No indexes are used, so all rows are going to be
compared to the keys, and only the valid rows returned.
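As a rough sketch of that sequence (the argument lists have changed
between releases, so the snapshot and scan-key arguments shown here
are assumptions -- check access/heapam.h for the signatures in your
tree; relid is a hypothetical variable):

    Relation     rel;
    HeapScanDesc scan;
    HeapTuple    tuple;

    rel = heap_open(relid, AccessShareLock);
    scan = heap_beginscan(rel, SnapshotNow, 0, (ScanKey) NULL);
    while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
    {
        /* examine the tuple here */
    }
    heap_endscan(scan);
    heap_close(rel, AccessShareLock);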
You can also use heap_fetch() to fetch rows by block number/offset.
While scans automatically lock/unlock rows from the buffer cache, with
heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
when completed.
Once you have the row, you can get data that is common to all tuples,
like t_self and t_oid, by merely accessing the HeapTuple structure
entries. If you need a table-specific column, you should take the
HeapTuple pointer, and use the GETSTRUCT() macro to access the
table-specific start of the tuple. You then cast the pointer as a
Form_pg_proc pointer if you are accessing the pg_proc table, or
Form_pg_type if you are accessing pg_type. You can then access the
columns by using a structure pointer:
((Form_pg_class) GETSTRUCT(tuple))->relnatts
You must not directly change live tuples in this way. The best way is
to use heap_modifytuple() and pass it your original tuple, and the
values you want changed. It returns a palloc'ed tuple, which you pass
to heap_replace(). You can delete tuples by passing the tuple's t_self
to heap_destroy(). You use t_self for heap_update() too. Remember,
tuples can be either system cache copies, which may go away after you
call ReleaseSysCache(), or read directly from disk buffers, which go
away when you call heap_getnext(), heap_endscan(), or ReleaseBuffer()
in the heap_fetch() case. Or it may be a palloc'ed tuple that you must
pfree() when finished.
system's utility indent.
pgindent is run on all source files just before each beta test period.
It auto-formats all source files to make them consistent. Comment
blocks that need specific line breaks should be formatted as block
comments, where the comment starts as /*------. These comments will
not be reformatted in any way.
pginclude contains scripts used to add needed #include's to include
files, and remove unneeded #include's.
When adding system types, you will need to assign oids to them. There
is also a script called unused_oids in pgsql/src/include/catalog that
shows the unused oids.
10) What is elog()?
1.6) What books are good for developers?
elog() is used to send messages to the front-end, and optionally
terminate the current query being processed. The first parameter is an
elog level of NOTICE, DEBUG, ERROR, or FATAL. NOTICE prints on the
user's terminal and the postmaster logs. DEBUG prints only in the
postmaster logs. ERROR prints in both places, and terminates the
current query, never returning from the call. FATAL terminates the
backend process. The remaining parameters of elog are a printf-style
set of parameters to print.
I have four good books: An Introduction to Database Systems, by C.J.
Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et
al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and
Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.
There is also a database performance site, with a handbook on-line
written by Jim Gray at http://www.benchmarkresources.com.
11) What is configure all about?
1.7) What is configure all about?
The files configure and configure.in are part of the GNU autoconf
package. Configure allows us to test for various capabilities of the
@@ -405,7 +288,7 @@ typedef struct nameData
removed, so you see only the file contained in the source
distribution.
12) How do I add a new port?
1.8) How do I add a new port?
There are a variety of places that need to be modified to add a new
port. First, start in the src/template directory. Add an appropriate
@@ -422,19 +305,7 @@ typedef struct nameData
src/makefiles directory for port-specific Makefile handling. There is
a backend/port directory if you need special files for your OS.
13) What is CommandCounterIncrement()?
Normally, transactions cannot see the rows they modify. This allows
UPDATE foo SET x = x + 1 to work correctly.
However, there are cases where a transaction needs to see rows
affected in previous parts of the transaction. This is accomplished
using a Command Counter. Incrementing the counter allows transactions
to be broken into pieces so each piece can see rows modified by
previous pieces. CommandCounterIncrement() increments the Command
Counter, creating a new part of the transaction.
14) Why don't we use threads in the backend?
1.9) Why don't we use threads in the backend?
There are several reasons threads are not used:
* Historically, threads were unsupported and buggy.
@@ -443,7 +314,7 @@ typedef struct nameData
remaining backend startup time.
* The backend code would be more complex.
15) How are RPM's packaged?
1.10) How are RPM's packaged?
This was written by Lamar Owen:
@@ -538,7 +409,7 @@ typedef struct nameData
Of course, there are many projects that DO include all the files
necessary to build RPMs from their Official Tarball (TM).
16) How are CVS branches managed?
1.11) How are CVS branches managed?
This was written by Tom Lane:
@@ -597,58 +468,194 @@ typedef struct nameData
tree right away after a major release --- we wait for a dot-release or
two, so that we won't have to double-patch the first wave of fixes.
17) How do I get involved in PostgreSQL development?
Technical Questions
2.1) How do I efficiently access information in tables from the backend code?
This was written by Lamar Owen:
You first need to find the tuples(rows) you are interested in. There
are two ways. First, SearchSysCache() and related functions allow you
to query the system catalogs. This is the preferred way to access
system tables, because the first call to the cache loads the needed
rows, and future requests can return the results without accessing the
base table. The caches use system table indexes to look up tuples. A
list of available caches is located in
src/backend/utils/cache/syscache.c.
src/backend/utils/cache/lsyscache.c contains many column-specific
cache lookup functions.
2001-06-22
What open source development process is used by the PostgreSQL team?
The rows returned are cache-owned versions of the heap rows.
Therefore, you must not modify or delete the tuple returned by
SearchSysCache(). What you should do is release it with
ReleaseSysCache() when you are done using it; this informs the cache
that it can discard that tuple if necessary. If you neglect to call
ReleaseSysCache(), then the cache entry will remain locked in the
cache until end of transaction, which is tolerable but not very
desirable.
Read HACKERS for six months (or a full release cycle, whichever is
longer). Really. HACKERS _is_ the process. The process is not well
documented (AFAIK -- it may be somewhere that I am not aware of) --
and it changes continually.
What development environment (OS, system, compilers, etc) is required
to develop code?
If you can't use the system cache, you will need to retrieve the data
directly from the heap table, using the buffer cache that is shared by
all backends. The backend automatically takes care of loading the rows
into the buffer cache.
Developers Corner on the website has links to this information. The
distribution tarball itself includes all the extra tools and documents
that go beyond a good Unix-like development environment. In general, a
modern unix with a modern gcc, GNU make or equivalent, autoconf (of a
particular version), and good working knowledge of those tools are
required.
What areas need support?
Open the table with heap_open(). You can then start a table scan with
heap_beginscan(), then use heap_getnext() and continue as long as
HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
assigned to the scan. No indexes are used, so all rows are going to be
compared to the keys, and only the valid rows returned.
The TODO list.
You can also use heap_fetch() to fetch rows by block number/offset.
While scans automatically lock/unlock rows from the buffer cache, with
heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
when completed.
You've made the first step, by finding and subscribing to HACKERS.
Once you find an area to look at in the TODO, and have read the
documentation on the internals, etc, then you check out a current
CVS, write what you are going to write (keeping your CVS checkout up
to date in the process), and make up a patch (as a context diff only)
and send it to the PATCHES list, preferably.
Once you have the row, you can get data that is common to all tuples,
like t_self and t_oid, by merely accessing the HeapTuple structure
entries. If you need a table-specific column, you should take the
HeapTuple pointer, and use the GETSTRUCT() macro to access the
table-specific start of the tuple. You then cast the pointer as a
Form_pg_proc pointer if you are accessing the pg_proc table, or
Form_pg_type if you are accessing pg_type. You can then access the
columns by using a structure pointer:
((Form_pg_class) GETSTRUCT(tuple))->relnatts
You must not directly change live tuples in this way. The best way is
to use heap_modifytuple() and pass it your original tuple, and the
values you want changed. It returns a palloc'ed tuple, which you pass
to heap_replace(). You can delete tuples by passing the tuple's t_self
to heap_destroy(). You use t_self for heap_update() too. Remember,
tuples can be either system cache copies, which may go away after you
call ReleaseSysCache(), or read directly from disk buffers, which go
away when you call heap_getnext(), heap_endscan(), or ReleaseBuffer()
in the heap_fetch() case. Or it may be a palloc'ed tuple that you must
pfree() when finished.
Discussion on the patch typically happens here. If the patch adds a
major feature, it would be a good idea to talk about it first on the
HACKERS list, in order to increase the chances of it being accepted,
as well as to avoid duplication of effort. Note that experienced
developers with a proven track record usually get the big jobs -- for
more than one reason. Also note that PostgreSQL is highly portable --
nonportable code will likely be dismissed out of hand.
2.2) Why are table, column, type, function, view names sometimes referenced
as Name or NameData, and sometimes as char *?
Table, column, type, function, and view names are stored in system
tables in columns of type Name. Name is a fixed-length,
null-terminated type of NAMEDATALEN bytes. (The default value for
NAMEDATALEN is 32 bytes.)
typedef struct nameData
{
char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
Table, column, type, function, and view names that come into the
backend via user queries are stored as variable-length,
null-terminated character strings.
Once your contributions get accepted, things move from there.
Typically, you would be added as a developer on the list on the
website when one of the other developers recommends it. Membership on
the steering committee is by invitation only, by the other steering
committee members, from what I have gathered watching from a distance.
Many functions are called with both types of names, i.e., heap_open().
Because the Name type is null-terminated, it is safe to pass it to a
function expecting a char *. Because there are many cases where
on-disk names (Name) are compared to user-supplied names (char *),
there are many cases where Name and char * are used interchangeably.
I make these statements from having watched the process for over two
years.
2.3) Why do we use Node and List to make data structures?
We do this because it allows a consistent and flexible way to pass
data around inside the backend. Every node has a NodeTag which
specifies what type of data is inside the Node. Lists are groups of
Nodes chained together as a forward-linked list.
To see a good example of how one goes about this, search the archives
for the name 'Tom Lane' and see what his first post consisted of, and
where he took things. In particular, note that this hasn't been _that_
long ago -- and his bugfixing and general deep knowledge with this
codebase is legendary. Take a few days to read after him. And pay
special attention to both the sheer quantity as well as the
painstaking quality of his work. Both are in high demand.
Here are some of the List manipulation commands:
lfirst(i)
return the data at list element i.
lnext(i)
return the next list element after i.
foreach(i, list)
loop through list, assigning each list element to i. It is
important to note that i is a List *, not the data in the List
element. You need to use lfirst(i) to get at the data. Here is
a typical code snippet that loops through a List containing Var
*'s and processes each one:
List *i, *list;
foreach(i, list)
{
Var *var = lfirst(i);
/* process var here */
}
lcons(node, list)
add node to the front of list, or create a new list with node
if list is NIL.
lappend(list, node)
add node to the end of list. This is more expensive than lcons.
nconc(list1, list2)
Concat list2 on to the end of list1.
length(list)
return the length of the list.
nth(i, list)
return the i'th element in list.
lconsi, ...
There are integer versions of these: lconsi, lappendi, nthi.
List's containing integers instead of Node pointers are used to
hold lists of relation object ids and other integer quantities.
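As a small sketch of the integer variants (the exact accessor macro
names, such as lfirsti, are assumptions -- check nodes/pg_list.h --
and relid1/relid2 are hypothetical variables):

    List *relids = NIL;
    List *i;

    relids = lappendi(relids, relid1);   /* append to the end */
    relids = lconsi(relid2, relids);     /* push onto the front */

    foreach(i, relids)
    {
        Oid relid = (Oid) lfirsti(i);    /* integer flavor of lfirst() */
        /* process relid here */
    }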
You can print nodes easily inside gdb. First, to disable output
truncation when you use the gdb print command:
(gdb) set print elements 0
Instead of printing values in gdb format, you can use the next two
commands to print out List, Node, and structure contents in a verbose
format that is easier to understand. List's are unrolled into nodes,
and nodes are printed in detail. The first prints in a short format,
and the second in a long format:
(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
2.4) I just added a field to a structure. What else should I do?
The structures passed around by the parser, rewrite, optimizer, and
executor require quite a bit of support. Most structures have support
routines in src/backend/nodes used to create, copy, read, and output
those structures. Make sure you add support for your new field to
these files. Find any other places the structure may need code for
your new field. mkid is helpful with this (see above).
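For example, if a (purely hypothetical) node type gained a new field,
the matching copy routine in src/backend/nodes/copyfuncs.c would need
a line for it; the equal, out, and read functions need the same
treatment. A sketch, with all names invented for illustration:

    typedef struct MyPlanInfo
    {
        NodeTag  type;
        int      oldfield;
        int      newfield;          /* the field just added */
    } MyPlanInfo;

    static MyPlanInfo *
    _copyMyPlanInfo(MyPlanInfo *from)
    {
        MyPlanInfo *newnode = makeNode(MyPlanInfo);

        newnode->oldfield = from->oldfield;
        newnode->newfield = from->newfield;   /* easy to forget */
        return newnode;
    }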
2.5) Why do we use palloc() and pfree() to allocate memory?
palloc() and pfree() are used in place of malloc() and free() because
we automatically free all memory allocated when a transaction
completes. This makes it easier to make sure we free memory that gets
allocated in one place, but only freed much later. There are several
contexts that memory can be allocated in, and this controls when the
allocated memory is automatically freed by the backend.
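A minimal sketch of the pattern (the size and use of the buffer are
illustrative only):

    char *buf;

    buf = (char *) palloc(NAMEDATALEN);  /* allocated in the current
                                          * memory context */
    MemSet(buf, 0, NAMEDATALEN);
    /* ... use buf ... */
    pfree(buf);                          /* free it early, or simply let
                                          * the memory context free it
                                          * when it is reset */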
2.6) What is elog()?
elog() is used to send messages to the front-end, and optionally
terminate the current query being processed. The first parameter is an
elog level of NOTICE, DEBUG, ERROR, or FATAL. NOTICE prints on the
user's terminal and the postmaster logs. DEBUG prints only in the
postmaster logs. ERROR prints in both places, and terminates the
current query, never returning from the call. FATAL terminates the
backend process. The remaining parameters of elog are a printf-style
set of parameters to print.
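A couple of hedged examples (the message texts and the relname and
tuple variables are invented):

    elog(DEBUG, "entering my_function");                /* log only */
    elog(NOTICE, "skipping relation \"%s\"", relname);  /* user sees it */

    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "lookup of \"%s\" failed", relname);
    /* ERROR aborts the current query; control never returns here */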
2.7) What is CommandCounterIncrement()?
Normally, transactions cannot see the rows they modify. This allows
UPDATE foo SET x = x + 1 to work correctly.
However, there are cases where a transaction needs to see rows
affected in previous parts of the transaction. This is accomplished
using a Command Counter. Incrementing the counter allows transactions
to be broken into pieces so each piece can see rows modified by
previous pieces. CommandCounterIncrement() increments the Command
Counter, creating a new part of the transaction.
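A schematic example (the heap_insert() call is abbreviated; see
access/heapam.h for the real argument list):

    /* first command: insert a row within the current transaction */
    heap_insert(rel, tuple);

    /* make that row visible to later commands in this transaction */
    CommandCounterIncrement();

    /*
     * a subsequent scan or system cache lookup in the same transaction
     * can now see the newly inserted row
     */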
@@ -27,39 +27,169 @@
<CENTER>
<H2>Questions</H2>
<H2>General Questions</H2>
</CENTER>
<A href="#1">1</A>) What tools are available for developers?<BR>
<A href="#2">2</A>) What books are good for developers?<BR>
<A href="#3">3</A>) Why do we use <I>palloc</I>() and
<I>pfree</I>() to allocate memory?<BR>
<A href="#4">4</A>) Why do we use <I>Node</I> and <I>List</I> to
make data structures?<BR>
<A href="#5">5</A>) How do I add a feature or fix a bug?<BR>
<A href="#6">6</A>) How do I download/update the current source
<A href="#1.1">1.1</A>) How do I get involved in PostgreSQL
development?<BR>
<A href="#1.2">1.2</A>) How do I add a feature or fix a bug?<BR>
<A href="#1.3">1.3</A>) How do I download/update the current source
tree?<BR>
<A href="#7">7</A>) How do I test my changes?<BR>
<A href="#7">7</A>) I just added a field to a structure. What else
should I do?<BR>
<A href="#8">8</A>) Why are table, column, type, function, view
<A href="#1.4">1.4</A>) How do I test my changes?<BR>
<A href="#1.5">1.5</A>) What tools are available for developers?<BR>
<A href="#1.6">1.6</A>) What books are good for developers?<BR>
<A href="#1.7">1.7</A>) What is configure all about?<BR>
<A href="#1.8">1.8</A>) How do I add a new port?<BR>
<A href="#1.9">1.9</A>) Why don't we use threads in the backend?<BR>
<A href="#1.10">1.10</A>) How are RPM's packaged?<BR>
<A href="#1.11">1.11</A>) How are CVS branches handled?<BR>
<H2>Technical Questions</H2>
<A href="#2.1">2.1</A>) How do I efficiently access information in
tables from the backend code?<BR>
<A href="#2.2">2.2</A>) Why are table, column, type, function, view
names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
sometimes as <I>char *?</I><BR>
<A href="#9">9</A>) How do I efficiently access information in
tables from the backend code?<BR>
<A href="#10">10</A>) What is elog()?<BR>
<A href="#11">11</A>) What is configure all about?<BR>
<A href="#12">12</A>) How do I add a new port?<BR>
<A href="#13">13</A>) What is CommandCounterIncrement()?<BR>
<A href="#14">14</A>) Why don't we use threads in the backend?<BR>
<A href="#15">15</A>) How are RPM's packaged?<BR>
<A href="#16">16</A>) How are CVS branches handled?<BR>
<A href="#17">17</A>) How do I get involved in PostgreSQL
development?<BR>
<A href="#2.3">2.3</A>) Why do we use <I>Node</I> and <I>List</I> to
make data structures?<BR>
<A href="#2.4">2.4</A>) I just added a field to a structure. What else
should I do?<BR>
<A href="#2.5">2.5</A>) Why do we use <I>palloc</I>() and
<I>pfree</I>() to allocate memory?<BR>
<A href="#2.6">2.6</A>) What is elog()?<BR>
<A href="#2.7">2.7</A>) What is CommandCounterIncrement()?<BR>
<BR>
<HR>
<H3><A name="1">1</A>) What tools are available for
<CENTER>
<H2>General Questions</H2>
</CENTER>
<H3><A name="1.1">1.1</A>) How do I get involved in PostgreSQL
development?</H3>
<P>This was written by Lamar Owen:</P>
<P>2001-06-22</P>
<B>What open source development process is used by the PostgreSQL
team?</B>
<P>Read HACKERS for six months (or a full release cycle, whichever
is longer). Really. HACKERS _is_ the process. The process is not
well documented (AFAIK -- it may be somewhere that I am not aware
of) -- and it changes continually.</P>
<B>What development environment (OS, system, compilers, etc) is
required to develop code?</B>
<P><A href="http://developers.postgresql.org">Developers Corner</A> on the
website has links to this information. The distribution tarball
itself includes all the extra tools and documents that go beyond a
good Unix-like development environment. In general, a modern unix
with a modern gcc, GNU make or equivalent, autoconf (of a
particular version), and good working knowledge of those tools are
required.</P>
<B>What areas need support?</B>
<P>The TODO list.</P>
<P>You've made the first step, by finding and subscribing to
HACKERS. Once you find an area to look at in the TODO, and have
read the documentation on the internals, etc, then you check out a
current CVS, write what you are going to write (keeping your CVS
checkout up to date in the process), and make up a patch (as a
context diff only) and send it to the PATCHES list, preferably.</P>
<P>Discussion on the patch typically happens here. If the patch
adds a major feature, it would be a good idea to talk about it
first on the HACKERS list, in order to increase the chances of it
being accepted, as well as to avoid duplication of effort. Note that
experienced developers with a proven track record usually get the
big jobs -- for more than one reason. Also note that PostgreSQL is
highly portable -- nonportable code will likely be dismissed out of
hand.</P>
<P>Once your contributions get accepted, things move from there.
Typically, you would be added as a developer on the list on the
website when one of the other developers recommends it. Membership
on the steering committee is by invitation only, by the other
steering committee members, from what I have gathered watching
from a distance.</P>
<P>I make these statements from having watched the process for over
two years.</P>
<P>To see a good example of how one goes about this, search the
archives for the name 'Tom Lane' and see what his first post
consisted of, and where he took things. In particular, note that
this hasn't been _that_ long ago -- and his bugfixing and general
deep knowledge with this codebase is legendary. Take a few days to
read after him. And pay special attention to both the sheer
quantity as well as the painstaking quality of his work. Both are
in high demand.</P>
<H3><A name="1.2">1.2</A>) How do I add a feature or fix a bug?</H3>
<P>The source code is over 250,000 lines. Many problems/features
are isolated to one specific area of the code. Others require
knowledge of much of the source. If you are confused about where to
start, ask the hackers list, and they will be glad to assess the
complexity and give pointers on where to start.</P>
<P>Another thing to keep in mind is that many fixes and features
can be added with surprisingly little code. I often start by adding
code, then looking at other areas in the code where similar things
are done, and by the time I am finished, the patch is quite small
and compact.</P>
<P>When adding code, keep in mind that it should use the existing
facilities in the source, for performance reasons and for
simplicity. Often a review of existing code doing similar things is
helpful.</P>
<H3><A name="1.3">1.3</A>) How do I download/update the current source
tree?</H3>
<P>There are several ways to obtain the source tree. Occasional
developers can just get the most recent source tree snapshot from
ftp.postgresql.org. For regular developers, you can use CVS. CVS
allows you to download the source tree, then occasionally update
your copy of the source tree with any new changes. Using CVS, you
don't have to download the entire source each time, only the
changed files. Anonymous CVS does not allow developers to update
the remote source tree, though privileged developers can do this.
There is a CVS FAQ on our web site that describes how to use remote
CVS. You can also use CVSup, which has similar functionality, and
is available from ftp.postgresql.org.</P>
<P>To update the source tree, there are two ways. You can generate
a patch against your current source tree, perhaps using the
make_diff tools mentioned above, and send them to the patches list.
They will be reviewed, and applied in a timely manner. If the patch
is major, and we are in beta testing, the developers may wait for
the final release before applying your patches.</P>
<P>For hard-core developers, Marc(scrappy@postgresql.org) will give
you a Unix shell account on postgresql.org, so you can use CVS to
update the main source tree, or you can ftp your files into your
account, patch, and cvs install the changes directly into the
source tree.</P>
<H3><A name="1.4">1.4</A>) How do I test my changes?</H3>
<P>First, use <I>psql</I> to make sure it is working as you expect.
Then run <I>src/test/regress</I> and get the output of
<I>src/test/regress/checkresults</I> with and without your changes,
to see that your patch does not change the regression test in
unexpected ways. This practice has saved me many times. The
regression tests test the code in ways I would never do, and have
caught many bugs in my patches. By finding the problems now, you
save yourself a lot of debugging later when things are broken, and
you can't figure out when it happened.</P>
<H3><A name="1.5">1.5</A>) What tools are available for
developers?</H3>
<P>Aside from the User documentation mentioned in the regular FAQ,
@@ -179,7 +309,7 @@
There is also a script called <I>unused_oids</I> in
<I>pgsql/src/include/catalog</I> that shows the unused oids.</P>
<H3><A name="2">2</A>) What books are good for developers?</H3>
<H3><A name="1.6">1.6</A>) What books are good for developers?</H3>
<P>I have four good books, <I>An Introduction to Database
Systems,</I> by C.J. Date, Addison, Wesley, <I>A Guide to the SQL
@@ -192,207 +322,245 @@
on-line written by Jim Gray at <A href=
"http://www.benchmarkresources.com">http://www.benchmarkresources.com.</A></P>
<H3><A name="3">3</A>) Why do we use <I>palloc</I>() and
<I>pfree</I>() to allocate memory?</H3>
<H3><A name="1.7">1.7</A>) What is configure all about?</H3>
<P><I>palloc()</I> and <I>pfree()</I> are used in place of malloc()
and free() because we automatically free all memory allocated when
a transaction completes. This makes it easier to make sure we free
memory that gets allocated in one place, but only freed much later.
There are several contexts that memory can be allocated in, and
this controls when the allocated memory is automatically freed by
the backend.</P>
<P>The files <I>configure</I> and <I>configure.in</I> are part of
the GNU <I>autoconf</I> package. Configure allows us to test for
various capabilities of the OS, and to set variables that can then
be tested in C programs and Makefiles. Autoconf is installed on the
PostgreSQL main server. To add options to configure, edit
<I>configure.in,</I> and then run <I>autoconf</I> to generate
<I>configure.</I></P>
<H3><A name="4">4</A>) Why do we use <I>Node</I> and <I>List</I> to
make data structures?</H3>
<P>When <I>configure</I> is run by the user, it tests various OS
capabilities, stores those in <I>config.status</I> and
<I>config.cache,</I> and modifies a list of <I>*.in</I> files. For
example, if there exists a <I>Makefile.in,</I> configure generates
a <I>Makefile</I> that contains substitutions for all @var@
parameters found by configure.</P>
<P>We do this because it allows a consistent and flexible way to
pass data around inside the backend. Every node has a
<I>NodeTag</I> which specifies what type of data is inside the
Node. <I>Lists</I> are groups of <I>Nodes</I> chained together as a
forward-linked list.</P>
<P>When you need to edit files, make sure you don't waste time
modifying files generated by <I>configure.</I> Edit the <I>*.in</I>
file, and re-run <I>configure</I> to recreate the needed file. If
you run <I>make distclean</I> from the top-level source directory,
all files derived by configure are removed, so you see only the
file contained in the source distribution.</P>
<P>Here are some of the <I>List</I> manipulation commands:</P>
<H3><A name="1.8">1.8</A>) How do I add a new port?</H3>
<BLOCKQUOTE>
<DL>
<DT>lfirst(i)</DT>
<P>There are a variety of places that need to be modified to add a
new port. First, start in the <I>src/template</I> directory. Add an
appropriate entry for your OS. Also, use <I>src/config.guess</I> to
add your OS to <I>src/template/.similar.</I> You shouldn't match
the OS version exactly. The <I>configure</I> test will look for an
exact OS version number, and if not found, find a match without
version number. Edit <I>src/configure.in</I> to add your new OS.
(See configure item above.) You will need to run autoconf, or patch
<I>src/configure</I> too.</P>
<DD>return the data at list element <I>i.</I></DD>
<P>Then, check <I>src/include/port</I> and add your new OS file,
with appropriate values. Hopefully, there is already locking code
in <I>src/include/storage/s_lock.h</I> for your CPU. There is also
a <I>src/makefiles</I> directory for port-specific Makefile
handling. There is a <I>backend/port</I> directory if you need
special files for your OS.</P>
<DT>lnext(i)</DT>
<H3><A name="1.9">1.9</A>) Why don't we use threads in the
backend?</H3>
<DD>return the next list element after <I>i.</I></DD>
<P>There are several reasons threads are not used:</P>
<DT>foreach(i, list)</DT>
<UL>
<LI>Historically, threads were unsupported and buggy.</LI>
<DD>
loop through <I>list,</I> assigning each list element to
<I>i.</I> It is important to note that <I>i</I> is a List *,
not the data in the <I>List</I> element. You need to use
<I>lfirst(i)</I> to get at the data. Here is a typical code
snippet that loops through a List containing <I>Var *'s</I>
and processes each one:
<PRE>
<CODE>List *i, *list;
foreach(i, list)
{
Var *var = lfirst(i);
<LI>An error in one backend can corrupt other backends.</LI>
/* process var here */
}
</CODE>
</PRE>
</DD>
<LI>Speed improvements using threads are small compared to the
remaining backend startup time.</LI>
<DT>lcons(node, list)</DT>
<LI>The backend code would be more complex.</LI>
</UL>
<DD>add <I>node</I> to the front of <I>list,</I> or create a
new list with <I>node</I> if <I>list</I> is <I>NIL.</I></DD>
<H3><A name="1.10">1.10</A>) How are RPM's packaged?</H3>
<DT>lappend(list, node)</DT>
<P>This was written by Lamar Owen:</P>
<DD>add <I>node</I> to the end of <I>list.</I> This is more
expensive than lcons.</DD>
<P>2001-05-03</P>
<DT>nconc(list1, list2)</DT>
<P>As to how the RPMs are built -- to answer that question sanely
requires me to know how much experience you have with the whole RPM
paradigm. 'How is the RPM built?' is a multifaceted question. The
obvious simple answer is that I maintain:</P>
<DD>Concat <I>list2</I> on to the end of <I>list1.</I></DD>
<OL>
<LI>A set of patches to make certain portions of the source tree
'behave' in the different environment of the RPMset;</LI>
<DT>length(list)</DT>
<LI>The initscript;</LI>
<DD>return the length of the <I>list.</I></DD>
<LI>Any other ancillary scripts and files;</LI>
<DT>nth(i, list)</DT>
<LI>A README.rpm-dist document that tries to adequately document
both the differences between the RPM build and the WHY of the
differences, as well as useful RPM environment operations (like,
using syslog, upgrading, getting postmaster to start at OS boot,
etc);</LI>
<DD>return the <I>i</I>'th element in <I>list.</I></DD>
<LI>The spec file that throws it all together. This is not a
trivial undertaking in a package of this size.</LI>
</OL>
<DT>lconsi, ...</DT>
<P>I then download and build on as many different canonical
distributions as I can -- currently I am able to build on Red Hat
6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive
opportunity from certain commercial enterprises such as Great
Bridge and PostgreSQL, Inc. to build on other distributions.</P>
<DD>There are integer versions of these: <I>lconsi, lappendi,
nthi.</I> <I>List's</I> containing integers instead of Node
pointers are used to hold lists of relation object ids and
other integer quantities.</DD>
</DL>
</BLOCKQUOTE>
You can print nodes easily inside <I>gdb.</I> First, to disable
output truncation when you use the gdb <I>print</I> command:
<PRE>
<CODE>(gdb) set print elements 0
</CODE>
</PRE>
Instead of printing values in gdb format, you can use the next two
commands to print out List, Node, and structure contents in a
verbose format that is easier to understand. List's are unrolled
into nodes, and nodes are printed in detail. The first prints in a
short format, and the second in a long format:
<PRE>
<CODE>(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
</CODE>
</PRE>
The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
<P>I test the build by installing the resulting packages and
running the regression tests. Once the build passes these tests, I
upload to the postgresql.org ftp server and make a release
announcement. I am also responsible for maintaining the RPM
download area on the ftp site.</P>
<H3><A name="5">5</A>) How do I add a feature or fix a bug?</H3>
<P>You'll notice I said 'canonical' distributions above. That
simply means that the machine is as stock 'out of the box' as
practical -- that is, everything (except select few programs) on
these boxen are installed by RPM; only official Red Hat released
RPMs are used (except in unusual circumstances involving software
that will not alter the build -- for example, installing a newer
non-RedHat version of the Dia diagramming package is OK --
installing Python 2.1 on the box that has Python 1.5.2 installed is
not, as that alters the PostgreSQL build). The RPM as uploaded is
built to as close to out-of-the-box pristine as is possible. Only
the standard released 'official to that release' compiler is used
-- and only the standard official kernel is used as well.</P>
<P>The source code is over 250,000 lines. Many problems/features
are isolated to one specific area of the code. Others require
knowledge of much of the source. If you are confused about where to
start, ask the hackers list, and they will be glad to assess the
complexity and give pointers on where to start.</P>
<P>For a time I built on Mandrake for RedHat consumption -- no
more. Nonstandard RPM building systems are worse than useless.
Which is not to say that Mandrake is useless! By no means is
Mandrake useless -- unless you are building Red Hat RPMs -- and Red
Hat is useless if you're trying to build Mandrake or SuSE RPMs, for
that matter. But I would be foolish to use 'Lamar Owen's Super
Special RPM Blend Distro 0.1.2' to build for public consumption!
:-)</P>
<P>Another thing to keep in mind is that many fixes and features
can be added with surprisingly little code. I often start by adding
code, then looking at other areas in the code where similar things
are done, and by the time I am finished, the patch is quite small
and compact.</P>
<P>I _do_ attempt to make the _source_ RPM compatible with as many
distributions as possible -- however, since I have limited
resources (as a volunteer RPM maintainer) I am limited as to the
amount of testing said build will get on other distributions,
architectures, or systems.</P>
<P>When adding code, keep in mind that it should use the existing
facilities in the source, for performance reasons and for
simplicity. Often a review of existing code doing similar things is
helpful.</P>
<P>And, while I understand people's desire to immediately upgrade
to the newest version, realize that I do this as a side interest --
I have a regular, full-time job as a broadcast
engineer/webmaster/sysadmin/Technical Director which occasionally
prevents me from making timely RPM releases. This happened during
the early part of the 7.1 beta cycle -- but I believe I was pretty
much on the ball for the Release Candidates and the final
release.</P>
<H3><A name="6">6</A>) How do I download/update the current source
tree?</H3>
<P>I am working towards a more open RPM distribution -- I would
dearly love to more fully document the process and put everything
into CVS -- once I figure out how I want to represent things such
as the spec file in a CVS form. It makes no sense to maintain a
changelog, for instance, in the spec file in CVS when CVS does a
better job of changelogs -- I will need to write a tool to generate
a real spec file from a CVS spec-source file that would add version
numbers, changelog entries, etc to the result before building the
RPM. IOW, I need to rethink the process -- and then go through the
motions of putting my long RPM history into CVS one version at a
time so that version history information isn't lost.</P>
<P>There are several ways to obtain the source tree. Occasional
developers can just get the most recent source tree snapshot from
ftp.postgresql.org. For regular developers, you can use CVS. CVS
allows you to download the source tree, then occasionally update
your copy of the source tree with any new changes. Using CVS, you
don't have to download the entire source each time, only the
changed files. Anonymous CVS does not allow developers to update
the remote source tree, though privileged developers can do this.
There is a CVS FAQ on our web site that describes how to use remote
CVS. You can also use CVSup, which has similar functionality, and
is available from ftp.postgresql.org.</P>
<P>As to why all these files aren't part of the source tree, well,
unless there was a large cry for it to happen, I don't believe it
should. PostgreSQL is very platform-agnostic -- and I like that.
Including the RPM stuff as part of the Official Tarball (TM) would,
IMHO, slant that agnostic stance in a negative way. But maybe I'm
too sensitive to that. I'm not opposed to doing that if that is the
consensus of the core group -- and that would be a sneaky way to
get the stuff into CVS :-). But if the core group isn't thrilled
with the idea (and my instinct says they're not likely to be), I am
opposed to the idea -- not to keep the stuff to myself, but to not
hinder the platform-neutral stance. IMHO, of course.</P>
<P>To update the source tree, there are two ways. You can generate
a patch against your current source tree, perhaps using the
make_diff tools mentioned above, and send them to the patches list.
They will be reviewed, and applied in a timely manner. If the patch
is major, and we are in beta testing, the developers may wait for
the final release before applying your patches.</P>
<P>Of course, there are many projects that DO include all the files
necessary to build RPMs from their Official Tarball (TM).</P>
<P>For hard-core developers, Marc(scrappy@postgresql.org) will give
you a Unix shell account on postgresql.org, so you can use CVS to
update the main source tree, or you can ftp your files into your
account, patch, and cvs install the changes directly into the
source tree.</P>
<H3><A name="1.11">1.11</A>) How are CVS branches managed?</H3>
<H3><A name="6">6</A>) How do I test my changes?</H3>
<P>This was written by Tom Lane:</P>
<P>First, use <I>psql</I> to make sure it is working as you expect.
Then run <I>src/test/regress</I> and get the output of
<I>src/test/regress/checkresults</I> with and without your changes,
to see that your patch does not change the regression test in
unexpected ways. This practice has saved me many times. The
regression tests test the code in ways I would never do, and have
caught many bugs in my patches. By finding the problems now, you
save yourself a lot of debugging later when things are broken, and
you can't figure out when it happened.</P>
<P>2001-05-07</P>
<H3><A name="7">7</A>) I just added a field to a structure. What
else should I do?</H3>
<P>If you just do basic "cvs checkout", "cvs update", "cvs commit",
then you'll always be dealing with the HEAD version of the files in
CVS. That's what you want for development, but if you need to patch
past stable releases then you have to be able to access and update
the "branch" portions of our CVS repository. We normally fork off a
branch for a stable release just before starting the development
cycle for the next release.</P>
<P>The structures passed around by the parser, rewrite,
optimizer, and executor require quite a bit of support. Most
structures have support routines in <I>src/backend/nodes</I> used
to create, copy, read, and output those structures. Make sure you
add support for your new field to these files. Find any other
places the structure may need code for your new field. <I>mkid</I>
is helpful with this (see above).</P>
<P>The first thing you have to know is the branch name for the
branch you are interested in getting at. To do this, look at some
long-lived file, say the top-level HISTORY file, with "cvs status
-v" to see what the branch names are. (Thanks to Ian Lance Taylor
for pointing out that this is the easiest way to do it.) Typical
branch names are:</P>
<PRE>
REL7_1_STABLE
REL7_0_PATCHES
REL6_5_PATCHES
</PRE>
<H3><A name="8">8</A>) Why are table, column, type, function, view
names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
sometimes as <I>char *?</I></H3>
<P>OK, so how do you do work on a branch? By far the best way is to
create a separate checkout tree for the branch and do your work in
that. Not only is that the easiest way to deal with CVS, but you
really need to have the whole past tree available anyway to test
your work. (And you *better* test your work. Never forget that
dot-releases tend to go out with very little beta testing --- so
whenever you commit an update to a stable branch, you'd better be
doubly sure that it's correct.)</P>
<P>Table, column, type, function, and view names are stored in
system tables in columns of type <I>Name.</I> Name is a
fixed-length, null-terminated type of <I>NAMEDATALEN</I> bytes.
(The default value for NAMEDATALEN is 32 bytes.)</P>
<P>Normally, to checkout the head branch, you just cd to the place
you want to contain the toplevel "pgsql" directory and say</P>
<PRE>
<CODE>typedef struct nameData
{
char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
</CODE>
cvs ... checkout pgsql
</PRE>
Table, column, type, function, and view names that come into the
backend via user queries are stored as variable-length,
null-terminated character strings.
<P>Many functions are called with both types of names, e.g.
<I>heap_open().</I> Because the Name type is null-terminated, it is
safe to pass it to a function expecting a char *. Because on-disk
names (Name) are so often compared to user-supplied names (char *),
Name and char * are used
interchangeably.</P>
<P>To get a past branch, you cd to wherever you want it and
say</P>
<PRE>
cvs ... checkout -r BRANCHNAME pgsql
</PRE>
<P>For example, just a couple days ago I did</P>
<PRE>
mkdir ~postgres/REL7_1
cd ~postgres/REL7_1
cvs ... checkout -r REL7_1_STABLE pgsql
</PRE>
<P>and now I have a maintenance copy of 7.1.*.</P>
<P>When you've done a checkout in this way, the branch name is
"sticky": CVS automatically knows that this directory tree is for
the branch, and whenever you do "cvs update" or "cvs commit" in
this tree, you'll fetch or store the latest version in the branch,
not the head version. Easy as can be.</P>
<P>So, if you have a patch that needs to apply to both the head and
a recent stable branch, you have to make the edits and do the
commit twice, once in your development tree and once in your stable
branch tree. This is kind of a pain, which is why we don't normally
fork the tree right away after a major release --- we wait for a
dot-release or two, so that we won't have to double-patch the first
wave of fixes.</P>
<CENTER>
<H2>Technical Questions</H2>
</CENTER>
<H3><A name="9">9</A>) How do I efficiently access information in
<H3><A name="2.1">2.1</A>) How do I efficiently access information in
tables from the backend code?</H3>
<P>You first need to find the tuples (rows) you are interested in.
......@@ -460,330 +628,172 @@
<I>ReleaseBuffer()</I>, in the <I>heap_fetch()</I> case. Or it may
be a palloc'ed tuple that you must <I>pfree()</I> when finished.
<H3><A name="10">10</A>) What is elog()?</H3>
<H3><A name="2.2">2.2</A>) Why are table, column, type, function, view
names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
sometimes as <I>char *?</I></H3>
<P><I>elog()</I> is used to send messages to the front-end, and
optionally terminate the current query being processed. The first
parameter is an elog level of <I>NOTICE,</I> <I>DEBUG,</I>
<I>ERROR,</I> or <I>FATAL.</I> <I>NOTICE</I> prints on the user's
terminal and the postmaster logs. <I>DEBUG</I> prints only in the
postmaster logs. <I>ERROR</I> prints in both places, and terminates
the current query, never returning from the call. <I>FATAL</I>
terminates the backend process. The remaining parameters of
<I>elog</I> are a <I>printf</I>-style set of parameters to
print.</P>
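<P>As a rough, illustrative fragment (not a complete compilable
unit; the variables <I>relation</I> and <I>relname</I> are assumed
for the example), typical calls look like this:</P>
<PRE>
<CODE>/* Report an error and abort the current query; elog() does not
   return when the level is ERROR (or FATAL). */
if (relation == NULL)
    elog(ERROR, "cannot open relation \"%s\"", relname);

/* A NOTICE is sent to the client and logged, and execution
   continues normally afterward. */
elog(NOTICE, "skipping relation \"%s\"", relname);
</CODE>
</PRE>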
<P>Table, column, type, function, and view names are stored in
system tables in columns of type <I>Name.</I> Name is a
fixed-length, null-terminated type of <I>NAMEDATALEN</I> bytes.
(The default value for NAMEDATALEN is 32 bytes.)</P>
<PRE>
<CODE>typedef struct nameData
{
char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
</CODE>
</PRE>
Table, column, type, function, and view names that come into the
backend via user queries are stored as variable-length,
null-terminated character strings.
<H3><A name="11">11</A>) What is configure all about?</H3>
<P>Many functions are called with both types of names, e.g.
<I>heap_open().</I> Because the Name type is null-terminated, it is
safe to pass it to a function expecting a char *. Because on-disk
names (Name) are so often compared to user-supplied names (char *),
Name and char * are used
interchangeably.</P>
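<P>Here is a minimal, self-contained sketch of that
interchangeability, using the definitions above;
<I>show_relation()</I> is just a made-up stand-in for any routine
that takes a plain C string:</P>
<PRE>
<CODE>#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

#define NAMEDATALEN 32          /* the default mentioned above */

typedef struct nameData
{
    char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;

/* Any routine written for ordinary null-terminated strings... */
static void
show_relation(const char *relname)
{
    printf("opening relation \"%s\"\n", relname);
}

int
main(void)
{
    NameData    ondisk;

    /* Name values are fixed length, but null-terminated... */
    strncpy(ondisk.data, "pg_class", NAMEDATALEN - 1);
    ondisk.data[NAMEDATALEN - 1] = '\0';

    /* ...so it is safe to hand them to char * functions. */
    show_relation(ondisk.data);
    return 0;
}
</CODE>
</PRE>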
<P>The files <I>configure</I> and <I>configure.in</I> are part of
the GNU <I>autoconf</I> package. Configure allows us to test for
various capabilities of the OS, and to set variables that can then
be tested in C programs and Makefiles. Autoconf is installed on the
PostgreSQL main server. To add options to configure, edit
<I>configure.in,</I> and then run <I>autoconf</I> to generate
<I>configure.</I></P>
<P>When <I>configure</I> is run by the user, it tests various OS
capabilities, stores those in <I>config.status</I> and
<I>config.cache,</I> and modifies a list of <I>*.in</I> files. For
example, if there exists a <I>Makefile.in,</I> configure generates
a <I>Makefile</I> that contains substitutions for all @var@
parameters found by configure.</P>
<P>When you need to edit files, make sure you don't waste time
modifying files generated by <I>configure.</I> Edit the <I>*.in</I>
file, and re-run <I>configure</I> to recreate the needed file. If
you run <I>make distclean</I> from the top-level source directory,
all files derived by configure are removed, so you see only the
files contained in the source distribution.</P>
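<P>As a hedged illustration of how those test results are consumed
(the macro name below is only an example of the HAVE_... style;
check the generated <I>config.h</I> for the real names), C code
simply tests the symbols that configure writes out:</P>
<PRE>
<CODE>#include "config.h"             /* generated by configure */

#ifdef HAVE_GETRUSAGE
    /* the OS provides getrusage(); call it directly */
#else
    /* fall back to a replacement implementation */
#endif
</CODE>
</PRE>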
<H3><A name="12">12</A>) How do I add a new port?</H3>
<P>There are a variety of places that need to be modified to add a
new port. First, start in the <I>src/template</I> directory. Add an
appropriate entry for your OS. Also, use <I>src/config.guess</I> to
add your OS to <I>src/template/.similar.</I> You shouldn't match
the OS version exactly. The <I>configure</I> test will look for an
exact OS version number, and if not found, find a match without
version number. Edit <I>src/configure.in</I> to add your new OS.
(See configure item above.) You will need to run autoconf, or patch
<I>src/configure</I> too.</P>
<P>Then, check <I>src/include/port</I> and add your new OS file,
with appropriate values. Hopefully, there is already locking code
in <I>src/include/storage/s_lock.h</I> for your CPU. There is also
a <I>src/makefiles</I> directory for port-specific Makefile
handling. There is a <I>backend/port</I> directory if you need
special files for your OS.</P>
<H3><A name="13">13</A>) What is CommandCounterIncrement()?</H3>
<P>Normally, transactions cannot see the rows they modify. This
allows <CODE>UPDATE foo SET x = x + 1</CODE> to work correctly.</P>
<P>However, there are cases where a transaction needs to see rows
affected in previous parts of the transaction. This is accomplished
using a Command Counter. Incrementing the counter allows
transactions to be broken into pieces so each piece can see rows
modified by previous pieces. <I>CommandCounterIncrement()</I>
increments the Command Counter, creating a new part of the
transaction.</P>
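<P>A hedged fragment of how backend code typically uses this (the
variables <I>rel</I> and <I>tup</I> are assumed to be set up
elsewhere):</P>
<PRE>
<CODE>/* Insert a tuple that later steps of this same transaction
   must be able to see... */
heap_insert(rel, tup);

/* ...then bump the command counter so the new tuple becomes
   visible to subsequent scans in this transaction. */
CommandCounterIncrement();
</CODE>
</PRE>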
<H3><A name="14">14</A>) Why don't we use threads in the
backend?</H3>
<P>There are several reasons threads are not used:</P>
<UL>
<LI>Historically, threads were unsupported and buggy.</LI>
<LI>An error in one backend can corrupt other backends.</LI>
<LI>Speed improvements using threads are small compared to the
remaining backend startup time.</LI>
<LI>The backend code would be more complex.</LI>
</UL>
<H3><A name="15">15</A>) How are RPM's packaged?</H3>
<P>This was written by Lamar Owen:</P>
<P>2001-05-03</P>
<P>As to how the RPMs are built -- to answer that question sanely
requires me to know how much experience you have with the whole RPM
paradigm. 'How is the RPM built?' is a multifaceted question. The
obvious simple answer is that I maintain:</P>
<OL>
<LI>A set of patches to make certain portions of the source tree
'behave' in the different environment of the RPMset;</LI>
<H3><A name="2.3">2.3</A>) Why do we use <I>Node</I> and <I>List</I> to
make data structures?</H3>
<LI>The initscript;</LI>
<P>We do this because it gives us a consistent, flexible way to pass
data around inside the backend. Every node has a
<I>NodeTag</I> which specifies what type of data is inside the
Node. <I>Lists</I> are groups of <I>Nodes</I> chained together as a
forward-linked list.</P>
<LI>Any other ancillary scripts and files;</LI>
<P>Here are some of the <I>List</I> manipulation commands:</P>
<LI>A README.rpm-dist document that tries to adequately document
both the differences between the RPM build and the WHY of the
differences, as well as useful RPM environment operations (like,
using syslog, upgrading, getting postmaster to start at OS boot,
etc);</LI>
<BLOCKQUOTE>
<DL>
<DT>lfirst(i)</DT>
<LI>The spec file that throws it all together. This is not a
trivial undertaking in a package of this size.</LI>
</OL>
<DD>return the data at list element <I>i.</I></DD>
<P>I then download and build on as many different canonical
distributions as I can -- currently I am able to build on Red Hat
6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive the
opportunity from certain commercial enterprises such as Great
Bridge and PostgreSQL, Inc. to build on other distributions.</P>
<DT>lnext(i)</DT>
<P>I test the build by installing the resulting packages and
running the regression tests. Once the build passes these tests, I
upload to the postgresql.org ftp server and make a release
announcement. I am also responsible for maintaining the RPM
download area on the ftp site.</P>
<DD>return the next list element after <I>i.</I></DD>
<P>You'll notice I said 'canonical' distributions above. That
simply means that the machine is as stock 'out of the box' as
practical -- that is, everything (except select few programs) on
these boxen are installed by RPM; only official Red Hat released
RPMs are used (except in unusual circumstances involving software
that will not alter the build -- for example, installing a newer
non-RedHat version of the Dia diagramming package is OK --
installing Python 2.1 on the box that has Python 1.5.2 installed is
not, as that alters the PostgreSQL build). The RPM as uploaded is
built to as close to out-of-the-box pristine as is possible. Only
the standard released 'official to that release' compiler is used
-- and only the standard official kernel is used as well.</P>
<DT>foreach(i, list)</DT>
<P>For a time I built on Mandrake for RedHat consumption -- no
more. Nonstandard RPM building systems are worse than useless.
Which is not to say that Mandrake is useless! By no means is
Mandrake useless -- unless you are building Red Hat RPMs -- and Red
Hat is useless if you're trying to build Mandrake or SuSE RPMs, for
that matter. But I would be foolish to use 'Lamar Owen's Super
Special RPM Blend Distro 0.1.2' to build for public consumption!
:-)</P>
<DD>
loop through <I>list,</I> assigning each list element to
<I>i.</I> It is important to note that <I>i</I> is a List *,
not the data in the <I>List</I> element. You need to use
<I>lfirst(i)</I> to get at the data. Here is a typical code
snippet that loops through a List containing <I>Var *</I>s
and processes each one:
<PRE>
<CODE>List *i, *list;
foreach(i, list)
{
Var *var = lfirst(i);
<P>I _do_ attempt to make the _source_ RPM compatible with as many
distributions as possible -- however, since I have limited
resources (as a volunteer RPM maintainer) I am limited as to the
amount of testing said build will get on other distributions,
architectures, or systems.</P>
/* process var here */
}
</CODE>
</PRE>
</DD>
<P>And, while I understand people's desire to immediately upgrade
to the newest version, realize that I do this as a side interest --
I have a regular, full-time job as a broadcast
engineer/webmaster/sysadmin/Technical Director which occasionally
prevents me from making timely RPM releases. This happened during
the early part of the 7.1 beta cycle -- but I believe I was pretty
much on the ball for the Release Candidates and the final
release.</P>
<DT>lcons(node, list)</DT>
<P>I am working towards a more open RPM distribution -- I would
dearly love to more fully document the process and put everything
into CVS -- once I figure out how I want to represent things such
as the spec file in a CVS form. It makes no sense to maintain a
changelog, for instance, in the spec file in CVS when CVS does a
better job of changelogs -- I will need to write a tool to generate
a real spec file from a CVS spec-source file that would add version
numbers, changelog entries, etc to the result before building the
RPM. IOW, I need to rethink the process -- and then go through the
motions of putting my long RPM history into CVS one version at a
time so that version history information isn't lost.</P>
<DD>add <I>node</I> to the front of <I>list,</I> or create a
new list with <I>node</I> if <I>list</I> is <I>NIL.</I></DD>
<P>As to why all these files aren't part of the source tree, well,
unless there was a large cry for it to happen, I don't believe it
should. PostgreSQL is very platform-agnostic -- and I like that.
Including the RPM stuff as part of the Official Tarball (TM) would,
IMHO, slant that agnostic stance in a negative way. But maybe I'm
too sensitive to that. I'm not opposed to doing that if that is the
consensus of the core group -- and that would be a sneaky way to
get the stuff into CVS :-). But if the core group isn't thrilled
with the idea (and my instinct says they're not likely to be), I am
opposed to the idea -- not to keep the stuff to myself, but to not
hinder the platform-neutral stance. IMHO, of course.</P>
<DT>lappend(list, node)</DT>
<P>Of course, there are many projects that DO include all the files
necessary to build RPMs from their Official Tarball (TM).</P>
<DD>add <I>node</I> to the end of <I>list.</I> This is more
expensive than lcons. (Both are shown in the sketch that follows
this list.)</DD>
<H3><A name="16">16</A>) How are CVS branches managed?</H3>
<DT>nconc(list1, list2)</DT>
<P>This was written by Tom Lane:</P>
<DD>Concatenate <I>list2</I> onto the end of <I>list1.</I></DD>
<P>2001-05-07</P>
<DT>length(list)</DT>
<P>If you just do basic "cvs checkout", "cvs update", "cvs commit",
then you'll always be dealing with the HEAD version of the files in
CVS. That's what you want for development, but if you need to patch
past stable releases then you have to be able to access and update
the "branch" portions of our CVS repository. We normally fork off a
branch for a stable release just before starting the development
cycle for the next release.</P>
<DD>return the length of the <I>list.</I></DD>
<P>The first thing you have to know is the branch name for the
branch you are interested in getting at. To do this, look at some
long-lived file, say the top-level HISTORY file, with "cvs status
-v" to see what the branch names are. (Thanks to Ian Lance Taylor
for pointing out that this is the easiest way to do it.) Typical
branch names are:</P>
<PRE>
REL7_1_STABLE
REL7_0_PATCHES
REL6_5_PATCHES
</PRE>
<DT>nth(i, list)</DT>
<P>OK, so how do you do work on a branch? By far the best way is to
create a separate checkout tree for the branch and do your work in
that. Not only is that the easiest way to deal with CVS, but you
really need to have the whole past tree available anyway to test
your work. (And you *better* test your work. Never forget that
dot-releases tend to go out with very little beta testing --- so
whenever you commit an update to a stable branch, you'd better be
doubly sure that it's correct.)</P>
<DD>return the <I>i</I>'th element in <I>list.</I></DD>
<P>Normally, to checkout the head branch, you just cd to the place
you want to contain the toplevel "pgsql" directory and say</P>
<PRE>
cvs ... checkout pgsql
</PRE>
<DT>lconsi, ...</DT>
<P>To get a past branch, you cd to wherever you want it and
say</P>
<DD>There are integer versions of these: <I>lconsi, lappendi,
nthi.</I> <I>Lists</I> containing integers instead of Node
pointers are used to hold lists of relation object ids and
other integer quantities.</DD>
</DL>
</BLOCKQUOTE>
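<P>Here is a small sketch that pulls several of these primitives
together; it is an illustrative fragment, and the <I>Var *</I>
variables <I>var0</I> and <I>var1</I> are assumed to exist
already:</P>
<PRE>
<CODE>List   *vars = NIL;
List   *i;

vars = lappend(vars, var1);     /* add to the end of the list */
vars = lcons(var0, vars);       /* push onto the front */

foreach(i, vars)                /* visits var0, then var1 */
{
    Var    *var = lfirst(i);

    /* process var here */
}
</CODE>
</PRE>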
You can print nodes easily inside <I>gdb.</I> First, to disable
output truncation when you use the gdb <I>print</I> command:
<PRE>
cvs ... checkout -r BRANCHNAME pgsql
<CODE>(gdb) set print elements 0
</CODE>
</PRE>
<P>For example, just a couple days ago I did</P>
Instead of printing values in gdb format, you can use the next two
commands to print out List, Node, and structure contents in a
verbose format that is easier to understand. Lists are unrolled
into nodes, and nodes are printed in detail. The first prints in a
short format, and the second in a long format:
<PRE>
mkdir ~postgres/REL7_1
cd ~postgres/REL7_1
cvs ... checkout -r REL7_1_STABLE pgsql
<CODE>(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
</CODE>
</PRE>
The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
<P>and now I have a maintenance copy of 7.1.*.</P>
<P>When you've done a checkout in this way, the branch name is
"sticky": CVS automatically knows that this directory tree is for
the branch, and whenever you do "cvs update" or "cvs commit" in
this tree, you'll fetch or store the latest version in the branch,
not the head version. Easy as can be.</P>
<P>So, if you have a patch that needs to apply to both the head and
a recent stable branch, you have to make the edits and do the
commit twice, once in your development tree and once in your stable
branch tree. This is kind of a pain, which is why we don't normally
fork the tree right away after a major release --- we wait for a
dot-release or two, so that we won't have to double-patch the first
wave of fixes.</P>
<H3><A name="17">17</A>) How go I get involved in PostgreSQL
development?</H3>
<P>This was written by Lamar Owen:</P>
<P>2001-06-22</P>
<B>What open source development process is used by the PostgreSQL
team?</B>
<P>Read HACKERS for six months (or a full release cycle, whichever
is longer). Really. HACKERS _is_ the process. The process is not
well documented (AFAIK -- it may be somewhere that I am not aware
of) -- and it changes continually.</P>
<H3><A name="2.4">2.4</A>) I just added a field to a structure. What
else should I do?</H3>
<B>What development environment (OS, system, compilers, etc) is
required to develop code?</B>
<P>The structures passing around from the parser, rewrite,
optimizer, and executor require quite a bit of support. Most
structures have support routines in <I>src/backend/nodes</I> used
to create, copy, read, and output those structures. Make sure you
add support for your new field to these files. Find any other
places the structure may need code for your new field. <I>mkid</I>
is helpful with this (see above).</P>
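<P>As a hedged illustration (the node type <I>MyNode</I> and its
fields are made up for the example; the real support routines live
in <I>copyfuncs.c, equalfuncs.c, outfuncs.c,</I> and
<I>readfuncs.c</I>), adding a field usually means a one-line change
in each support function:</P>
<PRE>
<CODE>/* in copyfuncs.c: copy the new field along with the old ones */
static MyNode *
_copyMyNode(MyNode *from)
{
    MyNode     *newnode = makeNode(MyNode);

    newnode->existing_field = from->existing_field;
    newnode->new_field = from->new_field;       /* don't forget this */

    return newnode;
}
</CODE>
</PRE>
<P>Similar additions are needed in the equal, out, and read
functions for that node type.</P>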
<P><A href="developers.postgresql.org">Developers Corner</A> on the
website has links to this information. The distribution tarball
itself includes all the extra tools and documents that go beyond a
good Unix-like development environment. In general, a modern unix
with a modern gcc, GNU make or equivalent, autoconf (of a
particular version), and good working knowledge of those tools are
required.</P>
<H3><A name="2.5">2.5</A>) Why do we use <I>palloc</I>() and
<I>pfree</I>() to allocate memory?</H3>
<B>What areas need support?</B>
<P><I>palloc()</I> and <I>pfree()</I> are used in place of malloc()
and free() because we automatically free all memory allocated when
a transaction completes. This makes it easier to make sure we free
memory that gets allocated in one place, but only freed much later.
There are several contexts that memory can be allocated in, and
this controls when the allocated memory is automatically freed by
the backend.</P>
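<P>A hedged fragment showing the usual pattern (<I>len</I> is
assumed to be defined nearby):</P>
<PRE>
<CODE>char   *buf = (char *) palloc(len + 1);

/* ... fill in and use buf ... */

/* Freeing explicitly is optional here: even if we forget, the
   memory is released when the surrounding memory context is reset,
   e.g. at transaction end. */
pfree(buf);
</CODE>
</PRE>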
<P>The TODO list.</P>
<H3><A name="2.6">2.6</A>) What is elog()?</H3>
<P>You've made the first step, by finding and subscribing to
HACKERS. Once you find an area to look at in the TODO, and have
read the documentation on the internals, etc, then you check out a
current CVS, write what you are going to write (keeping your CVS
checkout up to date in the process), and make up a patch (as a
context diff only) and send it, preferably to the PATCHES list.</P>
<P><I>elog()</I> is used to send messages to the front-end, and
optionally terminate the current query being processed. The first
parameter is an elog level of <I>NOTICE,</I> <I>DEBUG,</I>
<I>ERROR,</I> or <I>FATAL.</I> <I>NOTICE</I> prints on the user's
terminal and the postmaster logs. <I>DEBUG</I> prints only in the
postmaster logs. <I>ERROR</I> prints in both places, and terminates
the current query, never returning from the call. <I>FATAL</I>
terminates the backend process. The remaining parameters of
<I>elog</I> are a <I>printf</I>-style set of parameters to
print.</P>
<P>Discussion on the patch typically happens here. If the patch
adds a major feature, it would be a good idea to talk about it
first on the HACKERS list, in order to increase the chances of it
being accepted, as well as to avoid duplication of effort. Note that
experienced developers with a proven track record usually get the
big jobs -- for more than one reason. Also note that PostgreSQL is
highly portable -- nonportable code will likely be dismissed out of
hand.</P>
<H3><A name="2.7">2.7</A>) What is CommandCounterIncrement()?</H3>
<P>Once your contributions get accepted, things move from there.
Typically, you would be added as a developer on the list on the
website when one of the other developers recommends it. Membership
on the steering committee is by invitation only, by the other
steering committee members, from what I have gathered watching
from a distance.</P>
<P>Normally, transactions cannot see the rows they modify. This
allows <CODE>UPDATE foo SET x = x + 1</CODE> to work correctly.</P>
<P>I make these statements from having watched the process for over
two years.</P>
<P>However, there are cases where a transaction needs to see rows
affected in previous parts of the transaction. This is accomplished
using a Command Counter. Incrementing the counter allows
transactions to be broken into pieces so each piece can see rows
modified by previous pieces. <I>CommandCounterIncrement()</I>
increments the Command Counter, creating a new part of the
transaction.</P>
<P>To see a good example of how one goes about this, search the
archives for the name 'Tom Lane' and see what his first post
consisted of, and where he took things. In particular, note that
this wasn't _that_ long ago -- and his bugfixing and general
deep knowledge of this codebase are legendary. Take a few days to
read through his posts. And pay special attention to both the sheer
quantity as well as the painstaking quality of his work. Both are
in high demand.</P>
</BODY>
</HTML>