
          Developer's Frequently Asked Questions (FAQ) for PostgreSQL
                                       
   Last updated: Tue Jul 11 13:01:46 EDT 2006
   
   Current maintainer: Bruce Momjian (bruce@momjian.us)
   
   The most recent version of this document can be viewed at
   http://www.postgresql.org/files/documentation/faqs/FAQ_DEV.html.
     _________________________________________________________________
   
General Questions

   1.1) How do I get involved in PostgreSQL development?
   1.2) What development environment is required to develop code?
   1.3) What areas need work?
   1.4) What do I do after choosing an item to work on?
   1.5) I've developed a patch, what next?
   1.6) Where can I learn more about the code?
   1.7) How do I download/update the current source tree?
   1.8) How do I test my changes?
   1.9) What tools are available for developers?
   1.10) What books are good for developers?
   1.11) What is configure all about?
   1.12) How do I add a new port?
   1.13) Why don't you use threads, raw devices, async-I/O, <insert your
   favorite whiz-bang feature here>?
   1.14) How are RPMs packaged?
   1.15) How are CVS branches managed?
   1.16) Where can I get a copy of the SQL standards?
   1.17) Where can I get technical assistance?
   1.18) How do I get involved in PostgreSQL web site development?
   
Technical Questions

   2.1) How do I efficiently access information in tables from the
   backend code?
   2.2) Why are table, column, type, function, view names sometimes
   referenced as Name or NameData, and sometimes as char *?
   2.3) Why do we use Node and List to make data structures?
   2.4) I just added a field to a structure. What else should I do?
   2.5) Why do we use palloc() and pfree() to allocate memory?
   2.6) What is ereport()?
   2.7) What is CommandCounterIncrement()?
   2.8) What debugging features are available?
     _________________________________________________________________
   
General Questions

  1.1) How do I get involved in PostgreSQL development?
  
   Download the code and have a look around. See 1.7.
   
   Subscribe to and read the pgsql-hackers mailing list (often termed
   'hackers'). This is where the major contributors and core members of
   the project discuss development.
   
  1.2) What development environment is required to develop code?
  
   PostgreSQL is developed mostly in the C programming language. It also
   makes use of Yacc and Lex.
   
   The source code is targeted at most of the popular Unix platforms and
   the Windows environment (XP, Windows 2000, and up).
   
   Most developers make use of the open source development tool chain. If
   you have contributed to open source software before, you will probably
   be familiar with these tools. They include: GCC (http://gcc.gnu.org),
   GDB (www.gnu.org/software/gdb/gdb.html), autoconf
   (www.gnu.org/software/autoconf/), and GNU make
   (www.gnu.org/software/make/make.html).
   
   Developers using this tool chain on Windows make use of MinGW (see
   http://www.mingw.org/).
   
   Some developers use compilers from other software vendors with mixed
   results.
   
   Developers who are regularly rebuilding the source often pass the
   --enable-depend flag to configure. The result is that when you make a
   modification to a C header file, all files that depend on that file
   are also rebuilt.
   
  1.3) What areas need work?
  
   Outstanding features are detailed in the TODO list. This is located in
   doc/TODO in the source distribution or at
   http://www.postgresql.org/docs/faqs.TODO.html.
   
   You can learn more about these features by consulting the archives,
   the SQL standards and the recommended texts (see 1.10).
   
  1.4) What do I do after choosing an item to work on?
  
   Send an email to pgsql-hackers with a proposal for what you want to do
   (assuming your contribution is not trivial). Working in isolation is
   not advisable because others might be working on the same TODO item,
   or you might have misunderstood the TODO item. In the email, discuss
   both the internal implementation method you plan to use, and any
   user-visible changes (new syntax, etc). For complex patches, it is
   important to get community feedback on your proposal before starting
   work. Failure to do so might mean your patch is rejected.
   
   A web site is maintained for patches awaiting review,
   http://momjian.postgresql.org/cgi-bin/pgpatches, and those that are
   being kept for the next release,
   http://momjian.postgresql.org/cgi-bin/pgpatches2.
   
  1.5) I've developed a patch, what next?
  
   You will need to submit the patch to pgsql-patches@postgresql.org. It
   will be reviewed by other contributors to the project and will be
   either accepted or sent back for further work. To help ensure your
   patch is reviewed and committed in a timely fashion, please try to
   make sure your submission conforms to the following guidelines:
    1. Ensure that your patch is generated against the most recent
       version of the code, which for developers is CVS HEAD. For more on
       branches in PostgreSQL, see 1.15.
    2. Try to make your patch as readable as possible by following the
       project's code-layout conventions. This makes it easier for the
        reviewer, and there's no point in trying to lay out things
       differently than pgindent. Also avoid unnecessary whitespace
       changes because they just distract the reviewer, and formatting
       changes will be removed by the next run of pgindent.
     3. The patch should be generated in contextual diff format (diff -c)
        and should be applicable from the root directory. If you are
       unfamiliar with this, you might find the script
       src/tools/makediff/difforig useful. (Unified diffs are only
       preferable if the file changes are single-line changes and do not
       rely on surrounding lines.)
    4. PostgreSQL is licensed under a BSD license, so any submissions
       must conform to the BSD license to be included. If you use code
       that is available under some other license that is BSD compatible
        (e.g., public domain), please point that out in your email
        submission.
    5. Confirm that your changes can pass the regression tests. If your
       changes are port specific, please list the ports you have tested
       it on.
    6. Provide an implementation overview, preferably in code comments.
       Following the surrounding code commenting style is usually a good
       approach.
    7. New feature patches should also be accompanied by documentation
       patches. If you need help checking the SQL standard, see 1.16.
    8. If you are adding a new feature, confirm that it has been tested
        thoroughly. Try to test the feature in all conceivable scenarios.
    9. If it is a performance patch, please provide confirming test
       results to show the benefit of your patch. It is OK to post
       patches without this information, though the patch will not be
       applied until somebody has tested the patch and found a
       significant performance improvement.
       
   Even if you pass all of the above, the patch might still be rejected
   for other reasons. Please be prepared to listen to comments and make
   modifications.
   
   You will be notified via email when the patch is applied, and your
   name will appear in the next version of the release notes.
   
  1.6) Where can I learn more about the code?
  
   Other than documentation in the source tree itself, you can find some
   papers/presentations discussing the code at
   http://www.postgresql.org/developer. An excellent presentation is at
   http://neilconway.org/talks/hacking/
   
  1.7) How do I download/update the current source tree?
  
   There are several ways to obtain the source tree. Occasional
   developers can just get the most recent source tree snapshot from
   ftp://ftp.postgresql.org.
   
   Regular developers might want to take advantage of anonymous access to
   our source code management system. The source tree is currently hosted
   in CVS. For details of how to obtain the source from CVS see
   http://developer.postgresql.org/docs/postgres/cvs.html.
   
  1.8) How do I test my changes?
  
   Basic system testing
   
   The easiest way to test your code is to ensure that it builds against
   the latest version of the code and that it does not generate compiler
   warnings.
   
   It is also a good idea to pass --enable-cassert to configure. This
   turns on assertions within the source code, which often reveal bugs
   because they cause data corruption or segmentation violations. This
   generally makes debugging much easier.
   
   Then, perform run time testing via psql.
   
   Regression test suite
   
   The next step is to test your changes against the existing regression
   test suite. To do this, issue "make check" in the root directory of
   the source tree. If any tests fail, investigate.
   
   If you've deliberately changed existing behavior, this change might
   cause a regression test failure but not any actual regression. If so,
   you should also patch the regression test suite.
   
   Other run time testing
   
   Some developers make use of tools such as valgrind
   (http://valgrind.kde.org) for memory testing, gprof (which comes with
   the GNU binutils suite) and oprofile
   (http://oprofile.sourceforge.net/) for profiling and other related
   tools.
   
   What about unit testing, static analysis, model checking...?
   
   There have been a number of discussions about other testing frameworks
   and some developers are exploring these ideas.
   
   Keep in mind the Makefiles do not have the proper dependencies for
   include files. You have to do a make clean and then another make. If
   you are using GCC you can use the --enable-depend option of configure
   to have the compiler compute the dependencies automatically.
   
  1.9) What tools are available for developers?
  
   First, all the files in the src/tools directory are designed for
   developers.
    RELEASE_CHANGES changes we have to make for each release
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    copyright       fixes copyright notices
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    fsync           a script to provide information about the cost of cache
                     syncing system calls
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    pgcvslog        used to generate a list of changes for each release
    pginclude       scripts for adding/removing include files
    pgindent        indents source files
    pgtest          a semi-automated build system
    thread          a thread testing script

   In src/include/catalog:
    unused_oids     a script which generates unused OIDs for use in system
                     catalogs
    duplicate_oids  finds duplicate OIDs in system catalog definitions

   If you point your browser at the tools/backend/index.html file, you
   will see a few paragraphs describing the data flow, the backend
   components in a flow chart, and a description of the shared memory
   area. You can click on any flowchart box to see a description. If you
   then click on the directory name, you will be taken to the source
   directory, to browse the actual source code behind it. We also have
   several README files in some source directories to describe the
   function of the module. The browser will display these when you enter
   the directory also. The tools/backend directory is also contained on
   our web page under the title How PostgreSQL Processes a Query.
   
   Second, you really should have an editor that can handle tags, so you
   can tag a function call to see the function definition, and then tag
   inside that function to see an even lower-level function, and then
   back out twice to return to the original function. Most editors
   support this via tags or etags files.
   
   Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/
   
   By running tools/make_mkid, an archive of source symbols can be
   created that can be rapidly queried.
   
   Some developers make use of cscope, which can be found at
   http://cscope.sf.net/. Others use glimpse, which can be found at
   http://webglimpse.net/.
   
   tools/make_diff has tools to create patch diff files that can be
   applied to the distribution. This produces context diffs, which is our
   preferred format.
   
   Our standard format is to indent each code level with one tab, where
   each tab is four spaces. You will need to set your editor to display
   tabs as four spaces:
    vi in ~/.exrc:
            set tabstop=4
            set sw=4
    more:
            more -x4
    less:
            less -x4
    emacs:
        M-x set-variable tab-width

        or

        (c-add-style "pgsql"
                '("bsd"
                        (indent-tabs-mode . t)
                        (c-basic-offset   . 4)
                        (tab-width . 4)
                        (c-offsets-alist .
                                ((case-label . +)))
                )
                nil ) ; t = set this style, nil = don't

        (defun pgsql-c-mode ()
                (c-mode)
                (c-set-style "pgsql")
        )

        and add this to your autoload list (modify file path in macro):

        (setq auto-mode-alist
                (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
                auto-mode-alist))
        or
            /*
             * Local variables:
             *  tab-width: 4
             *  c-indent-level: 4
             *  c-basic-offset: 4
             * End:
             */

   pgindent will format the code by specifying flags to your operating
   system's utility indent. This article describes the value of a
   consistent coding style.
   
   pgindent is run on all source files just before each beta test period.
   It auto-formats all source files to make them consistent. Comment
   blocks that need specific line breaks should be formatted as block
   comments, where the comment starts as /*------. These comments will
   not be reformatted in any way.
   
   pginclude contains scripts used to add needed #include's to include
   files, and remove unneeded #include's.
   
   When adding system types, you will need to assign OIDs to them. There
   is also a script called unused_oids in pgsql/src/include/catalog that
   shows the unused OIDs.
   
  1.10) What books are good for developers?
  
   I have four good books: An Introduction to Database Systems, by C.J.
   Date (Addison-Wesley); A Guide to the SQL Standard, by C.J. Date, et
   al. (Addison-Wesley); Fundamentals of Database Systems, by Elmasri and
   Navathe; and Transaction Processing, by Jim Gray (Morgan Kaufmann).
   
   There is also a database performance site, with a handbook on-line
   written by Jim Gray at http://www.benchmarkresources.com.
   
  1.11) What is configure all about?
  
   The files configure and configure.in are part of the GNU autoconf
   package. Configure allows us to test for various capabilities of the
   OS, and to set variables that can then be tested in C programs and
   Makefiles. Autoconf is installed on the PostgreSQL main server. To add
   options to configure, edit configure.in, and then run autoconf to
   generate configure.
   
   When configure is run by the user, it tests various OS capabilities,
   stores those in config.status and config.cache, and modifies a list of
   *.in files. For example, if there exists a Makefile.in, configure
   generates a Makefile that contains substitutions for all @var@
   parameters found by configure.
   
   When you need to edit files, make sure you don't waste time modifying
   files generated by configure. Edit the *.in file, and re-run configure
   to recreate the needed file. If you run make distclean from the
   top-level source directory, all files derived by configure are
   removed, so you see only the files contained in the source
   distribution.
   
  1.12) How do I add a new port?
  
   There are a variety of places that need to be modified to add a new
   port. First, start in the src/template directory. Add an appropriate
   entry for your OS. Also, use src/config.guess to add your OS to
   src/template/.similar. You shouldn't match the OS version exactly. The
   configure test will look for an exact OS version number, and if not
   found, find a match without version number. Edit src/configure.in to
   add your new OS. (See configure item above.) You will need to run
   autoconf, or patch src/configure too.
   
   Then, check src/include/port and add your new OS file, with
   appropriate values. Hopefully, there is already locking code in
386 387 388
   src/include/storage/s_lock.h for your CPU. There is also a
   src/makefiles directory for port-specific Makefile handling. There is
   a backend/port directory if you need special files for your OS.
   
  1.13) Why don't you use threads, raw devices, async-I/O, <insert your
  favorite whiz-bang feature here>?
  
   There is always a temptation to use the newest operating system
   features as soon as they arrive. We resist that temptation.
   
   First, we support 15+ operating systems, so any new feature has to be
   well established before we will consider it. Second, most new
   whiz-bang features don't provide dramatic improvements. Third, they
   usually have some downside, such as decreased reliability or
   additional code required. Therefore, we don't rush to use new features
   but rather wait for the feature to be established, then ask for
   testing to show that a measurable improvement is possible.
   
   As an example, threads are not currently used in the backend code
   because:
     * Historically, threads were unsupported and buggy.
     * An error in one backend can corrupt other backends.
     * Speed improvements using threads are small compared to the
       remaining backend startup time.
     * The backend code would be more complex.
       
   So, we are not ignorant of new features. It is just that we are
   cautious about their adoption. The TODO list often contains links to
   discussions showing our reasoning in these areas.
   
  1.14) How are RPMs packaged?
  
   This was written by Lamar Owen:
   
   2001-05-03
   
   As to how the RPMs are built -- to answer that question sanely
   requires me to know how much experience you have with the whole RPM
   paradigm. 'How is the RPM built?' is a multifaceted question. The
   obvious simple answer is that I maintain:
    1. A set of patches to make certain portions of the source tree
       'behave' in the different environment of the RPMset;
    2. The initscript;
    3. Any other ancillary scripts and files;
    4. A README.rpm-dist document that tries to adequately document both
       the differences between the RPM build and the WHY of the
       differences, as well as useful RPM environment operations (like,
       using syslog, upgrading, getting postmaster to start at OS boot,
       etc);
    5. The spec file that throws it all together. This is not a trivial
       undertaking in a package of this size.
       
   I then download and build on as many different canonical distributions
   as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1
   on my personal hardware. Occasionally I receive opportunity from
   certain commercial enterprises such as Great Bridge and PostgreSQL,
   Inc. to build on other distributions.
   
   I test the build by installing the resulting packages and running the
   regression tests. Once the build passes these tests, I upload to the
   postgresql.org ftp server and make a release announcement. I am also
   responsible for maintaining the RPM download area on the ftp site.
   
   You'll notice I said 'canonical' distributions above. That simply
   means that the machine is as stock 'out of the box' as practical --
   that is, everything (except select few programs) on these boxen are
   installed by RPM; only official Red Hat released RPMs are used (except
   in unusual circumstances involving software that will not alter the
   build -- for example, installing a newer non-RedHat version of the Dia
   diagramming package is OK -- installing Python 2.1 on the box that has
   Python 1.5.2 installed is not, as that alters the PostgreSQL build).
   The RPM as uploaded is built to as close to out-of-the-box pristine as
   is possible. Only the standard released 'official to that release'
   compiler is used -- and only the standard official kernel is used as
   well.
   
   For a time I built on Mandrake for RedHat consumption -- no more.
   Nonstandard RPM building systems are worse than useless. Which is not
   to say that Mandrake is useless! By no means is Mandrake useless --
   unless you are building Red Hat RPMs -- and Red Hat is useless if
   you're trying to build Mandrake or SuSE RPMs, for that matter. But I
   would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro
   0.1.2' to build for public consumption! :-)
   
   I _do_ attempt to make the _source_ RPM compatible with as many
   distributions as possible -- however, since I have limited resources
   (as a volunteer RPM maintainer) I am limited as to the amount of
   testing said build will get on other distributions, architectures, or
   systems.
   
   And, while I understand people's desire to immediately upgrade to the
   newest version, realize that I do this as a side interest -- I have a
   regular, full-time job as a broadcast
   engineer/webmaster/sysadmin/Technical Director which occasionally
   prevents me from making timely RPM releases. This happened during the
   early part of the 7.1 beta cycle -- but I believe I was pretty much on
   the ball for the Release Candidates and the final release.
   
   I am working towards a more open RPM distribution -- I would dearly
   love to more fully document the process and put everything into CVS --
   once I figure out how I want to represent things such as the spec file
   in a CVS form. It makes no sense to maintain a changelog, for
   instance, in the spec file in CVS when CVS does a better job of
   changelogs -- I will need to write a tool to generate a real spec file
   from a CVS spec-source file that would add version numbers, changelog
   entries, etc to the result before building the RPM. IOW, I need to
   rethink the process -- and then go through the motions of putting my
   long RPM history into CVS one version at a time so that version
   history information isn't lost.
   
   As to why all these files aren't part of the source tree, well, unless
   there was a large cry for it to happen, I don't believe it should.
   PostgreSQL is very platform-agnostic -- and I like that. Including the
   RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
   agnostic stance in a negative way. But maybe I'm too sensitive to
   that. I'm not opposed to doing that if that is the consensus of the
   core group -- and that would be a sneaky way to get the stuff into CVS
   :-). But if the core group isn't thrilled with the idea (and my
   instinct says they're not likely to be), I am opposed to the idea --
   not to keep the stuff to myself, but to not hinder the
   platform-neutral stance. IMHO, of course.
   
   Of course, there are many projects that DO include all the files
   necessary to build RPMs from their Official Tarball (TM).
   
  1.15) How are CVS branches managed?
  
   This was written by Tom Lane:
   
   2001-05-07
   
   If you just do basic "cvs checkout", "cvs update", "cvs commit", then
   you'll always be dealing with the HEAD version of the files in CVS.
   That's what you want for development, but if you need to patch past
   stable releases then you have to be able to access and update the
   "branch" portions of our CVS repository. We normally fork off a branch
   for a stable release just before starting the development cycle for
   the next release.
   
   The first thing you have to know is the branch name for the branch you
   are interested in getting at. To do this, look at some long-lived
   file, say the top-level HISTORY file, with "cvs status -v" to see what
   the branch names are. (Thanks to Ian Lance Taylor for pointing out
   that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES

   OK, so how do you do work on a branch? By far the best way is to
   create a separate checkout tree for the branch and do your work in
   that. Not only is that the easiest way to deal with CVS, but you
   really need to have the whole past tree available anyway to test your
   work. (And you *better* test your work. Never forget that dot-releases
   tend to go out with very little beta testing --- so whenever you
   commit an update to a stable branch, you'd better be doubly sure that
   it's correct.)
   
   Normally, to checkout the head branch, you just cd to the place you
   want to contain the toplevel "pgsql" directory and say
    cvs ... checkout pgsql

   To get a past branch, you cd to wherever you want it and say
    cvs ... checkout -r BRANCHNAME pgsql

   For example, just a couple days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql

   and now I have a maintenance copy of 7.1.*.
   
   When you've done a checkout in this way, the branch name is "sticky":
   CVS automatically knows that this directory tree is for the branch,
   and whenever you do "cvs update" or "cvs commit" in this tree, you'll
   fetch or store the latest version in the branch, not the head version.
   Easy as can be.
   
   So, if you have a patch that needs to apply to both the head and a
   recent stable branch, you have to make the edits and do the commit
   twice, once in your development tree and once in your stable branch
   tree. This is kind of a pain, which is why we don't normally fork the
   tree right away after a major release --- we wait for a dot-release or
   two, so that we won't have to double-patch the first wave of fixes.
   
  1.16) Where can I get a copy of the SQL standards?
  
   There are three versions of the SQL standard: SQL-92, SQL:1999, and
   SQL:2003. They are endorsed by ANSI and ISO. Draft versions can be
   downloaded from:
     * SQL-92 http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
     * SQL:1999
       http://www.cse.iitb.ac.in/dbms/Data/Papers-Other/SQL1999/ansi-iso-
       9075-2-1999.pdf
     * SQL:2003 http://www.wiscorp.com/sql_2003_standard.zip
       
   Some SQL standards web pages are:
     * http://troels.arvin.dk/db/rdbms/links/#standards
     * http://www.wiscorp.com/SQLStandards.html
     * http://www.contrib.andrew.cmu.edu/~shadow/sql.html#syntax (SQL-92)
     * http://dbs.uni-leipzig.de/en/lokal/standards.pdf (paper)
       
  1.17) Where can I get technical assistance?
  
   Many technical questions held by those new to the code have been
   answered on the pgsql-hackers mailing list; the archives can be found
   at http://archives.postgresql.org/pgsql-hackers/.
   
   If you cannot find a discussion of your particular question, feel free
   to put it to the list.
   
   Major contributors also answer technical questions, including
   questions about development of new features, on IRC at
   irc.freenode.net in the #postgresql channel.
   
  1.18) How do I get involved in PostgreSQL web site development?
  
   PostgreSQL website development is discussed on the
   pgsql-www@postgresql.org mailing list. There is a project page where
   the source code is available at
   http://gborg.postgresql.org/project/pgweb/projdisplay.php; the code
   for the next version of the website is under the "portal" module. You
   will also find code for the "techdocs" website if you would like to
   contribute to that. A temporary todo list for current website
   development issues is available at http://xzilla.postgresql.org/todo
   
Technical Questions

  2.1) How do I efficiently access information in tables from the backend code?
  
   You first need to find the tuples (rows) you are interested in. There
   are two ways. First, SearchSysCache() and related functions allow you
   to query the system catalogs. This is the preferred way to access
   system tables, because the first call to the cache loads the needed
   rows, and future requests can return the results without accessing the
   base table. The caches use system table indexes to look up tuples. A
   list of available caches is located in
   src/backend/utils/cache/syscache.c.
   src/backend/utils/cache/lsyscache.c contains many column-specific
   cache lookup functions.
   
   The rows returned are cache-owned versions of the heap rows.
   Therefore, you must not modify or delete the tuple returned by
   SearchSysCache(). What you should do is release it with
   ReleaseSysCache() when you are done using it; this informs the cache
   that it can discard that tuple if necessary. If you neglect to call
   ReleaseSysCache(), then the cache entry will remain locked in the
   cache until end of transaction, which is tolerable but not very
   desirable.
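
   As an illustration, here is a minimal sketch of such a lookup. It
   assumes a type OID in a variable typid and uses the TYPEOID cache; the
   available cache identifiers are listed in the syscache.c file
   mentioned above, and GETSTRUCT() is described later in this section:

    HeapTuple       tuple;
    Form_pg_type    typeForm;

    tuple = SearchSysCache(TYPEOID, ObjectIdGetDatum(typid), 0, 0, 0);
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for type %u", typid);
    typeForm = (Form_pg_type) GETSTRUCT(tuple);
    /* read fields such as typeForm->typlen here, but do not modify them */
    ReleaseSysCache(tuple);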
   
   If you can't use the system cache, you will need to retrieve the data
   directly from the heap table, using the buffer cache that is shared by
   all backends. The backend automatically takes care of loading the rows
   into the buffer cache.
   
   Open the table with heap_open(). You can then start a table scan with
   heap_beginscan(), then use heap_getnext() and continue as long as
   HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
   assigned to the scan. No indexes are used, so all rows are going to be
   compared to the keys, and only the valid rows returned.
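
   A rough sketch of such a scan (here over pg_class, assuming its OID
   macro RelationRelationId; scan keys and error handling are omitted):

    Relation        rel;
    HeapScanDesc    scan;
    HeapTuple       tuple;

    rel = heap_open(RelationRelationId, AccessShareLock);
    scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
    while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
    {
        /* process the tuple here; see GETSTRUCT() below for column access */
    }
    heap_endscan(scan);
    heap_close(rel, AccessShareLock);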
   
   You can also use heap_fetch() to fetch rows by block number/offset.
   While scans automatically lock/unlock rows from the buffer cache, with
   heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
   when completed.
   
   Once you have the row, you can get data that is common to all tuples,
   like t_self and t_oid, by merely accessing the HeapTuple structure
   entries. If you need a table-specific column, you should take the
   HeapTuple pointer, and use the GETSTRUCT() macro to access the
   table-specific start of the tuple. You then cast the pointer as a
   Form_pg_proc pointer if you are accessing the pg_proc table, or
   Form_pg_type if you are accessing pg_type. You can then access the
   columns by using a structure pointer:
((Form_pg_class) GETSTRUCT(tuple))->relnatts

   You must not directly change live tuples in this way. The best way is
   to use heap_modifytuple() and pass it your original tuple, and the
   values you want changed. It returns a palloc'ed tuple, which you pass
   to heap_update(). You can delete tuples by passing the tuple's t_self
   to heap_delete(). You use t_self for heap_update() too. Remember,
   tuples can be either system cache copies, which might go away after
   you call ReleaseSysCache(), or read directly from disk buffers, which
   go away when you heap_getnext(), heap_endscan(), or ReleaseBuffer(), in
   the heap_fetch() case. Or it might be a palloc'ed tuple that you must
   pfree() when finished.
   
  2.2) Why are table, column, type, function, view names sometimes referenced
  as Name or NameData, and sometimes as char *?
  
   Table, column, type, function, and view names are stored in system
   tables in columns of type Name. Name is a fixed-length,
   null-terminated type of NAMEDATALEN bytes. (The default value for
   NAMEDATALEN is 64 bytes.)
    typedef struct nameData
    {
        char        data[NAMEDATALEN];
    } NameData;

    typedef NameData *Name;

   Table, column, type, function, and view names that come into the
   backend via user queries are stored as variable-length,
   null-terminated character strings.
   
   Many functions are called with both types of names, e.g. heap_open().
   Because the Name type is null-terminated, it is safe to pass it to a
   function expecting a char *. Because there are many cases where
   on-disk names (Name) are compared to user-supplied names (char *), there
   are many cases where Name and char * are used interchangeably.
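
   As a small sketch of that interchangeability (NameStr() and
   namestrcpy() are existing backend helpers; the variable names here are
   made up):

    NameData    relname;
    char       *username = "customers";

    namestrcpy(&relname, username);                 /* char *  ->  Name   */
    if (strcmp(NameStr(relname), username) == 0)    /* Name    ->  char * */
    {
        /* the names match */
    }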
   
  2.3) Why do we use Node and List to make data structures?
  
   We do this because this allows a consistent way to pass data inside
   the backend in a flexible way. Every node has a NodeTag which
   specifies what type of data is inside the Node. Lists are groups of
   Nodes chained together as a forward-linked list.
   
   Here are some of the List manipulation commands:
   
   lfirst(i), lfirst_int(i), lfirst_oid(i)
          return the data (a pointer, integer, and OID respectively) at list
          element i.
          
   lnext(i)
          return the next list element after i.
          
   foreach(i, list)
          loop through list, assigning each list element to i. It is
          important to note that i is a List *, not the data in the List
          element. You need to use lfirst(i) to get at the data. Here is
          a typical code snippet that loops through a List containing Var
          *'s and processes each one:
          
    List        *list;
    ListCell    *i;

    foreach(i, list)
    {
        Var *var = lfirst(i);

        /* process var here */
    }

   lcons(node, list)
          add node to the front of list, or create a new list with node
          if list is NIL.
          
   lappend(list, node)
          add node to the end of list. This is more expensive than lcons.
          
   nconc(list1, list2)
          Concatenate list2 onto the end of list1.
          
   length(list)
          return the length of the list.
          
   nth(i, list)
          return the i'th element in list.
          
   lconsi, ...
          There are integer versions of these: lconsi, lappendi, etc.
          Also versions for OID lists: lconso, lappendo, etc.
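
   As a short sketch of building and walking a list (makeString() is an
   existing Value-node constructor; the strings here are invented):

    List       *names = NIL;
    ListCell   *cell;

    names = lappend(names, makeString("alpha"));
    names = lappend(names, makeString("beta"));

    foreach(cell, names)
    {
        Value  *v = (Value *) lfirst(cell);

        /* v->val.str is "alpha" on the first pass, then "beta" */
    }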
          
   You can print nodes easily inside gdb. First, to disable output
   truncation when you use the gdb print command:
(gdb) set print elements 0

   Instead of printing values in gdb format, you can use the next two
   commands to print out List, Node, and structure contents in a verbose
   format that is easier to understand. Lists are unrolled into nodes,
   and nodes are printed in detail. The first prints in a short format,
   and the second in a long format:
(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)

   The output appears in the postmaster log file, or on your screen if
   you are running a backend directly without a postmaster.
   
  2.4) I just added a field to a structure. What else should I do?
  
   The structures passed around among the parser, rewrite, optimizer, and
   executor require quite a bit of support. Most structures have support
   routines in src/backend/nodes used to create, copy, read, and output
   those structures (in particular, the files copyfuncs.c and
   equalfuncs.c). Make sure you add support for your new field to these
   files. Find any other places the structure might need code for your
   new field. mkid is helpful with this (see 1.9).
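
   As a rough, hypothetical sketch (the node and field names are
   invented; the macros follow the conventions already used in those
   files), adding an int field named cost_limit would mean adding lines
   like these to the existing support routines:

    /* in _copyMyNode() in src/backend/nodes/copyfuncs.c */
    COPY_SCALAR_FIELD(cost_limit);

    /* in _equalMyNode() in src/backend/nodes/equalfuncs.c */
    COMPARE_SCALAR_FIELD(cost_limit);

    /* and, if the node is printed and read back, in outfuncs.c/readfuncs.c */
    WRITE_INT_FIELD(cost_limit);
    READ_INT_FIELD(cost_limit);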
   
  2.5) Why do we use palloc() and pfree() to allocate memory?
  
   palloc() and pfree() are used in place of malloc() and free() because
   we find it easier to automatically free all memory allocated when a
   query completes. This assures us that all memory that was allocated
   gets freed even if we have lost track of where we allocated it. There
   are special non-query contexts that memory can be allocated in. These
   affect when the allocated memory is freed by the backend.
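
   A trivial sketch of the usual pattern (the allocation is made in the
   current memory context; the buffer contents are just an example):

    char   *buf;

    buf = (char *) palloc(256);
    snprintf(buf, 256, "some scratch data");
    /* ... use buf ... */
    pfree(buf);         /* optional; resetting the memory context at the
                         * end of the query would free it anyway */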
   
  2.6) What is ereport()?
  
   ereport() is used to send messages to the front-end, and optionally
   terminate the current query being processed. The first parameter is an
   ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL,
   or PANIC. NOTICE prints on the user's terminal and the postmaster
   logs. INFO prints only to the user's terminal and LOG prints only to
   the server logs. (These can be changed from postgresql.conf.) ERROR
   prints in both places, and terminates the current query, never
   returning from the call. FATAL terminates the backend process. The
   remaining parameters of ereport() are auxiliary function calls such as
   errcode() and errmsg(); errmsg() takes a printf-style format string
   and arguments to print.
   
   ereport(ERROR) frees most memory and open file descriptors so you
   don't need to clean these up before the call.
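
   A typical call looks like this (the error code, message text, and the
   level variable are only illustrative):

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("unrecognized compression level: %d", level)));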
   
  2.7) What is CommandCounterIncrement()?
  
   Normally, transactions can not see the rows they modify. This allows
   UPDATE foo SET x = x + 1 to work correctly.
   
   However, there are cases where a transaction needs to see rows
   affected in previous parts of the transaction. This is accomplished
   using a Command Counter. Incrementing the counter allows transactions
   to be broken into pieces so each piece can see rows modified by
   previous pieces. CommandCounterIncrement() increments the Command
   Counter, creating a new part of the transaction.
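
   As a rough sketch of the effect (simple_heap_insert() is just one way
   a row might be added; rel and tup are assumed to be set up already):

    simple_heap_insert(rel, tup);   /* not yet visible to our own scans */

    CommandCounterIncrement();      /* start a new command in this transaction */

    /* scans started from here on can see the row inserted above */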
   
  2.8) What debugging features are available?
  
   First, try running configure with the --enable-cassert option; many
   assert()s monitor the progress of the backend and halt the program
   when something unexpected occurs.
   
   The postmaster has a -d option that allows even more detailed
   information to be reported. The -d option takes a number that
   specifies the debug level. Be warned that high debug level values
   generate large log files.
   
   If the postmaster is not running, you can actually run the postgres
   backend from the command line, and type your SQL statement directly.
   This is recommended only for debugging purposes. If you have compiled
   with debugging symbols, you can use a debugger to see what is
   happening. Because the backend was not started from postmaster, it is
   not running in an identical environment and locking/backend
   interaction problems might not be duplicated.
   
   If the postmaster is running, start psql in one window, then find the
   PID of the postgres process used by psql using SELECT
   pg_backend_pid(). Use a debugger to attach to the postgres PID. You
   can set breakpoints in the debugger and issue queries from the other
   window. If you are looking to find the location that is generating an
   error or log message, set a breakpoint at errfinish. If you are debugging
   postgres startup, you can set PGOPTIONS="-W n", then start psql. This
   will cause startup to delay for n seconds so you can attach to the
   process with the debugger, set any breakpoints, and continue through
   the startup sequence.
   
   You can also compile with profiling to see what functions are taking
   execution time. The backend profile files will be deposited in the
   pgsql/data directory. The client profile file will be put in the
   client's current directory. Linux requires a compile with
   -DLINUX_PROFILE for proper profiling.