          Developer's Frequently Asked Questions (FAQ) for PostgreSQL
                                       
   Last updated: Tue Dec 4 01:20:03 EST 2001
   
   Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
   
   The most recent version of this document can be viewed at the
   PostgreSQL Web site, http://www.PostgreSQL.org.
     _________________________________________________________________
   
                             General Questions
   
   1.1) How do I get involved in PostgreSQL development?
   1.2) How do I add a feature or fix a bug?
   1.3) How do I download/update the current source tree?
   1.4) How do I test my changes?
   1.5) What tools are available for developers?
   1.6) What books are good for developers?
   1.7) What is configure all about?
   1.8) How do I add a new port?
   1.9) Why don't we use threads in the backend?
   1.10) How are RPMs packaged?
   1.11) How are CVS branches handled?
   
Technical Questions

   2.1) How do I efficiently access information in tables from the
   backend code?
   2.2) Why are table, column, type, function, view names sometimes
   referenced as Name or NameData, and sometimes as char *?
   2.3) Why do we use Node and List to make data structures?
   2.4) I just added a field to a structure. What else should I do?
   2.5) Why do we use palloc() and pfree() to allocate memory?
   2.6) What is elog()?
   2.7) What is CommandCounterIncrement()?
     _________________________________________________________________
   
                             General Questions
                                      
  1.1) How do I get involved in PostgreSQL development?
  
   This was written by Lamar Owen:
   
   2001-06-22
   What open source development process is used by the PostgreSQL team?
   
   Read HACKERS for six months (or a full release cycle, whichever is
   longer). Really. HACKERS _is_ the process. The process is not well
   documented (AFAIK -- it may be somewhere that I am not aware of) --
   and it changes continually.
   What development environment (OS, system, compilers, etc) is required
   to develop code?
   
   Developers Corner on the website has links to this information. The
   distribution tarball itself includes all the extra tools and documents
   that go beyond a good Unix-like development environment. In general, a
   modern unix with a modern gcc, GNU make or equivalent, autoconf (of a
   particular version), and good working knowledge of those tools are
   required.
   What areas need support?
   
   The TODO list.
   
   You've made the first step, by finding and subscribing to HACKERS.
   Once you find an area to look at in the TODO, and have read the
   documentation on the internals, etc., then you check out a current
   CVS, write what you are going to write (keeping your CVS checkout up
   to date in the process), and make up a patch (as a context diff only)
   and send it to the PATCHES list, preferably.
   
   Discussion on the patch typically happens here. If the patch adds a
   major feature, it would be a good idea to talk about it first on the
   HACKERS list, in order to increase the chances of it being accepted,
   as well as to avoid duplication of effort. Note that experienced
   developers with a proven track record usually get the big jobs -- for
   more than one reason. Also note that PostgreSQL is highly portable --
   nonportable code will likely be dismissed out of hand.
   
   Once your contributions get accepted, things move from there.
   Typically, you would be added as a developer on the list on the
   website when one of the other developers recommends it. Membership on
   the steering committee is by invitation only, by the other steering
   committee members, from what I have gathered watching from a distance.
   
   I make these statements from having watched the process for over two
   years.
   
   To see a good example of how one goes about this, search the archives
   for the name 'Tom Lane' and see what his first post consisted of, and
   where he took things. In particular, note that this hasn't been _that_
   long ago -- and his bugfixing and general deep knowledge of this
   codebase is legendary. Take a few days to read after him. And pay
   special attention to both the sheer quantity as well as the
   painstaking quality of his work. Both are in high demand.
   
  1.2) How do I add a feature or fix a bug?
  
   The source code is over 250,000 lines. Many problems/features are
   isolated to one specific area of the code. Others require knowledge of
   much of the source. If you are confused about where to start, ask the
   hackers list, and they will be glad to assess the complexity and give
   pointers on where to start.
   
   Another thing to keep in mind is that many fixes and features can be
   added with surprisingly little code. I often start by adding code,
   then looking at other areas in the code where similar things are done,
   and by the time I am finished, the patch is quite small and compact.
   
   When adding code, keep in mind that it should use the existing
   facilities in the source, for performance reasons and for simplicity.
   Often a review of existing code doing similar things is helpful.
   
  1.3) How do I download/update the current source tree?
  
   There are several ways to obtain the source tree. Occasional
   developers can just get the most recent source tree snapshot from
   ftp.postgresql.org. For regular developers, you can use CVS. CVS
   allows you to download the source tree, then occasionally update your
   copy of the source tree with any new changes. Using CVS, you don't
   have to download the entire source each time, only the changed files.
   Anonymous CVS does not allow developers to update the remote source
   tree, though privileged developers can do this. There is a CVS FAQ on
   our web site that describes how to use remote CVS. You can also use
   CVSup, which has similar functionality, and is available from
   ftp.postgresql.org.
   
   To update the source tree, there are two ways. You can generate a
   patch against your current source tree, perhaps using the make_diff
   tools mentioned above, and send them to the patches list. They will be
   reviewed, and applied in a timely manner. If the patch is major, and
   we are in beta testing, the developers may wait for the final release
   before applying your patches.
   
   For hard-core developers, Marc (scrappy@postgresql.org) will give you a
   Unix shell account on postgresql.org, so you can use CVS to update the
   main source tree, or you can ftp your files into your account, patch,
   and cvs install the changes directly into the source tree.
   
  1.4) How do I test my changes?
  
   First, use psql to make sure it is working as you expect. Then run
   src/test/regress and get the output of src/test/regress/checkresults
   with and without your changes, to see that your patch does not change
   the regression test in unexpected ways. This practice has saved me
   many times. The regression tests test the code in ways I would never
   do, and have caught many bugs in my patches. By finding the problems
   now, you save yourself a lot of debugging later when things are
   broken, and you can't figure out when it happened.
   
  1.5) What tools are available for developers?
  
   Aside from the User documentation mentioned in the regular FAQ, there
   are several development tools available. First, all the files in the
   /tools directory are designed for developers.
    RELEASE_CHANGES changes we have to make for each release
    SQL_keywords    standard SQL'92 keywords
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    mkldexport      create AIX exports file
    pgindent        indents C source files
    pgjindent       indents Java source files
    pginclude       scripts for adding/removing include files
    unused_oids     in pgsql/src/include/catalog

   Let me note some of these. If you point your browser at
   file:/usr/local/src/pgsql/src/tools/backend/index.html, you
   will see a few paragraphs describing the data flow, the backend
   components in a flow chart, and a description of the shared memory
   area. You can click on any flowchart box to see a description. If you
   then click on the directory name, you will be taken to the source
   directory, to browse the actual source code behind it. We also have
   several README files in some source directories to describe the
   function of the module. The browser will display these when you enter
   the directory also. The tools/backend directory is also contained on
   our web page under the title How PostgreSQL Processes a Query.
   
   Second, you really should have an editor that can handle tags, so you
   can tag a function call to see the function definition, and then tag
   inside that function to see an even lower-level function, and then
   back out twice to return to the original function. Most editors
   support this via tags or etags files.
   
   Third, you need to get id-utils from:
    ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz
    ftp://tug.org/gnu/id-utils-3.2d.tar.gz
    ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz

   By running tools/make_mkid, an archive of source symbols can be
   created that can be rapidly queried like grep or edited. Others prefer
   glimpse.
   
   make_diff has tools to create patch diff files that can be applied to
   the distribution. This produces context diffs, which is our preferred
   format.
   
   Our standard format is to indent each code level with one tab, where
   each tab is four spaces. You will need to set your editor to display
   tabs as four spaces:
    vi in ~/.exrc:
            set tabstop=4
            set sw=4
    more:
            more -x4
    less:
            less -x4
    emacs:
        M-x set-variable tab-width
        or
        ; Cmd to set tab stops & indenting for working with PostgreSQL code
             (c-add-style "pgsql"
                      '("bsd"
                                 (indent-tabs-mode . t)
                                 (c-basic-offset   . 4)
                                 (tab-width . 4)
                                 (c-offsets-alist .
                                            ((case-label . +))))
                       t) ; t = set this mode on

        and add this to your autoload list (modify file path in macro):

        (setq auto-mode-alist
              (cons '("\\`/usr/local/src/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
            auto-mode-alist))
        or
            /*
             * Local variables:
             *  tab-width: 4
             *  c-indent-level: 4
             *  c-basic-offset: 4
             * End:
             */

   pgindent will format the code by specifying flags to your operating
   system's indent utility.
   
   pgindent is run on all source files just before each beta test period.
   It auto-formats all source files to make them consistent. Comment
   blocks that need specific line breaks should be formatted as block
   comments, where the comment starts as /*------. These comments will
   not be reformatted in any way.
   
   pginclude contains scripts used to add needed #include's to include
   files, and remove unneeded #include's.
   
   When adding system types, you will need to assign oids to them. There
   is also a script called unused_oids in pgsql/src/include/catalog that
   shows the unused oids.
   
  1.6) What books are good for developers?
  
   I have four good books: An Introduction to Database Systems, by C.J.
   Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et
   al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and
   Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.
   
   There is also a database performance site, with a handbook on-line
   written by Jim Gray at http://www.benchmarkresources.com.
   
  1.7) What is configure all about?
  
   The files configure and configure.in are part of the GNU autoconf
   package. Configure allows us to test for various capabilities of the
   OS, and to set variables that can then be tested in C programs and
   Makefiles. Autoconf is installed on the PostgreSQL main server. To add
   options to configure, edit configure.in, and then run autoconf to
   generate configure.
   
   When configure is run by the user, it tests various OS capabilities,
   stores those in config.status and config.cache, and modifies a list of
   *.in files. For example, if there exists a Makefile.in, configure
   generates a Makefile that contains substitutions for all @var@
   parameters found by configure.
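
   For instance, the substitution step works roughly like this (a
   hypothetical two-line template; the variable values shown are
   illustrative, not taken from the real PostgreSQL makefiles):

```
# Makefile.in, as checked into the source tree:
CC = @CC@
CFLAGS = @CFLAGS@

# Makefile, as generated by configure on a system where it found gcc:
CC = gcc
CFLAGS = -O2
```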
   
   When you need to edit files, make sure you don't waste time modifying
   files generated by configure. Edit the *.in file, and re-run configure
   to recreate the needed file. If you run make distclean from the
   top-level source directory, all files derived by configure are
   removed, so you see only the files contained in the source
   distribution.
   
  1.8) How do I add a new port?
  
   There are a variety of places that need to be modified to add a new
   port. First, start in the src/template directory. Add an appropriate
   entry for your OS. Also, use src/config.guess to add your OS to
   src/template/.similar. You shouldn't match the OS version exactly. The
   configure test will look for an exact OS version number, and if not
   found, find a match without version number. Edit src/configure.in to
   add your new OS. (See configure item above.) You will need to run
   autoconf, or patch src/configure too.
   
   Then, check src/include/port and add your new OS file, with
   appropriate values. Hopefully, there is already locking code in
   src/include/storage/s_lock.h for your CPU. There is also a
   src/makefiles directory for port-specific Makefile handling. There is
   a backend/port directory if you need special files for your OS.
   
  1.9) Why don't we use threads in the backend?
  
   There are several reasons threads are not used:
     * Historically, threads were unsupported and buggy.
     * An error in one backend can corrupt other backends.
     * Speed improvements using threads are small compared to the
       remaining backend startup time.
     * The backend code would be more complex.
       
  1.10) How are RPMs packaged?
  
   This was written by Lamar Owen:
   
   2001-05-03
   
   As to how the RPMs are built -- to answer that question sanely
   requires me to know how much experience you have with the whole RPM
   paradigm. 'How is the RPM built?' is a multifaceted question. The
   obvious simple answer is that I maintain:
    1. A set of patches to make certain portions of the source tree
       'behave' in the different environment of the RPMset;
    2. The initscript;
    3. Any other ancillary scripts and files;
    4. A README.rpm-dist document that tries to adequately document both
       the differences between the RPM build and the WHY of the
       differences, as well as useful RPM environment operations (like,
       using syslog, upgrading, getting postmaster to start at OS boot,
       etc);
    5. The spec file that throws it all together. This is not a trivial
       undertaking in a package of this size.
       
   I then download and build on as many different canonical distributions
   as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1
   on my personal hardware. Occasionally I receive opportunity from
   certain commercial enterprises such as Great Bridge and PostgreSQL,
   Inc. to build on other distributions.
   
   I test the build by installing the resulting packages and running the
   regression tests. Once the build passes these tests, I upload to the
   postgresql.org ftp server and make a release announcement. I am also
   responsible for maintaining the RPM download area on the ftp site.
   
   You'll notice I said 'canonical' distributions above. That simply
   means that the machine is as stock 'out of the box' as practical --
   that is, everything (except a select few programs) on these boxen is
   installed by RPM; only official Red Hat released RPMs are used (except
   in unusual circumstances involving software that will not alter the
   build -- for example, installing a newer non-RedHat version of the Dia
   diagramming package is OK -- installing Python 2.1 on the box that has
   Python 1.5.2 installed is not, as that alters the PostgreSQL build).
   The RPM as uploaded is built to as close to out-of-the-box pristine as
   is possible. Only the standard released 'official to that release'
   compiler is used -- and only the standard official kernel is used as
   well.
   
   For a time I built on Mandrake for RedHat consumption -- no more.
   Nonstandard RPM building systems are worse than useless. Which is not
   to say that Mandrake is useless! By no means is Mandrake useless --
   unless you are building Red Hat RPMs -- and Red Hat is useless if
   you're trying to build Mandrake or SuSE RPMs, for that matter. But I
   would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro
   0.1.2' to build for public consumption! :-)
   
   I _do_ attempt to make the _source_ RPM compatible with as many
   distributions as possible -- however, since I have limited resources
   (as a volunteer RPM maintainer) I am limited as to the amount of
   testing said build will get on other distributions, architectures, or
   systems.
   
   And, while I understand people's desire to immediately upgrade to the
   newest version, realize that I do this as a side interest -- I have a
   regular, full-time job as a broadcast
   engineer/webmaster/sysadmin/Technical Director which occasionally
   prevents me from making timely RPM releases. This happened during the
   early part of the 7.1 beta cycle -- but I believe I was pretty much on
   the ball for the Release Candidates and the final release.
   
   I am working towards a more open RPM distribution -- I would dearly
   love to more fully document the process and put everything into CVS --
   once I figure out how I want to represent things such as the spec file
   in a CVS form. It makes no sense to maintain a changelog, for
   instance, in the spec file in CVS when CVS does a better job of
   changelogs -- I will need to write a tool to generate a real spec file
   from a CVS spec-source file that would add version numbers, changelog
   entries, etc to the result before building the RPM. IOW, I need to
   rethink the process -- and then go through the motions of putting my
   long RPM history into CVS one version at a time so that version
   history information isn't lost.
   
   As to why all these files aren't part of the source tree, well, unless
   there was a large cry for it to happen, I don't believe it should.
   PostgreSQL is very platform-agnostic -- and I like that. Including the
   RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
   agnostic stance in a negative way. But maybe I'm too sensitive to
   that. I'm not opposed to doing that if that is the consensus of the
   core group -- and that would be a sneaky way to get the stuff into CVS
   :-). But if the core group isn't thrilled with the idea (and my
   instinct says they're not likely to be), I am opposed to the idea --
   not to keep the stuff to myself, but to not hinder the
   platform-neutral stance. IMHO, of course.
   
   Of course, there are many projects that DO include all the files
   necessary to build RPMs from their Official Tarball (TM).
   
  1.11) How are CVS branches handled?
  
   This was written by Tom Lane:
   
   2001-05-07
   
   If you just do basic "cvs checkout", "cvs update", "cvs commit", then
   you'll always be dealing with the HEAD version of the files in CVS.
   That's what you want for development, but if you need to patch past
   stable releases then you have to be able to access and update the
   "branch" portions of our CVS repository. We normally fork off a branch
   for a stable release just before starting the development cycle for
   the next release.
   
   The first thing you have to know is the branch name for the branch you
   are interested in getting at. To do this, look at some long-lived
   file, say the top-level HISTORY file, with "cvs status -v" to see what
   the branch names are. (Thanks to Ian Lance Taylor for pointing out
   that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES

   OK, so how do you do work on a branch? By far the best way is to
   create a separate checkout tree for the branch and do your work in
   that. Not only is that the easiest way to deal with CVS, but you
   really need to have the whole past tree available anyway to test your
   work. (And you *better* test your work. Never forget that dot-releases
   tend to go out with very little beta testing --- so whenever you
   commit an update to a stable branch, you'd better be doubly sure that
   it's correct.)
   
   Normally, to checkout the head branch, you just cd to the place you
   want to contain the toplevel "pgsql" directory and say
    cvs ... checkout pgsql

   To get a past branch, you cd to wherever you want it and say
    cvs ... checkout -r BRANCHNAME pgsql

   For example, just a couple days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql

   and now I have a maintenance copy of 7.1.*.
   
   When you've done a checkout in this way, the branch name is "sticky":
   CVS automatically knows that this directory tree is for the branch,
   and whenever you do "cvs update" or "cvs commit" in this tree, you'll
   fetch or store the latest version in the branch, not the head version.
   Easy as can be.
   
   So, if you have a patch that needs to apply to both the head and a
   recent stable branch, you have to make the edits and do the commit
   twice, once in your development tree and once in your stable branch
   tree. This is kind of a pain, which is why we don't normally fork the
   tree right away after a major release --- we wait for a dot-release or
   two, so that we won't have to double-patch the first wave of fixes.
   
                            Technical Questions
                                      
  2.1) How do I efficiently access information in tables from the backend code?
  
   You first need to find the tuples (rows) you are interested in. There
   are two ways. First, SearchSysCache() and related functions allow you
   to query the system catalogs. This is the preferred way to access
   system tables, because the first call to the cache loads the needed
   rows, and future requests can return the results without accessing the
   base table. The caches use system table indexes to look up tuples. A
   list of available caches is located in
   src/backend/utils/cache/syscache.c.
   src/backend/utils/cache/lsyscache.c contains many column-specific
   cache lookup functions.
   
   The rows returned are cache-owned versions of the heap rows.
   Therefore, you must not modify or delete the tuple returned by
   SearchSysCache(). What you should do is release it with
   ReleaseSysCache() when you are done using it; this informs the cache
   that it can discard that tuple if necessary. If you neglect to call
   ReleaseSysCache(), then the cache entry will remain locked in the
   cache until end of transaction, which is tolerable but not very
   desirable.
   
   If you can't use the system cache, you will need to retrieve the data
   directly from the heap table, using the buffer cache that is shared by
   all backends. The backend automatically takes care of loading the rows
   into the buffer cache.
   
   Open the table with heap_open(). You can then start a table scan with
   heap_beginscan(), then use heap_getnext() and continue as long as
   HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
   assigned to the scan. No indexes are used, so all rows are going to be
   compared to the keys, and only the valid rows returned.
   
   You can also use heap_fetch() to fetch rows by block number/offset.
   While scans automatically lock/unlock rows from the buffer cache, with
   heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
   when completed.
   
   Once you have the row, you can get data that is common to all tuples,
   like t_self and t_oid, by merely accessing the HeapTuple structure
   entries. If you need a table-specific column, you should take the
   HeapTuple pointer, and use the GETSTRUCT() macro to access the
   table-specific start of the tuple. You then cast the pointer as a
   Form_pg_proc pointer if you are accessing the pg_proc table, or
   Form_pg_type if you are accessing pg_type. You can then access the
   columns by using a structure pointer:
((Form_pg_class) GETSTRUCT(tuple))->relnatts

   You must not directly change live tuples in this way. The best way is
   to use heap_modifytuple() and pass it your original tuple and the
   values you want changed. It returns a palloc'ed tuple, which you pass
   to heap_update(). You can delete tuples by passing the tuple's t_self
   to heap_delete(); t_self is used for heap_update() too. Remember,
   tuples can be either system cache copies, which may go away after you
   call ReleaseSysCache(), or read directly from disk buffers, which go
   away when you call heap_getnext(), heap_endscan(), or, in the
   heap_fetch() case, ReleaseBuffer(). A tuple may also be palloc'ed, in
   which case you must pfree() it when finished.
   
  2.2) Why are table, column, type, function, view names sometimes referenced
  as Name or NameData, and sometimes as char *?
  
   Table, column, type, function, and view names are stored in system
   tables in columns of type Name. Name is a fixed-length,
   null-terminated type of NAMEDATALEN bytes. (The default value for
   NAMEDATALEN is 32 bytes.)
typedef struct nameData
{
    char        data[NAMEDATALEN];
} NameData;
typedef NameData *Name;

   Table, column, type, function, and view names that come into the
   backend via user queries are stored as variable-length,
   null-terminated character strings.
   
   Many functions are called with both types of names, e.g. heap_open().
   Because the Name type is null-terminated, it is safe to pass it to a
   function expecting a char *. Because there are many cases where
   on-disk names (Name) are compared to user-supplied names (char *),
   there are many cases where Name and char * are used interchangeably.
   
  2.3) Why do we use Node and List to make data structures?
  
   We do this because it provides a consistent and flexible way to pass
   data around inside the backend. Every node has a NodeTag which
   specifies what type of data is inside the Node. Lists are groups of
   Nodes chained together in a forward-linked list.
   
   Here are some of the List manipulation commands:
   
   lfirst(i)
          return the data at list element i.
          
   lnext(i)
          return the next list element after i.
          
   foreach(i, list)
          loop through list, assigning each list element to i. It is
          important to note that i is a List *, not the data in the List
          element. You need to use lfirst(i) to get at the data. Here is
          a typical code snippet that loops through a List containing Var
          *'s and processes each one:
          
List *i, *list;

foreach(i, list)
{
    Var *var = lfirst(i);

    /* process var here */
}

   lcons(node, list)
          add node to the front of list, or create a new list with node
          if list is NIL.
          
   lappend(list, node)
          add node to the end of list. This is more expensive than lcons.
          
   nconc(list1, list2)
          Concatenate list2 onto the end of list1.
          
   length(list)
          return the length of the list.
          
   nth(i, list)
          return the i'th element in list.
          
   lconsi, ...
          There are integer versions of these: lconsi, lappendi, nthi.
          Lists containing integers instead of Node pointers are used to
          hold lists of relation object ids and other integer quantities.
          
   You can print nodes easily inside gdb. First, to disable output
   truncation when you use the gdb print command:
(gdb) set print elements 0

   Instead of printing values in gdb format, you can use the next two
   commands to print out List, Node, and structure contents in a verbose
   format that is easier to understand. Lists are unrolled into nodes,
   and nodes are printed in detail. The first prints in a short format,
   and the second in a long format:
(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)

   The output appears in the postmaster log file, or on your screen if
   you are running a backend directly without a postmaster.
   
  2.4) I just added a field to a structure. What else should I do?
  
   The structures passed around by the parser, rewriter, optimizer, and
   executor require quite a bit of support. Most structures have support
   routines in src/backend/nodes used to create, copy, read, and output
   those structures. Make sure you add support for your new field to
   these files. Find any other places the structure may need code for
   your new field. mkid is helpful with this (see above).
   
  2.5) Why do we use palloc() and pfree() to allocate memory?
  
   palloc() and pfree() are used in place of malloc() and free() because
   all memory allocated in a transaction is automatically freed when the
   transaction completes. This makes it easy to ensure that memory
   allocated in one place, but only freed much later, does not leak.
   Memory can be allocated in one of several contexts, and the context
   controls when the backend automatically frees it.
   
  2.6) What is elog()?
  
   elog() is used to send messages to the front-end, and optionally
   terminate the current query being processed. The first parameter is an
   elog level of NOTICE, DEBUG, ERROR, or FATAL. NOTICE prints on the
   user's terminal and the postmaster logs. DEBUG prints only in the
   postmaster logs. ERROR prints in both places, and terminates the
   current query, never returning from the call. FATAL terminates the
   backend process. The remaining parameters are a printf-style format
   string and its arguments.
   
  2.7) What is CommandCounterIncrement()?
  
   Normally, transactions can not see the rows they modify. This allows
   UPDATE foo SET x = x + 1 to work correctly.
   
   However, there are cases where a transaction needs to see rows
   affected in previous parts of the transaction. This is accomplished
   using a Command Counter. Incrementing the counter allows a transaction
   to be broken into pieces so each piece can see rows modified by
   previous pieces. CommandCounterIncrement() increments the Command
   Counter, creating a new part of the transaction.