- 15 Feb, 2019 1 commit
-
Tom Lane authored
In commit 1a8d5afb, I thought it'd be a good idea to define IndexClause.indexquals as NIL in the most common case where the given clause (IndexClause.rinfo) is usable exactly as-is. It'd be more consistent to define the indexquals in that case as being a one-element list containing IndexClause.rinfo, but I thought saving the palloc overhead for making such a list would be worthwhile.

In hindsight, that was a great example of "premature optimization is the root of all evil": it's complicated everyplace that needs to deal with the indexquals, requiring duplicative code to handle both the simple case and the not-simple case. I'd initially found that tolerable but it's getting less so as I mop up some areas that I'd not touched in 1a8d5afb. In any case, two more pallocs during a planner run are surely at the noise level (a conclusion confirmed by a bit of microbenchmarking).

So let's change this decision before it becomes set in stone, and insist that IndexClause.indexquals always be a valid list of the actual index quals for the clause.

Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us
-
- 14 Feb, 2019 3 commits
-
Peter Eisentraut authored
This also makes the code in read_client_first_message() more similar to read_client_final_message(). Reported-by: Mark Dilger <hornschnorter@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
-
Peter Eisentraut authored
A small API change makes it unnecessary. Reported-by: Mark Dilger <hornschnorter@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
-
Tom Lane authored
While at it, refactor patternsel() a bit so that it can be used from the LIKE/regex planner support functions as well. This makes the planner able to deal equally well with either operator or function syntax for these operations. I'm not excited about that as a feature in itself, but it provides a nice model for extensions to follow if they want such behavior for their operations.

This change localizes the use of pattern_fixed_prefix() and make_greater_string() so that they no longer need be exported. (We might get pushback from extensions about that, perhaps, in which case I'd be inclined to re-export them in a new header file like_support.h.)

This reduces the bulk of selfuncs.c a fair amount, removing ~1370 lines or about one-sixth of that file; it's still too big, but this is progress.

Discussion: https://postgr.es/m/24537.1550093915@sss.pgh.pa.us
-
- 13 Feb, 2019 12 commits
-
Alvaro Herrera authored
We were using uint64 function arguments as "long int" arguments to compiler builtins, which fails on machines where long ints are 32 bits: the upper half of the uint64 was being ignored. Fix by using the "ll" builtin variants instead, which on those machines take 64-bit arguments.

Also, remove configure tests for __builtin_popcountl() (as well as "long" variants for ctz and clz): the theory here is that any compiler version will provide all widths or none, so one test suffices. Were this theory to be wrong, we'd have to add tests for __builtin_popcountll() and friends, which would be tedious.

Per failures in buildfarm member lapwing and ensuing discussion.
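The width pitfall is easy to demonstrate outside the tree. A minimal, self-contained sketch (not PostgreSQL code): on an ILP32 platform the "l" builtin takes a 32-bit long, so a uint64 argument is silently narrowed, while the "ll" variant sees all 64 bits.

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t    x = UINT64_C(0xFFFFFFFF00000000);

    /*
     * Where "long" is 32 bits, the argument is narrowed before the builtin
     * sees it, so this reports 0; on LP64 machines it reports 32.
     */
    printf("__builtin_popcountl:  %d\n", __builtin_popcountl(x));

    /* The "ll" variant always takes a 64-bit argument and reports 32. */
    printf("__builtin_popcountll: %d\n", __builtin_popcountll(x));

    return 0;
}
```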
-
Andrew Gierth authored
We don't care to assume that input of subnormal float values works, but a stray subnormal value from the upstream Ryu regression test had been left in the test data by mistake. Remove it. Per buildfarm member fulmar.
-
Alvaro Herrera authored
The way this makefile works, we need to specify it three times.
-
Andrew Gierth authored
Avoid assuming exact results in tstypes test; some platforms vary. (Per buildfarm members eulachon, danio, lapwing.)

Avoid dubious usage (inherited from upstream) of bool parameters to copy_special_str, to see if this fixes the mac/ppc failures (per buildfarm members prairiedog and locust). (Isolated test programs on a ppc mac don't seem to show any other cause that would explain them.)
-
Alvaro Herrera authored
These opcodes have been around in the AMD world since 2007, and 2008 in the case of Intel. They're supported in GCC and Clang via some __builtin macros. The opcodes may be unavailable at runtime, in which case we fall back on a C-based implementation of the code. In order to get the POPCNT instruction we must pass the -mpopcnt option to the compiler. We do this only for the pg_bitutils.c file.

David Rowley (with fragments taken from a patch by Thomas Munro)

Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
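A rough sketch of the runtime-dispatch idea described above, not the actual pg_bitutils.c code (which caches its decision in function pointers): check CPUID once, use the hardware instruction when it is advertised, otherwise fall back to a portable loop. The function names here are illustrative only.

```c
#include <stdint.h>
#include <cpuid.h>              /* GCC/Clang, x86 only */

/* Portable fallback, used when the POPCNT instruction is unavailable. */
static int
popcount64_slow(uint64_t word)
{
    int         count = 0;

    while (word)
    {
        count += word & 1;
        word >>= 1;
    }
    return count;
}

int
popcount64(uint64_t word)
{
    static int  have_popcnt = -1;

    if (have_popcnt < 0)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1, ECX bit 23 advertises the POPCNT instruction. */
        have_popcnt = (__get_cpuid(1, &eax, &ebx, &ecx, &edx) &&
                       (ecx & (1u << 23))) ? 1 : 0;
    }

    /* This file would be built with -mpopcnt, so the builtin emits POPCNT. */
    return have_popcnt ? __builtin_popcountll(word) : popcount64_slow(word);
}
```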
-
Andrew Gierth authored
Replace with PG_UINT32_MAX. Per buildfarm members dory and woodlouse.
-
Andrew Gierth authored
Previously, floating-point output was done by rounding to a specific decimal precision; by default, to 6 or 15 decimal digits (losing information) or as requested using extra_float_digits. Drivers that wanted exact float values, and applications like pg_dump that must preserve values exactly, set extra_float_digits=3 (or sometimes 2 for historical reasons, though this isn't enough for float4).

Unfortunately, decimal rounded output is slow enough to become a noticeable bottleneck when dealing with large result sets or COPY of large tables when many floating-point values are involved.

Floating-point output can be done much faster when the output is not rounded to a specific decimal length, but rather is chosen as the shortest decimal representation that is closer to the original float value than to any other value representable in the same precision. The recently published Ryu algorithm by Ulf Adams is both relatively simple and remarkably fast.

Accordingly, change float4out/float8out to output shortest decimal representations if extra_float_digits is greater than 0, and make that the new default. Applications that need rounded output can set extra_float_digits back to 0 or below, and take the resulting performance hit.

We make one concession to portability for systems with buggy floating-point input: we do not output decimal values that fall exactly halfway between adjacent representable binary values (which would rely on the reader doing round-to-nearest-even correctly). This is known to be a problem at least for VS2013 on Windows.

Our version of the Ryu code originates from https://github.com/ulfjack/ryu/ at commit c9c3fb1979, but with the following (significant) modifications:

- Output format is changed to use fixed-point notation for small exponents, as printf would, and also to use lowercase 'e', a minimum of 2 exponent digits, and a mandatory sign on the exponent, to keep the formatting as close as possible to previous output.

- The output of exact midpoint values is disabled as noted above.

- The integer fast-path code is changed somewhat (since we have fixed-point output and the upstream did not).

- Our project style has been largely applied to the code with the exception of C99 declaration-after-statement, which has been retained as an exception to our present policy.

- Most of upstream's debugging and conditionals are removed, and we use our own configure tests to determine things like uint128 availability.

Changing the float output format obviously affects a number of regression tests. This patch uses an explicit setting of extra_float_digits=0 for test output that is not expected to be exactly reproducible (e.g. due to numerical instability or differing algorithms for transcendental functions).

Conversions from floats to numeric are unchanged by this patch. These may appear in index expressions and it is not yet clear whether any change should be made, so that can be left for another day.

This patch assumes that the only supported floating point format is now IEEE format, and the documentation is updated to reflect that.

Code by me, adapting the work of Ulf Adams and other contributors.

References: https://dl.acm.org/citation.cfm?id=3192369
Reviewed-by: Tom Lane, Andres Freund, Donald Dong
Discussion: https://postgr.es/m/87r2el1bx6.fsf@news-spur.riddles.org.uk
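The round-trip property that motivates this can be illustrated with a small standalone C program (not part of the patch): rounding to 6 significant digits, the old default, can map distinct floats to the same string, while 9 digits always read back as the original bits; the shortest round-tripping form for the value below is roughly "1.0000001", which is what a shortest-representation printer would emit.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    float       f = nextafterf(1.0f, 2.0f);    /* smallest float above 1.0 */
    char        buf[32];

    snprintf(buf, sizeof(buf), "%.6g", f);     /* old default precision */
    printf("%%.6g -> %s, round-trips: %s\n", buf,
           strtof(buf, NULL) == f ? "yes" : "no");  /* no: reads back as 1 */

    snprintf(buf, sizeof(buf), "%.9g", f);     /* always enough for float4 */
    printf("%%.9g -> %s, round-trips: %s\n", buf,
           strtof(buf, NULL) == f ? "yes" : "no");  /* yes */

    return 0;
}
```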
-
Andrew Gierth authored
Using strtod() creates a double-rounding problem; the input decimal value is first rounded to the nearest double; rounding that to the nearest float may then give an incorrect result. An example is that 7.038531e-26 when input via strtod and then rounded to float4 gives 0xAE43FEp-107 instead of the correct 0xAE43FDp-107.

Values output by earlier PG versions with extra_float_digits=3 should all be read in with the same values as previously. However, values supplied by other software using shortest representations could be mis-read.

On platforms that lack a strtof() entirely, we fall back to the old incorrect rounding behavior. (As strtof() is required by C99, such platforms are considered of primarily historical interest.) On VS2013, some workarounds are used to get correct error handling.

The regression tests now test for the correct input values, so platforms that lack strtof() will need resultmap entries. An entry for HP-UX 10 is included (more may be needed).

Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/871s5emitx.fsf@news-spur.riddles.org.uk
Discussion: https://postgr.es/m/87d0owlqpv.fsf@news-spur.riddles.org.uk
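The double-rounding failure mode can be reproduced with a few lines of standalone C; the example value is the one cited in the commit message.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    const char *s = "7.038531e-26";

    /* Two roundings: decimal -> nearest double -> nearest float. */
    float       via_double = (float) strtod(s, NULL);

    /* One rounding: decimal -> nearest float. */
    float       direct = strtof(s, NULL);

    /* On IEEE platforms these differ by one unit in the last place. */
    printf("%.9g vs %.9g: %s\n", via_double, direct,
           via_double == direct ? "same" : "differ");

    return 0;
}
```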
-
Peter Eisentraut authored
Replace casts whose only purpose is to cast away const with the unconstify() macro. Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
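An illustration of the pattern being replaced; the macro below is a simplified stand-in for the real unconstify() in c.h, which additionally uses a static assertion to verify that only the const qualifier is being removed.

```c
#include <stdio.h>

/* Simplified stand-in for unconstify() from c.h (assumption: the real
 * macro also statically checks that nothing but const is cast away). */
#define unconstify(underlying_type, expr) ((underlying_type) (expr))

int
main(void)
{
    const char *label = "read-only";

    char       *old_style = (char *) label;             /* bare cast: intent unclear */
    char       *new_style = unconstify(char *, label);  /* intent explicit */

    printf("%s %s\n", old_style, new_style);
    return 0;
}
```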
-
Peter Eisentraut authored
Some of these were uselessly casting away "const", some were just nearby, but they where all unnecessary anyway. Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
-
Michael Paquier authored
As of commit c6e4133f, the calculation happens in make_one_rel() and not query_planner(). Author: Amit Langote Discussion: https://postgr.es/m/c7a04a90-42e6-28a4-811a-a7e352831ba1@lab.ntt.co.jp
-
Thomas Munro authored
In a corner case, a btree page was allocated during a clean-up operation that could cause the tracking of the largest contiguous span of free space to get out of whack. That was supposed to be prevented by the use of the "soft" flag to avoid allocating internal pages during incidental clean-up work, but the flag was ignored in the case where the FPM was promoted from singleton format to btree format. Repair.

Remove an obsolete comment in passing.

Back-patch to 10, where freepage.c arrived (as support for dsa.c).

Author: Robert Haas
Diagnosed-by: Thomas Munro and Robert Haas
Reported-by: Justin Pryzby, Rick Otten, Sand Stone, Arne Roland and others
Discussion: https://postgr.es/m/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com
-
- 12 Feb, 2019 10 commits
-
Tom Lane authored
We're only going to consider key columns when creating indexquals, so there is no point in having the outer loops in indxpath.c iterate further than nkeycolumns.

Doing so in match_pathkeys_to_index() is actually wrong, and would have caused crashes by now, except that we have no index AMs supporting both amcanorderbyop and amcaninclude. It's also wrong in relation_has_unique_index_for(). The effect there is to fail to prove uniqueness even when the index does prove it, if there are extra columns.

Also future-proof examine_variable() for the day when extra columns can be expressions, and fix what's either a thinko or just an oversight in btcostestimate(): we should consider the number of key columns, not the total, when deciding whether to derate correlation.

None of these things seemed important enough to risk changing in a just-before-wrap patch, but since we're past the release wrap window, time to fix 'em.

Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
-
Alvaro Herrera authored
Ever since its birth, ReorderBufferBuildTupleCidHash() has contained an assertion that a catalog tuple cannot change Cmax after acquiring one. But that's wrong: if a subtransaction executes DDL that affects that catalog tuple, and later aborts and another DDL affects the same tuple, it will change Cmax. Relax the assertion to merely verify that the Cmax remains valid and monotonically increasing, instead.

Add a test that tickles the relevant code.

Diagnosed by, and initial patch submitted by: Arseny Sher
Co-authored-by: Arseny Sher
Discussion: https://postgr.es/m/874l9p8hyw.fsf@ars-thinkpad
-
Alvaro Herrera authored
Broken since fe33a196.
-
Alvaro Herrera authored
Replace hand-rolled option parsing with the Getopt module. This is shorter and easier to read. In passing, make some cosmetic adjustments for consistency. Author: John Naylor Reviewed-by: David Fetter Discussion: https://postgr.es/m/CACPNZCvRjepXh5b2N50njN+rO_2Nzcf=jhMkKX7=79XWUKJyKA@mail.gmail.com
-
Tom Lane authored
It's pretty unhelpful to report the wrong file name in a complaint about syscall failure, but SnapBuildSerialize managed to do that twice in a span of 50 lines. Also fix half a dozen missing or poorly-chosen errcode assignments; that's mostly cosmetic, but still wrong.

Noted while studying recent failures on buildfarm member nightjar. I'm not sure whether those reports are actually giving the wrong filename, because there are two places here with identically spelled error messages. The other one is specifically coded not to report ENOENT, but if it's this one, how could we be getting ENOENT from open() with O_CREAT? Need to sit back and await results. However, these ereports are clearly broken from birth, so back-patch.
-
Michael Paquier authored
max_wal_senders and max_worker_processes got reversed in the output generated because of ea92368c. Reported-by: Kevin Hale Boyes Discussion: https://postgr.es/m/CADAecHVAD4=26KAx4nj5DBvxqqvJkuwsy+riiiNhQqwnZg2K8Q@mail.gmail.com
-
Tom Lane authored
partprune.h failed to compile by itself; needs to include partdefs.h. I think I must've broken this in fa2cf164, though I'd swear I ran the appropriate tests when removing #includes. Anyway, it's very sensible for this file to include partdefs.h, so let's just do that. Per cpluspluscheck.
-
Michael Paquier authored
The current wording can confuse the reader about constraint exclusion being available at query execution, but this only applies to partition pruning. Reported-by: Shouyu Luo Author: David Rowley Reviewed-by: Chapman Flack, Amit Langote Discussion: https://postgr.es/m/15629-2ef8b22e61f8333f@postgresql.org
-
Tom Lane authored
For a long time, indxpath.c has had the ability to extract derived (lossy) index conditions from certain operators such as LIKE. For just as long, it's been obvious that we really ought to make that capability available to extensions. This commit finally accomplishes that, by adding another API for planner support functions that lets them create derived index conditions for their functions. As proof of concept, the hardwired "special index operator" code formerly present in indxpath.c is pushed out to planner support functions attached to LIKE and other relevant operators.

A weak spot in this design is that an extension needs to know OIDs for the operators, datatypes, and opfamilies involved in the transformation it wants to make. The core-code prototypes use hard-wired OID references but extensions don't have that option for their own operators etc. It's usually possible to look up the required info, but that may be slow and inconvenient. However, improving that situation is a separate task.

I want to do some additional refactorization around selfuncs.c, but that also seems like a separate task.

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
-
Michael Paquier authored
Since its introduction, max_wal_senders is counted as part of max_connections when it comes to defining how many connection slots can be used for replication connections with a WAL sender context. This can lead to confusion for some users, as it could be possible to block a base backup or replication from happening because other backend sessions are already taken for other purposes by an application, and superuser-only connection slots are not a correct solution to handle that case.

This commit makes max_wal_senders independent of max_connections for its handling of PGPROC entries in ProcGlobal, meaning that connection slots for WAL senders are handled using their own free queue, like autovacuum workers and bgworkers.

One compatibility issue that this change creates is that a standby now requires a value of max_wal_senders at least equal to that of its primary, so a standby configured with a lower value could break failovers. Normally this should not be an issue, as a standby's settings are inherited from its primary: postgresql.conf is normally copied as part of a base backup, so the parameters stay consistent.

Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
-
- 11 Feb, 2019 5 commits
-
Tom Lane authored
The original setup for dependencies of partitioned objects had serious problems:

1. It did not verify that a drop cascading to a partition-child object also cascaded to at least one of the object's partition parents. Now, normally a child object would share all its dependencies with one or another parent (e.g. a child index's opclass dependencies would be shared with the parent index), so that this oversight is usually harmless. But if some dependency failed to fit this pattern, the child could be dropped while all its parents remain, creating a logically broken situation. (It's easy to construct artificial cases that break it, such as attaching an unrelated extension dependency to the child object and then dropping the extension. I'm not sure if any less-artificial cases exist.)

2. Management of partition dependencies during ATTACH/DETACH PARTITION was complicated and buggy; for example, after detaching a partition table it was possible to create cases where a formerly-child index should be dropped and was not, because the correct set of dependencies had not been reconstructed.

Less seriously, because multiple partition relationships were represented identically in pg_depend, there was an order-of-traversal dependency on which partition parent was cited in error messages. We also had some pre-existing order-of-traversal hazards for error messages related to internal and extension dependencies. This is cosmetic to users but causes testing problems.

To fix #1, add a check at the end of the partition tree traversal to ensure that at least one partition parent got deleted.

To fix #2, establish a new policy that partition dependencies are in addition to, not instead of, a child object's usual dependencies; in this way ATTACH/DETACH PARTITION need not cope with adding or removing the usual dependencies.

To fix the cosmetic problem, distinguish between primary and secondary partition dependency entries in pg_depend, by giving them different deptypes. (They behave identically except for having different priorities for being cited in error messages.) This means that the former 'I' dependency type is replaced with new 'P' and 'S' types.

This also fixes a longstanding bug that after handling an internal dependency by recursing to the owning object, findDependentObjects did not verify that the current target was now scheduled for deletion, and did not apply the current recursion level's objflags to it. Perhaps that should be back-patched; but in the back branches it would only matter if some concurrent transaction had removed the internal-linkage pg_depend entry before the recursive call found it, or the recursive call somehow failed to find it, both of which seem unlikely.

Catversion bump because the contents of pg_depend change for partitioning relationships.

Patch HEAD only. It's annoying that we're not fixing #2 in v11, but there seems no practical way to do so given that the problem is exactly a poor choice of what entries to put in pg_depend. We can't really fix that while staying compatible with what's in pg_depend in existing v11 installations.

Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
-
Alvaro Herrera authored
The old verbiage indicated that PG_RE_THROW is optional, which is not really true. This has confused many people, so it seems worth fixing. Discussion: https://postgr.es/m/20190206160958.GA22304@alvherre.pgsql
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Last use was removed in 2c66f992.
-
Tom Lane authored
indxpath.c needs a good deal more attention for covering indexes than it's gotten. But so far as I can tell, the only really awful breakage is in expand_indexqual_rowcompare (nee adjust_rowcompare_for_index), which was only half fixed in c266ed31. The other problems aren't bad enough to take the risk of a just-before-wrap fix.

The problem here is that if the leading column of a row comparison matches an index (allowing this code to be reached), and some later column doesn't match the index, it'll nonetheless believe that that column matches the first included index column. Typically that'll lead to an error like "operator M is not a member of opfamily N" as a result of fetching a garbage opfamily OID. But with enough bad luck, maybe a broken plan would be generated.

Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
-
- 10 Feb, 2019 4 commits
-
Tom Lane authored
It seems useful to have this information available, so that it's easier to tell when a test script is taking a disproportionate amount of time. Discussion: https://postgr.es/m/16646.1549770618@sss.pgh.pa.us
-
Alvaro Herrera authored
After commit 123cc697a8eb, we remove redundant FK action triggers during partition ATTACH by merely deleting the catalog tuple, but that's wrong: it should use performDeletion() instead. Repair, and make the comments more explicit. Per code review from Tom Lane. Discussion: https://postgr.es/m/18885.1549642539@sss.pgh.pa.us
-
Tom Lane authored
Renaming varchar_transform to varchar_support had a side effect I hadn't foreseen: the core regression tests leave around a transform object that relies on that function, so the name change breaks cross-version upgrade tests, because the name used in the older branches doesn't match. Since the dependency on varchar_transform was chosen with the aid of a dartboard anyway (it would surely not work as a language transform support function), fix by just choosing a different random builtin function with the right signature. Also add some comments explaining why this isn't horribly unsafe. I chose to make the same substitution in a couple of other copied-and-pasted test cases, for consistency, though those aren't directly contributing to the testing problem. Per buildfarm. Back-patch, else it doesn't fix the problem.
-
Tom Lane authored
warn_or_exit_horribly() was blithely passing a potentially-NULL string pointer to a %s format specifier. That works (at least to the extent of not crashing) on some platforms, but not all, and since we switched to our own snprintf.c it doesn't work for us anywhere. Of the three string fields being handled this way here, I think that only "owner" is supposed to be nullable ... but considering that this is error-reporting code, it has very little business assuming anything, so put in defenses for all three. Per a crash observed on buildfarm member crake and then reproduced here. Because of the portability aspect, back-patch to all supported versions.
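A sketch of the defensive pattern with a hypothetical helper (not the actual pg_dump code): substitute a placeholder before handing any possibly-NULL string to a %s conversion, since printf implementations, including PostgreSQL's own snprintf.c, are not required to tolerate NULL there.

```c
#include <stdio.h>

/* Hypothetical helper mirroring the fix: never pass NULL to %s. */
static void
report_object(const char *name, const char *namespace, const char *owner)
{
    fprintf(stderr, "object \"%s\" in schema \"%s\" owned by \"%s\"\n",
            name ? name : "(null)",
            namespace ? namespace : "(null)",
            owner ? owner : "(null)");
}

int
main(void)
{
    report_object("mytable", "public", NULL);   /* owner unknown: no crash */
    return 0;
}
```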
-
- 09 Feb, 2019 5 commits
-
Tom Lane authored
Add support function requests for estimating the selectivity, cost, and number of result rows (if a SRF) of the target function.

The lack of a way to estimate selectivity of a boolean-returning function in WHERE has been a recognized deficiency of the planner since Berkeley days. This commit finally fixes it. In addition, non-constant estimates of cost and number of output rows are now possible. We still fall back to looking at procost and prorows if the support function doesn't service the request, of course.

To make concrete use of the possibility of estimating output rowcount for SRFs, this commit adds support functions for array_unnest(anyarray) and the integer variants of generate_series; the lack of plausible rowcount estimates for those, even when it's obvious to a human, has been a repeated subject of complaints. Obviously, much more could now be done in this line, but I'm mostly just trying to get the infrastructure in place.

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
-
Tom Lane authored
Rename/repurpose pg_proc.protransform as "prosupport". The idea is still that it names an internal function that provides knowledge to the planner about the behavior of the function it's attached to; but redesign the API specification so that it's not limited to doing just one thing, but can support an extensible set of requests.

The original purpose of simplifying a function call is handled by the first request type to be invented, SupportRequestSimplify. Adjust all the existing transform functions to handle this API, and rename them from "xxx_transform" to "xxx_support" to reflect the potential generalization of what they do. (Since we never previously provided any way for extensions to add transform functions, this change doesn't create an API break for them.)

Also add DDL and pg_dump support for attaching a support function to a user-defined function. Unfortunately, DDL access has to be restricted to superusers, at least for now; but seeing that support functions will pretty much have to be written in C, that limitation is just theoretical. (This support is untested in this patch, but a follow-on patch will add cases that exercise it.)

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
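A minimal sketch of what a support function under the new API looks like, assuming the request and field names in src/include/nodes/supportnodes.h (SupportRequestSimplify with root and fcall members); the simplification policy and the commented-out helper are made up for illustration, not part of any shipped code.

```c
#include "postgres.h"
#include "fmgr.h"
#include "nodes/supportnodes.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_func_support);

/*
 * Planner support function, attached to a user-defined function with
 * CREATE FUNCTION ... SUPPORT my_func_support.  It receives a request
 * node and may return a replacement node, or NULL to decline.
 */
Datum
my_func_support(PG_FUNCTION_ARGS)
{
    Node       *rawreq = (Node *) PG_GETARG_POINTER(0);
    Node       *ret = NULL;

    if (IsA(rawreq, SupportRequestSimplify))
    {
        SupportRequestSimplify *req = (SupportRequestSimplify *) rawreq;
        FuncExpr   *fexpr = req->fcall;

        /*
         * Example policy (purely illustrative): if the call has exactly one
         * argument and it is a Const, a precomputed Const could be built
         * and returned here in place of the function call.
         */
        if (list_length(fexpr->args) == 1 &&
            IsA(linitial(fexpr->args), Const))
        {
            /* ret = (Node *) build_simplified_const(fexpr);  -- hypothetical */
        }
    }

    PG_RETURN_POINTER(ret);
}
```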
-
Tom Lane authored
In place of three separate but interrelated lists (indexclauses, indexquals, and indexqualcols), an IndexPath now has one list "indexclauses" of IndexClause nodes. This holds basically the same information as before, but in a more useful format: in particular, there is now a clear connection between an indexclause (an original restriction clause from WHERE or JOIN/ON) and the indexquals (directly usable index conditions) derived from it.

We also change the ground rules a bit by mandating that clause commutation, if needed, be done up-front so that what is stored in the indexquals list is always directly usable as an index condition. This gets rid of repeated re-determination of which side of the clause is the indexkey during costing and plan generation, as well as repeated lookups of the commutator operator. To minimize the added up-front cost, the typical case of commuting a plain OpExpr is handled by a new special-purpose function commute_restrictinfo(). For RowCompareExprs, generating the new clause properly commuted to begin with is not really any more complex than before, it's just different --- and we can save doing that work twice, as the pretty-klugy original implementation did.

Tracking the connection between original and derived clauses lets us also track explicitly whether the derived clauses are an exact or lossy translation of the original. This provides a cheap solution to getting rid of unnecessary rechecks of boolean index clauses, which previously seemed like it'd be more expensive than it was worth.

Another pleasant (IMO) side-effect is that EXPLAIN now always shows index clauses with the indexkey on the left; this seems less confusing.

This commit leaves expand_indexqual_conditions() and some related functions in a slightly messy state. I didn't bother to change them any more than minimally necessary to work with the new data structure, because all that code is going to be refactored out of existence in a follow-on patch.

Discussion: https://postgr.es/m/22182.1549124950@sss.pgh.pa.us
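For orientation, the shape of the new node is roughly the following (paraphrased from the planner's path-node headers; treat the field details as approximate rather than authoritative):

```c
/* Paraphrase of the IndexClause node introduced by this commit. */
typedef struct IndexClause
{
    NodeTag     type;
    struct RestrictInfo *rinfo; /* original restriction or join clause */
    List       *indexquals;     /* indexqual(s) derived from it */
    bool        lossy;          /* are the indexquals a lossy version of
                                 * the original clause? */
    AttrNumber  indexcol;       /* index column the clause uses */
    List       *indexcols;      /* multiple index columns, for RowCompare */
} IndexClause;
```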
-
Tom Lane authored
The previous ordering of these steps satisfied the nominal requirement that set_rel_pathlist_hook could editorialize on the whole set of Paths constructed for a base relation. In practice, though, trying to change the set of partial paths was impossible. Adding one didn't work because (a) it was too late to be included in Gather paths made by the core code, and (b) calling add_partial_path after generate_gather_paths is unsafe, because it might try to delete a path it thinks is dominated, but that is already embedded in some Gather path(s). Nor could the hook safely remove partial paths, for the same reason that they might already be embedded in Gathers.

Better to call extensions first, let them add partial paths as desired, and then gather. In v11 and up, we already doubled down on that ordering by postponing gathering even further for single-relation queries; so even if the hook wished to editorialize on Gather path construction, it could not.

Report and patch by KaiGai Kohei. Back-patch to 9.6 where Gather paths were added.

Discussion: https://postgr.es/m/CAOP8fzahwpKJRTVVTqo2AE=mDTz_efVzV6Get_0=U3SO+-ha1A@mail.gmail.com
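A sketch of how an extension typically installs this hook, using the standard hook-chaining pattern; the partial-path construction itself is elided and only described in a comment. With the ordering fixed by this commit, a partial path added here is still in time to be picked up by the core code's Gather path generation.

```c
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/paths.h"

PG_MODULE_MAGIC;

void        _PG_init(void);

static set_rel_pathlist_hook_type prev_set_rel_pathlist_hook = NULL;

static void
my_set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
                    Index rti, RangeTblEntry *rte)
{
    /* Chain to any previously installed hook first. */
    if (prev_set_rel_pathlist_hook)
        prev_set_rel_pathlist_hook(root, rel, rti, rte);

    /*
     * Build a custom partial path for "rel" here and register it with
     * add_partial_path(rel, path); the Gather paths that core builds
     * afterwards can then make use of it.
     */
}

void
_PG_init(void)
{
    prev_set_rel_pathlist_hook = set_rel_pathlist_hook;
    set_rel_pathlist_hook = my_set_rel_pathlist;
}
```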
-
Peter Eisentraut authored
The comment marker "#" is copied to the output, so it's only appropriate for comments that make sense in the shell output. For comments about the Autoconf language, "dnl" should be used.
-