- 20 Mar, 2017 1 commit

Tom Lane authored
TidScan plan nodes were not systematically tested before. These additions raise the LOC coverage number for the basic regression tests from 52% to 92% in nodeTidscan.c, and from 60% to 93% in tidpath.c.

Andres Freund, tweaked a bit by me

Discussion: https://postgr.es/m/20170320062511.hp5qeurtxrwsvfxr@alap3.anarazel.de
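
A TID scan fires when a query constrains the system column ctid directly; a minimal sketch of the kind of query these tests exercise (table and values are illustrative):

```sql
CREATE TABLE tidtest AS SELECT g AS a FROM generate_series(1, 100) g;

-- equality on ctid is the canonical TidScan case
SELECT ctid, a FROM tidtest WHERE ctid = '(0,1)';

-- OR'd ctid conditions also go through nodeTidscan.c
SELECT a FROM tidtest WHERE ctid = '(0,1)' OR ctid = '(0,2)';
```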

- 15 Mar, 2017 1 commit

Stephen Frost authored
This adds support for EUI-64 MAC addresses via a new data type called 'macaddr8' (using our usual convention of indicating the number of bytes stored). This was largely a copy-and-paste from the macaddr data type, with appropriate adjustments for having 8 bytes instead of 6, plus support for converting a provided EUI-48 (6 byte format) to the EUI-64 format.

Conversion from EUI-48 to EUI-64 inserts FFFE as the 4th and 5th bytes but does not perform the IPv6 modified EUI-64 action of flipping the 7th bit; instead, we add a function to perform that specific action for the user, as it is commonly needed by users who wish to calculate their IPv6 address from their network prefix and 48-bit MAC address.

Author: Haribabu Kommi, with a good bit of rework of macaddr8_in by me.
Reviewed by: Vitaly Burovoy, Kuntal Ghosh
Discussion: https://postgr.es/m/CAJrrPGcUi8ZH+KkK+=TctNQ+EfkeCEHtMU_yo1mvX8hsk_ghNQ@mail.gmail.com
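
A sketch of the conversion behaviour described above (the sample value follows the PostgreSQL documentation's example):

```sql
-- 6-byte input is widened by inserting FF:FE as the 4th and 5th bytes
SELECT '08:00:2b:01:02:03'::macaddr8;
-- 08:00:2b:ff:fe:01:02:03

-- flipping the 7th bit for IPv6 modified EUI-64 is a separate, explicit step
SELECT macaddr8_set7bit('08:00:2b:01:02:03'::macaddr8);
-- 0a:00:2b:ff:fe:01:02:03
```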

- 11 Mar, 2017 1 commit

Andres Freund authored
Upcoming patches are revamping expression evaluation significantly. It therefore seems prudent to try to ensure that the coverage of the existing evaluation code is high. This commit adds coverage for the cases that can reasonably be tested. There's still a bunch of unreachable error messages and such, but otherwise this achieves nearly full regression test coverage (with the exception of the unused GetAttributeByNum/GetAttributeByName).

Author: Andres Freund
Discussion: https://postgr.es/m/20170310194021.ek4bs4bl2khxkmll@alap3.anarazel.de

- 07 Mar, 2017 1 commit

Heikki Linnakangas authored
Michael Paquier.

- 06 Mar, 2017 1 commit

Stephen Frost authored
When performing a pg_upgrade, we copy the files behind pg_largeobject and pg_largeobject_metadata, allowing us to avoid having to dump out and reload the actual data for large objects and their ACLs.

Unfortunately, that isn't all of the information which can be associated with large objects. Currently, we also support COMMENTs and SECURITY LABELs on large objects, and these were being silently dropped during a pg_upgrade, as pg_dump would skip everything having to do with a large object and pg_upgrade only copied the tables mentioned to the new cluster.

As the file copies happen after the catalog dump and reload, we can't simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode output; we also have to include the actual large object definition. With the definition, comments, and security labels in the pg_dump output and the file copies performed by pg_upgrade, all of the data and metadata associated with large objects can be successfully pulled forward across a pg_upgrade.

In 9.6 and master, we can simply adjust the dump bitmask to indicate which components we don't want. In 9.5 and earlier, we have to put explicit checks in dumpBlob() and dumpBlobs() to not include the ACL or the data when in binary-upgrade mode.

Adjustments were made to the privileges regression test to allow another test (large_object.sql) to be added, which explicitly leaves a large object with a comment in place to provide coverage of that case with pg_upgrade.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
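
The sort of metadata that was being lost, as a sketch (the OID is arbitrary):

```sql
-- a large object plus a comment; before this fix, pg_upgrade kept the
-- object's data but silently dropped the comment
SELECT lo_create(42);
COMMENT ON LARGE OBJECT 42 IS 'documentation that should survive pg_upgrade';
```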

- 30 Jan, 2017 1 commit

Tom Lane authored
Quite a few of our built-in system views were not exercised anywhere in the regression tests. This is perhaps not so exciting for the ones that are simple projections/joins of system catalogs, but for the ones that are wrappers for set-returning C functions, the omission translates directly to lack of test coverage for those functions.

In many cases, the reason for the omission is that the view doesn't have much to do with any specific SQL feature, so there's no natural place to test it. To remedy that, invent a new script sysviews.sql that's dedicated to testing SRF-based views. Move a couple of tests that did fit this charter into the new script, and add simple "count(*)" based tests of other views within the charter. That's enough to ensure we at least exercise the main code path through the SRF, although it does little to prove that the output is sane.

More could be done here, no doubt, and I hope someone will think about how we can test these views more thoroughly. But this is a starting point.

Discussion: https://postgr.es/m/19359.1485723741@sss.pgh.pa.us
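
The testing style in question, roughly (an illustrative sketch, not the script's exact contents):

```sql
-- exercises the underlying SRF end to end without depending on its output
SELECT count(*) >= 0 AS ok FROM pg_prepared_xacts;
SELECT count(*) > 0 AS ok FROM pg_timezone_abbrevs;
```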

- 20 Jan, 2017 1 commit

Peter Eisentraut authored
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers

From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
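
In release form this surfaces as CREATE PUBLICATION / CREATE SUBSCRIPTION; a minimal sketch (connection string and object names are illustrative):

```sql
-- on the publishing server
CREATE PUBLICATION mypub FOR TABLE users, orders;

-- on the subscribing server
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher.example.com dbname=app'
    PUBLICATION mypub;
```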

- 13 Sep, 2016 1 commit

Andres Freund authored
We're considering changing the implementation of targetlist SRFs considerably, and a lot of the current behaviour isn't tested in our regression tests. Thus it seems useful to increase coverage to avoid accidental behaviour changes.

It's quite possible that some of the plans here will require adjustments to avoid falling afoul of ordering differences (e.g. hashed group bys). The buildfarm will tell us.

Reviewed-By: Tom Lane
Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
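
An example of the behaviour being pinned down: under the implementation current at the time, two targetlist SRFs of different periods ran to the least common multiple of their row counts (a sketch):

```sql
-- 6 rows under the historic LCM rule; PostgreSQL 10 later changed this
-- to 3 rows with the shorter column NULL-padded
SELECT generate_series(1, 3) AS a, generate_series(1, 2) AS b;
```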

- 22 Aug, 2016 1 commit

Peter Eisentraut authored
Remove the plpgsql wrapping that hides the context, so the test will now fail if the work doesn't actually happen in a parallel worker. Run the test in its own test group to ensure it doesn't run out of the resources it needs for parallel workers.

- 13 Aug, 2016 1 commit

Tom Lane authored
Per discussion, we should provide such functions to replace the lost ability to discover AM properties by inspecting pg_am (cf commit 65c5fcd3). The added functionality is also meant to displace any code that was looking directly at pg_index.indoption, since we'd rather not believe that the bit meanings in that field are part of any client API contract.

As future-proofing, define the SQL API to not assume that properties that are currently AM-wide or index-wide will remain so unless they logically must be; instead, expose them only when inquiring about a specific index or even specific index column. Also provide the ability for an index AM to override the behavior.

In passing, document pg_am.amtype, overlooked in commit 473b9328.

Andrew Gierth, with kibitzing by me and others

Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
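
The functions added are pg_indexam_has_property(), pg_index_has_property(), and pg_index_column_has_property(); a sketch of their use:

```sql
-- per-AM, per-index, and per-column inquiry, respectively
SELECT am.amname, pg_indexam_has_property(am.oid, 'can_order')
FROM pg_am am WHERE am.amname = 'btree';

SELECT pg_index_has_property('pg_class_oid_index'::regclass, 'clusterable');
SELECT pg_index_column_has_property('pg_class_oid_index'::regclass, 1, 'orderable');
```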

- 08 Jun, 2016 1 commit

Noah Misch authored
Clément Prévost and Peter Eisentraut

- 09 Apr, 2016 1 commit

Alvaro Herrera authored
It cannot run in the same parallel group as misc, because it creates a table which is unpredictably visible in that test. Per buildfarm member crake.

- 08 Apr, 2016 2 commits

Teodor Sigaev authored
It's not ready yet; revert two commits:
690c5435 - unstable test output
386e3d76 - patch itself

Teodor Sigaev authored
Now indexes (but only B-tree for now) can contain "extra" columns which don't participate in the index structure; they are just stored in leaf tuples. This allows index-only scans to be served by a single index instead of two or more.

Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
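
A sketch of the feature (note this commit was reverted directly above; the idea eventually shipped in PostgreSQL 11, where the clause is spelled INCLUDE, used here; table and names are illustrative):

```sql
CREATE TABLE orders (id int, customer int, total numeric);

-- "total" rides along in leaf tuples only; it is not a key column,
-- but lets the query below be answered by an index-only scan
CREATE INDEX orders_cust_idx ON orders (customer) INCLUDE (total);

SELECT customer, total FROM orders WHERE customer = 42;
```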

- 07 Apr, 2016 1 commit

Stephen Frost authored
Now that all of the infrastructure exists, add in the ability to dump out the ACLs of the objects inside of pg_catalog, or the ACLs for objects which are members of extensions, but only if they have been changed from their original values. The original values are tracked in pg_init_privs.

When pg_dump'ing 9.6-and-above databases, we will dump out the ACLs for all objects in pg_catalog and the ACLs for all extension members, where the ACL has been changed from the original value which was set during either initdb or CREATE EXTENSION. This should not change dumps against pre-9.6 databases.

Reviews by Alexander Korotkov, Jose Luis Tallon
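
The kind of change this makes dumpable, as a sketch (the role and the chosen function are illustrative):

```sql
-- a non-default ACL on a catalog object; pg_dump can now emit this GRANT
-- because pg_init_privs remembers the privileges as they stood at initdb
GRANT EXECUTE ON FUNCTION pg_catalog.pg_stat_file(text) TO monitor;

-- 'i' marks privileges recorded at initdb time
SELECT count(*) FROM pg_init_privs WHERE privtype = 'i';
```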

- 25 Mar, 2016 1 commit

Tom Lane authored
This avoids leaving dangling links in pg_operator, which, while fairly harmless, are also unsightly. While we're at it, simplify OperatorUpd, which went through heap_modify_tuple for no very good reason considering it had already made a tuple copy it could just scribble on.

Roma Sokolov, reviewed by Tomas Vondra, additional hacking by Robert Haas and myself.

- 24 Mar, 2016 1 commit

Alvaro Herrera authored
This enables external code to create access methods. This is useful so that extensions can add their own access methods which can be formally tracked for dependencies, so that DROP operates correctly. Also, having explicit support makes pg_dump work correctly. Currently only index AMs are supported, but we expect different types to be added in the future.

Authors: Alexander Korotkov, Petr Jelínek
Reviewed-By: Teodor Sigaev, Petr Jelínek, Jim Nasby
Commitfest-URL: https://commitfest.postgresql.org/9/353/
Discussion: https://www.postgresql.org/message-id/CAPpHfdsXwZmojm6Dx+TJnpYk27kT4o7Ri6X_4OSWcByu1Rm+VA@mail.gmail.com
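
The new DDL, sketched as it would appear in an extension's install script (names are illustrative; the handler is a C function returning index_am_handler):

```sql
CREATE FUNCTION bloom_handler(internal)
RETURNS index_am_handler
AS 'MODULE_PATHNAME' LANGUAGE C;

CREATE ACCESS METHOD bloom TYPE INDEX HANDLER bloom_handler;

-- once created, the AM is usable like any built-in one
CREATE INDEX ON some_table USING bloom (some_column);
```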

- 05 Feb, 2016 1 commit

Tom Lane authored
An example use-case is "CHECK(num_nonnulls(a,b,c) = 1)" to assert that exactly one of a,b,c isn't NULL. The functions are variadic, so they can also be pressed into service to count the number of null or nonnull elements in an array.

Marko Tiikkaja, reviewed by Pavel Stehule
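
Both usages, sketched:

```sql
CREATE TABLE contact (
    email text, phone text, pager text,
    -- exactly one contact method must be supplied
    CHECK (num_nonnulls(email, phone, pager) = 1)
);

-- the VARIADIC keyword spreads an array into the argument list
SELECT num_nulls(VARIADIC ARRAY[1, NULL, 3, NULL]);  -- 2
```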

- 27 Nov, 2015 1 commit

Teodor Sigaev authored
This patch makes it possible to do COPY (query) without wrapping the query in a CTE.

Author: Marko Tiikkaja
Review: Michael Paquier
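
A hedged sketch of what this permits, assuming the query is a data-modifying statement with RETURNING (previously such a statement had to be buried in a WITH clause; the table is illustrative):

```sql
COPY (INSERT INTO audit_log (note) VALUES ('imported') RETURNING id, note)
TO STDOUT WITH (FORMAT csv);
```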

- 06 Nov, 2015 1 commit

Robert Haas authored
Previously, negative values were always displayed in bytes, regardless of how large they were.

Adrian Vondendriesch, reviewed by Julien Rouhaud and myself
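
The function concerned is pg_size_pretty(); a sketch:

```sql
SELECT pg_size_pretty(-10485760::bigint);
-- now: -10 MB (previously the unit scaling was not applied to negatives)
```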

- 08 Oct, 2015 1 commit

Andrew Dunstan authored
This lets us remove the large alternative results files for the main json and jsonb tests, which makes modifying those tests simpler for committers and patch submitters. Backpatch to 9.4 for jsonb and 9.3 for json.

- 21 Aug, 2015 1 commit

Stephen Frost authored
When reworking bypassrls in AlterRole to operate the same way the other attribute handling is done, I missed that the variable was incorrectly a bool rather than an int. This meant that on platforms with an unsigned char, we could end up with incorrect behavior during ALTER ROLE.

Pointed out by Andres thanks to tests he did changing our bool to be the one from stdbool.h, which showed this and a number of other issues.

Add regression tests to test CREATE/ALTER role for the various role attributes. Arrange to leave roles behind for testing pg_dumpall, but none which have the LOGIN attribute.

Back-patch to 9.5 where the AlterRole bug exists.

- 14 Jul, 2015 1 commit

Heikki Linnakangas authored
Other options cannot be changed, as it's not totally clear if cached plans would need to be invalidated if one of the other options change. Selectivity estimator functions only change plan costs, not correctness of plans, so those should be safe.

Original patch by Uriy Zhuravlev, heavily edited by me.
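
This surfaces as ALTER OPERATOR ... SET; a sketch against a hypothetical user-defined operator (the estimator names are built-in ones):

```sql
-- only the selectivity estimators may be swapped; they influence
-- plan costs, never plan correctness
ALTER OPERATOR === (box, box)
    SET (RESTRICT = areasel, JOIN = areajoinsel);
```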

- 16 May, 2015 1 commit

Andres Freund authored
This SQL standard functionality allows aggregating data by different GROUP BY clauses at once. Each grouping set returns rows with the columns it doesn't group by set to NULL. This could previously be achieved by doing each grouping as a separate query, conjoined by UNION ALLs. Besides being considerably more concise, grouping sets will in many cases be faster, requiring only one scan over the underlying data.

The current implementation of grouping sets only supports using sorting for input. Individual sets that share a sort order are computed in one pass. If there are sets that don't share a sort order, additional sort & aggregation steps are performed. These additional passes are sourced by the previous sort step, thus avoiding repeated scans of the source data. The code is structured in a way that adding support for purely using hash aggregation, or a mix of hashing and sorting, is possible. Sorting was chosen to be supported first, as it is the most generic method of implementation.

Instead of, as in earlier versions of the patch, representing the chain of sort and aggregation steps as full blown planner and executor nodes, all but the first sort are performed inside the aggregation node itself. This avoids the need to do some unusual gymnastics to handle having to return aggregated and non-aggregated tuples from underlying nodes, as well as having to shut down underlying nodes early to limit memory usage. The optimizer still builds Sort/Agg nodes to describe each phase, but they're not part of the plan tree; they are instead additional data for the aggregation node. They're a convenient and preexisting way to describe aggregation and sorting. The first (and possibly only) sort step is still performed as a separate execution step. That retains similarity with existing GROUP BY plans, makes rescans fairly simple, avoids very deep plans (leading to slow explains), and easily allows skipping the sorting step if the underlying data is sorted by other means.

A somewhat ugly side of this patch is having to deal with a grammar ambiguity between the new CUBE keyword and the cube extension/functions named cube (and rollup). To avoid breaking existing deployments of the cube extension it has not been renamed, neither has cube been made a reserved keyword. Instead precedence hacking is used to make GROUP BY cube(..) refer to the CUBE grouping sets feature, and not the function cube(). To actually group by a function cube(), unlikely as that might be, the function name has to be quoted.

Needs a catversion bump because stored rules may change.

Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
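
A minimal illustration of the syntax (table and columns are hypothetical); ROLLUP and CUBE are shorthands for common families of sets:

```sql
-- one scan, three aggregations: by brand, by size, and a grand total
SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY GROUPING SETS ((brand), (size), ());

-- ROLLUP (brand, size) = GROUPING SETS ((brand, size), (brand), ())
SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY ROLLUP (brand, size);
```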

- 15 May, 2015 1 commit

Simon Riggs authored
Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written, using a standard API. The basic version follows the SQL standard exactly. Usable concrete use cases for the sampling API follow in later commits.

Petr Jelinek
Reviewed by Michael Paquier and Simon Riggs
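
A sketch of both methods (the table name is illustrative):

```sql
-- row-level sampling: each row kept with 1% probability
SELECT * FROM measurements TABLESAMPLE BERNOULLI (1);

-- block-level sampling: ~1% of pages, cheaper but clumpier;
-- REPEATABLE fixes the seed so the sample is stable
SELECT * FROM measurements TABLESAMPLE SYSTEM (1) REPEATABLE (42);
```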

- 08 May, 2015 1 commit

Andres Freund authored
The newly added ON CONFLICT clause allows specifying an alternative to raising a unique or exclusion constraint violation error when inserting. ON CONFLICT refers to constraints that can either be specified using an inference clause (by specifying the columns of a unique constraint) or by naming a unique or exclusion constraint. DO NOTHING avoids the constraint violation, without touching the pre-existing row. DO UPDATE SET ... [WHERE ...] updates the pre-existing tuple, and has access to both the tuple proposed for insertion and the existing tuple; the optional WHERE clause can be used to prevent an update from being executed. The UPDATE SET and WHERE clauses have access to the tuple proposed for insertion using the "magic" EXCLUDED alias, and to the pre-existing tuple using the table name or its alias.

This feature is often referred to as upsert.

This is implemented using a new infrastructure called "speculative insertion". It is an optimistic variant of regular insertion that first does a pre-check for existing tuples and then attempts an insert. If a violating tuple was inserted concurrently, the speculatively inserted tuple is deleted and a new attempt is made. If the pre-check finds a matching tuple the alternative DO NOTHING or DO UPDATE action is taken. If the insertion succeeds without detecting a conflict, the tuple is deemed inserted.

To handle the possible ambiguity between the excluded alias and a table named excluded, and for convenience with long relation names, INSERT INTO now can alias its target table.

Bumps catversion as stored rules change.

Author: Peter Geoghegan, with significant contributions from Heikki Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs, Dean Rasheed, Stephen Frost and many others.
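
An upsert sketch exercising the pieces described above (table and values are illustrative):

```sql
CREATE TABLE counters (key text PRIMARY KEY, value int);

INSERT INTO counters AS c (key, value) VALUES ('hits', 1)
ON CONFLICT (key)                                  -- infer the unique index on key
DO UPDATE SET value = c.value + EXCLUDED.value     -- EXCLUDED = proposed row
WHERE c.key <> 'frozen';                           -- optional guard on the update
```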

- 09 Mar, 2015 1 commit

Alvaro Herrera authored
Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT PRIVILEGES, can now take the keywords CURRENT_USER and SESSION_USER as user specifiers in place of an explicit user name.

This commit also fixes some quite ugly handling of special standards-mandated syntax in CREATE USER MAPPING, which in particular would fail to work in presence of a role named "current_user". The special role specifiers PUBLIC and NONE also have more consistent handling now.

Also take the opportunity to add location tracking to user specifiers.

Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
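
A few of the newly accepted forms, as a sketch (the table is illustrative):

```sql
ALTER TABLE widgets OWNER TO CURRENT_USER;
GRANT ALL ON widgets TO SESSION_USER;
ALTER DEFAULT PRIVILEGES FOR ROLE CURRENT_USER
    GRANT SELECT ON TABLES TO PUBLIC;
```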

- 08 Jan, 2015 1 commit

Stephen Frost authored
The event trigger test for rowsecurity can cause problems for other tests which are run in parallel with it. Instead of running that test in the rowsecurity set, move it to the event_trigger set, which runs isolated from other tests. Also reverts 7161b082, which moved rowsecurity into its own test group. That's no longer necessary, now that the event trigger test is gone from the rowsecurity set of tests. Pointed out by Tom.

- 31 Dec, 2014 1 commit

Tom Lane authored
The short-lived event trigger in the rowsecurity test causes irreproducible failures when the concurrent tests do something that the event trigger can't cope with. Per buildfarm.

- 23 Dec, 2014 1 commit

Alvaro Herrera authored
This allows access to get_object_address from SQL, which is useful to obtain OID addressing information from data equivalent to that emitted by the parser. This is necessary infrastructure for a project to let replication systems propagate object dropping events to remote servers, where the schema might be different than the server originating the DROP.

This patch also adds support for OBJECT_DEFAULT to get_object_address; that is, it is now possible to refer to a column's default value.

Catalog version bumped due to the new function.

Reviewed by Stephen Frost, Heikki Linnakangas, Robert Haas, Andres Freund, Abhijit Menon-Sen, Adam Brightwell.
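
The SQL-level wrapper is pg_get_object_address(); a sketch (the table name is hypothetical, and the function raises an error if the object does not exist):

```sql
-- resolve a name to (classid, objid, objsubid)
SELECT classid::regclass, objid, objsubid
FROM pg_get_object_address('table', '{public,widgets}', '{}');
```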

- 07 Dec, 2014 1 commit

Simon Riggs authored
Generate a table_rewrite event when ALTER TABLE attempts to rewrite a table. Provide helper functions to identify table and reason.

Intended use case is to help assess or to react to schema changes that might hold exclusive locks for long periods.

Dimitri Fontaine, triggering an edit by Simon Riggs
Reviewed in detail by Michael Paquier
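
A sketch of an event trigger built on this; the two helper functions are pg_event_trigger_table_rewrite_oid() and pg_event_trigger_table_rewrite_reason():

```sql
CREATE FUNCTION log_rewrite() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'rewriting table % (reason code %)',
        pg_event_trigger_table_rewrite_oid()::regclass,
        pg_event_trigger_table_rewrite_reason();
END $$;

CREATE EVENT TRIGGER notice_rewrite ON table_rewrite
    EXECUTE PROCEDURE log_rewrite();
```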

- 19 Nov, 2014 1 commit

Heikki Linnakangas authored
That includes VACUUM on GIN, GiST and SP-GiST indexes, and B-tree indexes large enough to cause page deletions in B-tree. Plus some other special cases. After this patch, the regression tests generate all different WAL record types. Not all branches within the redo functions are covered, but it's a step forward.

- 07 Nov, 2014 1 commit

Alvaro Herrera authored
BRIN is a new index access method intended to accelerate scans of very large tables, without the maintenance overhead of btrees or other traditional indexes. They work by maintaining "summary" data about block ranges. Bitmap index scans work by reading each summary tuple and comparing them with the query quals; all pages in the range are returned in a lossy TID bitmap if the quals are consistent with the values in the summary tuple, otherwise not. Normal index scans are not supported because these indexes do not store TIDs.

As new tuples are added into the index, the summary information is updated (if the block range in which the tuple is added is already summarized) or not; in the latter case, a subsequent pass of VACUUM or the brin_summarize_new_values() function will create the summary information.

For data types with natural 1-D sort orders, the summary info consists of the maximum and the minimum values of each indexed column within each page range. This type of operator class we call "Minmax", and we supply a bunch of them for most data types with B-tree opclasses. Since the BRIN code is generalized, other approaches are possible for things such as arrays, geometric types, ranges, etc; even for things such as enum types we could do something different than minmax with better results. In this commit I only include minmax.

Catalog version bumped due to new builtin catalog entries.

There's more that could be done here, but this is a good step forwards.

Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera, with contribution by Heikki Linnakangas.

Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.

PS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318633.
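
Basic usage, sketched (table and index names are illustrative):

```sql
-- BRIN shines on columns whose values correlate with physical row order,
-- e.g. an append-only timestamp
CREATE INDEX meas_logdate_brin ON measurements USING brin (logdate);

-- ranges left unsummarized by inserts are picked up by VACUUM, or on demand:
SELECT brin_summarize_new_values('meas_logdate_brin');
```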

- 01 Oct, 2014 1 commit

Tom Lane authored
As of commit a87c7291 (which later got backpatched as far as 9.1), we're explicitly supporting the notion that append relations can be nested; this can occur when UNION ALL constructs are nested, or when a UNION ALL contains a table with inheritance children.

Bug #11457 from Nelson Page, as well as an earlier report from Elvis Pranskevichus, showed that there were still nasty bugs associated with such cases: in particular the EquivalenceClass mechanism could try to generate "join" clauses connecting an appendrel child to some grandparent appendrel, which would result in assertion failures or bogus plans.

Upon investigation I concluded that all current callers of find_childrel_appendrelinfo() need to be fixed to explicitly consider multiple levels of parent appendrels. The most complex fix was in processing of "broken" EquivalenceClasses, which are ECs for which we have been unable to generate all the derived equality clauses we would like to because of missing cross-type equality operators in the underlying btree operator family. That code path is more or less entirely untested by the regression tests to date, because no standard opfamilies have such holes in them. So I wrote a new regression test script to try to exercise it a bit, which turned out to be quite a worthwhile activity as it exposed existing bugs in all supported branches.

The present patch is essentially the same as far back as 9.2, which is where parameterized paths were introduced. In 9.0 and 9.1, we only need to back-patch a small fragment of commit 5b7b5518, which fixes failure to propagate out the original WHERE clauses when a broken EC contains constant members. (The regression test case results show that these older branches are noticeably stupider than 9.2+ in terms of the quality of the plans generated; but we don't really care about plan quality in such cases, only that the plan not be outright wrong. A more invasive fix in the older branches would not be a good idea anyway from a plan-stability standpoint.)

- 19 Sep, 2014 1 commit

Stephen Frost authored
Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.

New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row security can be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.

By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed, and an error will be thrown if the user does not have rights to bypass row security.

Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table, or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.

A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to be able to bypass row security using row_security = OFF.

Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback.

Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me.
Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.
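
A sketch of the resulting DDL (in released versions the ALTER TABLE clause is spelled ENABLE ROW LEVEL SECURITY; table and policy names are illustrative):

```sql
CREATE TABLE accounts (manager text, balance numeric);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY manager_rows ON accounts
    USING (manager = current_user)        -- filters rows that are visible
    WITH CHECK (manager = current_user);  -- vets rows being added
```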

- 08 Apr, 2014 1 commit

Robert Haas authored
These functions won't throw an error if the object doesn't exist, or if (for functions and operators) there's more than one matching object.

Yugo Nagata and Nozomi Anzai, reviewed by Amit Khandekar, Marti Raudsepp, Amit Kapila, and me.
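
The functions in question are the to_reg* lookup family (to_regclass, to_regproc, to_regoper, to_regtype); a sketch:

```sql
SELECT to_regclass('no_such_table');  -- NULL rather than an error
SELECT to_regproc('max');             -- NULL: multiple functions match
SELECT to_regtype('integer');         -- resolves normally
```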

- 23 Mar, 2014 1 commit

Andrew Dunstan authored
The new format accepts exactly the same data as the json type. However, it is stored in a format that does not require reparsing the original text in order to process it, making it much more suitable for indexing and other operations. Insignificant whitespace is discarded, and the order of object keys is not preserved. Neither are duplicate object keys kept; only the last value for a given key is stored.

The new type has all the functions and operators that the json type has, with the exception of the json generation functions (to_json, json_agg etc.), and with identical semantics. In addition, there are operator classes for hash and btree indexing, and two classes for GIN indexing, that have no equivalent in the json type.

This feature grew out of previous work by Oleg Bartunov and Teodor Sigaev, which was intended to provide similar facilities to a nested hstore type, but which in the end proved to have some significant compatibility issues.

Authors: Oleg Bartunov, Teodor Sigaev, Peter Geoghegan and Andrew Dunstan.
Review: Andres Freund
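
A sketch of the normalization semantics and of GIN-indexed containment (the table is illustrative):

```sql
-- whitespace, key order, and duplicate keys are normalized away;
-- the later value for a duplicated key wins
SELECT '{"a": 1, "a": 2,   "b": 3}'::jsonb;
-- result: {"a": 2, "b": 3}

CREATE TABLE docs (body jsonb);
CREATE INDEX ON docs USING gin (body);
SELECT * FROM docs WHERE body @> '{"a": 2}';  -- containment, GIN-accelerated
```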

- 19 Feb, 2014 1 commit

Robert Haas authored
Robert Haas and Michael Paquier

- 08 Nov, 2013 1 commit

Robert Haas authored
Pending patches for logical replication will use this to determine which columns of a tuple ought to be considered as its candidate key.

Andres Freund, with minor, mostly cosmetic adjustments by me
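
This is the REPLICA IDENTITY table property; in SQL it appears as (a sketch, with table and index names hypothetical):

```sql
-- use a specific unique index as the tuple's candidate key
ALTER TABLE t REPLICA IDENTITY USING INDEX t_pkey;

-- or log the entire old row for updates and deletes
ALTER TABLE t REPLICA IDENTITY FULL;
```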

- 10 Oct, 2013 1 commit

Peter Eisentraut authored
Change the input/output format to {A,B,C}, to match the internal representation. Complete the implementations of line_in, line_out, line_recv, line_send. Remove comments and error messages about the line type not being implemented. Add regression tests for existing line operators and functions.

Reviewed-by: rui hua <365507506hua@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Jeevan Chalke <jeevan.chalke@enterprisedb.com>
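
A sketch of the repaired type; the {A,B,C} form encodes the line Ax + By + C = 0:

```sql
SELECT '{1,-1,0}'::line;                        -- the line y = x
SELECT '{1,-1,0}'::line ?|| '{1,-1,3}'::line;   -- parallel: true
SELECT '{1,-1,0}'::line ?-| '{1,1,0}'::line;    -- perpendicular: true
```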