- 20 Sep, 2021 3 commits
-
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 10b675b81a3a04bac460cb049e0b7b6e17fb4795
-
Peter Eisentraut authored
A few characters in the acknowledgments list cannot be handled by the PDF build, so replace with a similar ASCII character.
-
Peter Eisentraut authored
current through 66061077
-
- 19 Sep, 2021 3 commits
-
-
Tomas Vondra authored
Since introduction of extended statistics, we've disallowed references to system columns. So for example

  CREATE STATISTICS s ON ctid FROM t;

would fail. But with extended statistics on expressions, it was possible to work around this limitation quite easily:

  CREATE STATISTICS s ON (ctid::text) FROM t;

This is an oversight in a4d75c86, fixed by adding a simple check. Backpatch to PostgreSQL 14, where support for extended statistics on expressions was introduced.

Backpatch-through: 14
Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
-
Tom Lane authored
A recent question reminded me that the notes' description of commit 86dc9005 rather undersold its benefits. Discussion: https://postgr.es/m/4a3115d4-0fb2-e214-93e3-9a9d0974b883@deepbluecap.com
-
Tom Lane authored
"PGcon" should be "PGconn". Noted by D. Frey. Discussion: https://postgr.es/m/163191739352.4680.16994248583642672629@wrigleys.postgresql.org
-
- 18 Sep, 2021 2 commits
-
-
Tom Lane authored
Improve various item descriptions. Rearrange some things into (IMO) more logical order. Fix missing markup and dubious choices of link destinations. Drop a couple of items that were later back-patched or otherwise don't seem to need to be documented here.
-
- 17 Sep, 2021 2 commits
-
-
Peter Geoghegan authored
An elog that reports the value of a transaction ID stored on a deleted nbtree page was added by commit e5d8a999, which taught page deletion to store full 64-bit XIDs. It seems very chatty on further reflection, so lower its elevel from NOTICE to DEBUG2. Author: Peter Geoghegan <pg@bowt.ie> Backpatch: 14-, just like the nbtree XID enhancement.
-
Tom Lane authored
Commit 55dc86ec changed pull_varnos to use (if possible) the associated ph_eval_at for a PlaceHolderVar. I missed a fine point though: we might be looking at a PHV in the quals or tlist of a child appendrel, in which case we need to compute a ph_eval_at value that's been translated in the same way that the PHV itself has been (cf. adjust_appendrel_attrs).

Fortunately, enough info is available in the PlaceHolderInfo to make such translation possible without additional outside data, so we don't need another round of uglification of planner APIs. This is a little bit complicated, but since it's a hard-to-hit corner case, I'm not much worried about adding cycles here.

Per report from Jaime Casanova. Back-patch to v12, like the previous commit.

Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
-
- 16 Sep, 2021 5 commits
-
-
Tom Lane authored
The rewriter transformation for SEARCH BREADTH FIRST produces a FieldSelect on a Var of type RECORD, where the Var references the recursive union's worktable output. EXPLAIN VERBOSE failed to handle this case, because it only expected such Vars to appear in CteScans not WorkTableScans. Fix that, and add some test cases exercising EXPLAIN on SEARCH and CYCLE queries.

In principle this oversight is an old bug, but it seems that the case is unreachable without SEARCH BREADTH FIRST, because the parser fails when attempting to create such a reference manually. So for today I'll just patch HEAD/v14. Someday we might find that the code portion of this patch needs to be back-patched further.

Per report from Atsushi Torikoshi.

Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
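For reference, a minimal sketch of the kind of query whose EXPLAIN VERBOSE output previously failed (table and column names are illustrative):

  EXPLAIN (VERBOSE, COSTS OFF)
  WITH RECURSIVE tree(id, parent_id) AS (
      SELECT 1, NULL::int
    UNION ALL
      SELECT t.id + 1, t.id FROM tree t WHERE t.id < 5
  ) SEARCH BREADTH FIRST BY id SET ordercol
  SELECT * FROM tree;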
-
Peter Eisentraut authored
-
Andres Freund authored
Session statistics, as introduced by 960869da, had several shortcomings:

- an additional GetCurrentTimestamp() call that also impaired the accuracy of the data collected

  This can be avoided by passing the current timestamp we already have in pgstat_report_stat().

- an additional statistics UDP packet sent every 500ms

  This is solved by adding the new statistics to PgStat_MsgTabstat. This is conceptually ugly, because session statistics are not table statistics. But the struct already contains data unrelated to tables, so there is not much damage done.

  Connection and disconnection are reported in separate messages, which reduces the number of additional messages to two messages per session and a slight increase in PgStat_MsgTabstat size (but the same number of table stats fit).

- Session time computation could overflow on systems where long is 32 bit.

Reported-By: Andres Freund <andres@anarazel.de>
Author: Andres Freund <andres@anarazel.de>
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/20210801205501.nyxzxoelqoo4x2qc%40alap3.anarazel.de
Backpatch: 14-, where the feature was introduced.
-
Fujii Masao authored
ProcArrayGroupClearXid function has a parameter named "proc", but the same name was used for its local variables. This commit fixes this variable shadowing, to improve code readability. Back-patch to all supported versions, to make future back-patching easy, though this patch is classified as refactoring only. Reported-by: Ranier Vilela Author: Ranier Vilela, Aleksander Alekseev Discussion: https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
-
Fujii Masao authored
All size_t variables declared in procarray.c are actually int ones. Let's use int instead of size_t for those variables, which reduces -Wsign-compare compiler warnings. Back-patch to v14 where commit 941697c3 added size_t variables in procarray.c, to make future back-patching easy, though this patch is classified as refactoring only. Reported-by: Ranier Vilela Author: Ranier Vilela, Aleksander Alekseev Discussion: https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
-
- 15 Sep, 2021 3 commits
-
-
Tom Lane authored
It's possible to execute user-defined SQL in some background processes; for example, logical replication workers can fire triggers. This opens the possibility that someone would try to execute LISTEN in such a context. But since only regular backends ever call ProcessNotifyInterrupt, no messages would actually be received, and thus the registered listener would simply prevent the message queue from being cleaned. Eventually NOTIFY would stop working, which is bad.

Perhaps someday somebody will invent infrastructure to make listening in a background worker actually useful. In the meantime, forbid it.

Back-patch to v13, which is where we introduced the MyBackendType variable. It'd be a lot harder to implement the check without that, and it doesn't seem worth the trouble.

Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
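As an illustrative sketch (all names hypothetical), a trigger like the following, when fired by a logical replication apply worker, now draws an error instead of silently registering a listener that would never be serviced:

  CREATE FUNCTION listen_on_insert() RETURNS trigger AS $$
  BEGIN
    LISTEN my_channel;  -- now rejected when executed in a background process
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER t_listen AFTER INSERT ON replicated_tbl
    FOR EACH ROW EXECUTE FUNCTION listen_on_insert();
  -- make the trigger fire in the apply worker on the subscriber
  ALTER TABLE replicated_tbl ENABLE ALWAYS TRIGGER t_listen;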
-
Peter Eisentraut authored
Commit 054adca6 neglected to initialize the type_id field of the synthesized type cache entry, so it would make a new one on every call. Also, better use the per-function memory context for this; otherwise it leaks memory. Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
-
Daniel Gustafsson authored
The available refresh options are specified as refresh_options under REFRESH PUBLICATION, and DROP PUBLICATION itself has an option named refresh. Clarify what we mean by refresh options to avoid confusion. Backpatch through v14 where ALTER SUBSCRIPTION ... DROP PUBLICATION was introduced. Author: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Discussion: https://postgr.es/m/CAD21AoCm1wJ3A8Q9EmBjRbShYkJ+o+Oa_z9O0hvwhvhUa2BSyg@mail.gmail.com Backpatch-through: 14
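For reference, the option in question is the one spelled like this (subscription and publication names are illustrative):

  ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub WITH (refresh = false);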
-
- 14 Sep, 2021 3 commits
-
-
Tom Lane authored
Formerly, we sent signals for outgoing NOTIFY messages within ProcessCompletedNotifies, which was also responsible for sending relevant ones of those messages to our connected client. It therefore had to run during the main-loop processing that occurs just before going idle. This arrangement had two big disadvantages:

* Now that procedures allow intra-command COMMITs, it would be useful to send NOTIFYs to other sessions immediately at COMMIT (though, for reasons of wire-protocol stability, we still shouldn't forward them to our client until end of command).

* Background processes such as replication workers would not send NOTIFYs at all, since they never execute the client communication loop. We've had requests to allow triggers running in replication workers to send NOTIFYs, so that's a problem.

To fix these things, move transmission of outgoing NOTIFY signals into AtCommit_Notify, where it will happen during CommitTransaction. Also move the possible call of asyncQueueAdvanceTail there, to ensure we don't bloat the async SLRU if a background worker sends many NOTIFYs with no one listening.

We can also drop the call of asyncQueueReadAllNotifications, allowing ProcessCompletedNotifies to go away entirely. That's because commit 79002697 added a call of ProcessNotifyInterrupt adjacent to PostgresMain's call of ProcessCompletedNotifies, and that does its own call of asyncQueueReadAllNotifications, meaning that we were uselessly doing two such calls (inside two separate transactions) whenever inbound notify signals coincided with an outbound notify. We need only set notifyInterruptPending to ensure that ProcessNotifyInterrupt runs, and we're done.

The existing documentation suggests that custom background workers should call ProcessCompletedNotifies if they want to send NOTIFY messages. To avoid an ABI break in the back branches, reduce it to an empty routine rather than removing it entirely. Removal will occur in v15.

Although the problems mentioned above have existed for awhile, I don't feel comfortable back-patching this any further than v13. There was quite a bit of churn in adjacent code between 12 and 13. At minimum we'd have to also backpatch 51004c71, and a good deal of other adjustment would also be needed, so the benefit-to-risk ratio doesn't look attractive.

Per bug #15293 from Michael Powers (and similar gripes from others).

Artur Zakirov and Tom Lane

Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
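A minimal sketch of the behavior this enables (procedure and channel names hypothetical): each intra-procedure COMMIT now signals listening sessions immediately, rather than when the calling session next goes idle:

  CREATE PROCEDURE notify_each_batch() LANGUAGE plpgsql AS $$
  BEGIN
    -- ... first batch of work ...
    NOTIFY batch_done;
    COMMIT;  -- other sessions' listeners are signaled here now
    -- ... second batch of work ...
    NOTIFY batch_done;
    COMMIT;
  END;
  $$;

  CALL notify_each_batch();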
-
Tom Lane authored
It's possible for us to copy an AlternativeSubPlan expression node into multiple places, for example the scan quals of several partition children. Then it's possible that we choose a different one of the alternatives as optimal in each place. Commit 41efb834 failed to consider this scenario, so its attempt to remove "unused" subplans could remove subplans that were still used elsewhere.

Fix by delaying the removal logic until we've examined all the AlternativeSubPlans in a given query level. (This does assume that AlternativeSubPlans couldn't get copied to other query levels, but for the foreseeable future that's fine; cf qual_is_pushdown_safe.)

Per report from Rajkumar Raghuwanshi. Back-patch to v14 where the faulty logic came in.

Discussion: https://postgr.es/m/CAKcux6==O3NNZC3bZ2prRYv3cjm3_Zw1GfzmOjEVqYN4jub2+Q@mail.gmail.com
-
Andres Freund authored
If an allocation failed within LLVM it is not safe to call back into LLVM as LLVM is not generally safe against exceptions / stack-unwinding. Thus errors while in LLVM code are promoted to FATAL. However llvm_shutdown() did call back into LLVM even in such cases, while llvm_release_context() was careful not to do so.

We cannot generally skip shutting down LLVM, as that can break profiling. But it's OK to do so if there was an error from within LLVM.

Reported-By: Jelte Fennema <Jelte.Fennema@microsoft.com>
Author: Andres Freund <andres@anarazel.de>
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178C52CCA0A8DEA0207DC14F7FF9@AM5PR83MB0178.EURPRD83.prod.outlook.com
Backpatch: 11-, where jit was introduced
-
- 13 Sep, 2021 7 commits
-
-
Andres Freund authored
In d9d8aa9bb9a I added a defensive NULL assignment to protect against a not-too-smart compiler warning about uninitialized variable use after the switch. Unfortunately I only did so on master and forgot to adjust that for 14. Stephen noticed that there actually is a compiler warning :(. Reported-By: Stephen Frost <sfrost@snowman.net> Discussion: https://postgr.es/m/20210827224639.GX17906@tamriel.snowman.net
-
Tom Lane authored
Commits ffa2e467 and 52a10224 caused libpq's connection-establishment functions to usually leave a nonempty string in the connection's errorMessage buffer, even after a successful connection. While that was intentional on my part, more sober reflection says that it wasn't a great idea: the string would be a bit confusing. Also this broke at least one application that checked for connection success by examining the errorMessage, instead of using PQstatus() as documented.

Let's clear the buffer at success exit, restoring the pre-v14 behavior.

Discussion: https://postgr.es/m/4170264.1620321747@sss.pgh.pa.us
-
Tom Lane authored
Ordinarily, using EXIT this way would draw "control reached end of function without RETURN". However, if the function is one where we don't require an explicit RETURN (such as a DO block), that should not happen. It did anyway, because add_dummy_return() neglected to account for the case. Per report from Herwig Goemans. Back-patch to all supported branches. Discussion: https://postgr.es/m/868ae948-e3ca-c7ec-95a6-83cfc08ef750@gmail.com
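A minimal sketch of the now-accepted case:

  DO $$
  <<blk>>
  BEGIN
    RAISE NOTICE 'before exit';
    EXIT blk;  -- leaves the block; formerly drew "control reached end of
               -- function without RETURN" although a DO block needs no RETURN
    RAISE NOTICE 'not reached';
  END;
  $$;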
-
Bruce Momjian authored
Reported-by: Robert Haas Backpatch-through: 14 only
-
Etsuro Fujita authored
The type information for FDW options is only added to HEAD; remove this from back branches. Oversight in commit aa769f80e. Apply the patch to v12, v13, and v14. Discussion: https://postgr.es/m/CAPmGK14z92twaKwRoccHbbh5Va5vbRDZcTYYTx50+0JTQ8xx_g@mail.gmail.com
-
Amit Kapila authored
While processing toast changes in logical decoding, we rejigger the tuple change to point to in-memory toast tuples instead of on-disk toast tuples. And, to make sure the memory accounting is correct, we were subtracting the old change size and then, after re-computing the new tuple, re-adding its size at the end. Now, if there is any error before we add the new size, we will release the changes and that will update the accounting info (subtracting the size from the counters). And we were underflowing there, which leads to an assertion failure in assert-enabled builds and wrong memory accounting in the reorder buffer otherwise.

Author: Bertrand Drouvot
Reviewed-by: Amit Kapila
Backpatch-through: 13, where memory accounting was introduced
Discussion: https://postgr.es/m/92b0ee65-b8bd-e42d-c082-4f3f4bf12d34@amazon.com
-
Michael Paquier authored
An out-of-memory failure happening when allocating the structures to store the connection parameter keywords and values would mess up the set of connections saved, as on failure the pthread mutex would still be held with the new connection object listed but free()'d. Rather than just unlocking the mutex, which would leave the static list of connections in an inconsistent state, move the allocation of the structures for the connection parameters before beginning the list manipulation. This ensures that the list of connections and the connection mutex remain consistent all the time in this code path.

This error is unlikely to happen, but it could mess up badly with ECPG clients in surprising ways, so backpatch all the way down.

Reported-by: ryancaicse
Discussion: https://postgr.es/m/17186-b4cfd8f0eb4d1dee@postgresql.org
Backpatch-through: 9.6
-
- 11 Sep, 2021 1 commit
-
-
Tom Lane authored
If search_start is greater than the length of the string, we should just return REG_NOMATCH immediately. (Note that the equality case should *not* be rejected, since the pattern might be able to match zero characters.) This guards various internal assumptions that the min of a range of string positions is not more than the max. Violation of those assumptions could allow an attempt to fetch string[search_start-1], possibly causing a crash.

Jaime Casanova pointed out that this situation is reachable with the new regexp_xxx functions that accept a user-specified start position. I don't believe it's reachable via any in-core call site in v14 and below. However, extensions could possibly call pg_regexec with an out-of-range search_start, so let's back-patch the fix anyway.

Discussion: https://postgr.es/m/20210911180357.GA6870@ahch-to
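For illustration, with the new functions in HEAD that take a start position (the particular function chosen here is incidental):

  SELECT regexp_instr('abc', 'b', 10);  -- start is past the end of the string:
                                        -- returns 0 (no match) rather than crashing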
-
- 10 Sep, 2021 1 commit
-
-
Tom Lane authored
We have long forbidden fetching backwards from a NO SCROLL cursor, but the prohibition didn't extend to cases in which we rewind the query altogether and then re-fetch forwards. I think the reason is that this logic was mainly meant to protect plan nodes that can't be run in the reverse direction. However, re-reading the query output is problematic if the query is volatile (which includes SELECT FOR UPDATE, not just queries with volatile functions): the re-read can produce different results, which confuses the cursor navigation logic completely. Another reason for disliking this approach is that some code paths will either fetch backwards or rewind-and-fetch-forwards depending on the distance to the target row; so that seemingly identical use-cases may or may not draw the "cursor can only scan forward" error. Hence, let's clean things up by disallowing rewind as well as fetch-backwards in a NO SCROLL cursor.

Ordinarily we'd only make such a definitional change in HEAD, but there is a third reason to consider this change now. Commit ba2c6d6c created some new user-visible anomalies for non-scrollable cursors WITH HOLD, in that navigation in the cursor result got confused if the cursor had been partially read before committing. The only good way to resolve those anomalies is to forbid rewinding such a cursor, which allows removal of the incorrect cursor state manipulations that ba2c6d6c added to PersistHoldablePortal.

To minimize the behavioral change in the back branches (including v14), refuse to rewind a NO SCROLL cursor only when it has a holdStore, ie has been held over from a previous transaction due to WITH HOLD. This should avoid breaking most applications that have been sloppy about whether to declare cursors as scrollable. We'll enforce the prohibition across-the-board beginning in v15.

Back-patch to v11, as ba2c6d6c was.

Discussion: https://postgr.es/m/3712911.1631207435@sss.pgh.pa.us
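A sketch of the case that the back branches now refuse (a NO SCROLL cursor partially read, held past commit, then rewound):

  BEGIN;
  DECLARE c NO SCROLL CURSOR WITH HOLD FOR
    SELECT g FROM generate_series(1, 10) g;
  FETCH 3 FROM c;
  COMMIT;                -- cursor is persisted into a holdStore
  MOVE ABSOLUTE 0 IN c;  -- rewind attempt: fails with "cursor can only scan forward"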
-
- 09 Sep, 2021 4 commits
-
-
Tom Lane authored
Some plan node types don't react well to being called again after they've already returned NULL. PortalRunSelect() has long dealt with this by calling the executor with NoMovementScanDirection if it sees that we've already run the portal to the end. However, commit ba2c6d6c overlooked this point, so that persisting an already-fully-fetched cursor would fail if it had such a plan. Per report from Tomas Barton. Back-patch to v11, as the faulty commit was. (I've omitted a test case because the type of plan that causes a problem isn't all that stable.) Discussion: https://postgr.es/m/CAPV2KRjd=ErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw@mail.gmail.com
-
Fujii Masao authored
When throttling is used, transactions that lag behind schedule by more than the latency limit are counted and reported as skipped. Previously, there was the case where pgbench counted all skipped transactions even if the timer specified in the -T option was exceeded. This could take a very long time, especially when an unrealistically high rate setting in the -R option caused quite a lot of transactions that lagged behind schedule. This could prevent pgbench from ending immediately, and so pgbench could look like it got stuck to users.

To fix the issue, this commit changes pgbench so that it stops counting skipped transactions as soon as the timer is exceeded. The timer can make pgbench end soon even when there are lots of skipped transactions that have not been counted yet.

Note that there is no guarantee that all skipped transactions are counted under -T, though there is under -t. This is OK in practice because it's very unlikely to happen with realistic settings. Also this is not an issue that this commit newly introduces; there used to be cases where pgbench ended without counting all skipped transactions before this change.

Back-patch to v14. Per discussion, we decided not to bother back-patching to the stable branches because it's hard to imagine the issue happening in practice (with realistic settings).

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Greg Sabino Mullane, Fujii Masao
Discussion: https://postgr.es/m/20210613040151.265ff59d832f835bbcf8b3ba@sraoss.co.jp
-
Tom Lane authored
We don't allow relations to exceed 2^32-1 blocks, because block numbers are 32 bits and the last possible block number is reserved to mean InvalidBlockNumber. There is a check for this in mdextend, but that's really way too late, because the smgr API requires us to create a buffer for the block-to-be-added, and we do not want to have any buffer with blocknum InvalidBlockNumber. (Such a case can trigger assertions in bufmgr.c, plus I think it might confuse ReadBuffer's logic for data-past-EOF later on.) So put the check into ReadBuffer. Per report from Christoph Berg. It's been like this forever, so back-patch to all supported branches. Discussion: https://postgr.es/m/YTn1iTkUYBZfcODk@msg.credativ.de
-
Fujii Masao authored
Previously, walreceiver always closed the currently-opened WAL segment and created its archive notification file after it finished writing out the current segment and received any WAL data that should be written into the next segment. If walreceiver exited just before any WAL data in the next segment arrived at the standby, it did not create the archive notification file of the current segment even though that segment was known to be complete. This behavior could cause WAL archiving of the segment to be delayed until subsequent restartpoints or checkpoints created its notification file.

To fix the issue, this commit changes walreceiver so that it creates the archive notification file of the current WAL segment immediately once it's known to be complete, before waiting for the next WAL data to arrive.

Back-patch to all supported branches.

Reported-by: Kyotaro Horiguchi
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200630.165503.1465894182551545886.horikyota.ntt@gmail.com
-
- 08 Sep, 2021 5 commits
-
-
Tom Lane authored
Coverity complained that one caller of getFormattedTypeName() failed to free the returned string. Which is true, but rather than fixing that one, let's get rid of this tedious and error-prone requirement. Now that getFormattedTypeName() caches its result, strdup'ing that result and expecting the caller to free it accomplishes little except to waste cycles. We do create a leak in the case where getTypes didn't make a TypeInfo for the type, but that basically shouldn't ever happen.

Back-patch, as commit 6c450a861 was. This isn't a particularly interesting bug fix, but the API change seems like a hazard for future back-patching activity if we don't back-patch it.
-
Tom Lane authored
Seems to have been my error in commit aeb1631e. Noted by Christoph Berg. Discussion: https://postgr.es/m/YTeLipdnSOg4NNcI@msg.df7cb.de
-
Tom Lane authored
If we copy data-modifying CTEs from the original query to a replacement query (from a DO INSTEAD rule), we must set hasModifyingCTE properly in the replacement query. Failure to do this can cause various unpleasantness, such as unsafe usage of parallel plans. The code also neglected to propagate hasRecursive, though that's only cosmetic at the moment.

A difficulty arises if the rule action is an INSERT...SELECT. We attach the original query's RTEs and CTEs to the sub-SELECT Query, but data-modifying CTEs are only allowed to appear in the topmost Query. For the moment, throw an error in such cases. It would probably be possible to avoid this error by attaching the CTEs to the top INSERT Query instead; but that would require a bunch of new code to adjust ctelevelsup references. Given the narrowness of the use-case, and the need to back-patch this fix, it does not seem worth the trouble for now. We can revisit this if we get field complaints.

Per report from Greg Nancarrow. Back-patch to all supported branches. (The test case added here does not fail before v10, but there are plenty of places checking top-level hasModifyingCTE in 9.6, so I have no doubt that this code change is necessary there too.)

Greg Nancarrow and Tom Lane

Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
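A minimal sketch of the pattern at issue (names hypothetical): the CTE below is copied into the replacement query generated by the rule, so that query must have hasModifyingCTE set too, else the planner might for instance choose an unsafe parallel plan:

  CREATE TABLE t1 (x int);
  CREATE TABLE t2 (x int);
  CREATE TABLE log1 (x int);
  CREATE RULE r1 AS ON INSERT TO t1 DO INSTEAD INSERT INTO t2 VALUES (NEW.x);

  -- the data-modifying CTE travels into the rewritten INSERT on t2
  WITH archived AS (DELETE FROM log1 RETURNING x)
  INSERT INTO t1 VALUES (1);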
-
Peter Eisentraut authored
Commit 01e658fa added hash support for row types. This also added support for hashing anonymous record types, using the same approach that the type cache uses for comparison support for record types: it just reports that it works, but it might fail at run time if a component type doesn't actually support the operation. We get away with that for comparison because most types support that. But some types don't support hashing, so the current state can result in failures at run time where the planner chooses hashing over sorting, whereas that previously worked if only sorting was an option.

We do, however, want the record hashing support for path tracking in recursive unions, and the SEARCH and CYCLE clauses built on that. In that case, hashing is the only plan option. To enable that, this commit implements the following approach: the type cache does not report that hashing is available for the record type. This undoes that part of 01e658fa. Instead, callers that require hashing no matter what can override that result themselves. This patch only touches the callers to make the aforementioned recursive query cases work, namely the parse analysis of unions, as well as the hash_array() function.

Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com>
Bug: #17158
Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
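For illustration, a minimal sketch of a recursive query that exercises this; as noted above, hashing is the only plan option for deduplicating a recursive union, so the record column must be hashable:

  WITH RECURSIVE t(r) AS (
      SELECT ROW(1, 'one'::text)
    UNION  -- not UNION ALL: duplicate rows must be removed, which for a
           -- recursive union requires hashing the record-typed column
      SELECT r FROM t
  )
  SELECT * FROM t;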
-
Amit Kapila authored
Updates/Deletes on a relation were allowed even without replica identity after we define the publication for all tables. This would later lead to an error on subscribers. The reason was that for such publications we were not invalidating the relcache and the publication information for relations was not getting rebuilt. Similarly, we were not invalidating the relcache after dropping such publications, which would keep prohibiting Updates/Deletes without replica identity even after no publication remained.

Author: Vignesh C and Hou Zhijie
Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
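A sketch of the first problem (names illustrative): after creating the publication, the UPDATE below must be rejected on the publisher rather than failing later on the subscriber:

  CREATE TABLE t (x int);                     -- no primary key, default replica identity
  CREATE PUBLICATION pub_all FOR ALL TABLES;
  UPDATE t SET x = 1;  -- must now fail: cannot update table "t" because it does
                       -- not have a replica identity and publishes updates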
-
- 07 Sep, 2021 1 commit
-
-
Magnus Hagander authored
This affects one message and some documentation that used the format "read only", unlike everything else, which uses "read-only". Backpatch-through: 14 Discussion: https://postgr.es/m/CABUevExuxKwn0YM3+wdSeQSvK6CRrJ-hewocGVX3R4-xVX4eMw@mail.gmail.com
-