mirror of https://github.com/postgres/postgres.git synced 2025-07-30 11:03:19 +03:00
Commit Graph

37203 Commits

28a935149f Move view reloptions into their own varlena struct
Per discussion after a gripe from me in
http://www.postgresql.org/message-id/20140611194633.GH18688@eldon.alvh.no-ip.org

Jaime Casanova
2014-07-14 17:24:40 -04:00
543f57fc37 Prevent bitmap heap scans from showing unnecessary block info in EXPLAIN ANALYZE.
EXPLAIN ANALYZE shows the numbers of exact/lossy blocks that a bitmap heap
scan processes. But, previously, when both of those numbers were zero, it
displayed only the prefix "Heap Blocks:" in TEXT output format. This was
strange and likely to confuse users, so this commit suppresses that
unnecessary output.
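
The guard amounts to something like the following sketch (variable and
field names here are illustrative, not the actual explain.c code):

    /* Skip the "Heap Blocks" line entirely when both counters are zero */
    if (exact_pages > 0 || lossy_pages > 0)
    {
        appendStringInfoString(es->str, "Heap Blocks:");
        if (exact_pages > 0)
            appendStringInfo(es->str, " exact=%ld", exact_pages);
        if (lossy_pages > 0)
            appendStringInfo(es->str, " lossy=%ld", lossy_pages);
        appendStringInfoChar(es->str, '\n');
    }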

Backpatch to 9.4 where EXPLAIN ANALYZE was changed so that such information was
displayed.

Etsuro Fujita
2014-07-14 20:42:53 +09:00
22ccce5206 Fix decoding of consecutive MULTI_INSERTs emitted by one heap_multi_insert().
Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not actually
enough, because heap_multi_insert() will actually first toast all
passed-in rows and then emit several *_multi_insert records, one for
each page it fills with tuples.

Add an XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
xl_heap_multi_insert->flag, denoting that this multi_insert record is
the last one emitted by a given heap_multi_insert() call. Then use that
flag in decode.c to set clear_toast_afterwards only in the right situation.
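
In outline, the record-building loop flags only its final record, along
these lines (a sketch; the counter names are assumptions, not the exact
heapam.c code):

    /* ndone = tuples already written, nthispage = tuples on this page */
    xlrec->flags = 0;
    if (ndone + nthispage == ntuples)
        xlrec->flags |= XLOG_HEAP_LAST_MULTI_INSERT;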

Expand the number of rows inserted via COPY in the corresponding
regression test to make sure that more than one heap page is filled
with tuples by one heap_multi_insert() call.

Backpatch to 9.4 like the previous commit.
2014-07-12 14:30:43 +02:00
bb4d05c026 Add autocompletion of locale keywords for CREATE DATABASE
Adds support for autocomplete of LC_COLLATE and LC_CTYPE to
the CREATE DATABASE command in psql.
2014-07-12 14:19:57 +02:00
f280eff949 Fix bug with whole-row references to append subplans.
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query.  But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run.  This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.

This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem.  It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
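
The fix is essentially to (re-)bless whatever slot shows up on each call,
roughly as below (a sketch, not the exact execQual.c code):

    TupleDesc   slot_tupdesc = slot->tts_tupleDescriptor;

    /* Append may hand back a different slot on every call, so check the
     * descriptor each time instead of only once per query */
    if (slot_tupdesc->tdtypeid == RECORDOID && slot_tupdesc->tdtypmod < 0)
        assign_record_type_typmod(slot_tupdesc);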
2014-07-11 19:12:38 -04:00
6fe24405fa Rename logical decoding's pg_llog directory to pg_logical.
The old name wasn't very descriptive of the directory's actual contents,
which are historical snapshots in the snapshots/ subdirectory and
mapping data for rewritten tuples in mappings/. There's been a fair
amount of discussion about what a good name would be. I'm settling for
pg_logical because it's likely that further data around logical decoding
and replication will need saving in the future.

Also add the missing entry for the directory into storage.sgml's list
of PGDATA contents.

Bumps catversion as the data directories won't be compatible.

Backpatch to 9.4; I had missed doing so, as Michael Paquier luckily
noticed. As there has already been a catversion bump after 9.4beta1,
there's no reason for 9.4 to diverge from master.
2014-07-09 13:25:25 +02:00
8adb476995 Fix whitespace 2014-07-08 23:29:09 -04:00
845b3c3bb5 Update key words table for 9.4 2014-07-08 14:54:32 -04:00
44850ec32a doc: Link text to table by id 2014-07-08 14:14:37 -04:00
ac45aa1dde Don't assume a subquery's output is unique if there's a SRF in its tlist.
While the x output of "select x from t group by x" can be presumed unique,
this does not hold for "select x, generate_series(1,10) from t group by x",
because we may expand the set-returning function after the grouping step.
(Perhaps that should be re-thought; but considering all the other oddities
involved with SRFs in targetlists, it seems unlikely we'll change it.)
Put a check in query_is_distinct_for() so it's not fooled by such cases.
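
Conceptually the added check is this simple (a sketch using the existing
expression_returns_set() helper):

    /* Any SRF in the targetlist defeats the uniqueness argument */
    if (expression_returns_set((Node *) query->targetList))
        return false;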

Back-patch to all supported branches.

David Rowley
2014-07-08 14:03:45 -04:00
443dd97001 doc: Fix spacing in verbatim environments 2014-07-08 11:39:07 -04:00
f64fe2cbe6 pg_upgrade: allow upgrades for new-only TOAST tables
Previously, when calculations on the need for toast tables changed,
pg_upgrade could not handle cases where the new cluster needed a TOAST
table and the old cluster did not.  (It already handled the opposite
case.)  This fixes the "OID mismatch" error typically generated in this
case.

Backpatch through 9.2
2014-07-07 13:24:08 -04:00
d54712e7cf Fix decoding of MULTI_INSERTs when rows other than the last are toasted.
When decoding the results of a HEAP2_MULTI_INSERT (currently only
generated by COPY FROM) toast columns for all but the last tuple
weren't replaced by their actual contents before being handed to the
output plugin. The reassembled toast datums were disregarded after
every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct
for plain inserts, updates, and deletes, but not multi inserts - there we
generate several REORDER_BUFFER_CHANGE_INSERTs for a single
xl_heap_multi_insert record.

To solve the problem add a clear_toast_afterwards boolean to
ReorderBufferChange's union member that's used by modifications. All
row changes except multi_inserts set that to true; multi_insert sets it
only for the last change generated.
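
Schematically, within the decoder's per-tuple loop (a sketch; only the
union member's name follows the text above):

    /* Only the record's last tuple may reset reassembled toast data */
    change->data.tp.clear_toast_afterwards = (i + 1 == xlrec->ntuples);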

Add a regression test covering decoding of multi_inserts - there was
none at all before.

Backpatch to 9.4 where logical decoding was introduced.

Bug found by Petr Jelinek.
2014-07-06 15:59:53 +02:00
49c279efe7 Consistently pass an "unsigned char" to ctype.h functions.
The isxdigit() calls relied on undefined behavior.  The isascii() call
was well-defined, but our prevailing style is to include the cast.
Back-patch to 9.4, where the isxdigit() calls were introduced.
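
For reference, the well-defined idiom is plain C, nothing
PostgreSQL-specific:

    #include <ctype.h>

    /* A plain char may be negative, making isxdigit(c) undefined
     * behavior; casting through unsigned char is always well-defined. */
    static int
    is_hex_digit(char c)
    {
        return isxdigit((unsigned char) c);
    }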
2014-07-06 00:30:11 -04:00
e44964c709 Don't cache per-group context across the whole query in orderedsetaggs.c.
Although nodeAgg.c currently uses the same per-group memory context for
all groups of a query, that might change in future.  Avoid assuming it.
This costs us an extra AggCheckCallContext() call per group, but that's
pretty cheap and is probably good from a safety standpoint anyway.
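
The per-group lookup is just this (a sketch of the documented
AggCheckCallContext() usage):

    MemoryContext aggcontext;

    /* Ask for the per-group context on every call instead of caching it */
    if (AggCheckCallContext(fcinfo, &aggcontext) != AGG_CONTEXT_AGGREGATE)
        elog(ERROR, "ordered-set aggregate called in non-aggregate context");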

Back-patch to 9.4 in case any third-party code copies this logic.

Andrew Gierth
2014-07-03 18:47:22 -04:00
f688cf548b Redesign API presented by nodeAgg.c for ordered-set and similar aggregates.
The previous design exposed the input and output ExprContexts of the
Agg plan node, but work on grouping sets has suggested that we'll regret
doing that.  Instead provide more narrowly-defined APIs that can be
implemented in multiple ways, namely a way to get a short-term memory
context and a way to register an aggregate shutdown callback.
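
For third-party aggregate code, usage would look roughly like this (a
sketch; the entry-point names are my recollection of the 9.4 headers,
and my_shutdown_func/my_state are placeholders):

    MemoryContext tmpcontext = AggGetTempMemoryContext(fcinfo);

    AggRegisterCallback(fcinfo, my_shutdown_func, PointerGetDatum(my_state));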

Back-patch to 9.4 where the bad APIs were introduced, since we don't
want third-party code using these APIs and then having to change in 9.5.

Andrew Gierth
2014-07-03 18:25:37 -04:00
6b56bc16cd Use a separate temporary directory for the Unix-domain socket
Creating the Unix-domain socket in the build directory can run into
name-length limitations.  Therefore, create the socket file in the
default temporary directory of the operating system.  Keep the temporary
data directory etc. in the build tree.
2014-07-02 21:48:35 -04:00
04163ff1b9 Support vpath builds in TAP tests 2014-07-02 21:48:35 -04:00
93787ce747 Smooth reporting of commit/rollback statistics.
If a connection committed or rolled back any transactions within a
PGSTAT_STAT_INTERVAL pacing interval without accessing any tables,
the reporting of those statistics would be held up until the
connection closed or until it ended a PGSTAT_STAT_INTERVAL interval
in which it had accessed a table.  This could result in under-
reporting of transactions for an extended period, followed by a
spike in reported transactions.

While this is arguably a bug, the impact is minimal, primarily
affecting, and being affected by, monitoring software.  It might
cause more confusion than benefit to change the existing behavior
in released stable branches, so apply only to master and the 9.4
beta.

Gurjeet Singh, with review and editing by Kevin Grittner,
incorporating suggested changes from Abhijit Menon-Sen and Tom
Lane.
2014-07-02 15:03:57 -05:00
b446a384b7 pg_upgrade: preserve database and relation minmxid values
Also set these values for pre-9.3 old clusters that don't have values to
preserve.

Analysis by Alvaro

Backpatch through 9.3
2014-07-02 15:29:38 -04:00
fbbb65daa2 pg_upgrade: no need to remove "members" files for pre-9.3 upgrades
Per analysis by Alvaro

Backpatch through 9.3
2014-07-02 13:11:05 -04:00
cec3be05f6 Add some errdetail to checkRuleResultList().
This function wasn't originally thought to be really user-facing,
because converting a table to a view isn't something we expect people
to do manually.  So not all that much effort was spent on the error
messages; in particular, while the code will complain that you got
the column types wrong it won't say exactly what they are.  But since
we repurposed the code to also check compatibility of rule RETURNING
lists, it's definitely user-facing.  It now seems worthwhile to add
errdetail messages showing exactly what the conflict is when there's
a mismatch of column names or types.  This is prompted by bug #10836
from Matthias Raffelsieper, which might have been forestalled if the
error message had reported the wrong column type as being "record".
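
The added messages are of this general shape (a sketch; the actual
wording and variable names differ):

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
             errmsg("SELECT rule's target entry %d has a different type from column \"%s\"",
                    i, attname),
             errdetail("SELECT target entry has type %s, but column has type %s.",
                       format_type_be(exprtype), format_type_be(atttype))));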

Back-patch to 9.4, but not into older branches where the set of
translatable error strings is supposed to be stable.
2014-07-02 12:31:27 -04:00
5520006b5b Prevent psql from issuing BEGIN before ALTER SYSTEM when AUTOCOMMIT is off.
The autocommit-off mode works by issuing an implicit BEGIN just before
any command that is not already in a transaction block and is not itself
a BEGIN or other transaction-control command, nor a command that
cannot be executed inside a transaction block. This commit prevents psql
from issuing such an implicit BEGIN before ALTER SYSTEM, because ALTER
SYSTEM is not allowed inside a transaction block.
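
Simplified, the decision now reads like this (a sketch; the helper names
are approximations of psql's internals, not exact):

    if (!pset.autocommit &&
        transaction_status == PQTRANS_IDLE &&
        !is_transaction_control_command(query) &&
        !command_no_begin(query))       /* now also matches ALTER SYSTEM */
        PQexec(pset.db, "BEGIN");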

Backpatch to 9.4 where ALTER SYSTEM was added.

Report by Feike Steenbergen
2014-07-02 12:43:09 +09:00
49bca9ea53 Fix inadequately-sized output buffer in contrib/unaccent.
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character.  While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun.  To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a priori.
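
The replacement pattern, in sketch form (StringInfo enlarges itself on
demand, so no worst-case sizing is needed):

    StringInfoData buf;

    initStringInfo(&buf);
    /* Safe for replacement strings of any length */
    appendStringInfoString(&buf, replacement);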

This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.
2014-07-01 11:22:46 -04:00
4dc3df9d19 pg_upgrade: update C comments about pg_dumpall
There were some C comments that hadn't been updated since the switch
from using only pg_dumpall to using pg_dump and pg_dumpall, so update
them.  Also, don't bother using --schema-only with pg_dumpall
--globals-only.

Backpatch through 9.4
2014-06-30 19:57:47 -04:00
37a4d3d706 Don't prematurely free the BufferAccessStrategy in pgstat_heap().
This function continued to use it after heap_endscan() freed it.  In
passing, don't explicitly create a strategy here.  Instead, use the one
created by heap_beginscan_strat(), if any.  Back-patch to 9.2, where use
of a BufferAccessStrategy here was introduced.
2014-06-30 16:59:44 -04:00
6ad903d70a Check interrupts during logical decoding more frequently.
When reading large amounts of preexisting WAL during logical decoding
using the SQL interface we possibly could fail to check interrupts in
due time. Similarly, the same could happen on systems with a very high
WAL volume while creating a new logical replication slot, independent
of the interface used.

Previously these checks were only performed in xlogreader's read_page
callbacks, while waiting for new WAL to be produced. That's not
sufficient, though, if there's never a need to wait.  Walsender's send
loop already contains an interrupt check.
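
The shape of the fix is the usual one (a sketch, not the exact call
sites):

    /* Check for pending interrupts once per record read */
    for (;;)
    {
        CHECK_FOR_INTERRUPTS();
        record = XLogReadRecord(ctx->reader, startptr, &errm);
        /* ... decode the record ... */
    }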

Backpatch to 9.4 where the logical decoding feature was introduced.
2014-06-30 12:02:15 +02:00
d27d493a4e Revert the assertion of no palloc's in critical section.
Per discussion, it still fires too often to be safe to enable in
production. Keep it in master, so that we find the issues, but disable it
in the stable branch.
2014-06-30 10:25:05 +03:00
3130b8505f Remove use_json_as_text options from json_to_record/json_populate_record.
The "false" case was really quite useless since all it did was to throw
an error; a definition not helped in the least by making it the default.
Instead let's just have the "true" case, which emits nested objects and
arrays in JSON syntax.  We might later want to provide the ability to
emit sub-objects in Postgres record or array syntax, but we'd be best off
to drive that off a check of the target field datatype, not a separate
argument.

For the functions newly added in 9.4, we can just remove the flag arguments
outright.  We can't do that for json_populate_record[set], which already
existed in 9.3, but we can ignore the argument and always behave as if it
were "true".  It helps that the flag arguments were optional and not
documented in any useful fashion anyway.
2014-06-29 13:51:02 -04:00
56f86bb764 Have multixact be truncated by checkpoint, not vacuum
Instead of truncating pg_multixact at vacuum time, do it only at
checkpoint time.  The reason for doing it this way is twofold: first, we
want it to delete only segments that we're certain will not be required
if there's a crash immediately after the removal; and second, we want to
do it relatively often so that older files are not left behind if
there's an untimely crash.

Per my proposal in
http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org
we now execute the truncation in the checkpointer process rather than as
part of vacuum.  Vacuum is only in charge of maintaining in shared
memory the value to which it's possible to truncate the files; that
value is stored as part of checkpoints also, and so upon recovery we can
reuse the same value to re-execute truncate and reset the
oldest-value-still-safe-to-use to one known to remain after truncation.

Per bug reported by Jeff Janes in the course of his tests involving
bug #8673.

While at it, update some comments that hadn't been updated since
multixacts were changed.

Backpatch to 9.3, where persistency of pg_multixact files was
introduced by commit 0ac5ad5134.
2014-06-27 14:43:52 -04:00
9eecc8a7ca Don't allow relminmxid to go backwards during VACUUM FULL
We were allowing a table's pg_class.relminmxid value to move backwards
when heaps were swapped by VACUUM FULL or CLUSTER.  There is a
similar protection against relfrozenxid going backwards, which we
neglected to clone when the multixact stuff was rejiggered by commit
0ac5ad5134.
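
The cloned guard is of this shape (a sketch mirroring the relfrozenxid
logic; variable names illustrative):

    /* Never let a heap swap move relminmxid backwards */
    if (MultiXactIdIsValid(oldminmxid) &&
        MultiXactIdPrecedes(newminmxid, oldminmxid))
        newminmxid = oldminmxid;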

Backpatch to 9.3, where relminmxid was introduced.

As reported by Heikki in
http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
2014-06-27 14:43:46 -04:00
4c888a6290 Fix broken Assert() introduced by 8e9a16ab8f
Don't assert MultiXactIdIsRunning if the multi came from a tuple that
had been share-locked and later copied over to the new cluster by
pg_upgrade.  Doing that causes an error to be raised unnecessarily:
MultiXactIdIsRunning is not open to the possibility that its argument
came from a pg_upgraded tuple, and all its other callers are already
checking; but such multis cannot, obviously, have transactions still
running, so the assert is pointless.

Noticed while investigating the bogus pg_multixact/offsets/0000 file
left over by pg_upgrade, as reported by Andres Freund in
http://www.postgresql.org/message-id/20140530121631.GE25431@alap3.anarazel.de

Backpatch to 9.3, like the commit that introduced the buglet.
2014-06-27 14:43:39 -04:00
6327f25ddd Disallow pushing volatile qual expressions down into DISTINCT subqueries.
A WHERE clause applied to the output of a subquery with DISTINCT should
theoretically be applied only once per distinct row; but if we push it
into the subquery then it will be evaluated at each row before duplicate
elimination occurs.  If the qual is volatile this can give rise to
observably wrong results, so don't do that.
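
The safety test boils down to this (a sketch; the real check lives in
subquery_is_pushdown_safe and friends):

    /* A volatile qual must not be pushed below duplicate elimination */
    if (contain_volatile_functions(qual) &&
        subquery->distinctClause != NIL)
        return false;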

While at it, refactor a little bit to allow subquery_is_pushdown_safe
to report more than one kind of restrictive condition without indefinitely
expanding its argument list.

Although this is a bug fix, it seems unwise to back-patch it into released
branches, since it might de-optimize plans for queries that aren't giving
any trouble in practice.  So apply to 9.4 but not further back.
2014-06-27 11:08:51 -07:00
c3096d57c8 Get rid of bogus separate pg_proc entries for json_extract_path operators.
These should not have existed to begin with, but there was apparently some
misunderstanding of the purpose of the opr_sanity regression test item
that checks for operator implementation functions with their own comments.
The idea there is to check for unintentional violations of the rule that
operator implementation functions shouldn't be documented separately
... but for these functions, that is in fact what we want, since the
variadic option is useful and not accessible via the operator syntax.
Get rid of the extra pg_proc entries and fix the regression test and
documentation to be explicit about what we're doing here.
2014-06-26 16:22:18 -07:00
973837450c Forward-patch regression test for "could not find pathkey item to sort".
Commit a87c729153 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario.  Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
2014-06-26 10:41:54 -07:00
2a741ca765 Remove obsolete example of CSV log file name from log_filename document.
7380b63 changed log_filename so that the epoch was no longer appended
to it when no format specifier is given. But an example of a CSV log
file name with the epoch was still left in the log_filename
documentation. This commit removes that obsolete example.

This commit also documents the defaults of log_directory and
log_filename.

Backpatch to all supported versions.

Christoph Berg
2014-06-26 14:29:30 +09:00
119fed85f5 Rationalize error messages within jsonfuncs.c.
I noticed that the functions in jsonfuncs.c sometimes printed error
messages that claimed I'd called some other function.  Investigation showed
that this was from repurposing code into "worker" functions without taking
much care as to whether it would mention the right SQL-level function if it
threw an error.  Moreover, there was a weird mishmash of messages that
contained a fixed function name, messages that used %s for a function name,
and messages that constructed a function name out of spare parts, like
"json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
even sufficient to cover all the cases).  This would put an undue burden on
our long-suffering translators.  Standardize on inserting the SQL function
name with %s so as to reduce the number of translatable strings, and pass
function names around as needed to make sure we can report the right one.
Fix up some gratuitous variations in wording, too.
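
The standardized pattern looks like this (a sketch; the message text is
illustrative, with funcname supplied by the caller):

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("cannot call %s on a nested object", funcname)));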
2014-06-25 15:25:26 -07:00
199aa71087 Cosmetic improvements in jsonfuncs.c.
Re-pgindent, remove a lot of random vertical whitespace, remove useless
(if not counterproductive) inline markings, get rid of unnecessary
zero-padding of strings for hashtable searches.  No functional changes.
2014-06-25 11:22:21 -07:00
a331512dec Fix handling of nested JSON objects in json_populate_recordset and friends.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one.  This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.

This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.

In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record().  This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.
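
The leak pattern, in sketch form (context names illustrative):

    /* A child of the per-tuple context survives the per-tuple
     * MemoryContextReset(), which does not delete child contexts */
    MemoryContext tmp =
        AllocSetContextCreate(econtext->ecxt_per_tuple_memory,
                              "json temporary context",
                              ALLOCSET_DEFAULT_MINSIZE,
                              ALLOCSET_DEFAULT_INITSIZE,
                              ALLOCSET_DEFAULT_MAXSIZE);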

Michael Paquier and Tom Lane
2014-06-24 21:22:43 -07:00
dd5369047f pg_upgrade: remove pg_multixact files left by initdb
This fixes a bug that caused vacuum to fail when the '0000' files left
by initdb were accessed as part of vacuum's cleanup of old pg_multixact
files.

Backpatch through 9.3
2014-06-24 16:11:06 -04:00
1818ae0a78 Don't allow foreign tables with OIDs.
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
was still possible with default_with_oids=true. But the rest of the system,
including pg_dump, isn't prepared to handle foreign tables with OIDs
properly.
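
The shape of the fix, as a sketch (the real check sits where the OIDs
option and default_with_oids are interpreted):

    /* Let default_with_oids affect only plain tables */
    if (relkind != RELKIND_RELATION)
        return false;           /* foreign tables never get OIDs */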

Backpatch down to 9.1, where foreign tables were introduced. It's possible
that there are databases out there that already have foreign tables with
OIDs. There isn't much we can do about that, but at least we can prevent
them from being created in the future.

Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
2014-06-24 13:31:06 +03:00
e9b182404b Fix typo in replication slot function doc. 2014-06-24 03:53:58 +09:00
c6d0df9492 Add missing closing parenthesis into max_replication_slots doc. 2014-06-24 03:25:32 +09:00
a2586dece1 doc: adjust JSONB GIN index description
Backpatch through 9.4
2014-06-21 15:33:22 -04:00
750c0351ea 9.4 release notes: adjust some entry wording
Backpatch to 9.4
2014-06-21 10:56:51 -04:00
bcd67a2fe9 Fix documentation template for CREATE TRIGGER.
By using curly braces, the template had specified that one of
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
was required on any CREATE TRIGGER statement, which is not
accurate.  Changing to square brackets makes it optional.

Backpatch to 9.1, where the error was introduced.
2014-06-21 09:17:14 -05:00
9d884a34c3 Clean up data conversion short-lived memory context.
dblink uses a short-lived data conversion memory context. However it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
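
The corrected lifecycle, in sketch form (names illustrative):

    MemoryContext conv = AllocSetContextCreate(CurrentMemoryContext,
                                               "dblink temporary context",
                                               ALLOCSET_DEFAULT_MINSIZE,
                                               ALLOCSET_DEFAULT_INITSIZE,
                                               ALLOCSET_DEFAULT_MAXSIZE);

    /* ... perform data conversion in "conv" ... */

    MemoryContextDelete(conv);  /* previously missing, hence the leak */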

Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
2014-06-20 12:26:26 -07:00
4bc0a13a8a Do all-visible handling in lazy_vacuum_page() outside its critical section.
Since fdf9e21196 lazy_vacuum_page() rechecks the all-visible status
of pages in the second pass over the heap. It does so inside a
critical section, but both visibilitymap_test() and
heap_page_is_all_visible() perform operations that should not happen
inside one. The former potentially performs I/O, and both potentially
allocate memory.

To fix, simply move all the all-visible handling outside the critical
section. Doing so means that the PD_ALL_VISIBLE on the page won't be
included in the full page image of the HEAP2_CLEAN record anymore. But
that's fine, since the flag will be set by the HEAP2_VISIBLE record
logged later.
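
The reordering is simply this (a sketch; function and variable names are
approximations of vacuumlazy.c):

    /* Possibly-allocating, possibly-I/O-performing work first ... */
    all_visible = heap_page_is_all_visible(onerel, buffer,
                                           &visibility_cutoff_xid);

    START_CRIT_SECTION();
    /* ... only page modification and WAL insertion in here ... */
    END_CRIT_SECTION();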

Backpatch to 9.3 where the problem was introduced. The bug only came
to light due to the assertion added in 4a170ee9 and isn't likely to
cause problems in production scenarios. The worst outcome is an
avoidable PANIC restart.

This also gets rid of the difference in the order of operations
between master and standby mentioned in 2a8e1ac5.

Per reports from David Leverton and Keith Fiske in bug #10533.
2014-06-20 11:06:48 +02:00
1044e79a0b Avoid leaking memory while evaluating arguments for a table function.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context.  This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly.  (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.)  To fix, create
an additional temporary context just for this purpose.
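
The fix follows the standard pattern (a sketch; argContext stands for
the dedicated context mentioned above):

    MemoryContext oldcontext = MemoryContextSwitchTo(argContext);

    /* ... evaluate the function's arguments here; all transient
     * allocations land in argContext instead of query-lifespan memory ... */

    MemoryContextSwitchTo(oldcontext);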

Per an example from MauMau.  Back-patch to all active branches.
2014-06-19 22:13:44 -04:00
b488daf0d5 Document SQL functions' behavior of parsing the whole function at once.
Haribabu Kommi, somewhat rewritten by me
2014-06-19 12:34:07 -04:00