These should not have existed to begin with, but there was apparently some
misunderstanding of the purpose of the opr_sanity regression test item
that checks for operator implementation functions with their own comments.
The idea there is to check for unintentional violations of the rule that
operator implementation functions shouldn't be documented separately
... but for these functions, that is in fact what we want, since the
variadic option is useful and not accessible via the operator syntax.
Get rid of the extra pg_proc entries and fix the regression test and
documentation to be explicit about what we're doing here.
Commit a87c729153e372f3731689a7be007bc2b53f1410 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario. Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
7380b63 changed log_filename so that epoch was not appended to it
when no format specifier is given. But an example of a CSV log file name
with the epoch appended was still left in the log_filename documentation.
This commit removes that obsolete example.
This commit also documents the defaults of log_directory and
log_filename.
Backpatch to all supported versions.
Christoph Berg
I noticed that the functions in jsonfuncs.c sometimes printed error
messages that claimed I'd called some other function. Investigation showed
that this was from repurposing code into "worker" functions without taking
much care as to whether it would mention the right SQL-level function if it
threw an error. Moreover, there was a weird mishmash of messages that
contained a fixed function name, messages that used %s for a function name,
and messages that constructed a function name out of spare parts, like
"json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
even sufficient to cover all the cases). This would put an undue burden on
our long-suffering translators. Standardize on inserting the SQL function
name with %s so as to reduce the number of translatable strings, and pass
function names around as needed to make sure we can report the right one.
Fix up some gratuitous variations in wording, too.
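To illustrate the convention (a hedged sketch, not the actual jsonfuncs.c
code; check_row_type is an invented helper name):

    #include "postgres.h"
    #include "utils/lsyscache.h"

    /*
     * Illustrative helper shared by several SQL-callable functions: the
     * caller passes its own SQL-level name, so any error names the function
     * the user actually called, and the format string stays a single
     * translatable message.
     */
    static void
    check_row_type(Oid typid, const char *funcname)
    {
        if (!type_is_rowtype(typid))
            ereport(ERROR,
                    (errcode(ERRCODE_DATATYPE_MISMATCH),
                     errmsg("first argument of %s must be a row type",
                            funcname)));
    }

A caller would then invoke it as, e.g., check_row_type(argtype,
"json_populate_record").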
Re-pgindent, remove a lot of random vertical whitespace, remove useless
(if not counterproductive) inline markings, get rid of unnecessary
zero-padding of strings for hashtable searches. No functional changes.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one. This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.
This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.
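In rough outline, the fix amounts to the following (a sketch using
simplified and partly hypothetical names, not the exact jsonfuncs.c code):

    /*
     * Sketch only: the object_start callback builds the per-row hash table
     * just once, at the top nesting level; for nested objects it returns
     * without touching the link to the existing table.
     */
    static void
    populate_recordset_object_start(void *state)
    {
        PopulateRecordsetState *_state = (PopulateRecordsetState *) state;

        /* Nested object: keep the hash table that was built at level 1. */
        if (_state->lex->lex_level > 1)
            return;

        _state->json_hash = build_json_hash(_state);   /* hypothetical helper */
    }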
In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record(). This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.
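For the memory-context point, a minimal sketch of the pattern (context and
function names are illustrative, not the committed code):

    #include "postgres.h"
    #include "utils/memutils.h"

    static void
    use_scratch_context(MemoryContext per_tuple_context)
    {
        MemoryContext scratch;

        scratch = AllocSetContextCreate(per_tuple_context,
                                        "scratch context",
                                        ALLOCSET_DEFAULT_MINSIZE,
                                        ALLOCSET_DEFAULT_INITSIZE,
                                        ALLOCSET_DEFAULT_MAXSIZE);

        /* ... build the hash table and other temporary data in scratch ... */

        /*
         * Resetting per_tuple_context frees only its own allocations; child
         * contexts such as scratch survive unless the parent is reset with
         * MemoryContextResetAndDeleteChildren().  So a context that is meant
         * to be temporary has to be deleted explicitly.
         */
        MemoryContextDelete(scratch);
    }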
Michael Paquier and Tom Lane
This fixes a bug that caused vacuum to fail when the '0000' files left
by initdb were accessed as part of vacuum's cleanup of old pg_multixact
files.
Backpatch through 9.3.
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
was still possible with default_with_oids=true. But the rest of the system,
including pg_dump, isn't prepared to handle foreign tables with OIDs
properly.
Backpatch down to 9.1, where foreign tables were introduced. It's possible
that there are databases out there that already have foreign tables with
OIDs. There isn't much we can do about that, but at least we can prevent
them from being created in the future.
Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
By using curly braces, the template had specified that one of
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
was required on any CREATE TRIGGER statement, which is not
accurate. Change to square brackets makes that optional.
Backpatch to 9.1, where the error was introduced.
dblink uses a short-lived data conversion memory context. However, it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
Since fdf9e21196a, lazy_vacuum_page() rechecks the all-visible status
of pages in the second pass over the heap. It does so inside a
critical section, but both visibilitymap_test() and
heap_page_is_all_visible() perform operations that should not happen
inside one. The former potentially performs I/O, and both potentially
allocate memory.
To fix, simply move all the all-visible handling outside the critical
section. Doing so means that the PD_ALL_VISIBLE on the page won't be
included in the full page image of the HEAP2_CLEAN record anymore. But
that's fine: the flag will be set by the HEAP2_VISIBLE record logged later.
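Schematically, the reordering looks roughly like this (a sketch with
abbreviated arguments; mark_page_all_visible is an invented wrapper, not
the exact lazy_vacuum_page() code):

    /*
     * Everything that may allocate memory or do I/O -- the visibility map
     * probe and the all-visible recheck -- now runs before the critical
     * section that removes dead tuples and logs the HEAP2_CLEAN record.
     */
    all_visible = heap_page_is_all_visible(onerel, buffer,
                                           &visibility_cutoff_xid);

    START_CRIT_SECTION();
    /* remove dead tuples, mark the buffer dirty, log HEAP2_CLEAN */
    END_CRIT_SECTION();

    if (all_visible)
        mark_page_all_visible(buffer);  /* invented wrapper: visibilitymap_set()
                                         * plus PD_ALL_VISIBLE, logged as
                                         * HEAP2_VISIBLE */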
Backpatch to 9.3 where the problem was introduced. The bug only came
to light due to the assertion added in 4a170ee9 and isn't likely to
cause problems in production scenarios. The worst outcome is an
avoidable PANIC restart.
This also gets rid of the difference in the order of operations
between master and standby mentioned in 2a8e1ac5.
Per reports from David Leverton and Keith Fiske in bug #10533.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context. This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly. (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.) To fix, create
an additional temporary context just for this purpose.
Per an example from MauMau. Back-patch to all active branches.
Commit ea9df812d8502fff74e7bc37d61bdc7d66d77a7f failed to include
NUM_BUFFER_PARTITIONS in this offset, resulting in a bad offset.
Ultimately this threw off NUM_FIXED_LWLOCKS which is based on
earlier offsets, leading to memory allocation problems. It seems
likely to have also caused increased LWLOCK contention when
serializable transactions were used, because lightweight locks used
for that overlapped others.
Reported by Amit Kapila with analysis and fix.
Backpatch to 9.4, where the bug was introduced.
Previously, data_directory could be set in both postgresql.conf and
postgresql.auto.conf. This could cause problematic situations such as a
circular definition. To avoid them, this commit forbids setting
data_directory in postgresql.auto.conf.
Backpatch this to 9.4 where the ALTER SYSTEM command was introduced.
Amit Kapila, reviewed by Abhijit Menon-Sen, with minor adjustments by me.
Catch up with commit b8cc8f94730610c0189aa82dfec4ae6ce9b13e34's
introduction of the HAVE_UUID_OSSP symbol to the principal build
process. Back-patch to 9.4, where that commit appeared.
Any OS user able to access the socket can connect as the bootstrap
superuser and proceed to execute arbitrary code as the OS user running
the test. Protect against that by placing the socket in a temporary,
mode-0700 subdirectory of /tmp. The pg_regress-based test suites and
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
suites were already secure. Back-patch to 8.4 (all supported versions).
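A rough sketch of the mechanism (not the actual pg_regress.c change;
make_temp_sockdir is an invented name):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Create a private socket directory under /tmp.  mkdtemp() creates it
     * with owner-only (0700) permissions, so only the OS user running the
     * tests can reach the Unix socket placed inside it.
     */
    static const char *
    make_temp_sockdir(void)
    {
        static char sockdir[] = "/tmp/pg_regress-XXXXXX";

        if (mkdtemp(sockdir) == NULL)
        {
            fprintf(stderr, "could not create socket directory: %s\n",
                    strerror(errno));
            exit(2);
        }
        return sockdir;     /* used as the cluster's unix_socket_directories */
    }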
The hazard remains wherever the temporary cluster accepts TCP
connections, notably on Windows.
As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values
like /var/run/postgresql are often unwritable to the build user.
Security: CVE-2014-0067
This function is pervasive on free software operating systems; import
NetBSD's implementation. Back-patch to 8.4, like the commit that will
harness it.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object. With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output). This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are. Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.
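The boundary-finding idea is roughly this (an illustrative sketch only,
ignoring dollar quoting and backslash escapes; not the actual pg_dump
lexer code):

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Return the length of the first complete SQL command in buf (up to and
     * including its terminating semicolon), or 0 if the buffer ends in the
     * middle of a command and more data must be appended first.  Semicolons
     * inside 'literals' and "identifiers" do not count; doubled quotes are
     * treated as escapes.
     */
    static size_t
    first_command_length(const char *buf, size_t len)
    {
        bool        in_squote = false;
        bool        in_dquote = false;
        size_t      i;

        for (i = 0; i < len; i++)
        {
            char        c = buf[i];

            if (in_squote)
            {
                if (c == '\'')
                {
                    if (i + 1 < len && buf[i + 1] == '\'')
                        i++;            /* '' stays inside the literal */
                    else
                        in_squote = false;
                }
            }
            else if (in_dquote)
            {
                if (c == '"')
                {
                    if (i + 1 < len && buf[i + 1] == '"')
                        i++;            /* "" stays inside the identifier */
                    else
                        in_dquote = false;
                }
            }
            else if (c == '\'')
                in_squote = true;
            else if (c == '"')
                in_dquote = true;
            else if (c == ';')
                return i + 1;           /* found a command boundary */
        }
        return 0;                       /* incomplete; wait for more data */
    }

Direct-to-database restore then executes only whole commands and carries
any trailing partial command over to the next bufferload.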
Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
We should report the errno when we get a failure from functions like
BufFileWrite. "ERROR: write failed" is unreasonably taciturn for a
case that's well within the realm of possibility; I've seen it a
couple times in the buildfarm recently, in situations that were
probably out-of-disk-space, but it'd be good to see the errno
to confirm it.
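Roughly the pattern being switched to (a sketch; the exact message wording
may differ, and state, ptr and nbytes stand in for the surrounding code's
variables):

    /*
     * errcode_for_file_access() classifies errno into an appropriate
     * SQLSTATE and %m appends strerror(errno), so a failed write reports,
     * e.g., "could not write to tuplestore temporary file: No space left on
     * device" instead of a bare "write failed".
     */
    if (BufFileWrite(state->myfile, (void *) ptr, nbytes) != nbytes)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not write to tuplestore temporary file: %m")));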
I think this code was originally written without assuming that
the buffile.c functions would return a useful errno; but most other
callers *are* assuming that, and a quick look at the buffile code
gives no reason to suppose otherwise.
Also, a couple of the old messages were phrased on the assumption
that a short read might indicate a logic bug in tuplestore itself;
but that code's pretty well tested by now, so a filesystem-level
problem seems much more likely.
Since we commonly test pg_dump/pg_restore by seeing whether they can dump
and restore the regression test database, it behooves us to include some
large objects in that test scenario.
I tried to include a comment on one of these large objects to improve
the test scenario further ... but it turns out that pg_upgrade fails to
preserve comments on large objects, and its regression test notices
the discrepancy. So uncommenting that COMMENT is a TODO for later.
Robert Frost is no longer with us, but his copyrights still are, so
let's stop using "Stopping by Woods on a Snowy Evening" as test data
before somebody decides to sue us. Wordsworth is more safely dead.
Memorialize the expected output of the query that libpq has been using for
many years to get the OIDs of large-object support functions. Although
we really ought to change the way libpq does this, we must expect that
this query will remain in use in the field for the foreseeable future,
so until we're ready to break compatibility with old libpq versions
we'd better check the results stay the same. See the recent lo_create()
fiasco.
The previous naming broke the query that libpq's lo_initialize() uses
to collect the OIDs of the server-side functions it requires, because
that query effectively assumes that there is only one function named
lo_create in the pg_catalog schema (and likewise only one lo_open, etc).
While we should certainly make libpq more robust about this, the naive
query will remain in use in the field for the foreseeable future, so it
seems the only workable choice is to use a different name for the new
function. lo_from_bytea() won a small straw poll.
Back-patch into 9.4 where the new function was introduced.
Change the order of checks in similar functions to be the same; remove
a parameter that's not needed anymore; rename a memory context and
expand a couple of comments.
Per review comments from Amit Kapila
When we grabbed this file off the Snowball project's website, we mistakenly
supposed that it was in LATIN1 encoding, but evidently it was actually in
LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in
LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in
bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too,
but there aren't any of those in the file. Other characters used in the
file have the same codes in LATIN1 and LATIN2, which no doubt helped hide
the problem for so long.
The error is not only ours: the Snowball project also was confused about
which encoding is required for Hungarian. But dealing with that will
require source-code changes that I'm not at all sure we'll wish to
back-patch. Fixing the stopword file seems reasonably safe to back-patch
however.
Although this bug is already fixed in post-9.2 branches, the case
triggering it is quite different from what was under consideration
at the time. It seems worth memorializing this example in HEAD
just to make sure it doesn't get broken again in future.
Extracted from commit 187ae17300776f48b2bd9d0737923b1bf70f606e.
Previously, the code used a node label of zero both for strings that
contain no bytes beyond the inner tuple's prefix, and for cases where an
"allTheSame" inner tuple has to be split to allow a string with a different
next byte to be inserted into it. Failing to distinguish these cases meant
that if a string ending with the current prefix needed to be inserted into
an allTheSame tuple, we got into an infinite loop, because after splitting
the tuple we'd descend into the child allTheSame tuple and then find we
need to split again.
To fix, instead use -1 and -2 as the node labels for these two cases.
This requires widening the node label type from "char" to int2, but
fortunately SPGiST stores all pass-by-value node label types in their
Datum representation, which means that this change is transparently upward
compatible so far as the on-disk representation goes. We continue to
recognize zero as a dummy node label for reading purposes, but will not
attempt to push new index entries down into such a label, so that the loop
won't occur even when dealing with an existing index.
Per report from Teodor Sigaev. Back-patch to 9.2 where the faulty
code was introduced.
In a50d97625497b7 I already changed this, but got it wrong for the case
where the number of members is larger than the number of entries that
fit in the last page of the last segment.
As reported by Serge Negodyuck in a followup to bug #8673.
A ReorderBufferTransaction's end_lsn, the sentPtr advertised by
walsender keepalive messages, and the end location remembered by the
decoding get_*changes* SQL functions all use the location of the last
read record + 1. I.e. the LSN points to the beginning of the next
record. That cannot realistically be changed without changing the
replication protocol because that's how keepalive messages have worked
since 9.0.
The bug is that the logic inside the snapshot builder, which decides
whether a transaction's contents should be decoded, assumed the start
location would point towards the last byte of the last record. The
reason this didn't actually cause visible problems is that currently
that decision is only made for commit records. Since interesting
transactions always have at least one additional record - containing
actual data - we'd never skip a transaction.
But if there ever were transactions, or other events, with just one
record containing important information, we'd skip them after stopping
and restarting logical decoding.
It's critical that the backend's idea of LOBLKSIZE match the way data has
actually been divided up in pg_largeobject. While we don't provide any
direct way to adjust that value, doing so is a one-line source code change
and various people have expressed interest recently in changing it. So,
just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in
pg_control and cross-check that the backend's compiled-in setting matches
the on-disk data.
Also tweak the code in inv_api.c so that fetches from pg_largeobject
explicitly verify that the length of the data field is not more than
LOBLKSIZE. Formerly we just had Asserts() for that, which is no protection
at all in production builds. In some of the call sites an overlength data
value would translate directly to a security-relevant stack clobber, so it
seems worth one extra runtime comparison to be sure.
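The shape of the added check is roughly (a sketch; variable names and
message wording are illustrative, not the exact inv_api.c code):

    int         pagelen;

    pagelen = VARSIZE(datafield) - VARHDRSZ;

    /*
     * A plain Assert() vanishes in production builds; an explicit runtime
     * check keeps a corrupted or mis-sized pg_largeobject tuple from
     * overrunning the caller's LOBLKSIZE-sized buffer.
     */
    if (pagelen < 0 || pagelen > LOBLKSIZE)
        elog(ERROR, "pg_largeobject tuple has invalid data field size %d",
             pagelen);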
In the back branches, we can't change the contents of pg_control; but we
can still make the extra checks in inv_api.c, which will offer some amount
of protection against running with the wrong value of LOBLKSIZE.
Previously there's been a mix between 'slotname' and 'slot_name'. It's
not nice to be unnecessarily inconsistent in a new feature. As a
post-beta1 initdb is now required in the wake of eeca4cd35e, fix the
inconsistencies.
Most of the changes won't affect usage of replication slots, because the
majority of the changes are to function parameter names. The prominent
exception to that is that the recovery.conf parameter
'primary_slotname' is now named 'primary_slot_name'.
The original location in create_function_3.sql didn't invite the close
scrutiny warranted for adding new leakproof functions. Add comments
to the test explaining that functions should only be added after
careful consideration and understanding what a leakproof function is.
Per complaint from Tom Lane after 5eebb8d954ad.
The way the code was written, the padding was copied from uninitialized
memory areas. Because the structs are local variables in the code where
the WAL records are constructed, making them larger and zeroing the padding
bytes would not make the code very pretty, so rather than fixing this
directly by zeroing out the padding bytes, it seems more clear to not try to
align the tuples in the WAL records. The redo functions are taught to copy
the tuple header to a local variable to avoid unaligned access.
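On the redo side the pattern is roughly (a sketch; recdata stands for a
pointer into the WAL record's data, and this is not the exact heapam.c
code):

    xl_heap_header xlhdr;

    /*
     * The header stored in the WAL record is no longer guaranteed to be
     * aligned, so copy it into an aligned local variable before reading its
     * multi-byte fields.
     */
    memcpy(&xlhdr, recdata, SizeOfHeapHeader);
    /* xlhdr.t_infomask2, xlhdr.t_infomask and xlhdr.t_hoff are now safe to read */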
The stable branches have the same problem, but we can't change the WAL format
there, so fix in master only. Reading a few random extra bytes from the stack
is harmless in practice, so it's not worth crafting a different
back-patchable fix.
Per reports from Kevin Grittner and Andres Freund, using clang static
analyzer and Valgrind, respectively.
Buildfarm says we get different plans on 32-bit and 64-bit platforms,
probably because of MAXALIGN-related differences in memory-consumption
calculations. Add some dummy WHERE clauses so that the planner estimates
different sizes for the three generate_series() relations; that should
stabilize the choice of join order.
This is needed to allow ORDER BY, DISTINCT, etc to work as expected for
pg_lsn values.
We had previously decided to put this off for 9.5, but in view of commit
eeca4cd35e284c72b2ea1b4494e64e7738896e81 there's no reason to avoid a
catversion bump for 9.4beta2, and this does make a pretty significant
usability difference for pg_lsn.
Michael Paquier, with fixes from Andres Freund and Tom Lane
This should have been done in 6bc8ef0b7f1f1df3998745a66e1790e27424aa0c
and/or 50e547096c4858a68abf09894667a542cc418315, but better late than
never. If we don't change this then we risk 9.3 pg_controldata or
pg_resetxlog being inappropriately used against a 9.4 pg_control file,
or vice versa.
HeapTupleSatisfiesVacuum() didn't properly discern between
DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
inserted in the current transaction and deleted in an aborted
subtransaction of the current backend. At the very least that caused
problems for CLUSTER and CREATE INDEX in transactions that had
aborting subtransactions producing rows, leading to warnings like:
WARNING: concurrent delete in progress within table "..."
possibly in an endless, uninterruptible loop.
Instead of treating *InProgress xmins the same as *IsCurrent ones,
treat them as being distinct like the other visibility routines. As
implemented, this separation can cause a behaviour change for rows
that have been inserted and deleted in another, still running,
transaction. HTSV will now return INSERT_IN_PROGRESS instead of
DELETE_IN_PROGRESS for those. That's both more in line with the other
visibility routines and arguably more correct; the latter because an
INSERT_IN_PROGRESS will make callers look at/wait for xmin, instead of
xmax.
The only current caller where that's possibly worse than the old
behaviour is heap_prune_chain() which now won't mark the page as
prunable if a row has concurrently been inserted and deleted. That's
harmless enough.
As a cautionary measure, also insert an interrupt check before the gotos
in IndexBuildHeapScan() that lead to the uninterruptible loop. There
are other possible causes of repeated loops, such as a row that several
sessions try to update and all fail to, and the cost of the check in
the retry case is low.
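The cautionary change amounts to something like this (a sketch; the
surrounding visibility logic and the recheck label are elided):

    /*
     * Before jumping back to re-examine a tuple whose inserting or deleting
     * transaction is still in progress, allow the command to be cancelled;
     * otherwise a tuple that never settles makes the scan uninterruptible.
     */
    CHECK_FOR_INTERRUPTS();
    goto recheck;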
As this bug goes back all the way to the introduction of
subtransactions in 573a71a5da, backpatch to all supported releases.
Reported-By: Sandro Santilli