We had an Assert() preventing whole-row expressions from being used in
the SET clause of INSERT ON CONFLICT, but testing suggests it is
unnecessary, so remove it. Add a new test to exercise the case.
Also in ExecInitPartitionInfo, we were calling map_partition_varattnos
(which constructs an attribute map, then calls map_variable_attnos) on
the same two relations many times, in different expressions and with
different parameters. Constructing the map over and over is a waste.
To avoid this repeated work, construct the map once, and use
map_variable_attnos() directly instead.
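For illustration, a minimal sketch of the reworked pattern (variable
names hypothetical; assumes the PG11-era tupconvert/rewriteManip APIs):

    /* Build the attribute map just once ... */
    AttrNumber *part_attnos =
        convert_tuples_by_name_map(RelationGetDescr(partrel),
                                   RelationGetDescr(firstResultRel),
                                   gettext_noop("could not convert row type"));

    /* ... then reuse it for each expression to be translated. */
    expr = map_variable_attnos((Node *) expr,
                               firstVarno, 0,
                               part_attnos,
                               RelationGetDescr(firstResultRel)->natts,
                               RelationGetForm(partrel)->reltype,
                               &found_whole_row);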
Author: Amit Langote, per comments by me (Álvaro)
Discussion: https://postgr.es/m/20180326142016.m4st5e34chrzrknk@alvherre.pgsql
Spelling access(2)'s second argument as "2" is just horrid.
POSIX makes no promises as to the numeric values of W_OK and related
macros. Even if it accidentally works as intended on every supported
platform, it's still unreadable and inconsistent with adjacent code.
In passing, don't spell "NULL" as "0" either. Yes, that's legal C;
no, it's not project style.
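For illustration, the preferred spelling (a hedged sketch using a
hypothetical helper, not the actual hunk):

    #include <stdbool.h>
    #include <unistd.h>

    /* Spell the mode with the POSIX macro; its numeric value is not
     * guaranteed, even if W_OK happens to be 2 on common platforms. */
    static bool
    dir_is_writable(const char *path)
    {
        return access(path, W_OK) == 0;
    }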
Back-patch, just in case the unportability is real and not theoretical.
(Most likely, even if a platform had different bit assignments for
access()'s modes, there'd not be an observable behavior difference
here; but I'm being paranoid today.)
Coverity complained about the lack of a check on the return value in
parse_jsonb_index_flags' last call of JsonbIteratorNext. Seems like
a reasonable gripe to me, especially since the code is depending on
that being WJB_DONE to not leak memory, so add a check.
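The added check looks roughly like this (a sketch; names may differ
from the committed code):

    JsonbIteratorToken tok;

    tok = JsonbIteratorNext(&it, &v, false);
    if (tok != WJB_DONE)
        elog(ERROR, "unexpected jsonb iterator token: %d", (int) tok);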
In passing, improve a couple other places where the result was being
ignored, either by adding an assert or at least a cast to void.
Also, don't spell "WJB_DONE" as "0". That's horrid coding style,
and it wasn't consistent either.
Teach both base backups and pg_verify_checksums that if a page is new,
it does not have a checksum yet, so it shouldn't be verified.
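The test amounts to a sketch like this (assuming bufpage.h's
PageIsNew() macro):

    /* A freshly-extended, all-zeroes page has no checksum yet. */
    if (PageIsNew(page))
        continue;        /* skip verification; not a failure */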
Noted by Tomas Vondra, review by David Steele.
They were accidentally excluded when reverting the backend online
checksum functionality, and since they weren't being built, the stale
reference to a removed section also went unnoticed.
Author: Christoph Berg
Make it clear that a cluster has to be shut down cleanly before
pg_verify_checksums can be run against it.
Author: Michael Paquier
Review: Daniel Gustafsson
This option makes no sense when the cluster checksum state cannot be
changed, and should have been removed in the revert.
Author: Daniel Gustafsson
Review: Michael Paquier
In the wake of commit 50c6bb022, it's not necessary for ApplyRetrieveRule
to have a forUpdatePushedDown parameter. By the time control gets here for
any given view-referencing RTE, we should already have pushed down the
effects of any FOR UPDATE/SHARE clauses affecting the view from outer query
levels. Hence if we don't find a RowMarkClause at the current query level,
that's sufficient proof that there is no outer one either. This in turn
means we need no forUpdatePushedDown parameter for fireRIRrules.
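A sketch of the resulting declarations (parameter lists approximate):

    static Query *ApplyRetrieveRule(Query *parsetree, RewriteRule *rule,
                                    int rt_index, Relation relation,
                                    List *activeRIRs);
    static Query *fireRIRrules(Query *parsetree, List *activeRIRs);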
I wonder whether we oughtn't also revert commit cba2d2717, since it now
seems likely that that was band-aiding around the bad effects of doing
FOR UPDATE pushdown and view expansion in the wrong order. However,
in the absence of evidence that the current coding of markQueryForLocking
is actually buggy (i.e. missing RTEs it ought to mark), it seems best to
leave it alone.
Discussion: https://postgr.es/m/24db7b8f-3de5-e25f-7ab9-d8848351d42c@gmail.com
I was dissatisfied with the code coverage report for expand_tuple() in the
wake of commit 7c44c46de: while better than no coverage at all, it was
still not exercising the core function of inserting out-of-line default
values, nor was the HeapTuple-output path covered. So far as I can find,
the only code path that reaches the latter at present is EvalPlanQual
fetches for non-locked tables. Hence, extend eval-plan-qual.spec to
test cases where out-of-line defaults must be inserted into a tuple
fetched from a non-locked table.
Discussion: https://postgr.es/m/87woxi24uw.fsf@ansel.ydns.eu
SELECT FOR UPDATE on a view should require UPDATE (as well as SELECT)
permissions on the view, and then the view's owner needs those same
permissions against the relations it references, and so on all the way
down to base tables. But ApplyRetrieveRule did things in the wrong order,
resulting in failure to mark intermediate view levels as needing UPDATE
permission. Thus for example, if user A creates a table T and an updatable
view V1 on T, then grants only SELECT permissions on V1 to user B, B could
create a second view V2 on V1 and then would be allowed to perform SELECT
FOR UPDATE via V2 (since V1 wouldn't be checked for UPDATE permissions).
To fix, just switch the order of expanding sub-views and marking referenced
objects as needing UPDATE permission. I think additional simplifications
are now possible, but that's distinct from the bug fix proper.
This is certainly a security issue, but the consequences are pretty minor
(just the ability to lock rows that shouldn't be lockable). Against that
we have a small risk of breaking applications that are working as-desired,
since nested views have behaved this way since such cases worked at all.
On balance I'm inclined not to back-patch.
Per report from Alexander Lakhin.
Discussion: https://postgr.es/m/24db7b8f-3de5-e25f-7ab9-d8848351d42c@gmail.com
MaxIndexTuplesPerPage ignores the fact that btree indexes sometimes
store tuples with no data payload. But it also ignores the possibility
of "special space" on index pages, which offsets that, so that the
result isn't an underestimate. This all seems worth documenting, though.
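For reference, the macro reads roughly as follows (paraphrased from
access/itup.h):

    #define MaxIndexTuplesPerPage \
        ((int) ((BLCKSZ - SizeOfPageHeaderData) / \
                (MAXALIGN(sizeof(IndexTupleData) + 1) + sizeof(ItemIdData))))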
In passing, remove #define MinIndexTupleSize, which was added by
commit 2c03216d8 but not used in that commit nor later ones.
Comment text by me; issue noticed by Peter Geoghegan.
Discussion: https://postgr.es/m/CAH2-WzkQmb54Kbx-YHXstRKXcNc+_87jwV3DRb54xcybLR7Oig@mail.gmail.com
We need to call expand_function_arguments() to expand named and default
arguments.
In PL/pgSQL, we also need to deal with named and default INOUT arguments
when receiving the output values into variables.
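A sketch of the kind of call involved (variable names hypothetical;
assumes the signature exported from clauses.c):

    /* Rewrite named and default arguments into positional form. */
    fexpr->args = expand_function_arguments(fexpr->args,
                                            fexpr->funcresulttype,
                                            proctup);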
Author: Pavel Stehule <pavel.stehule@gmail.com>
Commit 16828d5c forgot to check that it had a set of missing values
before trying to retrieve a value from it.
An additional query is added to the regression test to cover this
code.
Per bug report from Andreas Seltenreich.
In passing, throw an error if the AF count is too small, rather than
just silently discarding extra affix entries.
Note that the new regression test cases require installing the
updated src/backend/tsearch/dicts files.
Arthur Zakirov
Discussion: https://postgr.es/m/20180413113447.GA32474@zakirov.localdomain
We'd throw away the partial result anyway after parsing the error message.
Throwing it away beforehand costs nothing and reduces the risk of
out-of-memory failure. Also, at least in systems that behave like
glibc/Linux, if the partial result was very large then the error PGresult
would get allocated at high heap addresses, preventing the heap storage
used by the partial result from being released to the OS until the error
PGresult is freed.
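A minimal sketch of the idea, in terms of libpq's internal PGconn
fields:

    /* Free any partial result before building the error PGresult, so
     * its memory can be reclaimed before the new allocation happens. */
    if (conn->result)
    {
        PQclear(conn->result);
        conn->result = NULL;
    }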
In psql >= 9.6, we hold onto the error PGresult until another error is
received (for \errverbose), so that this behavior causes a seeming
memory leak to persist for a while, as in a recent complaint from
Darafei Praliaskouski. This is a potential performance regression from
older versions, justifying back-patching at least that far. But similar
behavior may occur in other client applications, so it seems worth just
back-patching to all supported branches.
Discussion: https://postgr.es/m/CAC8Q8tJ=7cOkPePyAbJE_Pf691t8nDFhJp0KZxHvnq_uicfyVg@mail.gmail.com
This custom opclass was already in use in other tests -- defined
independently in every such file. Move the definition to the earliest
test that uses it, and keep it around so that later tests can reuse it.
Use it in the tests for pruning of hash partitioning, and since this
makes the second expected file unnecessary, put those tests back in
partition_prune.sql whence they sprang.
Author: Amit Langote
Discussion: https://postgr.es/m/CA%2BTgmoZ0D5kJbt8eKXtvVdvTcGGWn6ehWCRSZbWytD-uzH92mQ%40mail.gmail.com
NISortAffixes() compared successive compound affixes incorrectly,
thus possibly failing to merge identical affixes, or (less likely)
merging ones that shouldn't be merged. The user-visible effects
of this are unclear, to me anyway.
Per bug #15150 from Alexander Lakhin. It's been broken for a long time,
so back-patch to all supported branches.
Arthur Zakirov
Discussion: https://postgr.es/m/152353327780.31225.13445405496721177988@wrigleys.postgresql.org
We've made multiple attempts to stabilize the plans shown by commit
1bc0100d2, with little success so far. The reason for the remaining
instability seems to be that if a transaction (such as auto-analyze)
is running concurrently with the test, then get_actual_variable_range may
return a maximum value for "T 1"."C 1" that's far away from the actual max,
as a result of our having transiently inserted such a value earlier in
the test. Because we use a non-MVCC snapshot to fetch the value (for
performance reasons), the presence of other transactions can cause that
function to return entries that are actually dead.
To fix, use a less extreme value in the earlier transient insertion, so
that whether it is visible or not won't affect the selectivity estimate.
The value 9999 used there seems to have been picked with the aid of a
dartboard anyway, rather than for any specific reason.
Discussion: https://postgr.es/m/16962.1523551784@sss.pgh.pa.us
We were using CurrentMemoryContext to put the partsupfunc fmgr_info
into, which isn't right, because we want the PartitionKey as a whole to
be in the isolated Relation->rd_partkeycxt context. This can cause a
crash with user-defined support functions in the operator classes used
by partitioning keys. (Maybe this can cause problems with core-supplied
opclasses too, not sure.)
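The fix amounts to building the FmgrInfo directly in the right
context, roughly (a sketch; field names as in PG11):

    /* Put the fmgr_info in the PartitionKey's private context,
     * not CurrentMemoryContext. */
    fmgr_info_cxt(funcid, &key->partsupfunc[i], partkeycxt);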
This is demonstrably broken in Postgres 10, too, but the initial
proposed fix runs afoul of a problem discussed back when 8a0596cb656e
("Get rid of copy_partition_key") reorganized that code: namely that it
is possible to jump out of RelationBuildPartitionKey because of some
error and leave a dangling memory context child of CacheMemoryContext.
Also, while reviewing this I noticed that the removed-in-pg11
copy_partition_key was doing something wrong, unfixed in pg10, namely
doing memcpy() on the FmgrInfo, which is bogus (should be doing
fmgr_info_copy). Therefore, in branch pg10, the sane fix seems to be to
backpatch both the aforementioned 8a0596cb656e and its followup
be2343221fb7 ("Protect against hypothetical memory leaks in
RelationGetPartitionKey"), so do that, then apply the fmgr_info memcxt
bugfix on top.
Add a test case exercising btree-based custom operator classes, which
causes a crash prior to this fix. This is not a security problem,
because in order to create an operator class you need superuser
privileges anyway.
Authors: Álvaro Herrera and Amit Langote
Reported and diagnosed by: Amit Langote
Discussion: https://postgr.es/m/3041e853-b1dd-a0c6-ff21-7cc5633bffd0@lab.ntt.co.jp
The bug is caused by the original IndexStmt that DefineIndex receives
being overwritten when processing the INCLUDE columns. Use a separate
list of index params to propagate to child tables. Add tests covering
this case.
Amit Langote and Alexander Korotkov.
Re-commit 5c6110c6a960ad6fe1b0d0fec6ae36ef4eb913f5, now that the bug it
uncovered has been fixed in c266ed31a8a3beed3533e6a78faeca78234cbd43.
Discussion: https://www.postgresql.org/message-id/CAJGNTeO%3DBguEyG8wxMpU_Vgvg3nGGzy71zUQ0RpzEn_mb0bSWA%40mail.gmail.com
- Explicitly forbid opclass, collation and indoption flags (such as
DESC/ASC) for included columns, throwing an error if the user
specifies any.
- Truncate the storage arrays for such attributes to store only key
attributes, and add assertion checks.
- Do not check opfamily and collation for included columns in
CompareIndexInfo().
Discussion: https://www.postgresql.org/message-id/5ee72852-3c4e-ee35-e2ed-c1d053d45c08@sigaev.ru
This reverts commits d204ef63776b8a00ca220adec23979091564e465,
83454e3c2b28141c0db01c7d2027e01040df5249 and a few more commits thereafter
(complete list at the end) related to MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do
that in the available time. So it was decided to revert the patch for PG11
and try again in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
Oversight in commit 8b08f7d4820f: pg_class.relispartition was not
being set for index partitions, which is a bit odd, and was also causing
the code to unnecessarily call has_superclass() when simply checking the
flag was enough.
Author: Álvaro Herrera
Reported-by: Amit Langote
Discussion: https://postgr.es/m/12085bc4-0bc6-0f3a-4c43-57fe0681772b@lab.ntt.co.jp
The nextOid value is from the start of the checkpoint and may well be stale
compared to values from more recent XLOG_NEXTOID records. Previously, we
adopted it anyway, allowing the OID counter to go backwards during a crash.
While this should be harmless, it contributed to the severity of the bug
fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned
immediately following a crash. Without this error, that issue would only
have arisen when TOAST objects just younger than a multiple of 2^32 OIDs
were deleted and then not vacuumed in time to avoid a conflict.
Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
When selecting a new OID, we take care to avoid picking one that's already
in use in the target table, so as not to create duplicates after the OID
counter has wrapped around. However, up to now we used SnapshotDirty when
scanning for pre-existing entries. That ignores committed-dead rows, so
that we could select an OID matching a deleted-but-not-yet-vacuumed row.
While that mostly worked, it has two problems:
* If recently deleted, the dead row might still be visible to MVCC
snapshots, creating a risk for duplicate OIDs when examining the catalogs
within our own transaction. Such duplication couldn't be visible outside
the object-creating transaction, though, and we've heard few if any field
reports corresponding to such a symptom.
* When selecting a TOAST OID, deleted toast rows definitely *are* visible
to SnapshotToast, and will remain so until vacuumed away. This leads to
a conflict that will manifest in errors like "unexpected chunk number 0
(expected 1) for toast value nnnnn". We've been seeing reports of such
errors from the field for years, but the cause was unclear before.
The fix is simple: just use SnapshotAny to search for conflicting rows.
This results in a slightly longer window before object OIDs can be
recycled, but that seems unlikely to create any large problems.
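A sketch of the change in GetNewOidWithIndex() (exact code may
differ):

    /* Scan with SnapshotAny so that even dead rows block reuse of
     * their OID. */
    scan = systable_beginscan(relation, indexId, true,
                              SnapshotAny, 1, &key);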
Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
This fixes a bug whereby the st_appname, st_clienthostname, and
st_activity_raw fields for auxiliary processes point beyond the end of
their respective shared memory segments. As a result, the application_name
of a backend might show up as the client hostname of an auxiliary process.
Backpatch to v10, where this bug was introduced, when the auxiliary
processes were added to the array.
Author: Edmund Horner
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com
The other strings, application_name and query string, were snapshotted to
local memory in pgstat_read_current_status(), but we forgot to do that for
client hostnames. As a result, the client hostname would appear to change in
the local copy, if the client disconnected.
Backpatch to all supported versions.
Author: Edmund Horner
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com
If the table being attached contained values that contradict the default
partition's partition constraint, it would fail to complain, because
CommandCounterIncrement changes in 4dba331cb3dc coupled with some bogus
coding in the existing ValidatePartitionConstraints prevented the
partition constraint from being validated after all -- or rather, they
caused the constraint to become an empty one, always succeeding.
Fix by not re-reading the OID of the default partition in
ATExecAttachPartition. To forestall similar problems, revise the
existing code:
* rename routine from ValidatePartitionConstraints() to
QueuePartitionConstraintValidation, to better represent what it
actually does.
* add an Assert() to make sure that when queueing a constraint for a
partition we're not overwriting a constraint previously queued.
* add an Assert() that we don't try to invoke the special-purpose
validation of the default partition when attaching the default
partition itself.
While at it, change some loops to obtain partition OIDs from
partdesc->oids rather than find_all_inheritors; reduce the lock level
of partitions being scanned from AccessExclusiveLock to ShareLock;
rewrite QueuePartitionConstraintValidation in a recursive fashion rather
than repetitive.
Author: Álvaro Herrera. Tests written by Amit Langote
Reported-by: Rushabh Lathia
Diagnosed-by: Kyotaro HORIGUCHI, who also provided the initial fix.
Reviewed-by: Kyotaro HORIGUCHI, Amit Langote, Jeevan Ladhe
Discussion: https://postgr.es/m/CAGPqQf0W+v-Ci_qNV_5R3A=Z9LsK4+jO7LzgddRncpp_rrnJqQ@mail.gmail.com
The MAKELEVEL hack to prevent submake-generated-headers from doing
anything in child make runs means that we have to explicitly invoke
it at top level for "make check", too, in case somebody proceeds
directly to that without an explicit "make all". (I think this
usage had parallel-make hazards even before the addition of more
generated headers; but it was totally broken as of 3b8f6e75f.)
Out of paranoia, force the submake-libpq target to depend on
submake-generated-headers, too. This seems to not be absolutely
necessary today, but it's not really saving us anything to omit
the ordering dependency, and it'll likely break someday without it.
Discussion: https://postgr.es/m/20180411103930.GB31461@momjian.us
The bug is caused by the original IndexStmt that DefineIndex receives
being overwritten when processing the INCLUDE columns. Use a separate
list of index params to propagate to child tables. Add tests covering
this case.
Amit Langote and Alexander Korotkov.
In particular, the requirement to have SELECT privilege for the initial
table copy was previously not documented.
Author: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
One improbable error-exit path in this function used close() where
it should have used CloseTransientFile(). This is unlikely to be
hit in the field, and I think the consequences wouldn't be awful
(just an elog(LOG) bleat later). But a bug is a bug, so back-patch
to 9.4 where this code came in.
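The fix is essentially a one-liner, sketched here:

    /* Use the fd.c wrapper, not bare close(fd), so the transient-file
     * bookkeeping entry is released too. */
    CloseTransientFile(fd);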
Pan Bian
Discussion: https://postgr.es/m/152056616579.4966.583293218357089052@wrigleys.postgresql.org
This optimization was introduced in commit 2b272734. The changes include
some additional comments and documentation, and also these more
substantive changes:
* Ensure the optimization is only applied on the leaf node of a tree
whose root is on level 2 or more; it's of little value on small trees.
* Delay calling RelationSetTargetBlock() until after the critical
section of _bt_insertonpg (see the sketch below).
* Ensure the optimization is also applied to unlogged tables.
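A sketch of the second point, as it might look in _bt_insertonpg
(simplified; variable names hypothetical):

    END_CRIT_SECTION();

    /* Only now is it safe to remember the block for the fastpath. */
    if (BlockNumberIsValid(cachedBlock))
        RelationSetTargetBlock(rel, cachedBlock);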
Pavan Deolasee and Peter Geoghegan with some very light editing from me.
Discussion: https://postgr.es/m/CABOikdO8jhRarNC60nZLktZYhxt+TK8z_V97+Ny499YQdyAfug@mail.gmail.com
I'd hoped that commit 3b8f6e75f was sufficient to ensure parallel safety
even when a build started in a subdirectory requires rebuilding of
generated headers. This isn't so, because making submake-generated-headers
a prerequisite of "all" isn't enough to ensure it's completed before
starting on "all"'s other prerequisites. The explicit dependencies we put
on the recursive make targets ensure safe ordering before we recurse into
child directories, but they don't protect targets to be made in the current
directory. Hence, put back some ordering dependencies in directories that
we've traditionally expected to be starting points for "standalone" builds,
to wit src/pl/plpython and src/test/regress. (The former needs this in
order to minimize the work involved in building for both python 2 and
python 3; the latter to support packagings that make the regression tests
available for out-of-build-tree execution.) Adjust some other dependencies
so that these two cases work correctly even at high -j settings.
I'm not terribly happy with this partial solution, but I don't see a
way to do better without massive makefile restructuring, which we surely
aren't doing at this point in the development cycle. In any case, it's
little if any worse than what we had in prior releases.
Discussion: https://postgr.es/m/1523353963.8169.26.camel@gunduz.org
The HeapFetches counter was using a simple value in IndexOnlyScanState,
which fails to propagate values from parallel workers; so the counts are
wrong when IndexOnlyScan runs in parallel. Move it to Instrumentation,
like all the other counters.
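With the counter kept in Instrumentation, the per-fetch bump becomes
roughly (assuming a helper macro along these lines):

    /* Count in Instrumentation so values from parallel workers are
     * accumulated by the leader. */
    InstrCountTuples2(node, 1);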
While at it, change the INSERT ON CONFLICT conflicting-tuple counter to
use the new ntuples2 instead of nfiltered2, which was a blatant misuse.
Discussion: https://postgr.es/m/20180409215851.idwc75ct2bzi6tea@alvherre.pgsql
Per Julien Rouhaud and the buildfarm. This is not quite Julien's
patch: there's no need to lobotomize this build rule when building
contrib modules in-tree, so set NO_GENERATED_HEADERS only if PGXS.
In passing, also set NO_TEMP_INSTALL in external builds. This doesn't
seem to be fixing any live bug, because "make check" in an external
build just produces the expected error message without first trying to
make a temp install ... but it's far from obvious why it doesn't, so
this change seems like good future-proofing.
Julien Rouhaud and Tom Lane
Discussion: https://postgr.es/m/CAOBaU_YH=g68opbbMk8is3jNwhoXGa8ckRSre1nx0Obe1C7i-Q@mail.gmail.com
The comment earlier in the function correctly states "and the insertion
key is strictly greater than the first key in this page". That is what
we check here, not "greater than or equal".