This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reporting for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
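As a rough sketch of how to observe this (assuming the progress view added
by this infrastructure is pg_stat_progress_create_index; the column names
shown are assumptions as well), a second session can run:
SELECT pid, command, phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;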
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
When accessing a table with RLS via a view, the RLS checks are
performed as the view owner. However, the code neglected to propagate
that to any subqueries in the RLS checks. Fix that by calling
setRuleCheckAsUser() for all RLS policy quals and withCheckOption
checks for RTEs with RLS.
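For illustration, this is the kind of setup affected; all names are
hypothetical, and the point is the subquery inside the policy qual, which
must also be checked as the view owner when the table is reached through a
view:
CREATE TABLE account_managers (account_id int, manager name);
CREATE TABLE accounts (id int, owner name);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY account_owner ON accounts
  USING (owner = current_user
         AND EXISTS (SELECT 1 FROM account_managers m
                     WHERE m.account_id = accounts.id));
CREATE VIEW accounts_v AS SELECT * FROM accounts;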
Back-patch to 9.5 where RLS was added.
Per bug #15708 from daurnimator.
Discussion: https://postgr.es/m/15708-d65cab2ce9b1717a@postgresql.org
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.
Features:
- Program name is automatically prefixed.
- Message string does not end with newline. This removes a common
source of inconsistencies and omissions.
- Additionally, a final newline is automatically stripped, simplifying
use of PQerrorMessage() etc., another common source of mistakes.
- I converted error message strings to use %m where possible.
- As a result of the above several points, more translatable message
strings can be shared between different components and between
frontends and backend, without gratuitous punctuation or whitespace
differences.
- There is support for setting a "log level". This is not meant to be
user-facing, but can be used internally to implement debug or
verbose modes.
- Lazy argument evaluation, so no significant overhead if logging at
some level is disabled.
- Some color in the messages, similar to gcc and clang. Set
PG_COLOR=auto to try it out. Some colors are predefined, but can be
customized by setting PG_COLORS.
- Common files (common/, fe_utils/, etc.) can handle logging much more
simply by just using one API without worrying too much about the
context of the calling program, requiring callbacks, or having to
pass "progname" around everywhere.
- Some programs called setvbuf() to make sure that stderr is
unbuffered, even on Windows. But not all programs did that. This
is now done centrally.
Soft goals:
- Reduces vertical space use and visual complexity of error reporting
in the source code.
- Encourages more deliberate classification of messages. For example,
in some cases it wasn't clear without analyzing the surrounding code
whether a message was meant as an error or just an info.
- Concepts and terms are loosely aligned with popular logging
frameworks such as log4j and Python logging.
This is all just about printing stuff out. Nothing affects program
flow (e.g., fatal exits). The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.
I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded. One significant change is that
pg_rewind used to write all error messages to stdout. That is now
changed to stderr.
Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
jsonb_path_match() checks whether a jsonb document matches a jsonpath
query, so the jsonpath query should return a single boolean. Currently,
if the result of the jsonpath expression is not a single boolean, NULL is
returned regardless of whether silent mode is on or off. That is wrong
when silent mode is off. This commit makes jsonb_path_match() throw an
error in that case.
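For example (a sketch, assuming the existing optional vars/silent
arguments keep their defaults):
SELECT jsonb_path_match('{"a": 1}', '$.a == 1');         -- true
SELECT jsonb_path_match('{"a": 1}', '$.a');              -- not a boolean: now an error
SELECT jsonb_path_match('{"a": 1}', '$.a', '{}', true);  -- silent mode: still NULL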
Author: Nikita Glukhov
Jsonpath currently accepts integers with leading zeroes and floats
starting with a dot. However, the SQL standard requires jsonpath to
follow the JSON specification, which allows neither of these forms. Our
json[b] datatypes impose the same restriction. So, restrict jsonpath
accordingly.
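A sketch of the new behavior (the exact error wording is not quoted here):
SELECT '0.1'::jsonpath;    -- still accepted
SELECT '00.1'::jsonpath;   -- leading zero: now rejected
SELECT '.1'::jsonpath;     -- float starting with a dot: now rejected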
Author: Nikita Glukhov
This commit makes the existing GIN operator classes jsonb_ops and
jsonb_path_ops support the "jsonb @@ jsonpath" and "jsonb @? jsonpath"
operators. The basic idea is to extract clauses of the following form out
of the jsonpath:
key1.key2. ... .keyN = const
The rest of the jsonpath is rechecked against the heap tuple.
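A usage sketch (table and key names are hypothetical):
CREATE TABLE docs (j jsonb);
CREATE INDEX ON docs USING gin (j jsonb_path_ops);
SELECT * FROM docs WHERE j @? '$.key1.key2 ? (@ == 123)';
SELECT * FROM docs WHERE j @@ '$.key1.key2 == 123';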
Catversion is bumped.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Jonathan Katz, Pavel Stehule
The syntax
GENERATED BY DEFAULT AS (expr)
is not allowed but we have to accept it in the grammar to avoid
shift/reduce conflicts because of the similar syntax for identity
columns. The existing code just ignored this, incorrectly. Add an
explicit error check and a bespoke error message.
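For illustration (the table name is hypothetical and the error text is not
quoted here):
CREATE TABLE gtest (a int, b int GENERATED BY DEFAULT AS (a * 2) STORED);  -- now raises a dedicated error
CREATE TABLE gtest (a int, b int GENERATED ALWAYS AS (a * 2) STORED);      -- the supported form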
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Commit 29b64d1d mishandled skipping over truncated high key attributes
during row comparisons. The row comparison key matching loop would loop
forever when a truncated attribute was encountered for a row compare
subkey. Fix by following the example of other code in the loop: advance
the current subkey, or break out of the loop when the last subkey is
reached.
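For reference, a row comparison of the kind that goes through this loop
looks like this (names are hypothetical):
CREATE TABLE tbl (a int, b int, c int);
CREATE INDEX ON tbl (a, b, c);
SELECT * FROM tbl WHERE (a, b, c) > (1, 2, 3) ORDER BY a, b, c;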
Add test coverage for the relevant _bt_check_rowcompare() code path.
The new test case is somewhat tied to nbtree implementation details,
which isn't ideal, but seems unavoidable.
There was some debate about whether the code I'd added to remap
AppendRelInfos obtained from the initial SELECT planning run is
actually necessary. Add a test case demonstrating that it is.
Discussion: https://postgr.es/m/23831.1553873385@sss.pgh.pa.us
Previously, the planner created RangeTblEntry and RelOptInfo structs
for every partition of a partitioned table, even though many of them
might later be deemed uninteresting thanks to partition pruning logic.
This incurred significant overhead when there are many partitions.
Arrange to postpone creation of these data structures until after
we've processed the query enough to identify restriction quals for
the partitioned table, and then apply partition pruning before, not
after, creating each partition's data structures. In this way
we need not open the partition relations at all for partitions that
the planner has no real interest in.
For queries that can be proven at plan time to access only a small
number of partitions, this patch improves the practical maximum
number of partitions from under 100 to perhaps a few thousand.
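As a sketch (names are illustrative), with plan-time pruning only the
single matching partition is opened while planning this query:
CREATE TABLE measurements (ts date, v int) PARTITION BY RANGE (ts);
CREATE TABLE measurements_2018 PARTITION OF measurements
  FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
CREATE TABLE measurements_2019 PARTITION OF measurements
  FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
EXPLAIN SELECT * FROM measurements WHERE ts = '2019-04-01';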
Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen,
Yoshikazu Imai, and David Rowley
Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
While trying to plan a partitionwise join, we may be faced with cases
where one or both input partitions for a particular segment of the join
have been pruned away. In HEAD and v11, this is problematic because
earlier processing didn't bother to make a pruned RelOptInfo fully
valid. With an upcoming patch to make partition pruning more efficient,
this'll be even more problematic because said RelOptInfo won't exist at
all.
The existing code attempts to deal with this by retroactively making the
RelOptInfo fully valid, but that causes crashes under GEQO because join
planning is done in a short-lived memory context. In v11 we could
probably have fixed this by switching to the planner's main context
while fixing up the RelOptInfo, but that idea doesn't scale well to the
upcoming patch. It would be better not to mess with the base-relation
data structures during join planning, anyway --- that's just a recipe
for order-of-operations bugs.
In many cases, though, we don't actually need the child RelOptInfo,
because if the input is certainly empty then the join segment's result
is certainly empty, so we can skip making a join plan altogether. (The
existing code ultimately arrives at the same conclusion, but only after
doing a lot more work.) This approach works except when the pruned-away
partition is on the nullable side of a LEFT, ANTI, or FULL join, and the
other side isn't pruned. But in those cases the existing code leaves a
lot to be desired anyway --- the correct output is just the result of
the unpruned side of the join, but we were emitting a useless outer join
against a dummy Result. Pending somebody writing code to handle that
more nicely, let's just abandon the partitionwise-join optimization in
such cases.
When the modified code skips making a join plan, it doesn't make a
join RelOptInfo either; this requires some upper-level code to
cope with nulls in part_rels[] arrays. We'd have needed that anyway
after the upcoming patch.
Back-patch to v11 since the crash is demonstrable there.
Discussion: https://postgr.es/m/8305.1553884377@sss.pgh.pa.us
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.
This implements one kind of generated column: stored (computed on
write). Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
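For example, a stored generated column can be declared and used like this
(names are illustrative):
CREATE TABLE people (
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
INSERT INTO people (height_cm) VALUES (180);
SELECT height_in FROM people;  -- computed on write, stored like a regular column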
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
This adds the CONCURRENTLY option to the REINDEX command. A REINDEX
CONCURRENTLY on a specific index creates a new index (like CREATE
INDEX CONCURRENTLY), renames the old index out of the way and the new
index into place, adjusts the dependencies, and then drops the old
index (like DROP INDEX CONCURRENTLY). The REINDEX command also has
the capability to run its other variants (TABLE, DATABASE) with the
CONCURRENTLY option (but not SYSTEM).
The reindexdb command gets the --concurrently option.
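Usage looks like this (object names are placeholders):
REINDEX INDEX CONCURRENTLY my_index;
REINDEX TABLE CONCURRENTLY my_table;
REINDEX DATABASE CONCURRENTLY my_db;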
Author: Michael Paquier, Andreas Karlsson, Peter Eisentraut
Reviewed-by: Andres Freund, Fujii Masao, Jim Nasby, Sergei Kornilov
Discussion: https://www.postgresql.org/message-id/flat/60052986-956b-4478-45ed-8bd119e9b9cf%402ndquadrant.com#74948a1044c56c5e817a5050f554ddee
Introduce a third extended statistic type, supported by the CREATE
STATISTICS command: MCV lists, a generalization of the per-column MCV
statistic already built and used for individual columns.
Compared to the already supported types (n-distinct coefficients and
functional dependencies), MCV lists are more complex, include column
values, and allow estimation of a much wider range of common clauses
(equality and inequality conditions, IS NULL, IS NOT NULL, etc.).
Similarly to the other types, a new pseudo-type (pg_mcv_list) is used.
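For example (table and column names are illustrative):
CREATE TABLE addresses (city text, zip text);
CREATE STATISTICS addr_mcv (mcv) ON city, zip FROM addresses;
ANALYZE addresses;
-- the planner can then better estimate e.g. WHERE city = 'X' AND zip = 'Y'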
Author: Tomas Vondra
Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
Column references are not allowed in default expressions and partition
bound expressions, and this is enforced once their expressions have been
transformed. However, trying to use more complex column references can
lead to confusing error messages. For example, using a two-field column
reference name in a default expression or a partition bound leads to
"missing FROM-clause entry for table", which makes no sense in either
context.
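A sketch of the situation (the old and new error texts are paraphrased,
not quoted):
CREATE TABLE t (a int DEFAULT (some_table.some_column));
-- before: "missing FROM-clause entry for table ..."
-- now: an error saying that column references cannot be used in a
-- default expression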
In order to make the errors generated more useful, this commit adds more
verbose messages when transforming column references depending on the
context. This has a small consequence, though: for example, an
expression using an aggregate with a column reference as argument now
causes an error to be reported for the column reference, whereas the
aggregate was the problem reported before this commit, because column
references get transformed first.
The confusion has existed for default expressions for a long time, while
the problem is new as of v12 for partition bounds. Still, given the
lack of complaints on the matter, no backpatch is done.
The patch has been written by Amit Langote and me, and Tom Lane has
provided an improvement to the documentation of default expressions on
the CREATE TABLE page.
Author: Amit Langote, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190326020853.GM2558@paquier.xyz
When invoked for the first time in a session, current_schema() and
current_schemas() can end up creating a temporary schema. Currently
those functions are marked parallel-safe, but if for one reason or
another they get launched in parallel workers, they fail when attempting
to create the temporary schema, as that is not supported in a parallel
context.
The original issue was spotted by buildfarm members crake and lapwing,
after commit c5660e0 introduced the first regression tests based on
current_schema() in the tree. After that, 396676b introduced a
workaround to avoid parallel plans, but that was not completely right
either.
Catversion is bumped.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190118024618.GF1883@paquier.xyz
ALTER INDEX .. ATTACH PARTITION fails if the partitioned table where the
index is defined contains more dropped columns than its partition, with
this message:
ERROR: incorrect attribute map
The cause was that one caller of CompareIndexInfo was passing the number
of attributes of the partition rather than the parent, which confused
the length check. Repair.
This can cause pg_upgrade to fail when used on such a database. Leave
some more objects around after the regression tests, so that the case
is detected by the pg_upgrade test suite.
Remove some spurious empty lines noticed while looking for other cases
of the same problem.
Discussion: https://postgr.es/m/20190326213924.GA2322@alvherre.pgsql
When used on a partition containing foreign keys coming from one of its
ancestors, \d would (rather unhelpfully) print the details about the
pg_constraint row in the partition. This becomes a bit frustrating when
the user tries things like dropping the FK in the partition; instead,
show the details for the foreign key on the table where it is defined.
Also, when a table is referenced by a foreign key on a partitioned
table, we would show multiple "Referenced by" lines, one for each
partition, which gets unwieldy pretty fast. Modify that so that it
shows only one line for the ancestor partitioned table where the FK is
defined.
Discussion: https://postgr.es/m/20181204143834.ym6euxxxi5aeqdpn@alvherre.pgsql
Reviewed-by: Tom Lane, Amit Langote, Peter Eisentraut
Since 7c079d7, partition bounds are able to use generalized expression
syntax when processed, treating "minvalue" and "maxvalue" as specific
cases as they get passed down for transformation as column references.
The checks for infinite bounds in range expressions have been lax
though, causing crashes when trying to use column reference names with
more than one field. Here is an example causing a crash:
CREATE TABLE list_parted (a int) PARTITION BY LIST (a);
CREATE TABLE part_list_crash PARTITION OF list_parted
FOR VALUES IN (somename.somename);
Note that the creation of the second relation should fail, as partition
bounds cannot have column references in their expressions. So when an
expression does not match the expected infinite-bound keywords, this
commit lets the generic transformation machinery take over and report
the error. The error message generated in this case also references a
missing RTE, which is confusing. That problem will be treated
separately, as it has also affected default expressions for some time;
for now only the cases where a crash can happen are fixed.
While at it, extend the set of regression tests in place for list
partition bounds and add an extra set for range partition bounds.
Reported-by: Alexander Lakhin
Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/15668-0377b1981aa1a393@postgresql.org
If there's only one child relation, the Append or MergeAppend isn't
doing anything useful, and can be elided. It does have a purpose
during planning though, which is to serve as a buffer between parent
and child Var numbering. Therefore we keep it all the way through
to setrefs.c, and get rid of it only after fixing references in the
plan level(s) above it. This works largely the same as setrefs.c's
ancient hack to get rid of no-op SubqueryScan nodes, and can even
share some code with that.
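For instance (a sketch with hypothetical names), pruning a partitioned
table down to one partition no longer leaves an Append above the single
child scan in the final plan:
CREATE TABLE parted (k int) PARTITION BY LIST (k);
CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
CREATE TABLE parted_2 PARTITION OF parted FOR VALUES IN (2);
EXPLAIN (COSTS OFF) SELECT * FROM parted WHERE k = 1;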
Note the change to make setrefs.c use apply_tlist_labeling rather than
ad-hoc code. This has the effect of propagating the child's resjunk
and ressortgroupref labels, which formerly weren't propagated when
removing a SubqueryScan. Doing that is demonstrably necessary for
the [Merge]Append cases, and seems harmless for SubqueryScan, if only
because trivial_subqueryscan is afraid to collapse cases where the
resjunk marking differs. (I suspect that restriction could now be
removed, though it's unclear that it'd make any new matches possible,
since the outer query can't have references to a child resjunk column.)
David Rowley, reviewed by Alvaro Herrera and Tomas Vondra
Discussion: https://postgr.es/m/CAKJS1f_7u8ATyJ1JGTMHFoKDvZdeF-iEBhs+sM_SXowOr9cArg@mail.gmail.com
This uses the same progress reporting infrastructure added in commit
c16dc1aca5e01e6acaadfcf38f5fc964a381dc62 and extends it to these
additional cases. We lack the ability to track the internal progress
of sorts and index builds so the information reported is
coarse-grained for some parts of the operation, but it still seems
like a significant improvement over having nothing at all.
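Assuming the progress view for these commands is pg_stat_progress_cluster
(the view and column names here are assumptions), another session can poll
it like this:
SELECT pid, command, phase, heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_cluster;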
Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
amount of polishing also by me.
Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp
Non-backtracking flex scanners are faster than backtracking ones. So,
this commit gets rid of backtracking in jsonpath_scan.l. That required
explicit handling of some cases, as well as manual backtracking for a few
others. More regression tests for numerics are added.
Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063
Author: John Naylor, Nikita Glukhov, Alexander Korotkov
Coverity complained that simple8b_encode() might read beyond the end of
the 'diffs' array, in the loop to encode the integers. That was a false
positive, because we never get into the loop in modes 0 or 1, and the
array is large enough for all the other modes. But I admit it's very
subtle, so it's not surprising that Coverity didn't see it, and it's not
very obvious to humans either. Refactor it, so that the second loop
re-computes the differences, instead of carrying them over from the first
loop in the 'diffs' array. This way, the 'diffs' array is not needed
anymore. It makes no measurable difference in performance, and seems more
straightforward this way.
Also, improve the comments in simple8b_encode(): fix the comment about its
return value that was flat-out wrong, and explain the condition when it
returns EMPTY_CODEWORD better.
In passing, move the 'selector' from the codeword's low bits to the
high bits. It doesn't matter much, but looking at the original paper, and
googling around for other Simple-8b implementations, that's how it's
usually done.
Per Coverity, and Tom Lane's report off-list.
Now that the ordering of DROP messages ought to be stable everywhere,
we should not need these kluges of hiding DETAIL output just to avoid
unstable ordering. Hiding it is not great for test coverage, so
let's undo that where possible.
In a small number of places, it's necessary to leave it in, for
example because the output might include a variable pg_temp_nnn
schema name. I also left things alone in places where the details
would depend on other regression test scripts, e.g. plpython_drop.sql.
Perhaps buildfarm experience will show this to be a bad idea,
but if so I'd like to know why.
Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
Commit 8aa9dd74b didn't quite finish the job in this area after all,
because DROP ROLE has a code path distinct from DROP OWNED BY, and
it was still reporting dependent objects in whatever order the index
scan returned them in.
Buildfarm experience shows that index ordering of equal-keyed objects is
significantly less stable than before in the wake of using heap TIDs as
tie-breakers. So if we try to hide the unstable ordering by suppressing
DETAIL reports, we're just going to end up having to do that for every
DROP that reports multiple objects. That's not great from a coverage
or problem-detection standpoint, and it's something we'll inevitably
forget in future patches, leading to more iterations of fixing-an-
unstable-result. So let's just bite the bullet and sort here too.
Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
start new transactions with the same transaction characteristics as the
just finished one, per SQL standard.
Support for transaction chaining in PL/pgSQL is also added. This
functionality is especially useful when running COMMIT in a loop in
PL/pgSQL.
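For example:
BEGIN ISOLATION LEVEL REPEATABLE READ, READ ONLY;
SELECT count(*) FROM pg_class;
COMMIT AND CHAIN;  -- the next transaction is again REPEATABLE READ, READ ONLY
SELECT count(*) FROM pg_class;
COMMIT;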
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a@2ndquadrant.com
This adds new, required, table AM callbacks for insert/delete/update
and lock_tuple. To be able to reasonably use those, the EvalPlanQual
mechanism had to be adapted, moving more logic into the AM.
Previously both delete/update/lock call-sites and the EPQ mechanism had
to have awareness of the specific tuple format to be able to fetch the
latest version of a tuple. Obviously that needs to be abstracted
away. To do so, move the logic that finds the latest row version into
the AM. lock_tuple has a new flag argument,
TUPLE_LOCK_FLAG_FIND_LAST_VERSION, that forces it to lock the last
version, rather than the current one. It'd have been possible to do
so via a separate callback as well, but finding the last version
usually also necessitates locking the newest version, making it
sensible to combine the two. This replaces the previous use of
EvalPlanQualFetch(). Additionally HeapTupleUpdated, which previously
signaled either a concurrent update or delete, is now split into two,
to avoid callers needing AM specific knowledge to differentiate.
The move of finding the latest row version into tuple_lock means that
encountering a row concurrently moved into another partition will now
raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when
locking rows. While possibly slightly less helpful for users, it seems
like an acceptable trade-off.
As part of this commit HTSU_Result has been renamed to TM_Result, and
its members have been expanded to differentiate between updating and
deleting. HeapUpdateFailureData has been renamed to TM_FailureData.
The interface to speculative insertion is changed so nodeModifyTable.c
does not have to set the speculative token itself anymore. Instead
there's a version of tuple_insert, tuple_insert_speculative, that
performs the speculative insertion (without requiring a flag to signal
that fact), and the speculative insertion is either made permanent
with table_complete_speculative(succeeded = true) or aborted with
succeeded = false.
Note that multi_insert is not yet routed through tableam, nor is
COPY. Changing multi_insert requires changes to copy.c that are large
enough to better be done separately.
Similarly, although simpler, CREATE TABLE AS and CREATE MATERIALIZED
VIEW are also only going to be adjusted in a later commit.
Author: Andres Freund and Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data. Hence,
switch to the 2006 definition.
Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode. This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all. It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
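For example, the following is now accepted in CONTENT mode (a sketch,
assuming a build with XML support):
SET xmloption TO content;
SELECT '<!DOCTYPE book><book><title>Manual</title></book>'::xml;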
In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().
Back-patch because dump/restore failures are not nice. This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.
Chapman Flack (revised a bit by me)
Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
Suppress 3 lines of unstable DETAIL output from a DROP ROLE statement in
event_trigger.sql. This is further cleanup for commit dd299df8.
Note that the event_trigger test instability issue is very similar to
the recently suppressed foreign_data test instability issue. Both
issues involve DETAIL output for a DROP ROLE statement that needed to be
changed as part of dd299df8.
Per buildfarm member macaque.
Previously there was basically no coverage for UPDATEs encountering
deleted rows, and no coverage for DELETE having to perform EPQ. That's
problematic for an upcoming commit in which EPQ is taught to integrate
with tableams. Also, there was no test for an UPDATE encountering a row
UPDATEd into another partition.
Author: Andres Freund
This is almost a straight revert of commit fff518d, which itself was a
revert of 7d3bf73ac.
It turns out that commit 8aa9dd74, which sorted dependent objects before
deletion in DROP OWNED BY, was not sufficient to make all remaining
unstable DETAIL output stable. Unstable DETAIL output from DROP ROLE
was not affected, because that happens to use a different code path. It
doesn't seem worthwhile to fix the other code path at this time.
Discussion: https://postgr.es/m/6226.1553274783@sss.pgh.pa.us
There were more large constants that needed UINT64CONST. And one variable
was declared as "int", when it needed to be uint64. These bugs were only
visible on 32-bit systems; clearly I should've tested on one, given that
this code does a lot of work with 64-bit integers.
Also, in the "huge distances" test, the code created some values with
random distances between them, but the test logic didn't take into account
the possibility that the random distance was exactly 1. That never actually
happens with the seed we're using, but let's be tidy.
Buildfarm member 'woodlouse' failed one of the tests, and I'm not sure
which test failed. Better to print the names of the tests, so that the
failing test's name will appear in regression.diffs on failure.
To do this, we scan the GiST index twice. In the first pass we make note
of empty leaf pages and of internal pages. In the second pass we scan
through the internal pages, looking for downlinks to the empty pages.
Deleting internal pages is still not supported; as in nbtree, the last
child of an internal page is never deleted. That means that if you have a
workload where new keys are always inserted into a different area than
where old keys are removed, the index will still grow without bound. But
the rate of growth will be an order of magnitude slower than before.
Author: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/B1E4DF12-6CD3-4706-BDBD-BF3283328F60@yandex-team.ru
The set is implemented as a B-tree, with a compact representation of the
leaf items using the Simple-8b algorithm, so that clusters of nearby
values use less memory.
The IntegerSet isn't used for anything yet, aside from the test code, but
we have two patches in the works that would benefit from this: A patch to
allow GiST vacuum to delete empty pages, and a patch to reduce heap
VACUUM's memory usage, by storing the list of dead TIDs more efficiently
and lifting the 1 GB limit on its size.
This includes a unit test module, in src/test/modules/test_integerset.
It can be used to verify correctness, as a regression test, but if you run
it manually, it can also print memory usage and execution time for some of
the tests.
Author: Heikki Linnakangas, Andrey Borodin
Reviewed-by: Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/b5e82599-1966-5783-733c-1a947ddb729f@iki.fi
This adds a flag "deterministic" to collations. If that is false,
such a collation disables various optimizations that assume that
strings are equal only if they are byte-wise equal. That then allows
use cases such as case-insensitive or accent-insensitive comparisons
or handling of strings with different Unicode normal forms.
This functionality is only supported with the ICU provider. At least
glibc doesn't appear to have any locales that work in a
nondeterministic way, so it's not worth supporting this for the libc
provider.
The term "deterministic comparison" in this context is from Unicode
Technical Standard #10
(https://unicode.org/reports/tr10/#Deterministic_Comparison).
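For example, a case-insensitive collation could be declared and used like
this (the ICU locale string shown is one plausible choice):
CREATE COLLATION case_insensitive (
    provider = icu, locale = 'und-u-ks-level2', deterministic = false
);
SELECT 'ABC' = 'abc' COLLATE case_insensitive;  -- true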
This patch makes changes in three areas:
- CREATE COLLATION DDL changes and system catalog changes to support
this new flag.
- Many executor nodes and auxiliary code are extended to track
collations. Previously, this code would just throw away collation
information, because the eventually-called user-defined functions
didn't use it since they only cared about equality, which didn't
need collation information.
- String data type functions that do equality comparisons and hashing
are changed to take the (non-)deterministic flag into account. For
comparison, this just means skipping various shortcuts and tie
breakers that use byte-wise comparison. For hashing, we first need
to convert the input string to a canonical "sort key" using the ICU
analogue of strxfrm().
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
Trying to call the function with the top-most parent of a partition tree
was leading to a crash. In this case the correct result is to return
the top-most parent itself.
Reported-by: Álvaro Herrera
Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20190322032612.GA323@alvherre.pgsql
When DefineIndex recurses to create constraints on partitions, it needs
to use the value returned by index_constraint_create to set up partition
dependencies. However, in the course of fixing the DEPENDENCY_INTERNAL_AUTO
mess, commit 1d92a0c9f7dd introduced some code to that function that
clobbered the return value, causing the recorded OID to be of the wrong
object. Close examination of pg_depend after creating the tables leads
to indescribable objects :-( My sin (in commit bdc3d7fa2376, while
preparing for DDL deparsing in event triggers) was to use a variable
name for the return value that's typically used for throwaway objects in
dependency-setting calls ("referenced"). Fix by changing the variable
names to match extended practice (the return value is "myself" rather
than "referenced".)
The pg_upgrade test notices the problem (in an indirect way: the pg_dump
outputs are in different order), but only if you create the objects in a
specific way that wasn't being used in the existing tests. Add a stanza
to leave some objects around that shows the bug.
Catversion bump because preexisting databases might have bogus pg_depend
entries.
Discussion: https://postgr.es/m/20190318204235.GA30360@alvherre.pgsql
These commands allow the argument type list to be omitted if there is
just one object that matches by name. However, if that syntax was
used with DROP IF EXISTS and there was more than one match, you got
a "function ... does not exist, skipping" notice message rather than a
truthful complaint about the ambiguity. This was basically due to
poor factorization and a rats-nest of logic, so refactor the relevant
lookup code to make it cleaner.
Note that this amounts to narrowing the scope of which sorts of
error conditions IF EXISTS will bypass. Per discussion, we only
intend it to skip no-such-object cases, not multiple-possible-matches
cases.
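For illustration (function names are hypothetical and messages are
paraphrased):
CREATE FUNCTION ambig(int) RETURNS int LANGUAGE sql AS 'SELECT $1';
CREATE FUNCTION ambig(text) RETURNS text LANGUAGE sql AS 'SELECT $1';
DROP FUNCTION IF EXISTS ambig;
-- before: NOTICE that the function does not exist, skipping
-- now: an error complaining that the function name is not unique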
Per bug #15572 from Ash Marath. Although this definitely seems like
a bug, it's not clear that people would thank us for changing the
behavior in minor releases, so no back-patch.
David Rowley, reviewed by Julien Rouhaud and Pavel Stehule
Discussion: https://postgr.es/m/15572-ed1b9ed09503de8a@postgresql.org