Commit 8694cc96b did this randomly differently from other callers of
parse_filename_for_nontemp_relation(). Perhaps unsurprisingly,
the randomly different way is wrong; it fails to ensure the
extracted string is null-terminated. Per buildfarm member skink.
Discussion: https://postgr.es/m/14453.1522001792@sss.pgh.pa.us
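As a minimal illustration of the pattern being fixed (hypothetical names,
not the actual parse_filename_for_nontemp_relation() code), copying a
length-delimited substring into a fixed buffer requires adding the
terminator explicitly:

    #include <stdbool.h>
    #include <string.h>

    /*
     * Copy the leading digits of "filename" into "buf" of size "buflen",
     * ensuring the result is null-terminated.  Returns false if there is
     * no digit prefix or it would not fit.
     */
    static bool
    copy_digit_prefix(const char *filename, char *buf, size_t buflen)
    {
        size_t      len = strspn(filename, "0123456789");

        if (len == 0 || len >= buflen)
            return false;
        memcpy(buf, filename, len);
        buf[len] = '\0';        /* the step that is easy to forget */
        return true;
    }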
This adds a new option --wal-segsize (analogous to initdb) that changes
the WAL segment size in pg_control.
Author: Nathan Bossart <bossartn@amazon.com>
Coverity complained that this check is pointless, and it's right.
There is no case where we'd call ExecutorStart with a null plannedstmt,
and if we did, it'd have crashed before here. Thinko in commit cc415a56d.
If random() returns a result sufficiently close to zero, float8out
switches to scientific notation, breaking this test case's expectation
that the output should look like '0.xxxxxxxxx'. Casting to numeric
should fix that. Per buildfarm member pogona.
Discussion: https://postgr.es/m/20180324212502.wt4serghfidge2on@alap3.anarazel.de
We were running out of good single-letter options for some upcoming
pg_resetwal functionality, so add long options to create more
possibilities. Add to pg_controldata as well for symmetry.
based on patch by Bossart, Nathan <bossartn@amazon.com>
In the case that PostgreSQL uses stdbool.h but Perl doesn't, we need to
keep Perl from defining bool, to avoid compiler warnings about
redefinition.
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files". However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either. Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date. That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens. But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory. This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files. (It's not
clear whether that has ever actually happened, but it definitely could.)
To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule. Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.
It's been like this a long time, so back-patch to all supported branches.
In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.
Discussion: https://postgr.es/m/18556.1521668179@sss.pgh.pa.us
Revert the PL/Perl-specific change in
9a95a77d9d5d3003d2d67121f2731b6e5fc37336. We must not prevent Perl from
using stdbool.h when it has been built to do so, even if it uses an
incompatible size. Otherwise, we would be imposing our bool on Perl,
which will lead to crashes because of the size mismatch.
Instead, we undef bool after including the Perl headers, as we did
previously, but now only if we are not using stdbool.h ourselves.
Record that choice in c.h as USE_STDBOOL. This will also make it easier
to apply that coding pattern elsewhere if necessary.
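A minimal sketch of the resulting pattern, roughly as it would appear in
the PL/Perl headers (simplified; the surrounding Perl includes and the
exact configure-time logic in c.h are abbreviated here):

    #include <EXTERN.h>
    #include <perl.h>

    /*
     * If we are not using stdbool.h ourselves (c.h did not define
     * USE_STDBOOL), undo Perl's definition of bool so that Postgres'
     * own definition applies again.
     */
    #ifndef USE_STDBOOL
    #ifdef bool
    #undef bool
    #endif
    #endif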
Previously, FOR EACH ROW triggers were not allowed in partitioned
tables. Now we allow AFTER triggers on them, and on trigger creation we
cascade to create an identical trigger in each partition. We also clone
the triggers to each partition that is created or attached later.
This means that deferred unique keys are allowed on partitioned tables,
too.
Author: Álvaro Herrera
Reviewed-by: Peter Eisentraut, Simon Riggs, Amit Langote, Robert Haas,
Thomas Munro
Discussion: https://postgr.es/m/20171229225319.ajltgss2ojkfd3kp@alvherre.pgsql
The LLVM JIT provider uses clang to synchronize types between normal C
code and runtime generated code. Clang represents stdbool.h style
booleans in return values & parameters differently from booleans
stored in variables.
Thus the expression compilation code from 2a0faed9d needs to be
adapted to 9a95a77d9. Instead of hardcoding i8 as the type for
booleans (which already was wrong on some edge case platforms!), use
postgres' notion of a boolean as used for storage and for parameters.
Per buildfarm animal xenodermus.
Author: Andres Freund
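Using the LLVM C API, the distinction looks roughly like the sketch below
(variable names are illustrative stand-ins for the cached type references
the JIT provider keeps):

    #include <llvm-c/Core.h>

    /* bool as stored in memory (one byte) vs. bool as passed in
     * parameters and return values (a single bit in LLVM IR) */
    static LLVMTypeRef TypeStorageBool;
    static LLVMTypeRef TypeParamBool;

    static void
    init_bool_types(LLVMContextRef ctx)
    {
        TypeStorageBool = LLVMInt8TypeInContext(ctx);
        TypeParamBool = LLVMInt1TypeInContext(ctx);
    }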
Using the standard bool type provided by C allows some recent compilers
and debuggers to give better diagnostics. Also, some extension code and
third-party headers are increasingly pulling in stdbool.h, so it's
probably saner if everyone uses the same definition.
But PostgreSQL code is not prepared to handle bool of a size other than
1, so we keep our own old definition if we encounter a stdbool.h with a
bool of a different size. (Among current build farm members, this only
applies to old macOS versions on PowerPC.)
To check that the bool being used is of the right size, add a static
assertion about the size of GinTernaryValue vs. bool. This is currently the
only place that assumes that bool and char are of the same size.
Discussion: https://www.postgresql.org/message-id/flat/3a0fe7e1-5ed1-414b-9230-53bbc0ed1f49@2ndquadrant.com
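For illustration, the assertion is of this general form (using the C11
spelling here; the tree uses its own StaticAssert wrappers, and
GinTernaryValue is redeclared below only to keep the sketch
self-contained):

    #include <assert.h>
    #include <stdbool.h>

    typedef char GinTernaryValue;

    /*
     * GinTernaryValue arrays are treated interchangeably with bool
     * arrays, so the two types must have the same size.
     */
    static_assert(sizeof(GinTernaryValue) == sizeof(bool),
                  "sizeof(GinTernaryValue) must equal sizeof(bool)");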
In addition to the interpretation of expressions (which backs
evaluation of WHERE clauses, target list projection, aggregate
transition values, etc.), support compiling expressions to native code,
using the infrastructure added in earlier commits.
To avoid duplicating a lot of code, only emit code for cases that are
likely to be performance critical. For expression steps not deemed
performance critical, use the existing interpreter.
The generated code isn't great - some architectural changes are
required to address that. But this already yields a significant
speedup for some analytics queries, particularly with WHERE clauses
filtering a lot, or computing multiple aggregates.
Author: Andres Freund
Tested-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Disable JITing for VALUES() nodes.
VALUES() nodes are only ever executed once. This is primarily helpful
for debugging, when forcing JITing even for cheap queries.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
For any interesting JIT target, fields inside structs need to be
accessed. b96d550e contains infrastructure for syncing the definition
of types between postgres C code and runtime code generation with
LLVM. But that doesn't sync the number or names of fields inside
structs, just the types (including padding etc).
One option would be to hardcode the offset numbers in the JIT code,
but that'd be hard to keep in sync. Instead add macros indicating the
field offset to the fields that need to be accessed. Not pretty, but
manageable.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
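As a sketch, such an annotation looks like this (a hypothetical struct;
the real macros are attached to executor state structs), with the macro's
value being the field's index within the struct so generated code can
address the member via LLVM's struct GEP:

    typedef struct ExampleState
    {
        int         flags;
        void       *context;
    #define FIELDNO_EXAMPLESTATE_RESVALUE 2
        long        resvalue;       /* accessed from JIT-generated code */
    } ExampleState;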
Per the project style guide, details and hints should have leading
capitalization and end with a period. On the other hand, errcontext should
not be capitalized and should not end with a period. To support
well-formatted error contexts in dblink, extend dblink_res_error() to
take a format string plus arguments rather than a hardcoded string.
Daniel Gustafsson
Discussion: https://postgr.es/m/B3C002C8-21A0-4F53-A06E-8CAB29FCF295@yesql.se
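A rough sketch of the signature change being described (heavily
simplified; the real function also receives the connection name and the
PGresult, and reports through ereport()/errcontext()):

    #include <stdarg.h>
    #include <stdio.h>

    static void
    dblink_res_error_sketch(const char *fmt, ...)
    {
        char        context[256];
        va_list     ap;

        /* build the context string from a format plus arguments,
         * instead of a hardcoded string */
        va_start(ap, fmt);
        vsnprintf(context, sizeof(context), fmt, ap);
        va_end(ap);

        fprintf(stderr, "context: %s\n", context);
    }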
Without this patch, we can implement a UNION or UNION ALL as an
Append where Gather appears beneath one or more of the Append
branches, but this lets us put the Gather node on top, with
a partial path for each relation underneath.
There is considerably more work that could be done to improve
planning in this area, but that will probably need to wait
for a future release.
Patch by me, reviewed and tested by Ashutosh Bapat and Rajkumar
Raghuwanshi.
Discussion: http://postgr.es/m/CA+TgmoaLRAOqHmMZx=ESM3VDEPceg+-XXZsRXQ8GtFJO_zbMSw@mail.gmail.com
VACUUM thought that reltuples represents the total number of tuples in
the relation, while ANALYZE counted only live tuples. This can cause
"flapping" in the value when background vacuums and analyzes happen
separately. The planner's use of reltuples essentially assumes that
it's the count of live (visible) tuples, so let's standardize on having
it mean live tuples.
Another issue is that the definition of "live tuple" isn't totally clear;
what should be done with INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples?
ANALYZE's choices in this regard are made on the assumption that if the
originating transaction commits at all, it will happen after ANALYZE
finishes, so we should ignore the effects of the in-progress transaction
--- unless it is our own transaction, and then we should count it.
Let's propagate this definition into VACUUM, too.
Likewise propagate this definition into CREATE INDEX, and into
contrib/pgstattuple's pgstattuple_approx() function.
Tomas Vondra, reviewed by Haribabu Kommi, some corrections by me
Discussion: https://postgr.es/m/16db4468-edfa-830a-f921-39a50498e77e@2ndquadrant.com
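As a sketch, the counting rule being propagated amounts to this
(simplified; the real code works from HeapTupleSatisfiesVacuum results
and the inserting/deleting transaction IDs, and the enum below is a
stand-in):

    #include <stdbool.h>

    typedef enum
    {
        TUPLE_LIVE,
        TUPLE_DEAD,
        TUPLE_INSERT_IN_PROGRESS,
        TUPLE_DELETE_IN_PROGRESS
    } TupleVisibility;

    /* Should this tuple count as live for reltuples purposes? */
    static bool
    count_as_live(TupleVisibility vis, bool by_our_own_xact)
    {
        switch (vis)
        {
            case TUPLE_LIVE:
                return true;
            case TUPLE_DEAD:
                return false;
            case TUPLE_INSERT_IN_PROGRESS:
                /* ignore the in-progress insert unless it is ours */
                return by_our_own_xact;
            case TUPLE_DELETE_IN_PROGRESS:
                /* ignore the in-progress delete unless it is ours */
                return !by_our_own_xact;
        }
        return false;
    }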
This adds a simple cost-based, plan-time decision about whether JIT
should be performed. jit_above_cost and jit_optimize_above_cost are
compared with the total cost of a plan; if the cost exceeds them,
JIT compilation and optimization are performed, respectively.
For that, PlannedStmt and EState have a jitFlags (es_jit_flags) field
that stores information about which JIT operations should be performed.
EState now also has a new es_jit field, which can store a
JitContext. When there are no errors the context is released in
standard_ExecutorEnd().
It is likely that the default values for jit_[optimize_]above_cost
will need to be adapted further, but in my tests these values seem to
work reasonably well.
Author: Andres Freund, with feedback by Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
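Roughly, the plan-time decision boils down to a couple of cost
comparisons along these lines (a simplified sketch; the GUC defaults
shown are illustrative):

    #include <stdbool.h>

    #define PGJIT_NONE      0
    #define PGJIT_PERFORM   (1 << 0)
    #define PGJIT_OPT3      (1 << 1)

    static bool     jit_enabled = true;
    static double   jit_above_cost = 100000.0;
    static double   jit_optimize_above_cost = 500000.0;

    static int
    decide_jit_flags(double total_plan_cost)
    {
        int         flags = PGJIT_NONE;

        if (jit_enabled &&
            jit_above_cost >= 0 && total_plan_cost > jit_above_cost)
        {
            flags |= PGJIT_PERFORM;
            if (jit_optimize_above_cost >= 0 &&
                total_plan_cost > jit_optimize_above_cost)
                flags |= PGJIT_OPT3;
        }
        return flags;
    }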
This commit introduces the ability to actually generate code using
LLVM. In particular, this adds:
- Ability to emit code both in heavily optimized and largely
unoptimized fashion
- Batching facility to allow functions to be defined in small
increments, but optimized and emitted in executable form in larger
batches (for performance and memory efficiency)
- Type and function declaration synchronization between runtime
generated code and normal postgres code. This is critical to be able
to access struct fields etc.
- Developer oriented jit_dump_bitcode GUC, for inspecting / debugging
the generated code.
- Per-JitContext statistics of the number of functions and the time spent
generating code, optimizing, and emitting it. This will later be
employed for EXPLAIN support.
This commit doesn't yet contain any code actually generating
functions. That'll follow in later commits.
Documentation for GUCs added, and for JIT in general, will be added in
later commits.
Author: Andres Freund, with contributions by Pierre Ducroquet
Testing-By: Thomas Munro, Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Back-patch to all supported versions.
Tomas Vondra
Discussion: https://postgr.es/m/3b3d8eac-c709-0d25-088e-b98339a1b28a@2ndquadrant.com
If the partition keys of the input relation are part of the GROUP BY
clause, all the rows belonging to a given group come from a single
partition. This allows aggregation/grouping over a partitioned
relation to be broken down into aggregation/grouping on each
partition. This should be no worse, and often better, than the normal
approach.
If the GROUP BY clause does not contain all the partition keys, we can
still perform partial aggregation for each partition and then finalize
aggregation after appending the partial results. This is less certain
to be a win, but it's still useful.
Jeevan Chalke, Ashutosh Bapat, Robert Haas. The larger patch series
of which this patch is a part was also reviewed and tested by Antonin
Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin
Knizhnik, Pascal Legrand, and Rafia Sabih.
Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
conditional.c was moved in commit f67b113ac62777d18cd20d3c4d05be964301b936,
but adding it to the Windows build system was forgotten.
I don't have a Windows box, so this is a blind attempt.
Commit e51a04840a1c45db101686bef0b7025d5014c74b missed wrapping 64-bit
constants with UINT64CONST(); wrap them now.
Per buildfarm member dromedary and a gripe from Tom Lane
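For illustration, the change amounts to attaching the right suffix via
the macro (sketch; the macro definition shown is one common spelling, not
necessarily the one c.h uses on a given platform):

    #include <stdint.h>

    #ifndef UINT64CONST
    #define UINT64CONST(x) (x##ULL)
    #endif

    /* Bare constants wider than "long" can draw warnings on some
     * compilers; the macro attaches the appropriate suffix in one place. */
    static const uint64_t example_mask = UINT64CONST(0x8765432187654321);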
Previously, a value was included in the MCV list if its frequency was
25% larger than the estimated average frequency of all nonnull values
in the table. For uniform distributions, that can lead to values
being included in the MCV list and significantly overestimated on the
basis of relatively few (sometimes just 2) instances being seen in the
sample. For non-uniform distributions, it can lead to too few values
being included in the MCV list, since the overall average frequency
may be dominated by a small number of very common values, while the
remaining values may still have a large spread of frequencies, causing
both substantial overestimation and underestimation of the remaining
values. Furthermore, increasing the statistics target may have little
effect because the overall average frequency will remain relatively
unchanged.
Instead, populate the MCV list with the largest set of common values
that are statistically significantly more common than the average
frequency of the remaining values. This takes into account the
variance of the sample counts, which depends on the counts themselves
and on the proportion of the table that was sampled. As a result, it
constrains the relative standard error of estimates based on the
frequencies of values in the list, reducing the chances of too many
values being included. At the same time, it allows more values to be
included, since the MCVs need only be more common than the remaining
non-MCVs, rather than the overall average. Thus it tends to produce
fewer MCVs than the previous code for uniform distributions, and more
for non-uniform distributions, reducing estimation errors in both
cases. In addition, the algorithm responds better to increasing the
statistics target, allowing more values to be included in the MCV list
when more of the table is sampled.
Jeff Janes, substantially modified by me. Reviewed by John Naylor and
Tomas Vondra.
Discussion: https://postgr.es/m/CAMkU=1yvdGvW9TmiLAhz2erFnvnPFYHbOZuO+a=4DVkzpuQ2tw@mail.gmail.com
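As a very rough sketch of the kind of test involved (heavily simplified;
the real code scans candidate list lengths and works from the estimated
frequency a value would have if left out of the list):

    #include <math.h>
    #include <stdbool.h>

    /*
     * Is a value seen "count" times in a sample of "samplerows" rows,
     * drawn from a table of "totalrows" rows, significantly more common
     * than an average remaining value with estimated frequency "avgfreq"?
     * Uses a two-standard-error cutoff on the expected sample count,
     * with the hypergeometric variance of that count.
     */
    static bool
    mcv_is_significant(double count, double samplerows, double totalrows,
                       double avgfreq)
    {
        double      N = totalrows;
        double      n = samplerows;
        double      K = N * avgfreq;    /* occurrences of an average value */
        double      variance;

        if (N <= 1.0)
            return false;
        variance = n * K * (N - K) * (N - n) / (N * N * (N - 1.0));
        return count > n * avgfreq + 2.0 * sqrt(variance);
    }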
This commit introduces:
1) JIT provider abstraction, which allows JIT functionality to be
implemented in separate shared libraries. That's desirable because
it allows installing JIT support as a separate package, and because
it allows experimentation with different forms of JITing.
2) JITContexts, which can be used, via functions introduced in follow-up
commits, to emit JITed functions and have them cleaned up on error.
3) The outline of a LLVM JIT provider, which will be fleshed out in
subsequent commits.
Documentation for GUCs added, and for JIT in general, will be added in
later commits.
Author: Andres Freund, with architectural input from Jeff Davis
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
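A sketch of what the provider abstraction looks like (hypothetical,
simplified types; the real callbacks are filled in by the provider's
initialization function when its shared library is loaded):

    #include <stdbool.h>

    typedef struct JitProviderCallbacksSketch
    {
        /* clean up after an error interrupted JIT compilation */
        void        (*reset_after_error) (void);
        /* release all resources associated with one JIT context */
        void        (*release_context) (void *context);
        /* try to JIT-compile an expression; returning false means
         * "fall back to the interpreter" */
        bool        (*compile_expr) (void *exprstate);
    } JitProviderCallbacksSketch;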
Pending some solution for the problems noted in commit 742869946,
disallow dynamic creation of GUC_LIST_QUOTE variables.
If there are any extensions out there using this feature, they'd not
be happy for us to start enforcing this rule in minor releases, so
this is a HEAD-only change. The previous commit didn't make things
any worse than they already were for such cases.
Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting. The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload. For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.
We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment. That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.
More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values. This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.
In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names. At least we can
fix it to have just one list not two, and update the list to match
current reality.
A remaining problem with this is that it only works for built-in
GUC variables. pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded. There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.
This has been busted for a long time, so back-patch to all supported
branches.
Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule
Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
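As a sketch, the printing logic described above reduces to something
like this (simplified; the naive quoting below stands in for proper
SQL-literal quoting, and the flag bit value is illustrative):

    #include <stdio.h>

    #define GUC_LIST_QUOTE 0x0002

    static void
    print_set_clause(const char *name, const char *value, int guc_flags)
    {
        if (guc_flags & GUC_LIST_QUOTE)
        {
            /* flatten_set_variable_args() already quoted the list
             * elements, so print the value as-is; quoting it again
             * would break it on reload */
            printf("SET %s TO %s;\n", name, value);
        }
        else
        {
            /* any other GUC: quote the whole value as one SQL literal
             * (real code must also escape embedded quotes) */
            printf("SET %s TO '%s';\n", name, value);
        }
    }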
Currently, if operator_predicate_proof() is given an operator clause like
"something op NULL", it just throws up its hands and reports it can't prove
anything. But we can often do better than that, if the operator is strict,
because then we know that the clause returns NULL overall. Depending on
whether we're trying to prove or refute something, and whether we need
weak or strong semantics for NULL, this may be enough to prove the
implication, especially when we rely on the standard rule that "false
implies anything". In particular, this lets us do something useful with
questions like "does X IN (1,3,5,NULL) imply X <= 5?" The null entry
in the IN list can effectively be ignored for this purpose, but the
proof rules were not previously smart enough to deduce that.
This patch is by me, but it owes something to previous work by
Amit Langote to try to solve problems of the form mentioned.
Thanks also to Emre Hasegeli and Ashutosh Bapat for review.
Discussion: https://postgr.es/m/3bad48fc-f257-c445-feeb-8a2b2fb622ba@lab.ntt.co.jp
I noticed while fooling with John Naylor's bootstrap-data patch that we had
one high-numbered manually assigned OID, 8888, which evidently came from a
submission that the committer didn't bother to bring into line with usual
OID allocation practices before committing. That's a bad idea, because it
creates a hazard for other patches that may be temporarily using high OID
numbers. Change it to something more in line with what we usually do.
This evidently dates to commit abb173392. It's too late to change it
in released branches, but we can fix it in HEAD.
If the control file is corrupted and specifies the WAL segment size
to be 0 bytes, calculating the latest checkpoint's REDO WAL file
will fail with a division-by-zero error. Show it as "???" instead.
Also reword the warning message a bit and send it to stdout, like the
other pre-existing warning messages.
Add some tests for dealing with a corrupted pg_control file.
Author: Nathan Bossart <bossartn@amazon.com>, tests by me
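A sketch of the guard (simplified; the real code builds the full WAL
file name, e.g. via XLogFileName, and also prints a warning about the
invalid segment size):

    #include <stdint.h>
    #include <stdio.h>

    static void
    print_redo_wal_file(uint64_t redo_lsn, uint32_t wal_segment_size)
    {
        if (wal_segment_size == 0)
        {
            /* corrupt pg_control: avoid dividing by zero below */
            printf("Latest checkpoint's REDO WAL file:    ???\n");
            return;
        }
        printf("Latest checkpoint's REDO WAL file:    %08X\n",
               (unsigned int) (redo_lsn / wal_segment_size));
    }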
My commit 4dba331cb3dc that moved around CommandCounterIncrement calls
in partitioning DDL code unearthed a problem with the relcache handling
for the 'default' partition: the construction of a correct relcache
entry for the partitioned table was at the mercy of lack of CCI calls in
non-trivial amounts of code. This was prone to creating problems later
on, as the code develops. This was visible as a test failure in a
compile with RELCACHE_FORCE_RELEASE (buildfarm member prion).
The problem is that after the mentioned commit it was possible to create
a relcache entry that had incomplete information regarding the default
partition because I introduced a CCI between adding the catalog entries
for the default partition (StorePartitionBound) and the update of
pg_partitioned_table entry for its parent partitioned table
(update_default_partition_oid). It seems the best fix is to move the
latter so that it occurs inside the former; the purposeful lack of
intervening CCI should be more obvious, and harder to break.
I also remove a check in RelationBuildPartitionDesc that returns NULL if
the key is not set. I couldn't find any place that needs this hack
anymore; probably it was required because of bugs that have since been
fixed.
Fix a few typos I noticed while reviewing the code involved.
Discussion: https://postgr.es/m/20180320182659.nyzn3vqtjbbtfgwq@alvherre.pgsql
Logical decoding should not publish anything about tables created as
part of a heap rewrite during DDL. Those tables don't exist externally,
so consumers of logical decoding cannot do anything sensible with that
information. In ab28feae2bd3d4629bd73ae3548e671c57d785f0, we worked
around this for built-in logical replication, but that was a hack.
This is a more proper fix: We mark such transient heaps using the new
field pg_class.relrewrite, linking to the original relation OID. By
default, we ignore them in logical decoding before they get to the
output plugin. Optionally, a plugin can register its interest in
getting such changes, if they handle DDL specially, in which case the
new field will help them get information about the actual table.
Reviewed-by: Craig Ringer <craig@2ndquadrant.com>
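Roughly, the decoding-side filter then looks like this sketch (the
relrewrite field is as described above; the plugin-facing flag name here
is an assumption, not necessarily the real option name):

    #include <stdbool.h>

    typedef unsigned int Oid;
    #define InvalidOid ((Oid) 0)

    /*
     * Skip changes to transient heaps created by table rewrites unless
     * the output plugin explicitly asked to receive them.
     */
    static bool
    skip_rewrite_heap_change(Oid relrewrite, bool plugin_wants_rewrites)
    {
        return relrewrite != InvalidOid && !plugin_wants_rewrites;
    }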