mirror of https://github.com/postgres/postgres.git synced 2025-05-05 09:19:17 +03:00

32465 Commits

Author SHA1 Message Date
Tom Lane
f9263006d8 Support CONSTANT/NOT NULL/initial value for plpgsql composite variables.
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.
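
As a hedged sketch (the type name "pair" is hypothetical), declarations of
this form now work for composite-typed variables:

    CREATE TYPE pair AS (x int, y int);

    DO $$
    DECLARE
        origin CONSTANT pair := ROW(0, 0)::pair;   -- initial value plus CONSTANT
        p      pair NOT NULL := ROW(1, 2)::pair;   -- NOT NULL with a required default
    BEGIN
        RAISE NOTICE 'origin = %, p = %', origin, p;
    END
    $$;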

This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.

I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.

Tom Lane, reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
2018-02-13 22:15:08 -05:00
Tom Lane
fd333bc763 Speed up plpgsql trigger startup by introducing "promises".
Over the years we've accreted quite a few special variables that are
predefined in plpgsql trigger functions.  The cost of initializing these
variables to their defined values turns out to be a significant part of
the runtime of simple triggers; but, undoubtedly, most real-world triggers
never examine the values of most of these variables.

To improve matters, invent the notion of a variable that has a "promise"
attached to it, specifying which of the predetermined values should be
assigned to the variable if anything ever reads it.  This eliminates all
the unneeded startup overhead, in return for a small penalty on accesses
to these variables.
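
For illustration (the table and column names here are hypothetical), a
trigger body like the following reads only NEW, so with promises the other
predefined variables (TG_NAME, TG_OP, TG_TABLE_NAME, and so on) are never
actually computed:

    CREATE FUNCTION stamp_row() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        NEW.updated_at := now();    -- touches NEW only
        RETURN NEW;
    END
    $$;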

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
2018-02-13 19:20:37 -05:00
Tom Lane
40301c1c8b Speed up plpgsql function startup by doing fewer pallocs.
Previously, copy_plpgsql_datum did a separate palloc for each variable
needing instance-local storage.  In simple benchmarks this made for a
noticeable fraction of the total runtime.  Improve it by precalculating
the space needed for all of a function's variables and doing just one
palloc for all of them.

In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum"
types, since in fact it has nothing in common with the others, and there
is no place that needs to discriminate on the basis of dtype between an
expression and any type of datum.  And add comments clarifying which
datum struct fields are generic and which aren't.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
2018-02-13 19:10:43 -05:00
Tom Lane
4b93f57999 Make plpgsql use its DTYPE_REC code paths for composite-type variables.
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible.  In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session.  And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple.  A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type.  DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90.  A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests.  In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.

In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type.  This allows very cheap detection of the need to
refresh tupdesc-dependent data.  This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.

In passing, allow plpgsql functions to accept as well as return type
"record".  There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
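
As a small sketch of that last point (function and column names are
hypothetical), a plpgsql function can now take a "record" parameter and be
handed any row value:

    CREATE FUNCTION describe(r record) RETURNS text LANGUAGE plpgsql AS $$
    BEGIN
        RETURN r::text;
    END
    $$;

    SELECT describe(v) FROM (VALUES (1, 'one'), (2, 'two')) AS v(id, label);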

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
2018-02-13 18:52:21 -05:00
Peter Eisentraut
7a32ac8a66 Add procedure support to pg_get_functiondef
This also makes procedures work in psql's \ef and \sf commands.
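
A minimal sketch (the procedure name is made up); in psql, \sf ping now
displays the procedure as well:

    CREATE PROCEDURE ping() LANGUAGE sql AS $$ SELECT 1 $$;
    SELECT pg_get_functiondef('ping'::regproc);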

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-02-13 15:13:44 -05:00
Peter Eisentraut
7cd56f218d Add tests for pg_get_functiondef 2018-02-13 15:13:44 -05:00
Peter Eisentraut
a7b8f0661d Fix typo 2018-02-13 15:13:44 -05:00
Peter Eisentraut
b4e2ada347 In LDAP test, restart after pg_hba.conf changes
Instead of issuing a reload after pg_hba.conf changes between test
cases, run a full restart.  With a reload, an error in the new
pg_hba.conf is ignored and the tests will continue to run with the old
settings, invalidating the subsequent test cases.  With a restart, a
faulty pg_hba.conf will lead to the test being aborted, which is what
we'd rather want.
2018-02-13 09:12:45 -05:00
Peter Eisentraut
ebdb42a0d6 Fix typo
Author: Masahiko Sawada <sawada.mshk@gmail.com>
2018-02-12 22:39:52 -05:00
Alvaro Herrera
8237f27b50 get_relid_attribute_name is dead, long live get_attname
The modern way is to use a missing_ok argument instead of two separate
almost-identical routines, so do that.

Author: Michaël Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20180201063212.GE6398@paquier.xyz
2018-02-12 19:33:15 -03:00
Robert Haas
88ef48c1cc Fix parallel index builds for dynamic_shared_memory_type=none.
The previous code failed to realize that this setting effectively
disables parallelism, and would crash if it decided to attempt
parallelism anyway.  Instead, treat it as a disabling condition.

Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael
Paquier and Peter Geoghegan.

Discussion: http://postgr.es/m/20180209.170635.256350357.horiguchi.kyotaro@lab.ntt.co.jp
2018-02-12 12:55:12 -05:00
Bruce Momjian
91389228a1 psql: give ^D hint for \q in place where \q is ignored
Also add comment on why exit/quit are not documented.

Discussion: https://postgr.es/m/20180202053928.GA13472@momjian.us
2018-02-12 01:27:06 -05:00
Tom Lane
5c9f2564fa Fix assorted errors in pg_dump's handling of extended statistics objects.
pg_dump supposed that a stats object necessarily shares the same schema
as its underlying table, and that it doesn't have a separate owner.
These things may have been true during early development of the feature,
but they are not true as of v10 release.

Failure to track the object's schema separately turns out to have only
limited consequences, because pg_get_statisticsobjdef() always schema-
qualifies the target object name in the generated CREATE STATISTICS command
(a decision out of step with the rest of ruleutils.c, but I digress).
Therefore the restored object would be in the right schema, so that the
only problem is that the TOC entry would be mislabeled as to schema.  That
could lead to wrong decisions for schema-selective restores, for example.

The ownership issue is a bit more serious: not only was the TOC entry
potentially mislabeled as to owner, but pg_dump didn't bother to issue an
ALTER OWNER command at all, so that after restore the stats object would
continue to be owned by the restoring superuser.
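
For concreteness, a hedged sketch of the sort of setup that was mishandled
(the schema and role names are invented): the stats object lives in a
different schema than its table and has its own owner, both of which
pg_dump must now track explicitly.

    CREATE TABLE public.t (a int, b int);
    CREATE STATISTICS stats.t_ab (dependencies) ON a, b FROM public.t;
    ALTER STATISTICS stats.t_ab OWNER TO analyst;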

A final point is that decisions as to whether to dump a stats object or
not were driven by whether the underlying table was dumped or not.  While
that's not wrong on its face, it won't scale nicely to the planned future
extension to cross-table statistics.  Moreover, that design decision comes
out of the view of stats objects as being auxiliary to a particular table,
like a rule or trigger, which is exactly where the above problems came
from.  Since we're now treating stats objects more like independent objects
in their own right, they ought to behave like standalone objects for this
purpose too.  So change to using the generic selectDumpableObject() logic
for them (which presently amounts to "dump if containing schema is to be
dumped").

Along the way to fixing this, restructure so that getExtendedStatistics
collects the identity info (only) for all extended stats objects in one
query, and then for each object actually being dumped, we retrieve the
definition in dumpStatisticsExt.  This is necessary to ensure that
schema-qualification in the generated CREATE STATISTICS command happens
with respect to the search path that pg_dump will now be using at restore
time (ie, the schema the stats object is in, not that of the underlying
table).  It's probably also significantly faster in the typical scenario
where only a minority of tables have extended stats.

Back-patch to v10 where extended stats were introduced.

Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us
2018-02-11 13:24:15 -05:00
Tom Lane
d02d4a6d4f Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure.  This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.

In addition, using a separate EState for each argument was just silly.

So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.

While at it, improve the function's commentary a bit.

Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 13:37:12 -05:00
Tom Lane
65b1d76785 Fix oversight in CALL argument handling, and do some minor cleanup.
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.
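
So a call like this hedged example (procedure p and table t are
hypothetical) is now rejected cleanly at parse analysis time rather than
misbehaving:

    CALL p((SELECT max(x) FROM t));   -- sub-SELECT in a CALL argument: error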

In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with.  And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition.  Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).

Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.

Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
2018-02-10 13:05:14 -05:00
Robert Haas
935dee9ad5 Mark assorted GUC variables as PGDLLIMPORT.
This makes life easier for extension authors.

Metin Doslu

Discussion: http://postgr.es/m/CAL1dPcfa45o1dC-c4t-48v0OZE6oy4ChJhObrtkK8mzNfXqDTA@mail.gmail.com
2018-02-09 15:54:45 -05:00
Robert Haas
be42015fcc Clear stmt_timeout_active if we disable_all_timeouts.
Otherwise, we can end up with the flag set when the timeout is
actually disabled, leading to misbehavior.  Commit
f8e5f156b30efee5d0038b03e38735773abcb7ed introduced this bug.

Reported by Peter Eisentraut.  Analysis and fix by Thomas Munro,
tweaked by me.

Discussion: http://postgr.es/m/6a909374-2602-7136-8c70-397330a418f3@2ndquadrant.com
2018-02-09 15:48:18 -05:00
Robert Haas
b78d0160da Fix incorrect method name in comment.
Atsushi Torikoshi

Discussion: http://postgr.es/m/1b056262-4bc0-a982-c899-bb67a0a7fd52@lab.ntt.co.jp
2018-02-08 14:35:54 -05:00
Robert Haas
e44dd84325 Avoid listing the same ResultRelInfo in more than one EState list.
Doing so causes EXPLAIN ANALYZE to show trigger statistics multiple
times.  Commit 2f178441044be430f6b4d626e4dae68a9a6f6cec seems to
be to blame for this.

Amit Langote, reviewed by Amit Khandekar, Etsuro Fujita, and me.
2018-02-08 14:29:05 -05:00
Robert Haas
88fdc70060 Fix possible infinite loop with Parallel Append.
When the previously-chosen plan was non-partial, all pa_finished
flags for partial plans are now set, and pa_next_plan has not yet
been set to INVALID_SUBPLAN_INDEX, the previous code could go into
an infinite loop.

Report by Rajkumar Raghuwanshi.  Patch by Amit Khandekar and me.
Review by Kyotaro Horiguchi.

Discussion: http://postgr.es/m/CAJ3gD9cf43z78qY=U=H0HvOEN341qfRO-vLpnKPSviHeWgJQ5w@mail.gmail.com
2018-02-08 12:31:48 -05:00
Peter Eisentraut
b3a101eff0 Refine SSL tests test name reporting
Instead of using the psql/libpq connection string as the displayed test
name and relying on "notes" and source code comments to explain the
tests, give the tests self-explanatory names, like we do elsewhere.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-02-08 09:57:10 -05:00
Peter Eisentraut
7c44b75a2a Make new triggers tests more robust
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
2018-02-07 14:57:19 -05:00
Peter Eisentraut
32ff269117 Add more information_schema columns
- table_constraints.enforced
- triggers.action_order
- triggers.action_reference_old_table
- triggers.action_reference_new_table
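
For example, the new trigger columns can be inspected with a query like
this hedged sketch:

    SELECT trigger_name, action_order,
           action_reference_old_table, action_reference_new_table
    FROM information_schema.triggers;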

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-07 10:08:02 -05:00
Robert Haas
b98a7cd58f Update out-of-date comment in StartupXLOG.
Commit 4b0d28de06b28e57c540fca458e4853854fbeaf8 should have updated
this comment, but did not.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=0iJ8aqQcF9ij2KerAkuHF3SwrVTzjMdm1H4w++nfBf9A@mail.gmail.com
2018-02-07 08:48:04 -05:00
Robert Haas
4815dfa10f Remove prototype for fmgr() function, which no longer exists.
Commit 5ded4bd21403e143dd3eb66b92d52732fdac1945 removed the code
for this function, but neglected to remove the prototype and
associated comments.

Dagfinn Ilmari Mannsåker

Discussion: http://postgr.es/m/d8j4lmuxjzk.fsf@dalvik.ping.uio.no
2018-02-07 08:42:36 -05:00
Tom Lane
0a459cec96 Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions.  We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion.  That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value.  Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that.  (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types.  Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011.  As far as I can see, we are now fully
up to spec on window framing options.
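
A hedged illustration of the new options (table and column names are made
up; readings(ts timestamptz, val numeric) is assumed):

    SELECT ts, val,
           avg(val) OVER (ORDER BY ts
                          RANGE BETWEEN INTERVAL '1 hour' PRECEDING
                                    AND CURRENT ROW)           AS hourly_avg,
           sum(val) OVER (ORDER BY val
                          GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING
                          EXCLUDE CURRENT ROW)                 AS peer_sum
    FROM readings;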

Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
2018-02-07 00:06:56 -05:00
Robert Haas
2320945731 Fix incorrect grammar.
Etsuro Fujita

Discussion: http://postgr.es/m/5A7981EA.8020201@lab.ntt.co.jp
2018-02-06 15:50:13 -05:00
Robert Haas
9fafa413ac Avoid valgrind complaint about write() of uninitialized bytes.
LogicalTapeFreeze() may write out its first block when it is dirty but
not full, and then immediately read the first block back in from its
BufFile as a BLCKSZ-width block.  This can only occur in rare cases
where very few tuples were written out, which is currently only
possible with parallel external tuplesorts.  To avoid valgrind
complaints, tell it to treat the tail of logtape.c's buffer as
defined.

Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem
but did not create it.  LogicalTapeFreeze() has always tended to write
out some amount of garbage bytes, but previously never wrote less than
one block of data in total, so the problem was masked.

Per buildfarm members lousyjack and skink.

Peter Geoghegan, based on a suggestion from Tom Lane and me.  Some
comment revisions by me.
2018-02-06 14:24:57 -05:00
Tom Lane
3785f7eee3 Doc: move info for btree opclass implementors into main documentation.
Up to now, useful info for writing a new btree opclass has been buried
in the backend's nbtree/README file.  Let's move it into the SGML docs,
in preparation for extending it with info about "in_range" functions
in the upcoming window RANGE patch.

To do this, I chose to create a new chapter for btree indexes in Part VII
(Internals), parallel to the chapters that exist for the newer index AMs.
This is a pretty short chapter as-is.  At some point somebody might care
to flesh it out with more detail about btree internals, but that is
beyond the scope of my ambition for today.

Discussion: https://postgr.es/m/23141.1517874668@sss.pgh.pa.us
2018-02-06 13:52:27 -05:00
Robert Haas
f069c91a57 Fix possible crash in partition-wise join.
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.

Ashutosh Bapat, with some wordsmithing by me.

Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com
2018-02-05 17:31:57 -05:00
Tom Lane
a926eb84e0 Ensure that all temp files made during pg_upgrade are non-world-readable.
pg_upgrade has always attempted to ensure that the transient dump files
it creates are inaccessible except to the owner.  However, refactoring
in commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
output; since then, that file was protected according to the process's
default umask.  Since that file may contain role passwords (hopefully
encrypted, but passwords nonetheless), this is a particularly unfortunate
oversight.  Prudent users of pg_upgrade on multiuser systems would
probably run it under a umask tight enough that the issue is moot, but
perhaps some users are depending only on pg_upgrade's umask changes to
protect their data.

To fix this in a future-proof way, let's just tighten the umask at
process start.  There are no files pg_upgrade needs to write at a
weaker security level; and if there were, transiently relaxing the
umask around where they're created would be a safer approach.

Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
Back-patch to all supported branches.

Security: CVE-2018-1053
2018-02-05 10:58:27 -05:00
Tom Lane
3492a0af0b Fix RelationBuildPartitionKey's processing of partition key expressions.
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.

Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.

Security: CVE-2018-1052
2018-02-05 10:37:30 -05:00
Tom Lane
05d0f13f07 Skip setting up shared instrumentation for Hash node if not needed.
We don't need to set up the shared space for hash join instrumentation data
if instrumentation hasn't been requested.  Let's follow the example of the
similar Sort node code and save a few cycles by skipping that when we can.

This reverts commit d59ff4ab3 and instead allows us to use the safer choice
of passing noError = false to shm_toc_lookup in ExecHashInitializeWorker,
since if we reach that call there should be a TOC entry to be found.

Thomas Munro

Discussion: https://postgr.es/m/E1ehkoZ-0005uW-43%40gemulon.postgresql.org
2018-02-04 22:14:07 -05:00
Tom Lane
d59ff4ab31 Fix another instance of unsafe coding for shm_toc_lookup failure.
One or another author of commit 5bcf389ec seems to have thought that
computing an offset from a NULL pointer would yield another NULL pointer.
There may possibly be architectures where that works, but common machines
don't work like that.  Per a quick code review of places calling
shm_toc_lookup and not using noError = false.
2018-02-02 18:32:05 -05:00
Tom Lane
957ff087c8 Be more wary about shm_toc_lookup failure.
Commit 445dbd82a basically missed the point of commit d46633506,
which was that we shouldn't allow shm_toc_lookup() failure to lead
to a core dump or assertion crash, because the odds of such a
failure should never be considered negligible.  It's correct that
we can't expect the PARALLEL_KEY_ERROR_QUEUE TOC entry to be there
if we have no workers.  But if we have no workers, we're not going
to do anything in this function with the lookup result anyway,
so let's just skip it.  That lets the code use the easy-to-prove-safe
noError=false case, rather than anything requiring effort to review.

Back-patch to v10, like the previous commit.

Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us
2018-02-02 18:26:07 -05:00
Peter Eisentraut
533c5d8bdd Fix application of identity values in some cases
Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN.  To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.

For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence.  The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet.  So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
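
A hedged sketch of the ALTER TABLE case (table and column names invented):

    CREATE TABLE items (name text);
    INSERT INTO items VALUES ('widget'), ('gadget');
    ALTER TABLE items ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;
    -- the two pre-existing rows now get ids from the new identity sequence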

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-02 14:39:10 -05:00
Robert Haas
9da0cc3528 Support parallel btree index builds.
To make this work, tuplesort.c and logtape.c must also support
parallelism, so this patch adds that infrastructure and then applies
it to the particular case of parallel btree index builds.  Testing
to date shows that this can often be 2-3x faster than a serial
index build.

The model for deciding how many workers to use is fairly primitive
at present, but it's better than not having the feature.  We can
refine it as we get more experience.
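
A hedged usage sketch (the table name is invented, and
max_parallel_maintenance_workers is assumed to be the setting that caps the
worker count):

    SET max_parallel_maintenance_workers = 4;
    CREATE INDEX ON big_table (col);   -- may now be built by parallel workers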

Peter Geoghegan with some help from Rushabh Lathia.  While Heikki
Linnakangas is not an author of this patch, he wrote other patches
without which this feature would not have been possible, and
therefore the release notes should possibly credit him as an author
of this feature.  Reviewed by Claudio Freire, Heikki Linnakangas,
Thomas Munro, Tels, Amit Kapila, me.

Discussion: http://postgr.es/m/CAM3SWZQKM=Pzc=CAHzRixKjp2eO5Q0Jg1SoFQqeXFQ647JiwqQ@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wz=AxWqDoVvGU7dq856S4r6sJAj6DBn7VMtigkB33N5eyg@mail.gmail.com
2018-02-02 13:32:44 -05:00
Robert Haas
9aef173163 Refactor code for partition bound searching
Remove partition_bound_cmp() and partition_bound_bsearch(), whose
void * argument could be, depending on the situation, of any of
three different types: PartitionBoundSpec *, PartitionRangeBound *,
Datum *.

Instead, introduce separate bound-searching functions for each
situation: partition_list_bsearch, partition_range_bsearch,
partition_range_datum_bsearch, and partition_hash_bsearch.  This
requires duplicating the code for binary search, but it makes the
code much more type safe, involves fewer branches at runtime, and
at least in my opinion, is much easier to understand.

Along the way, add an option to partition_range_datum_bsearch
allowing the number of keys to be specified, so that we can search
for partitions based on a prefix of the full list of partition
keys.  This is important for pending work to improve partition
pruning.

Amit Langote, per a suggestion from me.

Discussion: http://postgr.es/m/CA+TgmoaVLDLc8=YESRwD32gPhodU_ELmXyKs77gveiYp+JE4vQ@mail.gmail.com
2018-02-02 09:32:44 -05:00
Robert Haas
9222c0d9ed Add new function WaitForParallelWorkersToAttach.
Once this function has been called, we know that all workers have
started and attached to their error queues -- so if any of them
subsequently exit uncleanly, we'll be sure to throw an ERROR promptly.
Otherwise, users of the ParallelContext machinery must be careful not
to wait forever for a worker that has failed to start.  Parallel query
manages to work without needing this for reasons explained in new
comments added by this patch, but it's a useful primitive for other
parallel operations, such as the pending patch to make creating a
btree index run in parallel.

Amit Kapila, revised by me.  Additional review by Peter Geoghegan.

Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com
2018-02-02 09:00:59 -05:00
Robert Haas
ad25a6b1f2 Fix possible failure to mark hash metapage dirty.
Report and suggested fix by Lixian Zou.  Amit Kapila put it
in the form of a patch and reviewed.

Discussion: http://postgr.es/m/151739848647.1239.12528851873396651946@wrigleys.postgresql.org
2018-02-01 15:23:45 -05:00
Bruce Momjian
df9f599bc6 psql: Add quit/help behavior/hint, for other tool portability
Issuing 'quit'/'exit' in an empty psql buffer exits psql.  Issuing
'quit'/'exit' alone on a line, with no leading whitespace, in a non-empty
psql buffer issues a hint on how to exit.

Also add similar 'help' hints for 'help' in a non-empty psql buffer.

Reported-by: Everaldo Canuto

Discussion: https://postgr.es/m/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com

Author: original author Robert Haas, modified by me
2018-02-01 08:30:43 -05:00
Robert Haas
22757960bb Fix typo: colums -> columns.
Along the way, also fix code indentation.

Alexander Lakhin, reviewed by Michael Paquier

Discussion: http://postgr.es/m/45c44aa7-7cfa-7f3b-83fd-d8300677fdda@gmail.com
2018-01-31 16:45:37 -05:00
Robert Haas
3ccdc6f9a5 Fix list partition constraints for partition keys of array type.
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type.  Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
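
For instance (a sketch; table and values are invented), with an array-typed
partition key the implied constraint now takes the OR form:

    CREATE TABLE events (tags text[]) PARTITION BY LIST (tags);
    CREATE TABLE events_a_b PARTITION OF events FOR VALUES IN ('{a}', '{b}');
    -- implied partition constraint is now   tags = '{a}' OR tags = '{b}'
    -- rather than the invalid               tags = ANY(ARRAY['{a}', '{b}'])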

As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.

Amit Langote, reviewed by Etsuro Fujita

Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
2018-01-31 15:43:11 -05:00
Peter Eisentraut
f75a959155 Refactor client-side SSL certificate checking code
Separate the parts specific to the SSL library from the general logic.

The previous code structure was

open_client_SSL()
calls verify_peer_name_matches_certificate()
calls verify_peer_name_matches_certificate_name()
calls wildcard_certificate_match()

and was completely in fe-secure-openssl.c.  The new structure is

open_client_SSL() [openssl]
calls pq_verify_peer_name_matches_certificate() [generic]
calls pgtls_verify_peer_name_matches_certificate_guts() [openssl]
calls openssl_verify_peer_name_matches_certificate_name() [openssl]
calls pq_verify_peer_name_matches_certificate_name() [generic]
calls wildcard_certificate_match() [generic]

Move the generic functions into a new file fe-secure-common.c, so the
calls generally go fe-connect.c -> fe-secure.c -> fe-secure-${impl}.c ->
fe-secure-common.c, although there is a bit of back-and-forth between
the last two.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-30 22:56:24 -05:00
Peter Eisentraut
38d485fdaa Fix up references to scram-sha-256
pg_hba_file_rules erroneously reported this as scram-sha256.  Fix that.

To avoid future errors and confusion, also adjust documentation links
and internal symbols to have a separator between "sha" and "256".

Reported-by: Christophe Courtois <christophe.courtois@dalibo.com>
Author: Michael Paquier <michael.paquier@gmail.com>
2018-01-30 16:50:30 -05:00
Peter Eisentraut
a044378ce2 Add some noreturn attributes to help static analyzers 2018-01-29 20:44:35 -05:00
Peter Eisentraut
07e524d3e9 Silence complaint about dead assignment
The preferred place for "placate compiler" assignments is after
elog(ERROR), not before it.  Otherwise, scan-build complains about a
dead assignment.
2018-01-29 20:43:43 -05:00
Andres Freund
c12693d8f3 Introduce ExecQualAndReset() helper.
It's a common task to evaluate a qual and reset the corresponding
expression context. Currently that requires storing the result of the
qual eval, resetting the context, and then reacting on the result. As
that's awkward several places only reset the context next time through
a node. That's not great, so introduce a helper that evaluates and
resets.

It's a bit ugly that it currently uses MemoryContextReset() instead of
ResetExprContext(), but that seems easier than reordering all of
executor.h.

Author: Andres Freund
Discussion: https://postgr.es/m/20180109222544.f7loxrunqh3xjl5f@alap3.anarazel.de
2018-01-29 12:19:12 -08:00
Tom Lane
97d4445a03 Save a few bytes by removing useless last argument to SearchCatCacheList.
There's never any value in giving a fully specified cache key to
SearchCatCacheList: you might as well call SearchCatCache instead,
since there could be only one match.  So the maximum useful number of
key arguments is one less than the supported number of key columns.
We might as well remove the useless extra argument and save some few
bytes per call site, as well as a cycle or so per call.

I believe the reason it was coded like this is that originally, callers
had to write out all the dummy arguments in each call, and so it seemed
less confusing if SearchCatCache and SearchCatCacheList took the same
number of key arguments.  But since commit e26c539e9, callers only write
their live arguments explicitly, making that a non-factor; and there's
surely been enough time for third-party modules to adapt to that coding
style.  So this is only an ABI break not an API break for callers.

Per discussion with Oliver Ford, this might also make it less confusing
how to use SearchCatCacheList correctly.

Discussion: https://postgr.es/m/27788.1517069693@sss.pgh.pa.us
2018-01-29 15:13:17 -05:00
Andres Freund
fc96c69425 Initialize unused ExprEvalStep fields.
ExecPushExprSlots didn't initialize ExprEvalStep's resvalue/resnull
steps as it didn't use them. That caused wrong valgrind warnings for
an upcoming patch, so zero-initialize.

Also zero-initialize all scratch ExprEvalStep's allocated on the
stack, to avoid issues with similar future omissions of non-critical
data.
2018-01-29 12:01:07 -08:00