This should have been removed in commit 7e30c186da, which split the loop
into two. Only the first loop uses the 'from' variable; updating it in
the second loop is bogus. It was never read after the first loop, so this
was harmless and surely optimized away by the compiler, but let's be tidy.
Backpatch to all supported versions.
Author: Ranier Vilela
Discussion: https://www.postgresql.org/message-id/CAEudQAoWq%2BAL3BnELHu7gms2GN07k-np6yLbukGaxJ1vY-zeiQ%40mail.gmail.com
This reverts commit 54fb8c7, as per the issues reported by fairywren
when it comes to MinGW because of the lack of microsoft_native_stat()
there. Using just stat() for MSVC is not sufficient to take care of the
concurrency problems with files pending on deletion. It may be possible
to sprinkle some __MINGW64__ checks in the code to switch to a different
implementation of stat() in this build context, but it is not clear
whether relying on MinGW's implementation of stat() to take care of the
problems we are trying to fix would be enough. So this needs more
study.
Discussion: https://postgr.es/m/YOvOlfRrIO0yGtgw@paquier.xyz
Backpatch-through: 14
The code introduced by bed9075 to enhance the stat() implementation on
Windows for file sizes larger than 4GB fails to properly detect files
pending for deletion with its method based on NtQueryInformationFile()
or GetFileInformationByHandleEx(), as proved by Alexander Lakhin in a
custom TAP test of his own.
The method used in the implementation of open(), which sleeps and loops
when failing on ERROR_ACCESS_DENIED (EACCES), has proved to be much more
stable, so switch to this method. This could still lead to issues if
the permission problem stays around for much longer than the timeout of
1 second used, but that should (hopefully) never happen in
performance-critical paths. Still, there could be a point in increasing
the timeouts for the sake of machines that handle heavy loads.
Note that WIN32's open() now uses microsoft_native_stat() as it should
be similar to stat() when working around issues with concurrent file
deletions.
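For illustration, a minimal sketch of the sleep-and-loop pattern described
above (the function name, timings, and use of pg_usleep() here are
illustrative, not the actual implementation):

    static int
    stat_with_retry(const char *path, struct stat *st)
    {
        int     loops;

        for (loops = 0; loops < 100; loops++)
        {
            if (stat(path, st) == 0)
                return 0;
            if (errno != EACCES)
                return -1;          /* some other failure: give up */
            pg_usleep(10000L);      /* 10ms; ~1 second in total */
        }
        return -1;                  /* still EACCES after the timeout */
    }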
I have spent some time testing this patch with pgbench in combination
with the SQL functions from genfile.c, as well as running the TAP test
provided on the thread with MSVC builds, and this looks much more
stable than the previous method.
Author: Alexander Lakhin
Reviewed-by: Tom Lane, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/c3427edf-d7c0-ff57-90f6-b5de3bb62709@gmail.com
Backpatch-through: 14
Although we were careful to lock the object being added or dropped,
we failed to get any sort of lock on the extension itself. This
allowed the ALTER to proceed in parallel with a DROP EXTENSION,
which is problematic for a couple of reasons. If both commands
succeeded we'd be left with a dangling link in pg_depend, which
would cause problems later. Also, if the ALTER failed for some
reason, it might try to print the extension's name, and that could
result in a crash or (in older branches) a silly error message
complaining about extension "(null)".
Per bug #17098 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
If an error occurred in the wrong place, it was possible to leave an
uninitialized entry in the hash table, leading to a crash. Fixed.
Also, be more careful about the order of operations so that an
allocation error doesn't leak memory in CacheMemoryContext or
unnecessarily advance NextRecordTypmod.
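For illustration, a sketch of the safer ordering with the dynahash API
(MyEntry, my_hash and the key are hypothetical names): everything that can
error out happens before HASH_ENTER, and the new entry is made valid
immediately afterwards, so a later failure cannot leave a half-built entry
behind.

    MyEntry    *entry;
    bool        found;
    TupleDesc   entDesc;

    /* allocation that may fail comes first */
    entDesc = CreateTupleDescCopy(tupDesc);

    entry = (MyEntry *) hash_search(my_hash, &key, HASH_ENTER, &found);
    entry->tupdesc = entDesc;       /* initialize the entry right away */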
Backpatch through version 11. Earlier versions (prior to 35ea75632a)
do not exhibit the problem, because an uninitialized hash entry
contains a valid empty list.
Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
The following checks are added, to make the SASL infrastructure more
aware of defects when implementing new mechanisms:
- Check that a mechanism generates no output when an exchange fails
in the backend, failing if there is a message waiting to be sent.
- Handle zero-length messages in the frontend. The backend already
handles that, and SCRAM would complain if sent empty messages, as these
are not authorized for this mechanism, but other mechanisms may want this
capability (the SASL specification allows it).
- Make sure that a mechanism generates a message in the middle of the
exchange in the frontend.
SCRAM, as implemented, respects all these requirements already, and the
recent refactoring of SASL done in 9fd8557 helps in documenting that in
a cleaner way.
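For illustration, a sketch of the first check in the backend exchange loop
(constant and variable names are approximate, not necessarily the exact
ones used in the tree):

    status = mech->exchange(state, input, inputlen, &output, &outputlen);
    if (status == SASL_EXCHANGE_FAILURE && output != NULL)
        elog(ERROR, "SASL mechanism must not produce output on failure");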
Analyzed-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
This fixes an overflow error raised by the numeric * operator when the
result has more than 16383 digits after the decimal point, by rounding
the result instead. Overflow errors should only occur if the result has
too many digits *before* the decimal point.
Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
When sending queries in pipeline mode, we were careless about leaving
the connection in the right state so that PQgetResult would behave
correctly; trying to read further results after sending a query after
having read a result with an error would sometimes hang. Fix by
ensuring internal libpq state is changed properly. All the state
changes were being done by the callers of pqAppendCmdQueueEntry(); it
would have become too repetitious to have this logic in each of them, so
instead put it all in that function and relieve callers of the
responsibility.
Add a test to verify this case. Without the code fix, this new test
hangs sometimes.
Also, document that PQisBusy() returns false when no query results are
pending. This is not intuitively obvious, because calling PQgetResult()
at that point returns NULL, which can be confusing.
Wording by Boris Kolpackov.
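For illustration, a minimal sketch of the client pattern this relies on
(error handling trimmed; a hedged example, not code from the patch):

    PGresult   *res;

    while ((res = PQgetResult(conn)) != NULL)
    {
        /* after an earlier error in the pipeline, PGRES_PIPELINE_ABORTED
         * results may show up here */
        PQclear(res);
    }
    /* res is NULL: this query's results are complete.  PQisBusy(conn)
     * reports false here even if more pipelined queries are still queued. */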
In passing, fix bogus use of "false" to mean "0", per Ranier Vilela.
Backpatch to 14.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Discussion: https://postgr.es/m/boris.20210624103805@codesynthesis.com
The requirement that IDENTIFY_SYSTEM be run before START_REPLICATION
was both undocumented and unnecessary. Remove the error and ensure
that ThisTimeLineID is initialized in START_REPLICATION.
Elect not to backport because this requirement was expected behavior
(even if inconsistently enforced), and is not likely to cause any
major problem.
Author: Jeff Davis
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/de4bbf05b7cd94227841c433ea6ff71d2130c713.camel%40j-davis.com
Commit 7266d0997 added code to pull up simple constant function
results, converting the RTE_FUNCTION RTE to a dummy RTE_RESULT
RTE since it no longer need be scanned. But I forgot to clear
the LATERAL flag if the RTE has it set. If the function reduced
to a constant, it surely contains no lateral references so this
simplification is logically OK. It's needed because various other
places will Assert that RESULT RTEs aren't LATERAL.
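The gist of the fix, as a sketch (not the exact diff): when the function
RTE degenerates into a RESULT RTE it can no longer contain lateral
references, so clear the flag as well.

    rte->rtekind = RTE_RESULT;
    rte->functions = NIL;
    rte->lateral = false;       /* previously left set */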
Per bug #17097 from Yaoguang Chen. Back-patch to v13 where the
faulty code came in.
Discussion: https://postgr.es/m/17097-3372ef9f798fc94f@postgresql.org
The separate libldap_r is gone and libldap itself is now always
thread-safe. Unfortunately there seems no easy way to tell by
inspection whether libldap is thread-safe, so we have to take
it on faith that libldap is thread-safe if there's no libldap_r.
That should be okay, as it appears that libldap_r was a standard
part of the installation going back at least 20 years.
Report and patch by Adrian Ho. Back-patch to all supported
branches, since people might try to build any of them with
a newer OpenLDAP.
Discussion: https://postgr.es/m/17083-a19190d9591946a7@postgresql.org
Since the executor can't cope with a utility statement appearing
as a node of a plan tree, we can't support cases where a rewrite
rule inserts a NOTIFY into an INSERT/UPDATE/DELETE command appearing
in a WITH clause of a larger query. (One can imagine ways around
that, but it'd be a new feature not a bug fix, and so far there's
been no demand for it.) RewriteQuery checked for this, but it
missed the case where the DML command rewrites to *only* a NOTIFY.
That'd lead to crashes later on in planning. Add the missed check,
and improve the level of testing of this area.
Per bug #17094 from Yaoguang Chen. It's been busted since WITH
was introduced, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17094-bf15dff55eaf2e28@postgresql.org
There was talk about adding units all the way up to yottabytes but it
seems quite far-fetched that anyone would need those. Since such large
units are not exactly commonplace, it seems unlikely that having
pg_size_pretty output any unit larger than petabytes would actually be
helpful to anyone.
Since petabytes are on the horizon, let's just add those for now. Maybe one
day we'll get to add additional units, but it will likely be a while
before we'll need to think beyond petabytes with regard to the size of a
database.
Author: David Christensen
Discussion: https://postgr.es/m/CAOxo6XKmHc_WZip-x5QwaOqFEiCq_SVD0B7sbTZQk+qqcn2qaw@mail.gmail.com
We've grown 2 versions of pg_size_pretty over the years, one for BIGINT
and one for NUMERIC. Both should produce the same output, but keeping them
in sync is harder than needed because the two functions do not share a
source of truth about which units to use and when to transition to the
next largest unit.
Here we add a static array which defines the units that we recognize and
have both pg_size_pretty and pg_size_pretty_numeric use it. This will
make adding any units in the future a very simple task.
The table contains all information required to allow us to also modify
pg_size_bytes to use the lookup table, so adjust that too.
There are no behavioral changes here.
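Roughly the shape of the new table, as a sketch (the values shown are
illustrative; see dbsize.c for the real definition):

    struct size_pretty_unit
    {
        const char *name;       /* "bytes", "kB", "MB", ... */
        uint32      limit;      /* threshold to move to the next unit */
        bool        round;      /* whether to round when scaling down */
        uint8       unitbits;   /* bits to shift to reach this unit */
    };

    static const struct size_pretty_unit size_pretty_units[] = {
        {"bytes", 10 * 1024, false, 0},
        {"kB", 20 * 1024 - 1, true, 10},
        {"MB", 20 * 1024 - 1, true, 20},
        {"GB", 20 * 1024 - 1, true, 30},
        {"TB", 20 * 1024 - 1, true, 40},
        {"PB", 20 * 1024 - 1, true, 50},
        {NULL, 0, false, 0}
    };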
Author: David Rowley
Reviewed-by: Dean Rasheed, Tom Lane, David Christensen
Discussion: https://postgr.es/m/CAApHDvru1F7qsEVL-iOHeezJ+5WVxXnyD_Jo9nht+Eh85ekK-Q@mail.gmail.com
Due to how pg_size_pretty(bigint) was implemented, it was possible that,
when given a negative number of bytes, the return value would not match
the value returned for the equivalent positive number of bytes. This was
due to two separate issues.
1. The function used bit shifting to convert the number of bytes into
larger units. The rounding performed by bit shifting is not the same as
dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two
operations are only equivalent with positive numbers.
2. The half_rounded() macro rounded towards positive infinity. This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.
Here we fix #1 by dividing the values instead of bit shifting. We fix #2
by adjusting the half_rounded macro to always round away from zero.
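The idea of the corrected macro, as a sketch (see dbsize.c for the real
definition): add or subtract one before halving, depending on the sign, so
that rounding is always away from zero.

    #define half_rounded(x)   (((x) + ((x) < 0 ? -1 : 1)) / 2)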
Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting. A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right. However, that function
calculated the divisor from the number of bits and performed division.
Here we make that more clear. This change is just cosmetic and does not
affect the return value of the numeric version of the function.
Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after each
function switches to the next unit.
This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.
Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
Most error messages about a relkind that was not supported or
appropriate for the command were of the pattern
"relation \"%s\" is not a table, foreign table, or materialized view"
This style can become verbose and tedious to maintain. Moreover, it's
not very helpful: If I'm trying to create a comment on a TOAST table,
which is not supported, then the information that I could have created
a comment on a materialized view is pointless.
Instead, make the primary error message shorter, saying more directly
that what was attempted is not possible. Then, in the detail message,
explain that the operation is not supported for the relkind of the
object. To simplify that, add a new function
errdetail_relkind_not_supported() that does this.
In passing, make use of RELKIND_HAS_STORAGE() where appropriate,
instead of listing out the relkinds individually.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com
Similar to 50e17ad28, which allowed hash tables to be used for IN clauses
with a set of constants, here we add the same feature for NOT IN clauses.
NOT IN evaluates the same as: WHERE a <> v1 AND a <> v2 AND a <> v3.
Obviously, if we're using a hash table we must be exactly equivalent to
that and return the same result taking into account that either side of
the condition could contain a NULL. This requires a little bit of
special handling to make work with the hash table version.
When processing NOT IN, the ScalarArrayOpExpr's operator will be the <>
operator. To be able to build and look up a hash table we must use the
<> operator's negator. The planner checks if that exists and is hashable
and sets the relevant fields in ScalarArrayOpExpr to instruct the executor
to use hashing.
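For reference, the truth table the hashed path must reproduce for
"a NOT IN (v1 ... vn)", assuming at least one element (this helper is
purely illustrative, not executor code):

    static bool
    not_in_result(bool a_isnull, bool found_in_hash,
                  bool set_has_null, bool *isnull)
    {
        if (a_isnull)
        {
            *isnull = true;     /* NULL <> anything is unknown */
            return false;
        }
        if (found_in_hash)
        {
            *isnull = false;    /* a = some vi, so the AND is false */
            return false;
        }
        if (set_has_null)
        {
            *isnull = true;     /* no match, but one comparison is unknown */
            return false;
        }
        *isnull = false;
        return true;            /* no match and no NULL elements */
    }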
Author: David Rowley, James Coleman
Reviewed-by: James Coleman, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvoF1mum_FRk6D621edcB6KSHBi2+GAgWmioj5AhOu2vwQ@mail.gmail.com
The code of SCRAM and SASL has been tightly linked together since SCRAM
was added to the core code, making it hard to grasp what the addition of
new SASL mechanisms would involve, but these are by design different
facilities, with SCRAM being an option for SASL. This refactors the code
related to both so that the backend and the frontend use a set of
callbacks for SASL mechanisms, documenting while on it what is expected
of anybody adding a new SASL mechanism.
The separation between both layers is neat, using two sets of callbacks
for the frontend and the backend to mark the frontier between both
facilities. The shape of the callbacks is now directly inspired by
the routines used by SCRAM, so the code change is straightforward, and
the SASL code is moved into its own set of files. These will likely
change depending on how and if new SASL mechanisms get added in the
future.
Author: Jacob Champion
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
Previously, all CustomScan providers had to support projections,
but there may be cases where this is inconvenient. Add a flag
bit to say if it's supported.
Important item for the release notes: this is non-backwards-compatible
since the default is now to assume that CustomScan providers can't
project, instead of assuming that they can. It's fail-soft, but could
result in visible performance penalties due to adding unnecessary
Result nodes.
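As a sketch, a provider that does handle projection now needs to say so
when it builds its CustomPath (the flag name here follows the existing
CUSTOMPATH_SUPPORT_* convention; check nodes/extensible.h for the exact
spelling):

    cpath->flags |= CUSTOMPATH_SUPPORT_PROJECTION;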
Sven Klemm, reviewed by Aleksander Alekseev; some cosmetic fiddling
by me.
Discussion: https://postgr.es/m/CAMCrgp1kyakOz6c8aKhNDJXjhQ1dEjEnp+6KNT3KxPrjNtsrDg@mail.gmail.com
Joel Jacobson reported that deep nesting of trivial (flattenable)
views results in O(N^3) growth of planning time for N-deep nesting.
It turns out that a large chunk of this cost comes from copying around
the "subquery" sub-tree of each view's RTE_SUBQUERY RTE. But once we
have successfully flattened the subquery, we don't need that anymore,
because the planner isn't going to do anything else interesting with
that RTE. We already zap the subquery pointer during setrefs.c (cf.
add_rte_to_flat_rtable), but it's useless baggage earlier than that
too. Clearing the pointer as soon as pull_up_simple_subquery is done
with the RTE reduces the cost from O(N^3) to O(N^2); which is still
not great, but it's quite a lot better. Further improvements will
require rethinking of the RTE data structure, which is being considered
in another thread.
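Essentially, once the subquery has been fully incorporated into the parent
query, the patch amounts to this (a sketch): the RTE stays around for
permission checks and the like, but its query tree is dead weight.

    rte->subquery = NULL;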
Patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/797aff54-b49b-4914-9ff9-aa42564a4d7d@www.fastmail.com
Instead of using multiple parameters in the parse_subscription_options
function signature, use a struct, SubOpts, that encapsulates all the
subscription options and their values. It will be useful for future work
where we need to add other options to the subscription. Also, use bitmaps
to pass the supported options and retrieve the specified ones, much like
the way it is done in commit a3dc926009.
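A hypothetical sketch of the idea (the real SubOpts has more fields): one
struct carries the option values, and bitmasks describe which options a
given command supports and which were actually specified.

    #define SUBOPT_CONNECT      0x00000001
    #define SUBOPT_ENABLED      0x00000002
    #define SUBOPT_SLOT_NAME    0x00000004

    typedef struct SubOpts
    {
        bits32      specified_opts;     /* options actually given */
        char       *slot_name;
        bool        connect;
        bool        enabled;
    } SubOpts;

    /* callers pass the set of options they support, e.g.:
     *   parse_subscription_options(stmt->options,
     *                              SUBOPT_CONNECT | SUBOPT_ENABLED,
     *                              &opts);
     */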
Author: Bharath Rupireddy
Reviewed-By: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXtoQczfNsDQWobypVvHbX2DtgEHn8DawS0eGFwuo72kw@mail.gmail.com
In each of the create_*_bound() functions for LIST, RANGE and HASH
partitioning, there were a large number of palloc calls which could be
reduced down to a much smaller number.
In each of these functions, an array was built so that we could qsort it
before making the PartitionBoundInfo. For LIST and HASH partitioning, an
array of pointers was allocated then each element was allocated within
that array. Since the number of items of each dimension is known
beforehand, we can just allocate a single chunk of memory for this.
Similarly, with all partition strategies, we're able to reduce the number
of allocations to build the ->datums field. This is an array of Datum
pointers, but there's no need for the Datums that each element points to
to be singly allocated. One big chunk will do. For RANGE partitioning,
the PartitionBoundInfo->kind field can get the same treatment.
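An illustration of the change in allocation pattern (variable names are
approximate):

    /* one palloc for the whole array of elements ... */
    boundDatums = (PartitionListValue *)
        palloc(ndatums * sizeof(PartitionListValue));
    for (i = 0; i < ndatums; i++)
        all_values[i] = &boundDatums[i];

    /* ... instead of palloc(sizeof(PartitionListValue)) once per element
     * inside the loop. */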
We can apply the same optimizations to partition_bounds_copy(). Doing
this might have a small effect on cache performance when searching for the
correct partition during partition pruning or DML on a partitioned table.
However, that's likely to be small and this is mostly about reducing
palloc overhead.
Author: Nitin Jadhav, Justin Pryzby, David Rowley
Reviewed-by: Justin Pryzby, Zhihong Yu
Discussion: https://postgr.es/m/flat/CAMm1aWYFTqEio3bURzZh47jveiHRwgQTiSDvBORczNEz2duZ1Q@mail.gmail.com
This concerns pg_stop_backup() and BASE_BACKUP, when waiting for the
WAL segments required for a backup to be archived. This simplifies a
bit the handling of the wait event used in this code path.
Author: Bharath Rupireddy
Reviewed-by: Michael Paquier, Stephen Frost
Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled. It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.
There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive. Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.
While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry. (That seems to be my
oversight in 03ffc4d6d.) Also, in >= v13, stop allocating one array
entry too many. That's evidently left over from sloppy reversion in
851b14b0c.
Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.
Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
In 741d7f104, I tried to make the reports from canceled steps come out
after the pg_cancel_backend() steps, since that was the most common
ordering before. However, that doesn't ensure that a canceled step
doesn't report even later, as shown in a recent failure on buildfarm
member idiacanthus. Rather than complicating things even more with
additional annotations, let's just force the cancel's effect to be
reported first. It's not *that* unnatural-looking.
Back-patch to v14 where these test cases appeared.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=idiacanthus&dt=2021-07-02%2001%3A40%3A04
Formerly various numeric aggregate functions supported parallel
aggregation by having each worker convert partial aggregate values to
Numeric and use numeric_send() as part of serializing their state.
That's problematic, since the range of Numeric is smaller than that of
NumericVar, so it's possible for it to overflow (on either side of the
decimal point) in cases that would succeed in non-parallel mode.
Fix by serializing NumericVars instead, to avoid the overflow risk and
ensure that parallel and non-parallel modes work the same.
A side benefit is that this improves the efficiency of the
serialization/deserialization code, which can make a noticeable
difference to performance with large numbers of parallel workers.
No back-patch due to risk from changing the binary format of the
aggregate serialization states, as well as lack of prior field
complaints and low probability of such overflows in practice.
Patch by me. Thanks to David Rowley for review and performance
testing, and Ranier Vilela for an additional suggestion.
Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
Here we alter the code that calls build_pertrans_for_aggref() so that the
function no longer needs to special-case whether it's dealing with an
aggtransfn or an aggcombinefn. This allows us to reuse the
build_aggregate_transfn_expr() function and just get rid of the
build_aggregate_combinefn_expr() completely.
All of the special case code that was in build_pertrans_for_aggref() has
been moved up to the calling functions.
This saves about a dozen lines of code in nodeAgg.c and a few dozen more
in parse_agg.c.
Also, rename a few variables in nodeAgg.c to try to make it more clear
that we're working with either an aggtransfn or an aggcombinefn. Some of
the old names would have you believe that we were always working with an
aggtransfn.
Discussion: https://postgr.es/m/CAApHDvptMQ9FmF0D67zC_w88yVnoNVR2+kkOQGUrCmdxWxLULQ@mail.gmail.com
Disable this check altogether in --enable-coverage builds,
because newer versions of gcc insert exit() as well as abort()
calls for that. Also disable it on AIX and Solaris, because
those platforms tend to provide facilities such as libldap
as static libraries, which then get included in libpq's shlib.
We can't expect such libraries to honor our coding rules.
(That platform list might need additional tweaking, but I think
this is enough to keep the buildfarm happy.)
Per reports from Jacob Champion and Noah Misch.
Discussion: https://postgr.es/m/3128896.1624742969@sss.pgh.pa.us
The existing code tried to do syscache lookups in an already-failed
transaction, which is problematic to say the least. After some
consideration of alternatives, the best fix seems to be to just drop
type names from the error message altogether. The table and column
names seem like sufficient localization. If the user is unsure what
types are involved, she can check the local and remote table
definitions.
Having done that, we can also discard the LogicalRepTypMap hash
table, which had no other use. Arguably, LOGICAL_REP_MSG_TYPE
replication messages are now obsolete as well; but we should
probably keep them in case some other use emerges. (The complexity
of removing something from the replication protocol would likely
outweigh any savings anyhow.)
Masahiko Sawada and Bharath Rupireddy, per complaint from Andres
Freund. Back-patch to v10 where this code originated.
Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
This has the advantage of making a process more responsive when the
postmaster dies, even if the wait time was rather limited as there was
only a 50ms timeout here. Another advantage of this change is for
monitoring, as we gain a new wait event for the end-of-vacuum
truncation.
Author: Bharath Rupireddy
Reviewed-by: Aleksander Alekseev, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
This commit removes a dependency on the central logging facilities in
the JSON parsing routines of src/common/, which existed to log errors
when seeing error codes that do not match any existing values in
JsonParseErrorType, which is not something that should ever happen.
The routine providing a detailed error message based on the error code
is made backend-only, the existing code being unsafe to use in the
frontend, as the error message may end up being palloc'd or may point to a
static string, so there is no way to know if the memory of the message
should be pfree'd or not. The only user of this routine in the frontend
was pg_verifybackup, which is changed to use a more generic error message
on parsing failure.
Note that making this code more resilient to OOM failures if used in
shared libraries would require much more work as a lot of code paths
still rely on palloc() & friends, but we are not sure yet if we need to
go down that path. Still, removing the dependency on logging is a step
toward more portability.
While on it, this cleans up the handling of check_stack_depth(), which
exists only in the backend.
Per discussion with Jacob Champion and Tom Lane.
Discussion: https://postgr.es/m/YNwL7kXwn3Cckbd6@paquier.xyz
Commit 4656e3d66 replaced the "#define CLOBBER_CACHE_ALWAYS"
testing mechanism with a GUC, which has been a great help for
doing cache-clobber testing in more efficient ways; but there
is a gap in the implementation. The only way to do cache-clobber
testing during an initdb run is to use the old method with #define,
because one can't set the GUC from outside. Improve this by
adding a switch to initdb for the purpose.
(Perhaps someday we should let initdb pass through arbitrary
"-c NAME=VALUE" switches. Quoting difficulties dissuaded me
from attempting that right now, though.)
Back-patch to v14 where 4656e3d66 came in.
Discussion: https://postgr.es/m/1582507.1624227029@sss.pgh.pa.us
Further fixes for commit dc227eb82. Per suggestion from
Peter Eisentraut, use a stamp-file to control when the check
is run, avoiding repeated executions during "make all".
Also, remove "-g" switch for nm: it's useless and some versions
of nm consider it to conflict with "-u". (Thanks to Noah Misch
for running down that portability issue.)
Discussion: https://postgr.es/m/3128896.1624742969@sss.pgh.pa.us
The prove_installcheck recipe in src/Makefile.global.in was emitting
bogus paths for a couple of elements when used with PGXS. Here we create
a separate recipe for the PGXS case that does it correctly. We also take
the opportunity to make the file more readable by breaking up
the prove_installcheck and prove_check recipes across several lines, and
to move the setting for REGRESS_SHLIB to src/test/recovery/Makefile,
which is the only set of tests that actually need it.
Backpatch to all live branches.
Discussion: https://postgr.es/m/f2401388-936b-f4ef-a07c-a0bcc49b3300@dunslane.net
Several places were performing a tight loop to determine the first power
of 2 number that's > or >= the required memory. Instead of using a loop
for that, we can use pg_nextpower2_32 or pg_nextpower2_64. When we need a
power of 2 number equal to or greater than a given amount, we just pass
the amount to the nextpower2 function. When we need a power of 2 greater
than the amount, we just pass the amount + 1.
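An example of the replacement pattern (hypothetical buffer sizing code):

    /* before: double until big enough */
    while (allocsize < required)
        allocsize *= 2;

    /* now: next power of 2 equal to or greater than the amount */
    allocsize = pg_nextpower2_32(required);

    /* and when a power of 2 strictly greater than the amount is needed */
    allocsize = pg_nextpower2_32(required + 1);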
Additionally, in tsearch there were a couple of locations that were
performing a while loop when a simple "if" would have done. In both of
these locations only 1 item is being added, so the loop could only have
ever iterated once. Changing the loop into an if statement makes the code
very slightly more optimal as the condition is checked once rather than
twice.
There are quite a few remaining locations that increase the size of the
buffer in the following form:
    while (reqsize >= buflen)
    {
        buflen *= 2;
        buf = repalloc(buf, buflen);
    }
These are not touched in this commit. repalloc will error out for sizes
larger than MaxAllocSize. Changing these to use pg_nextpower2_32 would
remove the chance of that error being raised. It's unclear from the code
if the sizes could ever become that large, so err on the side of caution.
Discussion: https://postgr.es/m/CAApHDvp=tns7RL4PH0ZR0M+M-YFLquK7218x=0B_zO+DbOma+w@mail.gmail.com
Reviewed-by: Zhihong Yu
Give up on trying to mechanically forbid abort() within libpq.
Even though there are no such calls in the source code, we've now
seen three different scenarios where build toolchains silently
insert such calls: gcc does it for profiling, some platforms
implement assert() using it, and icc does so for no visible reason.
Checking for accidental use of exit() seems considerably more
important than checking for abort(), so we'll settle for doing
that for now.
Also, filter out __cxa_atexit() to avoid a false match. It seems
that OpenBSD inserts a call to that despite the fact that libpq
contains no C++ code.
Discussion: https://postgr.es/m/3128896.1624742969@sss.pgh.pa.us
Until now, we didn't allow streaming the changes in logical replication
until we received a speculative confirm or the next DML change record
after a speculative insert. The reason was that we never used to process
speculative aborts, but after commit 4daa140a2f it is possible to process
them, so we can allow streaming once we receive a speculative abort after
a speculative insertion.
We decided to backpatch to 14, where the feature for streaming in-progress
transactions was introduced, as this is a minor change and makes that
functionality better.
Author: Amit Kapila
Reviewed-By: Dilip Kumar
Backpatch-through: 14
Discussion: https://postgr.es/m/CAA4eK1KdqmTCtrBR6oFfGELrLLbDLDedL6zACcsUOQuTJBj1vw@mail.gmail.com