Given a subplan in a MERGE query, EXPLAIN would sometimes fail to
properly display expressions involving Params referencing variables in
other parts of the plan tree.
This would affect subplans outside the topmost join plan node, for
which expansion of Params would go via the top-level ModifyTable plan
node. The problem was that "inner_tlist" for the ModifyTable node's
deparse_namespace was set to the join node's targetlist, but
"inner_plan" was set to the ModifyTable node itself, rather than the
join node, leading to incorrect results when descending to the
referenced variable.
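In sketch form, the fix amounts to pointing inner_plan at the join node
in this case (a hypothetical rendering of the set_deparse_plan() logic
in ruleutils.c; see the committed code for the real details):

    /* Hypothetical sketch of set_deparse_plan() in ruleutils.c. */
    if (IsA(plan, ModifyTable))
        dpns->inner_plan = outerPlan(plan); /* the join node feeding the
                                             * ModifyTable, matching the
                                             * targetlist in inner_tlist */
    else
        dpns->inner_plan = innerPlan(plan);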
Fix and backpatch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWAv-sZuH%2BwG5xJ-%2BGt7qGNGX8wUQd3XYydMFDKgRB9nw%40mail.gmail.com
This introduces a new function equalRowTypes() that is effectively a
subset of equalTupleDescs() but only compares the number of attributes
and attribute name, type, typmod, and collation. This is enough for
most existing uses of equalTupleDescs(), which are changed to use the
new function. The only remaining callers of equalTupleDescs() are
those that really want to check the full tuple descriptor as such,
without concern about row or record type semantics.
The existing function hashTupleDesc() is renamed to hashRowType(),
because it now corresponds more to equalRowTypes().
The purpose of this change is to be clearer about the semantics of the
equality asked for by each caller. (At least one caller had a comment
that questioned whether equalTupleDescs() was too restrictive.) For
example, 4f622503d6d removed attstattarget from the tuple descriptor
structure. It was not fully clear at the time how this should affect
equalTupleDescs(). Now the answer is clear: By their own definitions,
equalRowTypes() does not care, and equalTupleDescs() just compares
whatever is in the tuple descriptor but does not care why it is in
there.
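As a rough sketch (not necessarily the committed code), equalRowTypes()
boils down to comparing just those row-type fields:

    /* Sketch of equalRowTypes(); see the committed version in tupdesc.c. */
    bool
    equalRowTypes(TupleDesc tupdesc1, TupleDesc tupdesc2)
    {
        if (tupdesc1->natts != tupdesc2->natts)
            return false;

        for (int i = 0; i < tupdesc1->natts; i++)
        {
            Form_pg_attribute attr1 = TupleDescAttr(tupdesc1, i);
            Form_pg_attribute attr2 = TupleDescAttr(tupdesc2, i);

            /* only name, type, typmod, and collation matter here */
            if (strcmp(NameStr(attr1->attname), NameStr(attr2->attname)) != 0 ||
                attr1->atttypid != attr2->atttypid ||
                attr1->atttypmod != attr2->atttypmod ||
                attr1->attcollation != attr2->attcollation)
                return false;
        }

        return true;
    }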
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/f656d6d9-6660-4518-a006-2f65cafbebd1%40eisentraut.org
destroyStringInfo() is a counterpart to makeStringInfo(), freeing a
palloc'd StringInfo and its data. This is a convenience function to
align the StringInfo API with the PQExpBuffer API. Originally added
in the OAuth patchset, it was extracted and committed separately in
order to aid upcoming JSON work.
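A minimal sketch of its shape, assuming the usual StringInfo
conventions:

    /* Sketch of destroyStringInfo(): frees the struct and its buffer. */
    void
    destroyStringInfo(StringInfo str)
    {
        pfree(str->data);
        pfree(str);
    }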
Author: Daniel Gustafsson <daniel@yesql.se>
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
psql has the :{?name} syntax for testing whether a psql variable exists.
This commit implements tab completion for this syntax. Notably, in order
to implement this we have to remove '{' from WORD_BREAKS. '{' appears to
have been there from the very beginning, coming from the default value
of rl_basic_word_break_characters, and :{?name} is the only psql syntax
that uses the '{' sign. So removing it from WORD_BREAKS shouldn't break
anything.
Discussion: https://postgr.es/m/CAGRrpzZU48F2oV3d8eDLr%3D4TU9xFH5Jt9ED%2BqU1%2BX91gMH68Sw%40mail.gmail.com
Author: Steve Chavez
Reviewed-by: Erik Wienhold
Commit c855872074b introduced a new parameter to pg_regress to set the
directory to look in for expected files, but accidentally implemented
it only when compiling pg_regress for the ECPG tests. Fix by adding
support for the parameter to the main regression test compilation of
pg_regress as well.
Backpatch to v16 where --expecteddir was introduced.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_Xqq5yKJHcJsq__LPcKwSY0XHRqVERNWGxx5ttNXXj7+W=A@mail.gmail.com
Backpatch-through: 16
The buildfarm script attempts to run all tests marked as
NO_INSTALLCHECK under src/test/modules without paying attention to
whether they are enabled or disabled in the parent Makefile. That
hasn't been a problem so far, because all the tests marked with
NO_INSTALLCHECK ran unconditionally in "make check". But commit
e2e3b8ae9e changed that: the injection fault tests are marked as
NO_INSTALLCHECK, and also depend on --enable-injection-points.
Try to work around that by ensuring that "make check" does nothing in
those subdirectories. We can hopefully get rid of this hack soon,
after fixing the buildfarm client, or by switching to meson.
Discussion: https://www.postgresql.org/message-id/8e4cf596-dd70-432e-9068-16466ed596ed%40iki.fi
The interruption handler within the injection point can get stuck in an
infinite loop while handling a transaction timeout. To avoid this
situation, we reset the timeout flag before invoking the injection
point.
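In outline, with identifier names assumed rather than quoted from the
tree, the reordering looks like this:

    /* Assumed names; sketch of the reordering in the timeout handling. */
    TransactionTimeoutPending = false;  /* reset first, so the injection
                                         * point cannot re-trigger the
                                         * timeout path and loop forever */
    INJECTION_POINT("transaction-timeout");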
Author: Alexander Korotkov
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/ZfPchPC6oNN71X2J%40paquier.xyz
There are currently no tests for the low-level backup method where
pg_backup_start() and pg_backup_stop() are involved while taking a
file-system backup. The tests introduced in this commit rely on a
background psql process to make sure that the backup is taken while the
session doing the SQL start and stop calls remains alive.
Two cases are checked here with the backup taken:
- Recovery without a backup_label, leading to a corrupted state.
- Recovery with a backup_label, with a consistent state reached.
Both cases cross-check some patterns in the logs generated when running
recovery.
Compared to the first attempt in 99b4a63bef94, this includes a couple of
fixes making the CI stable (5 runs succeeded here):
- Add the file to the list of tests in meson.build.
- Fix a race condition with the first WAL segment that we expect in the
primary's archives, by adding a poll on pg_stat_archiver. The second
segment with the checkpoint record is archived thanks to pg_backup_stop
waiting for it.
- Fix failure of test where the backup_label does not exist. The
cluster inherits the configuration of the first node; it was attempting
to store segments in the first node's archives, triggering failures with
copy on Windows.
- Fix failure of test on Windows because of incorrect parsing of the
backup_label in the success case. The data of the backup_label file is
retrieved from the output of pg_backup_stop() through a BackgroundPsql
and written directly to the backup's data folder. This would include
CRLFs (\r\n), causing the startup process to fail at the beginning of
recovery when parsing the backup_label, because only LFs (\n) are
allowed.
Author: David Steele
Discussion: https://postgr.es/m/f20fcc82-dadb-478d-beb4-1e2ffb0ace76@pgmasters.net
The same pattern is used three times in dynahash.c to retrieve a bucket
number and a hash bucket from a hash value. This has popped up while
discussing improvements for the type cache, where this piece of
refactoring would become useful.
Note that hash_search_with_hash_value() does not need the bucket number,
just the hash bucket.
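The factored-out helper could look roughly like this (a sketch; names
and details may differ from the committed refactoring):

    /* Sketch of the shared lookup helper for dynahash.c. */
    static inline HASHBUCKET *
    hash_initial_lookup(HTAB *hashp, uint32 hashvalue, uint32 *bucket)
    {
        HASHHDR    *hctl = hashp->hctl;
        HASHSEGMENT segp;
        long        segment_num;
        long        segment_ndx;

        /* compute the bucket number, returned through *bucket */
        *bucket = calc_bucket(hctl, hashvalue);

        segment_num = *bucket >> hashp->sshift;
        segment_ndx = MOD(*bucket, hashp->ssize);

        /* and return a pointer to the hash bucket itself */
        segp = hashp->dir[segment_num];
        return &segp[segment_ndx];
    }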
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1@sigaev.ru
Similar to d8a295389, trim off any PathKeys which are for ORDER BY /
DISTINCT aggregate functions from the PathKey List for the Gather Merge
paths created by gather_grouping_paths(). These additional PathKeys are
not valid to use after grouping has taken place, as these PathKeys
belong to columns which are inputs to an aggregate function and,
therefore, are unavailable after aggregation.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f74941a@gmail.com
Backpatch-through: 16, where 1349d2790 was added
Commit a3c7a993d fixed some cases involving target columns that are
arrays or composites by applying transformAssignedExpr to the VALUES
entries, and then stripping off any assignment ArrayRefs or
FieldStores that the transformation added. But I forgot about domains
over arrays or composites :-(. Such cases would either fail with
surprising complaints about mismatched datatypes, or insert unexpected
coercions that could lead to odd results. To fix, extend the
stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef
or FieldStore.
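A sketch of the extended stripping logic (using SubscriptingRef, the
modern name for ArrayRef; details hypothetical):

    /* Sketch: also strip a CoerceToDomain atop the added assignment nodes. */
    if (IsA(expr, CoerceToDomain))
    {
        CoerceToDomain *cdomain = (CoerceToDomain *) expr;

        if (IsA(cdomain->arg, FieldStore) ||
            IsA(cdomain->arg, SubscriptingRef))
            expr = (Node *) cdomain->arg;
    }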
While poking at this, I realized that there's a poorly documented and
not-at-all-tested behavior nearby: we coerce each VALUES column to
the domain type separately, and rely on the rewriter to merge those
operations so that the domain constraints are checked only once.
If that merging did not happen, it's entirely possible that we'd get
unexpected domain constraint failures due to checking a
partially-updated container value. There's no bug there, but while
we're here let's improve the commentary about it and add some test
cases that explicitly exercise that behavior.
Per bug #18393 from Pablo Kharo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org
This function returns the chunk_id of an on-disk TOASTed value. If
the value is un-TOASTed or not on-disk, it returns NULL. This is
useful for identifying which values are actually TOASTed and for
investigating "unexpected chunk number" errors.
Bumps catversion.
Author: Yugo Nagata
Reviewed-by: Jian He
Discussion: https://postgr.es/m/20230329105507.d764497456eeac1ca491b5bd%40sraoss.co.jp
The parallel query infrastructure copies the leader backend's active
snapshot to the worker processes. But the BitmapHeapScan node also had
bespoke code to pass the snapshot from the leader to the workers. That
was redundant, so remove it.
The removed code was analogous to the snapshot serialization in
table_parallelscan_initialize(), but that was the wrong role model. A
parallel bitmap heap scan is more like an independent non-parallel
bitmap heap scan in each parallel worker as far as the table AM is
concerned, because the coordination is done in nodeBitmapHeapscan.c,
and the table AM doesn't need to know anything about it.
This relies on the assumption that es_snapshot ==
GetActiveSnapshot(). That's not a new assumption; things would get
weird if you used the QueryDesc's snapshot for visibility checks in
the scans, but the active snapshot for evaluating quals, for
example. This could use some refactoring and cleanup, but for now,
just add some assertions.
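The added assertions are along these lines (a sketch, assuming the
usual executor-state field names):

    /* Sketch: make the existing assumption explicit at scan startup. */
    Assert(estate->es_snapshot == GetActiveSnapshot());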
Reviewed-by: Dilip Kumar, Robert Haas
Discussion: https://www.postgresql.org/message-id/5f3b9d59-0f43-419d-80ca-6d04c07cf61a@iki.fi
We don't determine the position at which a process waiting for a lock
should insert itself into the wait queue until we reach ProcSleep(),
and we may at that point discover that we must insert ourselves ahead
of everyone who wants a conflicting lock, in which case we obtain the
lock immediately. Up until now, a no-wait lock acquisition would fail
in such cases, erroneously claiming that the lock couldn't be obtained
immediately. Fix that by trying ProcSleep even in the no-wait case.
No back-patch for now, because I'm treating this as an improvement to
the existing no-wait feature. It could instead be argued that it's a
bug fix, on the theory that there should never be any case whatsoever
where no-wait fails to obtain a lock that would have been obtained
immediately without no-wait, but I'm reluctant to interpret the
semantics of no-wait that strictly.
Robert Haas and Jingxian Li
Discussion: http://postgr.es/m/CA+TgmobCH-kMXGVpb0BB-iNMdtcNkTvcZ4JBxDJows3kYM+GDg@mail.gmail.com
This commit adds new tests to verify that transaction_timeout,
idle_session_timeout, and idle_in_transaction_session_timeout work as
expected. We introduce new injection points just before a timeout FATAL
error is thrown, and check that these injection points are reached.
Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin
Reviewed-by: Alexander Korotkov
Currently, pg_visibility computes its xid horizon using
GetOldestNonRemovableTransactionId(). The problem is that this horizon
can sometimes go backward, which can lead to reporting false errors.
In order to fix that, this commit implements a new function
GetStrictOldestNonRemovableTransactionId(). This function computes the
xid horizon, which is guaranteed to be newer than or equal to any xid
horizon computed before.
We have to do the following to achieve this.
1. Ignore processes' xmins, because they may reflect connections to
other databases that were ignored before.
2. Ignore KnownAssignedXids, because they are not database-aware, while
the primary could compute its horizons database-aware.
3. Ignore walsender xmin, because it could go backward if some replication
connections don't use replication slots.
As a result, we're using only currently running xids to compute the
horizon. This certainly sacrifices accuracy, but we have to do so to
avoid reporting false errors.
Inspired by an earlier patch by Daniel Shelepanov and the following
discussion
with Robert Haas and Tom Lane.
Discussion: https://postgr.es/m/1649062270.289865713%40f403.i.mail.ru
Reviewed-by: Alexander Lakhin, Dmitry Koval
Commit b69aba74578 added the errstr parameter to pg_md5_hash but
missed updating the synopsis in the documentation comment. The
follow-up commit 587de223f03 added the parameter to the list of
outputs. The return value had been changed from int to bool before
that, but the synopsis still showed the old type. This fixes both.
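For reference, the corrected synopsis comes out roughly as follows
(reconstructed from the description above; see the committed comment
for the authoritative version):

    /* Sketch of the corrected synopsis: bool return, errstr output. */
    bool
    pg_md5_hash(const void *buff, size_t len, char *hexsum,
                const char **errstr);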
Author: Tatsuro Yamada <tatsuro.yamada@ntt.com>
Discussion: https://postgr.es/m/TYYPR01MB82313576150CC86084A122CD9E292@TYYPR01MB8231.jpnprd01.prod.outlook.com
New provider for collations, like "libc" or "icu", but without any
external dependency.
Initially, the only locale supported by the builtin provider is "C",
which is identical to the libc provider's "C" locale. The libc
provider's "C" locale has always been treated as a special case that
uses an internal implementation, without using libc at all -- so the
new builtin provider uses the same implementation.
The builtin provider's locale is independent of the server environment
variables LC_COLLATE and LC_CTYPE. Using the builtin provider, the
database collation locale can be "C" while LC_COLLATE and LC_CTYPE are
set to "en_US", which is impossible with the libc provider.
Offering a new builtin provider makes it clear that the semantics of a
collation using this provider will never depend on libc, and makes the
behavior easier to document.
Discussion: https://postgr.es/m/ab925f69-5f9d-f85e-b87c-bd2a44798659@joeconway.com
Discussion: https://postgr.es/m/dd9261f4-7a98-4565-93ec-336c1c110d90@manitou-mail.org
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
With the makefile rules, the output of genbki.pl was written to
src/backend/catalog/, and then the header files were linked to
src/include/catalog/.
This changes it so that the output files are written directly to
src/include/catalog/. This makes the logic simpler, and it also makes
the behavior consistent with the meson build system. Also, the list
of catalog files is now kept in parallel in
src/include/catalog/{meson.build,Makefile}, while before the makefiles
had it in src/backend/catalog/Makefile.
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/21b74bdc-183d-4dd5-9c27-9378d178f459@eisentraut.org
This reverts commit 99b4a63bef94. The test is proving to be unstable,
so revert it for now.
One of the failures seen involves the cluster started without the
backup_label, where the archives of the primary are overwritten, causing
recovery failures on Windows. This is simple to fix, but another issue
lurks behind it, in the form of an "invalid data in file" ERROR when
parsing the backup_label in the second recovery case, caused by the
backup_label data being written after retrieving its contents from
pg_backup_stop().
Per buildfarm member sidewinder.
There are currently no tests for the low-level backup method where
pg_backup_start() and pg_backup_stop() are involved while taking a
file-system backup. The tests introduced in this commit rely on a
background psql process to make sure that the backup is taken while the
session doing the SQL start and stop calls remains alive.
Two cases are checked here with the backup taken:
- Recovery without a backup_label, leading to a corrupted state.
- Recovery with a backup_label, with a consistent state reached.
Both cases cross-check some patterns in the logs generated when running
recovery.
Author: David Steele
Discussion: https://postgr.es/m/f20fcc82-dadb-478d-beb4-1e2ffb0ace76@pgmasters.net
pg_stat_checkpointer contains statistics for checkpoints and
restartpoints. Before 12915a58eec9, the documentation only talked about
checkpoints, implying that a restartpoint is a variation of a
checkpoint. 12915a58eec9 introduced new separate statistics fields for
restartpoints. This commit explicitly documents the fields that are
relevant for both checkpoints and restartpoints.
Reported-by: Magnus Hagander
Discussion: https://postgr.es/m/CABUevExav5-SR0x%2BG9kBUMV0G8XsvSUfuyyqmYBBJi6VHns6sw%40mail.gmail.com
Reviewed-by: Anton A. Melnikov
Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX,
REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation.
Roles with privileges of pg_maintain may run those same commands on
all relations.
This was previously committed for v16, but it was reverted in
commit 151c22deee due to concerns about search_path tricks that
could be used to escalate privileges to the table owner. Commits
2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by
restricting search_path when running maintenance commands.
Bumps catversion.
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13
Before this patch, if you took a full backup on server A and then
tried to use the backup manifest to take an incremental backup on
server B, it wouldn't know that the manifest was from a different
server and so the incremental backup operation could potentially
complete without error. When you later tried to run pg_combinebackup,
you'd find out that your incremental backup was and always had been
invalid. That's poor timing, because nobody likes finding out about
backup problems only at restore time.
With this patch, you'll get an error when trying to take the (invalid)
incremental backup, which seems a lot nicer.
Amul Sul, revised by me. Review by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoYLZzbSAMM3cAjV4Y+iCRZn-bR9H2+Mdz7NdaJFU1Zb5w@mail.gmail.com
The newly introduced cancel test in libpq_pipeline was flaky. It's not
completely clear why, but one possibility is that the check for
"active" was actually seeing the active state for the previous query.
This change
should address any such race condition by first waiting until the
connection is reported as idle.
Author: Jelte Fennema-Nio <me@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQRvmUK5-d68A+cm+fgmfht9Dv2uZ28-qq3QiaF6EAZqPQ@mail.gmail.com
This works just like get_controlfile(), but expects the path to the
control file rather than the path to the data directory that contains
the control file. This makes more sense in cases where the caller
has already constructed the path to the control file itself.
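Sketched as a declaration (the function name here is assumed from the
description, mirroring get_controlfile()):

    /* Assumed name; takes the control file's own path, not the datadir. */
    ControlFileData *
    get_controlfile_by_exact_path(const char *ControlFilePath,
                                  bool *crc_ok_p);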
Amul Sul and Robert Haas, reviewed by Michael Paquier
In the synopsis, make the syntax for merge_update consistent with the
syntax for a plain UPDATE command. It was missing the optional "ROW"
keyword that can be used in a multi-column assignment, and the option
to assign from a multi-column subquery, both of which have been
supported by MERGE since it was introduced.
In the parameters section for the with_query parameter, mention that
WITH RECURSIVE isn't supported, since this is different from plain
INSERT, UPDATE, and DELETE commands. While at it, move that entry to
the top of the list, for consistency with the other pages.
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWoQyWkMFfu7JXXQr8dA6%3DgxjhYzgpuBP2oz0QoJTxGWw%40mail.gmail.com
Starting with the Sonoma toolchain, macOS's linker emits warnings when
the same library is linked to twice. That's ill-considered, as the same
library can be used by multiple subsidiary libraries. Luckily there's a
flag to suppress that warning.
On Ventura, meson's default of -Wl,-undefined,dynamic_lookup caused
warnings, which we suppressed with -Wl,-undefined,error. Unfortunately
that causes a warning on Sonoma, which is absurd, as it's the
documented linker default. To avoid that warning, only add
-Wl,-undefined,error if it does not trigger warnings. Luckily
dynamic_lookup doesn't trigger a warning on Sonoma anymore.
Discussion: https://postgr.es/m/20231201040515.p5bshhhtfru7d3da@awork3.anarazel.de
Backpatch: 16-, where the meson build was added
While digging into the code of this feature, I got confused by the fact
that a line is skipped when a value cannot be converted to its expected
attribute even if the line has fewer attributes than the target
relation. The tests had a check for the case of an empty line; this
commit adds a couple more patterns where a line is incomplete but
skipped because of a conversion error.
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/Ze_7kZqexdt0BiyC@paquier.xyz
The test ensures that all the WAL on the publisher is sent to the
subscriber before shutdown by comparing the confirmed_flush_lsn of the
associated slot with the shutdown_checkpoint WAL location. But if the
shutdown_checkpoint location falls into a new page in the WAL then the
check won't work. So, ensure that the shutdown_checkpoint WAL record
doesn't fall into a new page.
Reported-by: Bharath Rupireddy
Author: Bharath Rupireddy
Reviewed-by: Vignesh C, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/CALj2ACVLzH5CN-h9=S26mdRHPuJ9yDLUw70yh4JOiPw03WL0CQ@mail.gmail.com
Run the tests in a RAM disk. It's still a UFS file system and is backed
by 20GB of disk, but this avoids a lot of I/O. Even though we disable
fsync, our tests do a lot of directory manipulations, some of which
force file system meta-data to disk and flush slow device write caches
on UFS. This was a bottleneck preventing effective scaling beyond 2
CPUs.
Now we can use 4 CPUs like on other OSes, for a huge speedup.
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BFXLcEg1dyTqJjDiNQ8pGom4KrJj4wF38C90thti9dVA%40mail.gmail.com
Two assertions checking that ReplicationSlotAllocationLock is acquired
are added to pgstat_create_replslot() and pgstat_drop_replslot(),
corresponding to the routines in charge of the creation and the drop of
replication slot statistics. The code previously relied on this
assumption and documented it in comments, but did not enforce this
policy at runtime.
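The assertions look roughly like this (a sketch using the existing
LWLock helper):

    /* Sketch: enforce at runtime what the comments previously stated. */
    Assert(LWLockHeldByMeInMode(ReplicationSlotAllocationLock,
                                LW_EXCLUSIVE));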
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Ze_p-hmD_yFeVYXg@paquier.xyz
Commit f696c0cd5f tried to account for the version in a way that
includes development versions, but it was broken. Fix with suggestion
from Tom Lane.
Discussion: https://postgr.es/m/1553991.1710191312@sss.pgh.pa.us
Reported-by: Tom Lane
There is a very ancient hack in check_sql_fn_retval that allows a
single SELECT targetlist entry of composite type to be taken as
supplying all the output columns of a function returning composite.
(This is grotty and fundamentally ambiguous, but it's really hard
to do nested composite-returning functions without it.)
As far as I know, that doesn't cause any problems in ordinary
functions. It's disastrous for procedures however. All procedures
that have any output parameters are labeled with prorettype RECORD,
and the CALL code expects it will get back a record with one column
per output parameter, regardless of whether any of those parameters
is composite. Doing something else leads to an assertion failure
or core dump.
This is simple enough to fix: we just need to not apply that rule
when considering procedures. However, that requires adding another
argument to check_sql_fn_retval, which at least in principle might be
getting called by external callers. Therefore, in the back branches
convert check_sql_fn_retval into an ABI-preserving wrapper around a
new function check_sql_fn_retval_ext.
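In outline, the back-branch shim looks like this (parameter list
abridged and hypothetical; see functions.c for the real signatures):

    /* Sketch of the ABI-preserving wrapper; parameters abridged. */
    bool
    check_sql_fn_retval(List *queryTreeLists,
                        Oid rettype, TupleDesc rettupdesc,
                        bool insertDroppedCols,
                        List **resultTargetList)
    {
        /* old entry point: callers are known not to be procedures */
        return check_sql_fn_retval_ext(queryTreeLists,
                                       rettype, rettupdesc,
                                       PROKIND_FUNCTION,
                                       insertDroppedCols,
                                       resultTargetList);
    }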
Per report from Yahor Yuzefovich. This has been broken since we
implemented procedures, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CABz5gWHSjj2df6uG0NRiDhZ_Uz=Y8t0FJP-_SVSsRsnrQT76Gg@mail.gmail.com
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism of sending cancel requests; in particular,
all the encryption used in the original connection is also used in the
cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate a non-blocking cancel request,
using encryption if the original connection did so; it must be pumped
using PQcancelPoll.
- PQcancelReset puts a PQcancelConn back in state so that it can be
reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
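For the simplest case, usage looks roughly like this (a sketch; the
PGcancelConn typedef name and the error handling are assumptions):

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Sketch: cancel whatever is running on an established connection. */
    static void
    cancel_running_query(PGconn *conn)
    {
        PGcancelConn *cancelConn = PQcancelCreate(conn);

        if (!PQcancelBlocking(cancelConn))
            fprintf(stderr, "could not send cancel: %s",
                    PQcancelErrorMessage(cancelConn));
        PQcancelFinish(cancelConn);
    }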
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
Valgrind alerted about accessing uninitialized bytes after commit
4945e4ed4a:
==700242== VALGRINDERROR-BEGIN
==700242== Conditional jump or move depends on uninitialised value(s)
==700242== at 0x6D8A2A: getnameinfo_unix (ip.c:253)
==700242== by 0x6D8BD1: pg_getnameinfo_all (ip.c:122)
==700242== by 0x4B3EB6: BackendInitialize (postmaster.c:4266)
==700242== by 0x4B684E: BackendStartup (postmaster.c:4114)
==700242== by 0x4B6986: ServerLoop (postmaster.c:1780)
==700242== by 0x4B80CA: PostmasterMain (postmaster.c:1478)
==700242== by 0x3F7424: main (main.c:197)
==700242== Uninitialised value was created by a stack allocation
==700242== at 0x4B6934: ServerLoop (postmaster.c:1737)
==700242==
==700242== VALGRINDERROR-END
That was because the SockAddr struct was not copied correctly.
Per buildfarm animal "skink".
In postmaster, use a more lightweight ClientSocket struct that
encapsulates just the socket itself and the remote endpoint's address
that you get from the accept() call. ClientSocket is passed to the child
process, which initializes the bigger Port struct. This makes it more
clear what information postmaster initializes, and what is left to the
child process.
Rename the StreamServerPort and StreamConnection functions to make it
more clear what they do. Remove StreamClose, replacing it with plain
closesocket() calls.
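The slimmed-down struct is essentially just this pair (sketched from
the description above):

    /* Sketch of ClientSocket: what postmaster hands to the child. */
    typedef struct ClientSocket
    {
        pgsocket    sock;   /* socket for communication with the client */
        SockAddr    raddr;  /* remote endpoint's address from accept() */
    } ClientSocket;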
Reviewed-by: Tristan Partin, Andres Freund
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
We used to smuggle it to the child process in the Port struct, but it
seems better to pass it down as a separate argument. This paves the
way for the next commit, which moves the initialization of the Port
struct to the backend process, after forking.
Reviewed-by: Tristan Partin, Andres Freund
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi