In pqSendSome, if the connection is already closed at entry, discard any
queued output data before returning. There is no possibility of ever
sending the data, and anyway this corresponds to what we'd do if we'd
detected a hard error while trying to send(). This avoids possible
indefinite bloat of the output buffer if the application keeps trying
to send data (or even just keeps trying to do PQputCopyEnd, as psql
indeed will).
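Schematically, the new entry check looks like this (a sketch only; the
internal names conn->sock, conn->outCount, and the error-reporting call
are assumptions about libpq's internals, not the committed code):

    /* at the top of pqSendSome(): sketch of the new behavior */
    if (conn->sock < 0)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          "connection not open\n");
        /* Discard queued output data; it can never be sent. */
        conn->outCount = 0;
        return -1;
    }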
Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state
until it's successfully queued the COPY END message, and pqPutMsgEnd
doesn't distinguish a queuing failure from a pqSendSome failure,
this omission allowed an infinite loop in psql if the connection closure
occurred when we had at least 8K queued to send. It might be worth
refactoring so that we can make that distinction, but for the moment
the other changes made here seem to offer adequate defenses.
To guard against other variants of this scenario, do not allow
PQgetResult to return a PGRES_COPY_XXX result if the connection is
already known dead. Make sure it returns PGRES_FATAL_ERROR instead.
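In PQgetResult this amounts to checking connection status before
manufacturing a COPY result; roughly (a sketch: PQmakeEmptyPGresult is
the real public constructor, but the surrounding switch and field names
are assumed):

    /* sketch: inside PQgetResult's switch on conn->asyncStatus */
    case PGASYNC_COPY_IN:
        if (conn->status == CONNECTION_BAD)
            res = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
        else
            res = PQmakeEmptyPGresult(conn, PGRES_COPY_IN);
        break;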
Per report from Stephen Frost. Back-patch to all active branches.
After taking awhile to digest the row-processor feature that was added to
libpq in commit 92785dac2ee7026948962cd61c4cd84a2d052772, we've concluded
it is over-complicated and too hard to use. Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.
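For illustration, typical use of the new mode (PQsetSingleRowMode and
PGRES_SINGLE_TUPLE are the pieces added here; the query is arbitrary):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    fetch_rows(PGconn *conn)
    {
        PGresult   *res;

        if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
            return;
        /* must immediately follow the PQsendQuery call */
        if (!PQsetSingleRowMode(conn))
            fprintf(stderr, "could not enter single-row mode\n");

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("%s\n", PQgetvalue(res, 0, 0)); /* one row per result */
            /* the final PGRES_TUPLES_OK result carries no rows */
            PQclear(res);
        }
    }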
This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break. The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them. Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.
Marko Kreen
Traditionally libpq has collected an entire query result before passing
it back to the application. That provides a simple and transactional API,
but it's pretty inefficient for large result sets. This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult. Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
The previous test for status < 0 is in fact testing nothing if the
compiler considers an enum to be an unsigned data type. clang doesn't
like tautologies, so do this instead.
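A standalone illustration of the problem (not the committed code):

    #include <stdio.h>

    typedef enum { STATUS_OK, STATUS_ERROR } Status;

    int
    main(void)
    {
        Status  status = STATUS_ERROR;

        /* If the compiler picks an unsigned type for Status, this test
         * is always false, and clang warns about the tautology. */
        if (status < 0)
            printf("unreachable when the enum is unsigned\n");

        /* Comparing against a named enumerator is portable instead. */
        if (status == STATUS_ERROR)
            printf("error detected\n");
        return 0;
    }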
Report by Peter Geoghegan, fix as suggested by Tom Lane.
PQsetvalue unnecessarily duplicated the logic in pqAddTuple, and didn't
duplicate it exactly either --- pqAddTuple does not care what is in the
tuple-pointer array positions beyond the last valid entry, whereas the
code in PQsetvalue assumed such positions would contain NULL. This led
to possible crashes if PQsetvalue was applied to a PGresult that had
previously been enlarged with pqAddTuple, for instance one built from a
server query. Fix by relying on pqAddTuple instead of duplicating logic,
and not assuming anything about the contents of res->tuples[res->ntups].
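For context, PQsetvalue belongs to the small API for building PGresults
by hand; a minimal sketch of that use, with the text type's OID
hard-wired for brevity:

    #include <libpq-fe.h>

    /* Build a one-column, one-row PGresult by hand (sketch). */
    static PGresult *
    make_result(PGconn *conn)
    {
        PGresult    *res = PQmakeEmptyPGresult(conn, PGRES_TUPLES_OK);
        PGresAttDesc attr = {0};
        char         value[] = "hello";

        attr.name = "value";
        attr.format = 0;            /* text format */
        attr.typid = 25;            /* pg_type OID of text */
        attr.typlen = -1;
        attr.atttypmod = -1;

        if (!res ||
            !PQsetResultAttrs(res, 1, &attr) ||
            !PQsetvalue(res, 0, 0, value, 5))
        {
            PQclear(res);
            return NULL;
        }
        return res;
    }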
Back-patch to 8.4, where PQsetvalue was introduced.
Andrew Chernow
PQescapeLiteral is similar to PQescapeStringConn, but it relieves the
caller of the need to know how large the output buffer should be, and
it provides the appropriate quoting (in addition to escaping special
characters within the string). PQescapeIdentifier provides similar
functionality for escaping identifiers.
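Both functions allocate their result, which must be released with
PQfreemem; for example (the table and column names here are
placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Build a query with a safely quoted identifier and literal. */
    static char *
    build_query(PGconn *conn, const char *table, const char *value)
    {
        char   *qtable = PQescapeIdentifier(conn, table, strlen(table));
        char   *qvalue = PQescapeLiteral(conn, value, strlen(value));
        char   *query = NULL;

        if (qtable && qvalue)
        {
            size_t  len = strlen(qtable) + strlen(qvalue) + 64;

            query = malloc(len);
            if (query)
                snprintf(query, len, "SELECT * FROM %s WHERE name = %s",
                         qtable, qvalue);
        }
        PQfreemem(qtable);          /* must use PQfreemem, not free() */
        PQfreemem(qvalue);
        return query;
    }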
Per recent discussion with Tom Lane.
Both hex format and the traditional "escape" format are automatically
handled on input. The output format is selected by the new GUC variable
bytea_output.
As committed, bytea_output defaults to HEX, which is an *incompatible
change*. We will keep it this way for awhile for testing purposes, but
should consider whether to switch to the more backwards-compatible
default of ESCAPE before 8.5 is released.
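On the client side, libpq's PQunescapeBytea is expected to cope with
either representation, so code that routes bytea text values through it
should be insulated from the default change; for example:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Decode a text-format bytea value, whichever format it is in. */
    static void
    print_bytea_length(const char *textval)
    {
        size_t          len;
        unsigned char  *data =
            PQunescapeBytea((const unsigned char *) textval, &len);

        if (data)
        {
            printf("%zu bytes\n", len);
            PQfreemem(data);
        }
    }

    /* Both print_bytea_length("\\x6162630001") (hex) and
     * print_bytea_length("abc\\000\\001") (escape) report 5 bytes. */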
Peter Eisentraut
guarantees about whether event procedures will receive DESTROY events.
They no longer need to defend themselves against getting a DESTROY
without a successful prior CREATE.
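A skeleton event procedure under the new rules might look like this (a
sketch; the per-result bookkeeping is whatever the application needs):

    #include <libpq-fe.h>
    #include <libpq-events.h>

    /* A RESULTDESTROY will only arrive for results whose RESULTCREATE
     * we acknowledged with success. */
    static int
    my_event_proc(PGEventId evtId, void *evtInfo, void *passThrough)
    {
        (void) passThrough;

        switch (evtId)
        {
            case PGEVT_RESULTCREATE:
            {
                PGEventResultCreate *e = (PGEventResultCreate *) evtInfo;

                /* attach per-result state here */
                return PQresultSetInstanceData(e->result,
                                               my_event_proc, NULL);
            }
            case PGEVT_RESULTDESTROY:
                /* free per-result state here */
                return 1;
            default:
                return 1;       /* ignore events we don't handle */
        }
    }

    /* registered with:
     *   PQregisterEventProc(conn, my_event_proc, "demo", NULL);
     */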
Andrew Chernow
we are on a 64-bit machine (ie, size_t is wider than int) and someone passes
in a query string that approaches or exceeds INT_MAX bytes. Also, just for
paranoia's sake, guard against similar overflows in sizing the input buffer.
The backend will not in the foreseeable future be prepared to send or receive
strings exceeding 1GB, so I didn't take the more invasive step of switching
all the buffer index variables from int to size_t; though someday we might
want to do that.
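The needed guard is just a range check before committing to an
int-sized buffer index; schematically (a sketch with invented names,
not the committed code):

    #include <limits.h>
    #include <stddef.h>

    /* Return 1 if "added" more bytes still fit an int-indexed buffer
     * currently holding "current" bytes; 0 if the index would overflow. */
    static int
    size_fits_int(size_t current, size_t added)
    {
        if (current > (size_t) INT_MAX ||
            added > (size_t) INT_MAX - current)
            return 0;
        return 1;
    }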
I have a suspicion that this is not the only such bug in libpq, but this
fix is enough to take care of the crash reported by Francisco Reyes.
renumbering of encoding IDs done between 8.2 and 8.3 turns out to break 8.2
initdb and psql if they are run with an 8.3beta1 libpq.so. For the moment
we can rearrange the order of enum pg_enc to keep the same number for
everything except PG_JOHAB, which isn't a problem since there are no direct
references to it in the 8.2 programs anyway. (This does force initdb
unfortunately.)
Going forward, we want to fix things so that encoding IDs can be changed
without an ABI break, and this commit includes the changes needed to allow
libpq's encoding IDs to be treated as fully independent of the backend's.
The main issue is that libpq clients should not include pg_wchar.h or
otherwise assume they know the specific values of libpq's encoding IDs,
since they might encounter version skew between pg_wchar.h and the libpq.so
they are using. To fix, have libpq officially export functions needed for
encoding name<=>ID conversion and validity checking; it was doing this
anyway unofficially.
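Clients would use the exported conversion functions roughly like this:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* name => ID; returns -1 for an unrecognized encoding name */
        int     enc = pg_char_to_encoding("UTF8");

        if (enc < 0 || !pg_valid_server_encoding_id(enc))
        {
            fprintf(stderr, "unknown or invalid server encoding\n");
            return 1;
        }
        /* ID => canonical name */
        printf("encoding %d is %s\n", enc, pg_encoding_to_char(enc));
        return 0;
    }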
It's still the case that we can't renumber backend encoding IDs until the
next bump in libpq's major version number, since doing so will break the
8.2-era client programs. However the code is now prepared to avoid this
type of problem in future.
Note that initdb is no longer a libpq client: we just pull in the two
source files we need directly. The patch also fixes a few places that
were being sloppy about checking for an unrecognized encoding name.
and standard_conforming_strings; likewise for the other client programs
that need it. As per previous discussion, a pg_dump dump now conforms
to the standard_conforming_strings setting of the source database.
We don't use E'' syntax in the dump, thereby improving portability of
the SQL. I added a SET escape_string_warning = off command to keep
the dumps from getting a lot of back-chatter from that.
Per Coverity bug #304. Thanks to Martijn van Oosterhout for reporting it.
Zero out the pointer fields of PGresult so that these mistakes are more
easily caught, per discussion.
and standard_conforming_strings. The encoding changes are needed for proper
escaping in multibyte encodings, as per the SQL-injection vulnerabilities
noted in CVE-2006-2313 and CVE-2006-2314. Concurrent fixes are being applied
to the server to ensure that it rejects queries that may have been corrupted
by attempted SQL injection, but this merely guarantees that unpatched clients
will fail rather than allow injection. An actual fix requires changing the
client-side code. While at it we have also fixed these routines to understand
about standard_conforming_strings, so that the upcoming changeover to SQL-spec
string syntax can be somewhat transparent to client code.
Since the existing API of PQescapeString and PQescapeBytea provides no way to
inform them which settings are in use, these functions are now deprecated in
favor of new functions PQescapeStringConn and PQescapeByteaConn. The new
functions take the PGconn to which the string will be sent as an additional
parameter, and look inside the connection structure to determine what to do.
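Typical use of the connection-aware variant (the users table is a
placeholder):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Escape untrusted input using the connection's actual encoding
     * and standard_conforming_strings setting. */
    static void
    query_by_name(PGconn *conn, const char *name)
    {
        char    escaped[257];       /* needs 2 * input length + 1 */
        char    query[512];
        int     err = 0;

        if (strlen(name) > 128)
            return;
        PQescapeStringConn(conn, escaped, name, strlen(name), &err);
        if (err)
            return;                 /* e.g. invalidly encoded input */

        snprintf(query, sizeof(query),
                 "SELECT * FROM users WHERE name = '%s'", escaped);
        /* ... PQexec(conn, query) ... */
    }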
So as to provide some functionality for clients using the old functions,
libpq stores the latest encoding and standard_conforming_strings values
received from the backend in static variables, and the old functions consult
these variables. This will work reliably in clients using only one Postgres
connection at a time, or even multiple connections if they all use the same
encoding and string syntax settings; which should cover many practical
scenarios.
Clients that use homebrew escaping methods, such as PHP's addslashes()
function or even hardwired regexp substitution, will require extra effort
to fix :-(. It is strongly recommended that such code be replaced by use of
PQescapeStringConn/PQescapeByteaConn if at all feasible.
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
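Other clients can retrieve the raw position via PQresultErrorField; the
cursor location arrives as a decimal string counted in characters:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Print the error cursor position, if the server supplied one. */
    static void
    show_error_position(const PGresult *res)
    {
        const char *pos = PQresultErrorField(res,
                                             PG_DIAG_STATEMENT_POSITION);

        if (pos)
            printf("error at character %s\n", pos);     /* 1-based */
    }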
because pqSendSome will absorb input data anytime it'd be forced to block.
Avoiding a kernel call per PQputCopyData call helps COPY speed materially.
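For context, a typical client-side COPY loop that benefits (blocking
mode; the target table is a placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Stream generated rows into COPY ... FROM STDIN (blocking mode). */
    static int
    copy_rows(PGconn *conn)
    {
        PGresult   *res = PQexec(conn, "COPY t(x) FROM STDIN");
        int         i;

        if (PQresultStatus(res) != PGRES_COPY_IN)
        {
            PQclear(res);
            return -1;
        }
        PQclear(res);

        for (i = 0; i < 1000; i++)
        {
            char    line[32];
            int     n = snprintf(line, sizeof(line), "%d\n", i);

            if (PQputCopyData(conn, line, n) != 1)
                return -1;
        }
        if (PQputCopyEnd(conn, NULL) != 1)
            return -1;

        res = PQgetResult(conn);    /* fetch the final command status */
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            PQclear(res);
            return -1;
        }
        PQclear(res);
        return 0;
    }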
Alon Goldshuv
rather than "return expr;" -- the latter style is used in most of the
tree. I kept the parentheses when they were necessary or useful because
the return expression was complex.
comment line where output was too long, and update typedefs for /lib
directory. Also fix case where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.