mirror of https://github.com/postgres/postgres.git synced 2025-06-10 09:21:54 +03:00

32439 Commits

Author SHA1 Message Date
Heikki Linnakangas
86073a2b7a Correctly detect SSI conflicts of prepared transactions after crash.
A prepared transaction can get new conflicts in and out after preparing, so
we cannot rely on the in- and out-flags stored in the statefile at prepare-
time. As a quick fix, make the conservative assumption that after a restart,
all prepared transactions are considered to have both in- and out-conflicts.
That can lead to unnecessary rollbacks after a crash, but that shouldn't be
a big problem in practice; you don't want prepared transactions to hang
around for a long time anyway.

Dan Ports
2012-02-29 15:43:43 +02:00
Tom Lane
57b100fe0f Fix some more bugs in GIN's WAL replay logic.
In commit 4016bdef8aded77b4903c457050622a5a1815c16 I fixed a bunch of
ginxlog.c bugs having to do with not handling XLogReadBuffer failures
correctly.  However, in ginRedoUpdateMetapage and ginRedoDeleteListPages,
I unaccountably thought that failure to read the metapage would be
impossible and just put in an elog(PANIC) call.  This is of course wrong:
failure is exactly what will happen if the index got dropped (or rebuilt)
between creation of the WAL record and the crash we're trying to recover
from.  I believe this explains Nicholas Wilson's recent report of these
errors getting reached.

Also, fix memory leak in forgetIncompleteSplit.  This wasn't of much
concern when the code was written, but in a long-running standby server
page split records could be expected to accumulate indefinitely.

Back-patch to 8.4 --- before that, GIN didn't have a metapage.
2012-02-26 15:12:23 -05:00
Tom Lane
64c47e4542 Stamp 9.1.3. REL9_1_3 2012-02-23 17:53:36 -05:00
Tom Lane
22795f096b Last-minute release note updates.
Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868
2012-02-23 17:47:59 -05:00
Tom Lane
2d2f63ddcc Convert newlines to spaces in names written in pg_dump comments.
pg_dump was incautious about sanitizing object names that are emitted
within SQL comments in its output script.  A name containing a newline
would at least render the script syntactically incorrect.  Maliciously
crafted object names could present a SQL injection risk when the script
is reloaded.

Reported by Heikki Linnakangas, patch by Robert Haas

Security: CVE-2012-0868
2012-02-23 15:53:17 -05:00
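
An illustrative sketch of the hole (the object name here is made up, not taken
from the report): a quoted identifier may legally contain a newline, and before
this fix pg_dump copied such names verbatim into the "-- Name: ..." comments of
its output script, so anything after the newline fell outside the comment and
ran as SQL on reload.

    CREATE TABLE "harmless
    DROP TABLE important_data; --" (id int);
    -- Pre-fix, the dump's comment line broke at the embedded newline,
    -- leaving the second half of the name as a live statement in the script.
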
Tom Lane
e6fcb03dc0 Remove arbitrary limitation on length of common name in SSL certificates.
Both libpq and the backend would truncate a common name extracted from a
certificate at 32 bytes.  Replace that fixed-size buffer with dynamically
allocated string so that there is no hard limit.  While at it, remove the
code for extracting peer_dn, which we weren't using for anything; and
don't bother to store peer_cn longer than we need it in libpq.

This limit was not so terribly unreasonable when the code was written,
because we weren't using the result for anything critical, just logging it.
But now that there are options for checking the common name against the
server host name (in libpq) or using it as the user's name (in the server),
this could result in undesirable failures.  In the worst case it even seems
possible to spoof a server name or user name, if the correct name is
exactly 32 bytes and the attacker can persuade a trusted CA to issue a
certificate in which that string is a prefix of the certificate's common
name.  (To exploit this for a server name, he'd also have to send the
connection astray via phony DNS data or some such.)  The case that this is
a realistic security threat is a bit thin, but nonetheless we'll treat it
as one.

Back-patch to 8.4.  Older releases contain the faulty code, but it's not
a security problem because the common name wasn't used for anything
interesting.

Reported and patched by Heikki Linnakangas

Security: CVE-2012-0867
2012-02-23 15:48:09 -05:00
Tom Lane
54e2b6488b Require execute permission on the trigger function for CREATE TRIGGER.
This check was overlooked when we added function execute permissions to the
system years ago.  For an ordinary trigger function it's not a big deal,
since trigger functions execute with the permissions of the table owner,
so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
have done anyway.  However, if a trigger function is SECURITY DEFINER,
that is not the case.  The lack of checking would allow another user to
install it on his own table and then invoke it with, essentially, forged
input data; which the trigger function is unlikely to realize, so it might
do something undesirable, for instance insert false entries in an audit log
table.

Reported by Dinesh Kumar, patch by Robert Haas

Security: CVE-2012-0866
2012-02-23 15:39:02 -05:00
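
A minimal sketch of the scenario (role, table, and function names are
hypothetical): a SECURITY DEFINER trigger function whose owner never granted
EXECUTE could still be attached to someone else's table; with this fix the
CREATE TRIGGER now fails with a permission error.

    -- As role alice:
    CREATE FUNCTION audit_row() RETURNS trigger
        LANGUAGE plpgsql SECURITY DEFINER AS $$
    BEGIN
        INSERT INTO audit_log (logged_at, source) VALUES (now(), TG_TABLE_NAME);
        RETURN NEW;
    END;
    $$;
    REVOKE EXECUTE ON FUNCTION audit_row() FROM PUBLIC;

    -- As another role, on that role's own table; now rejected:
    CREATE TRIGGER forged AFTER INSERT ON my_table
        FOR EACH ROW EXECUTE PROCEDURE audit_row();
    -- ERROR:  permission denied for function audit_row
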
Tom Lane
630fa6f308 Allow MinGW builds to use standardly-named OpenSSL libraries.
In the Fedora variant of MinGW, the openssl libraries have their normal
names, not libeay32 and libssleay32.  Adjust configure probes to allow
that, per bug #6486.

Tomasz Ostrowski
2012-02-23 15:05:17 -05:00
Peter Eisentraut
602dd1eeaa Translation updates 2012-02-23 20:40:55 +02:00
Peter Eisentraut
09c00af94e Remove inappropriate quotes
And adjust wording for consistency.
2012-02-23 12:52:03 +02:00
Tom Lane
f209a0c559 Draft release notes for 9.1.3, 9.0.7, 8.4.11, 8.3.18. 2012-02-22 18:12:35 -05:00
Alvaro Herrera
cfd1c382f0 REASSIGN OWNED: Support foreign data wrappers and servers
This was overlooked when implementing those kinds of objects, in commit
cae565e503c42a0942ca1771665243b4453c5770.

Per report from Pawel Casperek.
2012-02-22 17:32:42 -03:00
Simon Riggs
11c730f412 Correctly initialise shared recoveryLastRecPtr in recovery.
Previously we used ReadRecPtr rather than EndRecPtr, which was
not a serious error but caused pg_stat_replication to report
incorrect replay_location until at least one WAL record is replayed.

Fujii Masao
2012-02-22 13:53:48 +00:00
Tom Lane
6182e01f18 Don't clear btpo_cycleid during _bt_vacuum_one_page.
When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not
actually within a vacuum operation, but rather in an ordinary insertion
process that could well be running concurrently with a vacuum.  So clearing
the cycleid is incorrect, and could cause the concurrent vacuum to miss
removing tuples that it needs to remove.  This is a longstanding bug
introduced by commit e6284649b9e30372b3990107a082bc7520325676 of
2006-07-25.  I believe it explains Maxim Boguk's recent report of index
corruption, and probably some other previously unexplained reports.

In 9.0 and up this is a one-line fix; before that we need to introduce a
flag to tell _bt_delitems what to do.
2012-02-21 15:03:44 -05:00
Magnus Hagander
3d2aa2c086 Avoid double close of file handle in syslogger on win32
This causes an exception when running under a debugger, in particular
when running on a debug version of Windows.

Patch from MauMau
2012-02-21 17:13:59 +01:00
Tom Lane
f22bd1570e Don't reject threaded Python on FreeBSD.
According to Chris Rees, this has worked for a while, and the current
FreeBSD port is removing the test anyway.
2012-02-20 16:21:35 -05:00
Tom Lane
d3158f339f Fix regex back-references that are directly quantified with *.
The syntax "\n*", that is a backref with a * quantifier directly applied
to it, has never worked correctly in Spencer's library.  This has been an
open bug in the Tcl bug tracker since 2005:
https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894

The core of the problem is in parseqatom(), which first changes "\n*" to
"\n+|" and then applies repeat() to the NFA representing the backref atom.
repeat() thinks that any arc leading into its "rp" argument is part of the
sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
the arc that was intended to represent the empty bypass around "\n+", this
arc gets moved too, so that it now leads into the state loop created by
repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
something that represents zero or more repetitions of the NFA representing
the backref atom.  In the original example, in place of
	^([bc])\1*$
we now have something that acts like
	^([bc])(\1+|[bc]*)$
At runtime, the branch involving the actual backref fails, as it's supposed
to, but then the other branch succeeds anyway.

We could no doubt fix this by some rearrangement of the operations in
parseqatom(), but that code is plenty ugly already, and what's more the
whole business of converting "x*" to "x+|" probably needs to go away to fix
another problem I'll mention in a moment.  Instead, this patch suppresses
the *-conversion when the target is a simple backref atom, leaving the case
of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
one-liner, at the cost of having to tweak cbrdissect() a little.  In the
event I went a bit further than that and rewrote cbrdissect() to check all
the string-length-related conditions before it starts comparing characters.
It seems a bit stupid to possibly iterate through many copies of an
n-character backreference, only to fail at the end because the target
string's length isn't a multiple of n --- we could have found that out
before starting.  The existing coding could only be a win if integer
division is hugely expensive compared to character comparison, but I don't
know of any modern machine where that might be true.

This does not fix all the problems with quantified back-references.  In
particular, the code is still broken for back-references that appear within
a larger expression that is quantified (so that direct insertion of the
quantification limits into the BACKREF node doesn't apply).  I think fixing
that will take some major surgery on the NFA code, specifically introducing
an explicit iteration node type instead of trying to transform iteration
into concatenation of modified regexps.

Back-patch to all supported branches.  In HEAD, also add a regression test
case for this.  (It may seem a bit silly to create a regression test file
for just one test case; but I'm expecting that we will soon import a whole
bunch of regex regression tests from Tcl, so might as well create the
infrastructure now.)
2012-02-20 00:52:42 -05:00
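
The same pattern exercised from SQL: with the fix, only strings that actually
repeat the captured character match, while the broken rewrite also accepted the
second case.

    SELECT 'bb' ~ '^([bc])\1*$' AS should_be_true,
           'bc' ~ '^([bc])\1*$' AS should_be_false;
    -- before the fix, the second test behaved as if the pattern were
    -- ^([bc])(\1+|[bc]*)$ and returned true
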
Tom Lane
86328cbc34 Fix longstanding error in contrib/intarray's int[] & int[] operator.
The array intersection code would give wrong results if the first entry of
the correct output array was "1".  (I think only this value could be
at risk, since the previous word would always be a lower-bound entry with
that fixed value.)

Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,
cosmetic improvements by me.
2012-02-16 20:00:17 -05:00
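
A quick check of the fixed operator (requires the contrib/intarray module,
installable as an extension on 9.1):

    CREATE EXTENSION intarray;
    SELECT ARRAY[1,2,3] & ARRAY[1,3,5];   -- intersection; now correctly {1,3}
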
Tom Lane
04a0231e56 Run a portal's cleanup hook immediately when pushing it to FAILED state.
This extends the changes of commit 6252c4f9e201f619e5eebda12fa867acd4e4200e
so that we run the cleanup hook earlier for failure cases as well as
success cases.  As before, the point is to avoid an assertion failure from
an Assert I added in commit a874fe7b4c890d1fe3455215a83ca777867beadd, which
was meant to check that no user-written code can be called during portal
cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
could be triggered during backend exit (see the new regression test case),
and also prevents the possibility that the cleanup hook is run after
portions of the portal's state have already been recycled.  That doesn't
really matter in current usage, but it foreseeably could matter in the
future.

Back-patch to 9.1 where the Assert in question was added.
2012-02-15 16:18:39 -05:00
Michael Meskes
421513ba84 Do not use the variable name when defining a varchar structure in ecpg.
Since a unique counter is added anyway, there is no longer any need to list the variable name as well.
2012-02-13 15:48:49 +01:00
Andrew Dunstan
6c1603cd8a Fix auto-explain JSON output to be valid JSON.
Problem reported by Peter Eisentraut.

Backpatched to release 9.0.
2012-02-13 08:22:54 -05:00
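
A sketch of settings that exercise the repaired code path; auto_explain writes
the plan to the server log rather than returning it to the client.

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;
    SET auto_explain.log_format = 'json';
    SELECT count(*) FROM pg_class;   -- the logged plan is now well-formed JSON
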
Tom Lane
6fb17aeeab Fix I/O-conversion-related memory leaks in plpgsql.
Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
since they are generally called in short-lived contexts.  However, plpgsql
calls such functions for purposes of type conversion, and was calling them
in its procedure context.  Therefore, any leaked memory would not be
recovered until the end of the plpgsql function.  If such a conversion
was done within a loop, quite a bit of memory could get consumed.  Fix by
calling such functions in the transient "eval_econtext", and adjust other
logic to match.  Back-patch to all supported versions.

Andres Freund, Jan Urbański, Tom Lane
2012-02-11 18:06:29 -05:00
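
A sketch of the kind of loop that leaked before the fix (the function is made
up): every assignment below performs a timestamptz-to-text I/O conversion, and
that per-conversion memory is now released with the transient eval_econtext
instead of piling up until the function exits.

    CREATE FUNCTION conversion_loop(n int) RETURNS void
        LANGUAGE plpgsql AS $$
    DECLARE
        s text;
    BEGIN
        FOR i IN 1 .. n LOOP
            s := clock_timestamp();   -- I/O conversion on every iteration
        END LOOP;
    END;
    $$;
    SELECT conversion_loop(1000000);
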
Tom Lane
dd4e0a3878 Fix oversight in pg_dump's handling of extension configuration tables.
If an extension has not been selected to be dumped (perhaps because of
a --schema or --table switch), the contents of its configuration tables
surely should not get dumped either.  Per gripe from
Hubert Depesz Lubaczewski.
2012-02-10 15:22:36 -05:00
Tom Lane
e565eff45a Fix brain fade in previous pg_dump patch.
In pre-7.3 databases, pg_attribute.attislocal doesn't exist.  The easiest
way to make sure the new inheritance logic behaves sanely is to assume it's
TRUE, not FALSE.  This will result in printing child columns even when
they're not really needed.  We could work harder at trying to reconstruct a
value for attislocal, but there is little evidence that anyone still cares
about dumping from such old versions, so just do the minimum necessary to
have a valid dump.

I had this correct in the original draft of the patch, but for some
unaccountable reason decided it wasn't necessary to change the value.
Testing against an old server shows otherwise...
2012-02-10 14:09:25 -05:00
Tom Lane
182228bd74 Fix pg_dump for better handling of inherited columns.
Revise pg_dump's handling of inherited columns, which was last looked at
seriously in 2001, to eliminate several misbehaviors associated with
inherited default expressions and NOT NULL flags.  In particular make sure
that a column is printed in a child table's CREATE TABLE command if and
only if it has attislocal = true; the former behavior would sometimes cause
a column to become marked attislocal when it was not so marked in the
source database.  Also, stop relying on textual comparison of default
expressions to decide if they're inherited; instead, don't use
default-expression inheritance at all, but just install the default
explicitly at each level of the hierarchy.  This fixes the
search-path-related misbehavior recently exhibited by Chester Young, and
also removes some dubious assumptions about the order in which ALTER TABLE
SET DEFAULT commands would be executed.

Back-patch to all supported branches.
2012-02-10 13:28:10 -05:00
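
The sort of schema the revised logic has to reproduce faithfully (names are
illustrative): the child's overriding default is now dumped explicitly at its
own level, and a child column is printed only when attislocal is true.

    CREATE TABLE parent (id int NOT NULL, created date DEFAULT current_date);
    CREATE TABLE child (note text) INHERITS (parent);
    ALTER TABLE child ALTER COLUMN created SET DEFAULT current_date + 1;
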
Tom Lane
1e7d008bf8 Throw error sooner for unlogged GiST indexes.
Throwing an error only after we've built the main index fork is pretty
unfriendly when the table already contains data.  Per gripe from Jay
Levitt.
2012-02-08 16:19:31 -05:00
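
The case in question, roughly (error text approximate for this branch); the
complaint now comes before any index build work is done:

    CREATE UNLOGGED TABLE points (p point);
    CREATE INDEX ON points USING gist (p);
    -- ERROR:  unlogged GiST indexes are not supported
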
Tom Lane
ef19c9dfaa Fix postmaster to attempt restart after a hot-standby crash.
The postmaster was coded to treat any unexpected exit of the startup
process (i.e., the WAL replay process) as a catastrophic crash, and not try
to restart it. This was OK so long as the startup process could not have
any sibling postmaster children.  However, if a hot-standby backend
crashes, we SIGQUIT the startup process along with everything else, and the
resulting exit is hardly "unexpected".  Treating it as such meant we failed
to restart a standby server after any child crash at all, not only a crash
of the WAL replay process as intended.  Adjust that.  Back-patch to 9.0
where hot standby was introduced.
2012-02-06 15:29:41 -05:00
Tom Lane
c74ad4e55b Avoid throwing ERROR during WAL replay of DROP TABLESPACE.
Although we will not even issue an XLOG_TBLSPC_DROP WAL record unless
removal of the tablespace's directories succeeds, that does not guarantee
that the same operation will succeed during WAL replay.  Foreseeable
reasons for it to fail include temp files created in the tablespace by Hot
Standby backends, wrong directory permissions on a standby server, etc etc.
The original coding threw ERROR if replay failed to remove the directories,
but that is a serious overreaction.  Throwing an error aborts recovery,
and worse means that manual intervention will be needed to get the database
to start again, since otherwise the same error will recur on subsequent
attempts to replay the same WAL record.  And the consequence of failing to
remove the directories is only that some probably-small amount of disk
space is wasted, so it hardly seems justified to throw an error.
Accordingly, arrange to report such failures as LOG messages and keep going
when a failure occurs during replay.

Back-patch to 9.0 where Hot Standby was introduced.  In principle such
problems can occur in earlier releases, but Hot Standby increases the odds
of trouble significantly.  Given the lack of field reports of such issues,
I'm satisfied with patching back as far as the patch applies easily.
2012-02-06 14:44:04 -05:00
Tom Lane
f1b8a84dec Avoid problems with OID wraparound during WAL replay.
Fix a longstanding thinko in replay of NEXTOID and checkpoint records: we
tried to advance nextOid only if it was behind the value in the WAL record,
but the comparison would draw the wrong conclusion if OID wraparound had
occurred since the previous value.  Better to just unconditionally assign
the new value, since OID assignment shouldn't be happening during replay
anyway.

The consequences of a failure to update nextOid would be pretty minimal,
since we have long had the code set up to obtain another OID and try again
if the generated value is already in use.  But in the worst case there
could be significant performance glitches while such loops iterate through
many already-used OIDs before finding a free one.

The odds of a wraparound happening during WAL replay would be small in a
crash-recovery scenario, and the length of any ensuing OID-assignment stall
quite limited anyway.  But neither of these statements hold true for a
replication slave that follows a WAL stream for a long period; its behavior
upon going live could be almost unboundedly bad.  Hence it seems worth
back-patching this fix into all supported branches.

Already fixed in HEAD in commit c6d76d7c82ebebb7210029f7382c0ebe2c558bca.
2012-02-06 13:14:46 -05:00
Alvaro Herrera
2a84671909 fe-misc.c depends on pg_config_paths.h
Declare this in Makefile to avoid failures in parallel compiles.

Author: Lionel Elie Mamane
2012-02-06 11:52:01 -03:00
Tom Lane
b2e1eaa4a1 Fix transient clobbering of shared buffers during WAL replay.
RestoreBkpBlocks was in the habit of zeroing and refilling the target
buffer; which was perfectly safe when the code was written, but is unsafe
during Hot Standby operation.  The reason is that we have coding rules
that allow backends to continue accessing a tuple in a heap relation while
holding only a pin on its buffer.  Such a backend could see transiently
zeroed data, if WAL replay had occasion to change other data on the page.
This has been shown to be the cause of bug #6425 from Duncan Rance (who
deserves kudos for developing a sufficiently-reproducible test case) as
well as Bridget Frey's re-report of bug #6200.  It most likely explains the
original report as well, though we don't yet have confirmation of that.

To fix, change the code so that only bytes that are supposed to change will
change, even transiently.  This actually saves cycles in RestoreBkpBlocks,
since it's not writing the same bytes twice.

Also fix seq_redo, which has the same disease, though it has to work a bit
harder to meet the requirement.

So far as I can tell, no other WAL replay routines have this type of bug.
In particular, the index-related replay routines, which would certainly be
broken if they had to meet the same standard, are not at risk because we
do not have coding rules that allow access to an index page when not
holding a buffer lock on it.

Back-patch to 9.0 where Hot Standby was added.
2012-02-05 15:49:35 -05:00
Simon Riggs
8572cc495c Resolve timing issue with logging locks for Hot Standby.
We log AccessExclusiveLocks for replay onto standby nodes,
but because of timing issues in the ProcArray it is possible to
log a lock that is still held by a just-committed transaction
that is very soon to be removed. To sidestep that timing issue,
we skip applying locks taken by transactions with InvalidXid.

Simon Riggs, bug report Tom Lane, diagnosis Pavan Deolasee
2012-02-01 09:31:07 +00:00
Heikki Linnakangas
b896e1e852 Accept a non-existent value in "ALTER USER/DATABASE SET ..." command.
When default_text_search_config, default_tablespace, or temp_tablespaces
setting is set per-user or per-database, with an "ALTER USER/DATABASE SET
..." statement, don't throw an error if the text search configuration or
tablespace does not exist. In case of text search configuration, even if
it doesn't exist in the current database, it might exist in another
database, where the setting is intended to have its effect. This behavior
is now the same as search_path's.

Tablespaces are cluster-wide, so the same argument doesn't hold for
tablespaces, but there's a problem with pg_dumpall: it dumps "ALTER USER
SET ..." statements before the "CREATE TABLESPACE" statements. Arguably
that's pg_dumpall's fault - it should dump the statements in such an order
that the tablespace is created first and then the "ALTER USER SET
default_tablespace ..." statements after that - but it seems better to be
consistent with search_path and default_text_search_config anyway. Besides,
you could still create a dump that throws an error, by creating the
tablespace, running "ALTER USER SET default_tablespace", then dropping the
tablespace and running pg_dumpall on that.

Backpatch to all supported versions.
2012-01-30 11:19:38 +02:00
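
In concrete terms (role, database, and object names are hypothetical), both of
these are now accepted even if the referenced configuration or tablespace does
not exist at the time; validity is checked when the setting actually takes
effect.

    ALTER ROLE alice SET default_text_search_config = 'public.my_tsconfig';
    ALTER DATABASE appdb SET default_tablespace = 'fast_ssd';
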
Tom Lane
106123fa26 Fix pushing of index-expression qualifications through UNION ALL.
In commit 57664ed25e5dea117158a2e663c29e60b3546e1c, I made the planner
wrap non-simple-variable outputs of appendrel children (IOW, child SELECTs
of UNION ALL subqueries) inside PlaceHolderVars, in order to solve some
issues with EquivalenceClass processing.  However, this means that any
upper-level WHERE clauses mentioning such outputs will now contain
PlaceHolderVars after they're pushed down into the appendrel child,
and that prevents indxpath.c from recognizing that they could be matched
to index expressions.  To fix, add explicit stripping of PlaceHolderVars
from index operands, same as we have long done for RelabelType nodes.
Add a regression test covering both this and the plain-UNION case (which
is a totally different code path, but should also be able to do it).

Per bug #6416 from Matteo Beccati.  Back-patch to 9.1, same as the
previous change.
2012-01-29 16:31:31 -05:00
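
A query of the affected shape (tables and indexes are illustrative): the qual
on the UNION ALL output column is pushed into each arm, and after stripping the
PlaceHolderVar it can again be matched to the expression indexes.

    CREATE TABLE part1 (v text);
    CREATE TABLE part2 (v text);
    CREATE INDEX ON part1 (lower(v));
    CREATE INDEX ON part2 (lower(v));

    SELECT * FROM (SELECT lower(v) AS lv FROM part1
                   UNION ALL
                   SELECT lower(v) AS lv FROM part2) u
    WHERE lv = 'abc';
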
Tom Lane
b3bd5a093f Update statement about sorting of character-string data.
The sort order is no longer fixed at database creation time, but can be
controlled via COLLATE.  Noted by Thomas Kellerer.
2012-01-28 20:55:07 -05:00
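
For example (table and column are hypothetical), a per-expression collation can
now override the database default:

    SELECT name FROM customers ORDER BY name COLLATE "C";
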
Tom Lane
1a096957d7 Fix handling of init_plans list in inheritance_planner().
Formerly we passed an empty list to each per-child-table invocation of
grouping_planner, and then merged the results into the global list.
However, that fails if there's a CTE attached to the statement, because
create_ctescan_plan uses the list to find the plan referenced by a CTE
reference; so it was unable to find any CTEs attached to the outer UPDATE
or DELETE.  But there's no real reason not to use the same list throughout
the process, and doing so is simpler and faster anyway.

Per report from Josh Berkus of "could not find plan for CTE" failures.
Back-patch to 9.1 where we added support for WITH attached to UPDATE or
DELETE.  Add some regression test cases, too.
2012-01-28 20:24:49 -05:00
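
The failing shape was a WITH attached to an UPDATE or DELETE whose target table
has children (names below are made up); the CTE is resolved through the
per-query plan list, which is now shared across the per-child planner runs.

    -- "events" is an inheritance parent with child tables
    WITH cutoff AS (
        SELECT now() - interval '30 days' AS ts
    )
    DELETE FROM events USING cutoff WHERE events.created < cutoff.ts;
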
Tom Lane
0b1e177953 Fix handling of data-modifying CTE subplans in EvalPlanQual.
We can't just skip initializing such subplans, because the referencing CTE
node will expect to find the subplan available when it initializes.  That
in turn means that ExecInitModifyTable must allow the case (which actually
it needed to do anyway, since there's no guarantee that ModifyTable is
exactly at the top of the CTE plan tree).  So move the complaint about not
being allowed in EvalPlanQual mode to execution instead of initialization.
Testing turned up yet another problem, which is that we'd try to
re-initialize the result relation's index list, leading to leaks and
dangling pointers.

Per report from Phil Sorber.  Back-patch to 9.1 where data-modifying CTEs
were introduced.
2012-01-28 17:44:03 -05:00
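
A data-modifying CTE of the affected kind (table names are illustrative); under
a concurrent update the outer query can enter EvalPlanQual, which must now
initialize the CTE's subplan and complain only if it actually tries to execute
it.

    WITH moved AS (
        DELETE FROM task_queue WHERE id = 42 RETURNING *
    )
    UPDATE task_stats SET processed = processed + (SELECT count(*) FROM moved);
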
Tom Lane
7c016e3f56 Fix error detection in contrib/pgcrypto's encrypt_iv() and decrypt_iv().
Due to oversights, the encrypt_iv() and decrypt_iv() functions failed to
report certain types of invalid-input errors, and would instead return
random garbage values.

Marko Kreen, per report from Stefan Kaltenbrunner
2012-01-27 23:09:33 -05:00
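
For instance (requires contrib/pgcrypto; key and IV are 16 bytes, and the short
ciphertext is one plausible example of invalid input): decrypting data that is
not a whole number of cipher blocks now draws an error instead of returning
garbage.

    CREATE EXTENSION pgcrypto;
    SELECT decrypt_iv('\x0102', 'sixteen byte key', '0123456789abcdef', 'aes-cbc');
    -- now reports an invalid-input error rather than returning random bytes
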
Magnus Hagander
b7922a6dd0 Fix wording, per Peter Geoghegan 2012-01-27 10:37:09 +01:00
Bruce Momjian
e96fcb06b9 Now that the shared library name can be adjusted in the library test,
have pg_upgrade allocate a maximum fixed size buffer for testing the
library file name, rather than base the allocation on the library name.

Backpatch to 9.1.
2012-01-25 09:35:17 -05:00
Bruce Momjian
fa4dad6cc0 In pg_upgrade, when checking for the plpython library, we must check for
"plpython2" when upgrading from pre-PG 9.1.  Patch to head and 9.1.

Per report from Peter.
2012-01-24 22:42:37 -05:00
Bruce Momjian
e9cdb00ccd Remove tab in 9.1 SGML file. 2012-01-23 21:08:46 -05:00
Heikki Linnakangas
02f377dbe5 Fix corner case in cleanup of transactions using SSI.
When the only remaining active transactions are READ ONLY, we do a "partial
cleanup" of committed transactions because certain types of conflicts
aren't possible anymore. For committed r/w transactions, we release the
SIREAD locks but keep the SERIALIZABLEXACT. However, for committed r/o
transactions, we can go further and release the SERIALIZABLEXACT too. The
problem was with the latter case: we were returning the SERIALIZABLEXACT to
the free list without removing it from the finished list.

The only real change in the patch is the SHMQueueDelete line, but I also
reworked some of the surrounding code to make it obvious that r/o and r/w
transactions are handled differently -- the existing code felt a bit too
clever.

Dan Ports
2012-01-18 17:12:58 +02:00
Andrew Dunstan
ef007e6702 Improve efficiency of recent changes to plperl's sv2cstr().
Along the way, add a missing dependency in the GNUmakefile.

Alex Hunsaker, with a slight adjustment by me.
2012-01-15 16:20:39 -05:00
Tom Lane
b994c57a80 Fix CLUSTER/VACUUM FULL for toast values owned by recently-updated rows.
In commit 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05, I made CLUSTER and
VACUUM FULL try to preserve toast value OIDs from the original toast table
to the new one.  However, if we have to copy both live and recently-dead
versions of a row that has a toasted column, those versions may well
reference the same toast value with the same OID.  The patch then led to
duplicate-key failures as we tried to insert the toast value twice with the
same OID.  (The previous behavior was not very desirable either, since it
would have silently inserted the same value twice with different OIDs.
That wastes space, but what's worse is that the toast values inserted for
already-dead heap rows would not be reclaimed by subsequent ordinary
VACUUMs, since they go into the new toast table marked live not deleted.)

To fix, check if the copied OID already exists in the new toast table, and
if so, assume that it stores the desired value.  This is reasonably safe
since the only case where we will copy an OID from a previous toast pointer
is when toast_insert_or_update was given that toast pointer and so we just
pulled the data from the old table; if we got two different values that way
then we have big problems anyway.  We do have to assume that no other
backend is inserting items into the new toast table concurrently, but
that's surely safe for CLUSTER and VACUUM FULL.

Per bug #6393 from Maxim Boguk.  Back-patch to 9.0, same as the previous
patch.
2012-01-12 16:40:19 -05:00
Tom Lane
d427e75e51 Fix one-byte buffer overrun in contrib/test_parser.
The original coding examined the next character before verifying that
there *is* a next character.  In the worst case with the input buffer
right up against the end of memory, this would result in a segfault.

Problem spotted by Paul Guyot; this commit extends his patch to fix an
additional case.  In addition, make the code a tad more readable by not
overloading the usage of *tlen.
2012-01-09 19:57:21 -05:00
Tom Lane
068e08eebb Use __sync_lock_test_and_set() for spinlocks on ARM, if available.
Historically we've used the SWPB instruction for TAS() on ARM, but this
is deprecated and not available on ARMv6 and later.  Instead, make use
of a GCC builtin if available.  We'll still fall back to SWPB if not,
so as not to break existing ports using older GCC versions.

Eventually we might want to try using __sync_lock_test_and_set() on some
other architectures too, but for now that seems to present only risk and
not reward.

Back-patch to all supported versions, since people might want to use any
of them on more recent ARM chips.

Martin Pitt
2012-01-07 15:38:59 -05:00
Tom Lane
f517ece063 Fix typo, pg_types_date.h => pgtypes_date.h.
Spotted by Koizumi Satoru.
2012-01-06 13:31:41 -05:00
Tom Lane
522650a6e4 Fix pg_restore's direct-to-database mode for INSERT-style table data.
In commit 6545a901aaf84cb05212bb6a7674059908f527c3, I removed the mini SQL
lexer that was in pg_backup_db.c, thinking that it had no real purpose
beyond separating COPY data from SQL commands, which purpose had been
obsoleted by long-ago fixes in pg_dump's archive file format.
Unfortunately this was in error: that code was also used to identify
command boundaries in INSERT-style table data, which is run together as a
single string in the archive file for better compressibility.  As a result,
direct-to-database restores from archive files made with --inserts or
--column-inserts fail in our latest releases, as reported by Dick Visser.

To fix, restore the mini SQL lexer, but simplify it by adjusting the
calling logic so that it's only required to cope with INSERT-style table
data, not arbitrary SQL commands.  This allows us to not have to deal with
SQL comments, E'' strings, or dollar-quoted strings, none of which have
ever been emitted by dumpTableData_insert.

Also, fix the lexer to cope with standard-conforming strings, which was the
actual bug that the previous patch was meant to solve.

Back-patch to all supported branches.  The previous patch went back to 8.2,
which unfortunately means that the EOL release of 8.2 contains this bug,
but I don't think we're doing another 8.2 release just because of that.
2012-01-06 13:04:15 -05:00
Robert Haas
f9f0484504 Fix variable confusion in BufferSync().
As noted by Heikki Linnakangas, the previous coding confused the "flags"
variable with the "mask" variable.  The effect of this appears to be that
unlogged buffers would get written out at every checkpoint rather than
only at shutdown time.  Although that's arguably an acceptable failure
mode, I'm back-patching this change, since it seems like a poor idea to
rely on this happening to work.
2012-01-06 08:35:41 -05:00