
Back-patch fcff8a5751 as a bug fix.

When there is both a serialization failure and a unique violation,
throw the former rather than the latter.  When initially pushed,
this was viewed as a feature to assist application framework
developers, so that they could more accurately determine when to
retry a failed transaction, but a test case presented by Ian
Jackson has shown that this patch can prevent serialization
anomalies in some cases where a unique violation is caught within a
subtransaction, the work of that subtransaction is discarded, and
no error is thrown.  That makes this a bug fix, so it is being
back-patched to all supported branches where it is not already
present (i.e., 9.2 to 9.5).

Discussion: https://postgr.es/m/1481307991-16971-1-git-send-email-ian.jackson@eu.citrix.com
Discussion: https://postgr.es/m/22607.56276.807567.924144@mariner.uk.xensource.com
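To make the failure mode concrete, here is a hedged sketch (table, function, and key names are invented; this is not Ian Jackson's actual test case). A PL/pgSQL EXCEPTION block runs as a subtransaction, so a trapped unique_violation used to discard the conflicting work silently, allowing two overlapping Serializable transactions to both commit with an anomalous result. With this fix, the conflict is reported as serialization_failure (SQLSTATE 40001), which this handler does not trap, so it propagates and the client can retry:

CREATE TABLE invoice (id integer PRIMARY KEY, note text);

CREATE FUNCTION claim_invoice(p_id integer, p_note text) RETURNS boolean
LANGUAGE plpgsql AS $$
BEGIN
    -- Called inside a SERIALIZABLE transaction.  The EXCEPTION clause
    -- below makes this block run as a subtransaction.
    INSERT INTO invoice VALUES (p_id, p_note);
    RETURN true;
EXCEPTION WHEN unique_violation THEN
    -- Pre-fix: a conflict with an overlapping Serializable transaction
    -- could surface here as a unique violation and be silently swallowed.
    -- Post-fix: it is raised as serialization_failure (SQLSTATE 40001)
    -- instead, escapes this handler, and the transaction can be retried.
    RETURN false;
END;
$$;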
Author: Kevin Grittner
Date:   2016-12-13 19:14:42 -06:00
Parent: 15b3722700
Commit: bed2a0b06b
11 changed files with 307 additions and 7 deletions


@@ -644,7 +644,7 @@ ERROR: could not serialize access due to read/write dependencies among transact
 first. In <productname>PostgreSQL</productname> these locks do not
 cause any blocking and therefore can <emphasis>not</> play any part in
 causing a deadlock. They are used to identify and flag dependencies
-among concurrent serializable transactions which in certain combinations
+among concurrent Serializable transactions which in certain combinations
 can lead to serialization anomalies. In contrast, a Read Committed or
 Repeatable Read transaction which wants to ensure data consistency may
 need to take out a lock on an entire table, which could block other
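As a hedged aside (not part of the patch): these non-blocking predicate locks can be observed in the pg_locks system view while a Serializable transaction is open, for example:

-- Predicate (SIRead) locks held by open Serializable transactions;
-- they never block other transactions and cannot cause deadlocks.
SELECT locktype, relation::regclass AS rel, page, tuple, pid
FROM pg_locks
WHERE mode = 'SIReadLock';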
@@ -679,12 +679,13 @@ ERROR: could not serialize access due to read/write dependencies among transact
 <para>
 Consistent use of Serializable transactions can simplify development.
-The guarantee that any set of concurrent serializable transactions will
-have the same effect as if they were run one at a time means that if
-you can demonstrate that a single transaction, as written, will do the
-right thing when run by itself, you can have confidence that it will
-do the right thing in any mix of serializable transactions, even without
-any information about what those other transactions might do. It is
+The guarantee that any set of successfully committed concurrent
+Serializable transactions will have the same effect as if they were run
+one at a time means that if you can demonstrate that a single transaction,
+as written, will do the right thing when run by itself, you can have
+confidence that it will do the right thing in any mix of Serializable
+transactions, even without any information about what those other
+transactions might do, or it will not successfully commit. It is
 important that an environment which uses this technique have a
 generalized way of handling serialization failures (which always return
 with a SQLSTATE value of '40001'), because it will be very hard to
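To show what that generalized handler has to cope with, here is a minimal two-session sketch (schema invented; exactly which COMMIT fails depends on timing) of the classic write-skew anomaly that Serializable detects:

-- Setup, run once:
CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean NOT NULL);
INSERT INTO doctors VALUES ('alice', true), ('bob', true);

-- Session 1:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;               -- sees 2
UPDATE doctors SET on_call = false WHERE name = 'alice';

-- Session 2, concurrently:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;               -- also sees 2
UPDATE doctors SET on_call = false WHERE name = 'bob';
COMMIT;                                                   -- succeeds

-- Session 1:
COMMIT;
-- ERROR: could not serialize access due to read/write dependencies
-- among transactions
-- SQLSTATE 40001: the application's generic handler should simply
-- re-run the whole transaction from BEGIN.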
@@ -698,6 +699,26 @@ ERROR: could not serialize access due to read/write dependencies among transact
 for some environments.
 </para>

+<para>
+While <productname>PostgreSQL</>'s Serializable transaction isolation
+level only allows concurrent transactions to commit if it can prove there
+is a serial order of execution that would produce the same effect, it
+doesn't always prevent errors from being raised that would not occur in
+true serial execution. In particular, it is possible to see unique
+constraint violations caused by conflicts with overlapping Serializable
+transactions even after explicitly checking that the key isn't present
+before attempting to insert it. This can be avoided by making sure
+that <emphasis>all</> Serializable transactions that insert potentially
+conflicting keys explicitly check if they can do so first. For example,
+imagine an application that asks the user for a new key and then checks
+that it doesn't exist already by trying to select it first, or generates
+a new key by selecting the maximum existing key and adding one. If some
+Serializable transactions insert new keys directly without following this
+protocol, unique constraint violations might be reported even in cases
+where they could not occur in a serial execution of the concurrent
+transactions.
+</para>
+
 <para>
 For optimal performance when relying on Serializable transactions for
 concurrency control, these issues should be considered:
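The protocol the new paragraph describes, rendered as a hedged SQL sketch (schema invented): because every inserter reads the existing keys first, the read/write dependency is visible to the predicate-locking machinery, so a race between two such transactions ends in a retryable serialization failure rather than a unique-constraint error:

CREATE TABLE tickets (id integer PRIMARY KEY, body text);

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Generate the key by reading the existing keys first; the read takes
-- an SIRead lock, so an overlapping inserter conflicts detectably.
INSERT INTO tickets
SELECT coalesce(max(id), 0) + 1, 'first ticket' FROM tickets;
COMMIT;
-- If another Serializable transaction raced this one using the same
-- protocol, one of the two fails with SQLSTATE 40001 (retry the
-- transaction), not with a unique violation (23505).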