mirror of https://github.com/postgres/postgres.git synced 2025-06-19 04:21:08 +03:00
Commit Graph

31631 Commits

Author SHA1 Message Date
07044efe00 Remove bogus SCRAM_ITERATION_LEN constant.
It was not used for what the comment claimed, at all. It was actually used
as the 'base' argument to strtol(), when reading the iteration count. We
don't need a constant for base-10, so remove it.
2017-04-06 17:41:48 +03:00
cd0cebaf7d Always SnapshotResetXmin() during ClearTransaction()
Avoid corner cases during 2PC with 6bad580d9e
2017-04-06 10:30:22 -04:00
3217327053 Identity columns
This is the SQL standard-conforming variant of PostgreSQL's serial
columns.  It fixes a few usability issues that serial columns have:

- CREATE TABLE / LIKE copies default but refers to same sequence
- cannot add/drop serialness with ALTER TABLE
- dropping default does not drop sequence
- need to grant separate privileges to sequence
- other slight weirdnesses because serial is some kind of special macro

Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
2017-04-06 08:41:37 -04:00
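As a rough illustration of the user-visible difference described above (the table and column names are made up, not from the commit):

    -- serial: a macro-like shorthand that creates and owns a separate sequence
    CREATE TABLE orders_old (id serial PRIMARY KEY, item text);

    -- identity: SQL-standard syntax; the generation behavior belongs to the
    -- column itself and can be added, changed, or dropped with ALTER TABLE
    CREATE TABLE orders (id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, item text);
    ALTER TABLE orders ALTER COLUMN id SET GENERATED ALWAYS;
    ALTER TABLE orders ALTER COLUMN id DROP IDENTITY;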
6bad580d9e Avoid SnapshotResetXmin() during AtEOXact_Snapshot()
For normal commits and aborts we already reset PgXact->xmin,
so we can simply avoid running SnapshotResetXmin() twice.

During performance tests by Alexander Korotkov, diagnosis
by Andres Freund showed PgXact array as a bottleneck. After
manual analysis by me of the code paths that touch those
memory locations, I was able to identify extraneous code
in the main transaction commit path.

Avoiding touching highly contended shmem improves concurrent
performance slightly on all workloads, confirmed by tests
run by Ashutosh Sharma and Alexander Korotkov.

Simon Riggs

Discussion: CANP8+jJdXE9b+b9F8CQT-LuxxO0PBCB-SZFfMVAdp+akqo4zfg@mail.gmail.com
2017-04-06 08:31:52 -04:00
fd01983594 Remove dead code and fix comments in fast-path function handling.
HandleFunctionRequest() is no longer responsible for reading the protocol
message from the client, since commit 2b3a8b20c2. Fix the outdated
comments.

HandleFunctionRequest() now always returns 0, because the code that used
to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer
needs to check the return value.

Reported by Andres Freund. Backpatch to all supported versions, even though
this doesn't have any user-visible effect, to make backporting future
patches in this area easier.

Discussion: https://www.postgresql.org/message-id/20170405010525.rt5azbya5fkbhvrx@alap3.anarazel.de
2017-04-06 09:09:39 +03:00
5c21ad07cc Code review for recent slot.c changes. 2017-04-05 21:00:29 -07:00
df1a699e5b Fix integer-overflow problems in interval comparison.
When using integer timestamps, the interval-comparison functions tried
to compute the overall magnitude of an interval as an int64 number of
microseconds.  As reported by Frazer McLean, this overflows for intervals
exceeding about 296000 years, which is bad since we nominally allow
intervals many times larger than that.  That results in wrong comparison
results, and possibly in corrupted btree indexes for columns containing
such large interval values.

To fix, compute the magnitude as int128 instead.  Although some compilers
have native support for int128 calculations, many don't, so create our
own support functions that can do 128-bit addition and multiplication
if the compiler support isn't there.  These support functions are designed
with an eye to allowing the int128 code paths in numeric.c to be rewritten
for use on all platforms, although this patch doesn't do that, or even
provide all the int128 primitives that will be needed for it.

Back-patch as far as 9.4.  Earlier releases did not guard against overflow
of interval values at all (commit 146604ec4 fixed that), so it seems not
very exciting to worry about overly-large intervals for them.

Before 9.6, we did not assume that unreferenced "static inline" functions
would not draw compiler warnings, so omit functions not directly referenced
by timestamp.c, the only present consumer of int128.h.  (We could have
omitted these functions in HEAD too, but since they were written and
debugged on the way to the present patch, and they look likely to be needed
by numeric.c, let's keep them in HEAD.)  I did not bother to try to prevent
such warnings in a --disable-integer-datetimes build, though.

Before 9.5, configure will never define HAVE_INT128, so the part of
int128.h that exploits a native int128 implementation is dead code in the
9.4 branch.  I didn't bother to remove it, thinking that keeping the file
looking similar in different branches is more useful.

In HEAD only, add a simple test harness for int128.h in src/tools/.

In back branches, this does not change the float-timestamps code path.
That's not subject to the same kind of overflow risk, since it computes
the interval magnitude as float8.  (No doubt, when this code was originally
written, overflow was disregarded for exactly that reason.)  There is a
precision hazard instead :-(, but we'll avert our eyes from that question,
since no complaints have been reported and that code's deprecated anyway.

Kyotaro Horiguchi and Tom Lane

Discussion: https://postgr.es/m/1490104629.422698.918452336.26FA96B7@webmail.messagingengine.com
2017-04-05 23:51:27 -04:00
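For context, a comparison of the kind this commit fixes; the values are illustrative, chosen to exceed the roughly 296000-year threshold at which an int64 microsecond magnitude overflows:

    -- both operands are past the overflow threshold; with the int128 code
    -- paths the comparison result stays correct
    SELECT interval '300000 years' > interval '299999 years';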
68ea2b7f9b Reduce lock level for CREATE STATISTICS
In line with other lock reductions related to planning.

Simon Riggs
2017-04-05 18:22:32 -04:00
2686ee1b7c Collect and use multi-column dependency stats
Follow on patch in the multi-variate statistics patch series.

CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
ANALYZE;
will collect dependency stats on (a, b) and then use the measured
dependency in subsequent query planning.

Commit 7b504eb282 added
CREATE STATISTICS with n-distinct coefficients. These are now
specified using the mutually exclusive option WITH (ndistinct).

Author: Tomas Vondra, David Rowley
Reviewed-by: Kyotaro HORIGUCHI, Álvaro Herrera, Dean Rasheed, Robert Haas
and many other comments and contributions
Discussion: https://postgr.es/m/56f40b20-c464-fad2-ff39-06b668fac47c@2ndquadrant.com
2017-04-05 18:00:42 -04:00
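A hedged sketch of how the new dependency statistics might be exercised; the CREATE STATISTICS/ANALYZE lines follow the syntax quoted in the commit message above, and the query is made up:

    CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
    ANALYZE t;
    -- with the functional dependency between a and b recorded, the planner no
    -- longer multiplies the two selectivities as if the columns were independent
    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1;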
ed770c325c Spelling mistake in comment in utility.c 2017-04-05 14:29:29 -04:00
633e15ea0f Fix pageinspect failures on hash indexes.
Make every page in a hash index which isn't all-zeroes have a valid
special space, so that tools like pageinspect don't error out.

Also, make pageinspect cope with all-zeroes pages, because
_hash_alloc_buckets can leave behind large numbers of those until
they're consumed by splits.

Ashutosh Sharma and Robert Haas, reviewed by Amit Kapila.
Original trouble report from Jeff Janes.

Discussion: http://postgr.es/m/CAMkU=1y6NjKmqbJ8wLMhr=F74WzcMALYWcVFhEpm7i=mV=XsOg@mail.gmail.com
2017-04-05 14:18:15 -04:00
6785fbd60f Use American English in error message
All other error messages use the American English spelling of recognize;
change the single one that did not, for consistency.

Author: Daniel Gustafsson <daniel@yesql.se>
2017-04-05 14:06:15 -04:00
75a1cbdc3c hash: Fix write-ahead logging bug.
The size of the data is not the same thing as the size of the size of
the data.

Reported off-list by Tushar Ahuja.  Fix by Ashutosh Sharma, reviewed
by Amit Kapila.

Discussion: http://postgr.es/m/CAE9k0PnmPDXfvf8HDObme7q_Ewc4E26ukHXUBPySoOs0ObqqaQ@mail.gmail.com
2017-04-05 11:45:35 -04:00
4deb413813 Add isolation test for SERIALIZABLE READ ONLY DEFERRABLE.
This improves code coverage and lays a foundation for testing
similar issues in a distributed environment.

Author: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-04-05 10:04:36 -05:00
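For reference, the statement form being exercised by the new test (the syntax itself predates this commit):

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    -- the transaction waits for a snapshot known to be free of serialization
    -- anomalies, then runs without taking predicate locks
    SELECT count(*) FROM pg_class;
    COMMIT;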
afd79873a0 Capitalize names of PLs consistently
Author: Daniel Gustafsson <daniel@yesql.se>
2017-04-05 00:38:25 -04:00
193f5f9e91 pageinspect: Add bt_page_items function with bytea argument
Author: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
2017-04-04 23:52:55 -04:00
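A sketch of how the new bytea variant might be called, assuming an existing btree index named some_btree_idx; get_raw_page() is the pageinspect function that returns a page as bytea:

    -- fetch block 1 of the index as raw bytes, then decode its items
    SELECT * FROM bt_page_items(get_raw_page('some_btree_idx', 1));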
5ebeb579b9 Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables.  In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data.  Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql.  That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code.  Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)
2017-04-04 18:36:39 -05:00
9a3215026b Make min_wal_size/max_wal_size use MB internally
Previously they were defined using multiples of XLogSegSize.
Remove GUC_UNIT_XSEGS. Introduce GUC_UNIT_MB

Extracted from patch series on XLogSegSize infrastructure.

Beena Emerson
2017-04-04 18:00:01 -04:00
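Usage of these GUCs is unchanged by the internal unit switch; for example (the values are arbitrary):

    ALTER SYSTEM SET min_wal_size = '512MB';
    ALTER SYSTEM SET max_wal_size = '2GB';
    SELECT pg_reload_conf();
    SHOW max_wal_size;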
cd740c0dbf Fix uninitialized variables in twophase.c 2017-04-04 17:50:02 -04:00
490e9a98ff Fix two valgrind issues in slab allocator.
During allocation VALGRIND_MAKE_MEM_DEFINED was called with a pointer
as size. That kind of works, but makes valgrind exceedingly slow for
workloads involving the slab allocator.

Secondly there was an access to memory marked as unreachable within
SlabCheck(). Fix that too.

Author: Tomas Vondra
Discussion: https://postgr.es/m/a6543b6d-6015-99b1-63ef-3ed55a76a730@2ndquadrant.com
2017-04-04 14:26:42 -07:00
728bd991c3 Speedup 2PC recovery by skipping two phase state files in normal path
2PC state info is held in shmem at PREPARE and cleaned up at COMMIT
PREPARED/ABORT PREPARED, avoiding writing/fsyncing any state information to
disk in the normal path and greatly enhancing replay speed.  Prepared
transactions that live past one checkpoint redo horizon are still written to
disk, as before.
Similar conceptually to 978b2f65aa and building upon
the infrastructure created by that commit.

Authors, in equal measure: Stas Kelvich, Nikhil Sontakke and Michael Paquier
Discussion: https://postgr.es/m/CAMGcDxf8Bn9ZPBBJZba9wiyQq-Qk5uqq=VjoMnRnW5s+fKST3w@mail.gmail.com
2017-04-04 15:56:56 -04:00
60a0b2ec89 Adjust min/max values when changing sequence type
When changing the type of a sequence, adjust the min/max values of the
sequence if it looks like the previous values were the default values.
Previously, it would leave the old values in place, requiring manual
adjustments even in the usual/default cases.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
2017-04-04 12:49:39 -04:00
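A minimal sketch of the new behavior (the sequence name is illustrative):

    CREATE SEQUENCE s;             -- bigint by default, maxvalue 9223372036854775807
    ALTER SEQUENCE s AS smallint;  -- maxvalue is adjusted to 32767 automatically
    ALTER SEQUENCE s AS integer;   -- ... and to 2147483647 here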
a9a7949134 Fix thinko in BitmapAdjustPrefetchIterator.
Dilip Kumar

Discussion: http://postgr.es/m/CAFiTN-uKAvRhWprb0i-U9zFOekgQRRwqjP1wvOBsKZb-UEKbug@mail.gmail.com
2017-04-04 09:07:18 -04:00
d1f103c739 Fix typo
Author: Masahiko Sawada <sawada.mshk@gmail.com>
2017-04-04 09:03:24 -04:00
553c3bef4c psql: Add some missing tab completion
Add tab completion for COMMENT/SECURITY LABEL ON
PUBLICATION/SUBSCRIPTION.

Reported-by: Stephen Frost <sfrost@snowman.net>
2017-04-04 08:59:13 -04:00
e9c81b6016 Remove --verbose from PROVE_FLAGS
Per discussion, the TAP tests are really more verbose than necessary, so
remove the --verbose flag from PROVE_FLAGS.  Also add comments to let
folks know how they can enable it if they really wish to, as suggested
by Craig Ringer.

Author: Michael Paquier, additional comments by me.
Discussion: https://postgr.es/m/CAMsr%2BYGAzcMDOZ_BirnMCL6Sb%3DMUjP0FRE82YBDSbXcf6pm9Yg%40mail.gmail.com
2017-04-04 08:42:09 -04:00
fe7bbc4ddb Fix remote position tracking in logical replication
We need to set the origin remote position to end_lsn, not commit_lsn, since
commit_lsn is the start of the commit record and we use the origin remote
position as the start position when restarting the replication stream.  If we
used commit_lsn, we could request data that we had already received from the
remote server after a crash of a downstream server.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2017-04-04 08:24:32 -04:00
b38006ef6d Fix formula in _hash_spareindex.
This was correct in earlier versions of the patch that led to
commit ea69a0dead, but somehow got
broken in the last version which I actually committed.

Mithun Cy, per an off-list report from Ashutosh Sharma

Discussion: http://postgr.es/m/CAD__OujbAwNU71v1y-RoQxZ8LZ6-V2UFTkex3v34MK6uZ3Xb5w@mail.gmail.com
2017-04-04 07:45:04 -04:00
ea69a0dead Expand hash indexes more gradually.
Since hash indexes typically have very few overflow pages, adding a
new splitpoint essentially doubles the on-disk size of the index,
which can lead to large and abrupt increases in disk usage (and
perhaps long delays on occasion).  To mitigate this problem to some
degree, divide larger splitpoints into four equal phases.  This means
that, for example, instead of growing from 4GB to 8GB all at once, a
hash index will now grow from 4GB to 5GB to 6GB to 7GB to 8GB, which
is perhaps still not as smooth as we'd like but certainly an
improvement.

This changes the on-disk format of the metapage, so bump HASH_VERSION
from 2 to 3.  This will force a REINDEX of all existing hash indexes,
but that's probably a good idea anyway.  First, hash indexes from
pre-10 versions of PostgreSQL could easily be corrupted, and we don't
want to confuse corruption carried over from an older release with any
corruption caused despite the new write-ahead logging in v10.  Second,
it will let us remove some backward-compatibility code added by commit
293e24e507.

Mithun Cy, reviewed by Amit Kapila, Jesper Pedersen and me.  Regression
test outputs updated by me.

Discussion: http://postgr.es/m/CAD__OuhG6F1gQLCgMQNnMNgoCvOLQZz9zKYJQNYvYmmJoM42gA@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoYty0jCf-pa+m+vYUJ716+AxM7nv_syvyanyf5O-L_i2A@mail.gmail.com
2017-04-03 23:46:33 -04:00
c8b5c3cb06 Update comment.
Craig Ringer, reviewed by me.
2017-04-03 23:07:31 -04:00
7cdf6668cf Print new RelOptInfo field top_parent_relids in outfuncs.c
I intended to include this adjustment in the previous commit
(7a39b5e4d1) but messed up.
2017-04-03 23:06:36 -04:00
7a39b5e4d1 Abstract logic to allow for multiple kinds of child rels.
Currently, the only type of child relation is an "other member rel",
which is the child of a baserel, but in the future joins and even
upper relations may have child rels.  To facilitate that, introduce
macros that test for particular RelOptKind values, and use
them in various places where they help to clarify the sense of a test.
(For example, a test may allow RELOPT_OTHER_MEMBER_REL either because
it intends to allow child rels, or because it intends to allow simple
rels.)

Also, remove find_childrel_top_parent, which will not work for a
child rel that is not a baserel.  Instead, add a new RelOptInfo
member top_parent_relids to track the same kind of information in a
more generic manner.

Ashutosh Bapat, slightly tweaked by me.  Review and testing of the
patch set from which this was taken by Rajkumar Raghuwanshi and Rafia
Sabih.

Discussion: http://postgr.es/m/CA+TgmoagTnF2yqR3PT2rv=om=wJiZ4-A+ATwdnriTGku1CLYxA@mail.gmail.com
2017-04-03 22:41:31 -04:00
93cd7684ee Properly acquire buffer lock for page-at-a-time hash vacuum.
In a couple of places, _hash_kill_items was mistakenly called with
the buffer lock not held.  Repair.

Ashutosh Sharma, per a report from Andreas Seltenreich

Discussion: http://postgr.es/m/87o9wo8o0j.fsf@credativ.de
2017-04-03 22:26:06 -04:00
f578093526 Try and silence spurious Coverity warning.
gset_data (aka gd) in planner.c is always non-null if and only if
parse->groupingSets is non-null, but Coverity doesn't know that and
complains.  Feed it an assertion to see if that keeps it happy.
2017-04-03 23:30:24 +01:00
9fa6e08d4a Make header self-contained
Add necessary include files for things used in the header.
2017-04-03 16:17:45 -04:00
8df9994bc3 Fix whitespace 2017-04-03 11:12:48 -04:00
1116108c92 Handle change of slot name in logical replication apply
Since change of slot name is a supported operation, handle it more
gracefully, instead of in the this-should-not-happen way.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2017-04-03 11:10:28 -04:00
cd6baed781 Remove reinvention of stringify macro.
We already have CppAsString2, there's no need for the MSVC support to
re-invent a macro to do that (and especially not to inject it in as
ugly a way as this).

Discussion: https://postgr.es/m/CADkLM=c+hm2rc0tkKgC-ZgrLttHT2KkfppE+BC-=i-xj+7V-TQ@mail.gmail.com
2017-04-02 19:19:16 -04:00
5dbc5da118 Fix behavior of psql's \p to agree with \g, \w, etc.
In commit e984ef586 I (tgl) simplified the behavior of \p to just print
the current query buffer; but Daniel Vérité points out that this made it
inconsistent with the behavior of \g and \w.  It should print the same
thing \g would execute.  Fix that, and improve related comments.

Daniel Vérité

Discussion: https://postgr.es/m/9b4ea968-753f-4b5f-b46c-d7d3bf7c8f90@manitou-mail.org
2017-04-02 16:50:25 -04:00
130ae4a547 Fix some typos and spelling errors in comments
Author: Erik Rijkers
2017-04-02 19:55:28 +02:00
f833c847b8 Allow psql variable substitution to occur in backtick command strings.
Previously, text between backquotes in a psql metacommand's arguments
was always passed to the shell literally.  That considerably hobbles
the usefulness of the feature for scripting, so we'd foreseen for a long
time that we'd someday want to allow substitution of psql variables into
the shell command.  IMO the addition of \if metacommands has brought us to
that point, since \if can greatly benefit from some sort of client-side
expression evaluation capability, and psql itself is not going to grow any
such thing in time for v10.  Hence, this patch.  It allows :VARIABLE to be
replaced by the exact contents of the named variable, while :'VARIABLE'
is replaced by the variable's contents suitably quoted to become a single
shell-command argument.  (The quoting rules for that are different from
those for SQL literals, so this is a bit of an abuse of the :'VARIABLE'
notation, but I doubt anyone will be confused.)

As with other situations in psql, no substitution occurs if the word
following a colon is not a known variable name.  That limits the risk of
compatibility problems for existing psql scripts; but the risk isn't zero,
so this needs to be called out in the v10 release notes.

Discussion: https://postgr.es/m/9561.1490895211@sss.pgh.pa.us
2017-04-01 21:44:54 -04:00
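A small psql sketch of the substitution rules described above (the file and variable names are made up):

    \set datafile 'my data.csv'
    -- inside the backticks, :'datafile' is replaced by the variable's value,
    -- quoted so the shell sees it as a single argument
    \set nlines `wc -l < :'datafile'`
    \echo :datafile has :nlines lines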
41bd155dd6 Fix two undocumented parameters to functions from ENR patch.
On ProcessUtility document the parameter, to match others.

On CreateCachedPlan drop the queryEnv parameter.  It was not
referenced within the function, and had been added on the
assumption that with some unknown future usage of QueryEnvironment
it might be useful to do something there.  We have avoided other
"just in case" implementation of unused paramters, so drop it here.

Per gripe from Tom Lane
2017-04-01 15:21:05 -05:00
c655899ba9 BRIN de-summarization
When the BRIN summary tuple for a page range becomes too "wide" for the
values actually stored in the table (because the tuples that were
present originally are no longer present due to updates or deletes), it
can be useful to remove the outdated summary tuple, so that a future
summarization can install a tighter summary.

This commit introduces a SQL-callable interface to do so.

Author: Álvaro Herrera
Reviewed-by: Eiji Seki
Discussion: https://postgr.es/m/20170228045643.n2ri74ara4fhhfxf@alvherre.pgsql
2017-04-01 16:10:04 -03:00
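A hedged example of the SQL-callable interface mentioned above; the function name and signature (brin_desummarize_range(index regclass, blockno bigint)) and the index name are assumed for illustration:

    -- drop the now-too-wide summary tuple for the page range starting at block 0;
    -- a later summarization can then install a tighter summary
    SELECT brin_desummarize_range('my_brin_idx', 0);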
3a82129a40 Fix expected output
Previous commit had a thinko in the expected output for new tests.

Per buildfarm
2017-04-01 16:00:11 -03:00
7526e10224 BRIN auto-summarization
Previously, only VACUUM would cause a page range to get initially
summarized by BRIN indexes, which for some use cases takes too long after
the inserts occur.  To avoid the delay, have brininsert request a
summarization run for the previous range as soon as the first tuple is
inserted into the first page of the next range.  Autovacuum is in charge
of processing these requests, after doing all the regular vacuuming/
analyzing work on tables.

This doesn't impose any new tasks on autovacuum, because autovacuum was
already in charge of doing summarizations.  The only actual effect is to
change the timing, i.e. that it occurs earlier.  For this reason, we
don't go to any great lengths to record these requests very robustly; if
they are lost because of a server crash or restart, they will happen at
a later time anyway.

Most of the new code here is in autovacuum, which can now be told about
"work items" to process.  This can be used for other things such as GIN
pending list cleaning, perhaps visibility map bit setting, both of which
are currently invoked during vacuum, but do not really depend on vacuum
taking place.

The requests are at the page range level, a granularity for which we did
not have SQL-level access; we only had index-level summarization
requests via brin_summarize_new_values().  It seems reasonable to add
SQL-level access to range-level summarization too, so add a function
brin_summarize_range() to do that.

Authors: Álvaro Herrera, based on sketch from Simon Riggs.
Reviewed-by: Thomas Munro.
Discussion: https://postgr.es/m/20170301045823.vneqdqkmsd4as4ds@alvherre.pgsql
2017-04-01 14:00:53 -03:00
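A short example of the range-level function named above (the index name is illustrative):

    -- summarize just the page range containing block 0 of the table, if it is
    -- not summarized yet; returns the number of page ranges summarized
    SELECT brin_summarize_range('my_brin_idx', 0);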
7220c7b3e5 Write "waiting for checkpoint" on regular progress row
When reporting progress, make the "waiting for checkpoint" text be
overwritten by the file-based progress once it's completed. This is more
consistent with how we report the rest of the progress.

Suggested by Jeff Janes
2017-04-01 17:04:14 +02:00
5970271632 Add transition table support to plpgsql.
Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
2017-03-31 23:30:08 -05:00
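A minimal sketch of the plpgsql-level feature (all names are made up; REFERENCING ... NEW TABLE is the standard transition-table clause, and the PG10-era spelling is EXECUTE PROCEDURE):

    CREATE TABLE events (id int, payload text);

    CREATE FUNCTION report_inserted() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- newrows is a transient relation visible to queries in the trigger body
      RAISE NOTICE 'inserted % row(s)', (SELECT count(*) FROM newrows);
      RETURN NULL;
    END
    $$;

    CREATE TRIGGER events_audit
      AFTER INSERT ON events
      REFERENCING NEW TABLE AS newrows
      FOR EACH STATEMENT
      EXECUTE PROCEDURE report_inserted();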
18ce3a4ab2 Add infrastructure to support EphemeralNamedRelation references.
A QueryEnvironment concept is added, which allows new types of
objects to be passed into queries from parsing on through
execution.  At this point, the only thing implemented is a
collection of EphemeralNamedRelation objects -- relations which
can be referenced by name in queries, but do not exist in the
catalogs.  The only type of ENR implemented is NamedTuplestore, but
provision is made to add more types fairly easily.

An ENR can carry its own TupleDesc or reference a relation in the
catalogs by relid.

Although these features can be used without SPI, convenience
functions are added to SPI so that ENRs can easily be used by code
run through SPI.

The initial use of all this is going to be transition tables in
AFTER triggers, but that will be added to each PL as a separate
commit.

An incidental effect of this patch is to produce a more informative
error message if an attempt is made to modify the contents of a CTE
from a referencing DML statement.  No tests previously covered that
possibility, so one is added.

Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
2017-03-31 23:17:18 -05:00
25dc142a49 Avoid GatherMerge crash when there are no workers.
It's unnecessary to return an actual slot when we have no tuple.
We can just return NULL, which avoids the risk of indexing into an
array that might not contain any elements.

Rushabh Lathia, per a report from Tomas Vondra

Discussion: http://postgr.es/m/6ecd6f17-0dcf-1de7-ded8-0de7db1ddc88@2ndquadrant.com
2017-03-31 21:15:05 -04:00
7d8f6986b8 Fix parallel query so it doesn't spoil row estimates above Gather.
Commit 45be99f8cd removed GatherPath's
num_workers field, but this is entirely bogus.  Normally, a path's
parallel_workers flag is supposed to indicate the number of workers
that it wants, and should be 0 for a non-partial path.  In that
commit, I mistakenly thought that GatherPath could also use that field
to indicate the number of workers that it would try to start, but
that's disastrous, because then it can propagate up to higher nodes in
the plan tree, which will then get incorrect rowcounts because the
parallel_workers flag is involved in computing those values.  Repair
by putting the separate field back.

Report by Tomas Vondra.  Patch by me, reviewed by Amit Kapila.

Discussion: http://postgr.es/m/f91b4a44-f739-04bd-c4b6-f135bd643669@2ndquadrant.com
2017-03-31 21:01:20 -04:00