mirror of https://github.com/postgres/postgres.git synced 2025-11-24 00:23:06 +03:00

Allow configurable LZ4 TOAST compression.

There is now a per-column COMPRESSION option which can be set to pglz
(the default, and the only option up until now) or lz4. Or, if you
like, you can set the new default_toast_compression GUC to lz4, and
then that will be the default for new table columns for which no value
is specified. LZ4 support is not built into the PostgreSQL code
itself, so to use lz4 compression, PostgreSQL must be built --with-lz4.

In general, TOAST compression means compression of individual column
values, not the whole tuple, and those values can either be compressed
inline within the tuple or compressed and then stored externally in
the TOAST table, so those properties also apply to this feature.

Prior to this commit, a TOAST pointer has two unused bits as part of
the va_extsize field, and a compressed datum has two unused bits as
part of the va_rawsize field. These bits are unused because the length
of a varlena is limited to 1GB; we now use them to indicate the
compression type that was used. This means we only have bit space for
2 more built-in compression types, but we could work around that
problem, if necessary, by introducing a new vartag_external value for
any further types we end up wanting to add. Hopefully, it won't be
too important to offer a wide selection of algorithms here, since
each one we add not only takes more coding but also adds a build
dependency for every packager. Nevertheless, it seems worth doing
at least this much, because LZ4 gets better compression than PGLZ
with less CPU usage.

It's possible for LZ4-compressed datums to leak into composite type
values stored on disk, just as it is for PGLZ. It's also possible for
LZ4-compressed attributes to be copied into a different table via SQL
commands such as CREATE TABLE AS or INSERT .. SELECT.  It would be
expensive to force such values to be decompressed, so PostgreSQL has
never done so. For the same reasons, we also don't force recompression
of already-compressed values even if the target table prefers a
different compression method than was used for the source data.  These
architectural decisions are perhaps arguable but revisiting them is
well beyond the scope of what seemed possible to do as part of this
project.  However, it's relatively cheap to recompress as part of
VACUUM FULL or CLUSTER, so this commit adjusts those commands to do
so, if the configured compression method of the table happens not to
match what was used for some column value stored therein.

Dilip Kumar. The original patches on which this work was based were
written by Ildus Kurbangaliev, and those patches were based on
even earlier work by Nikita Glukhov, but the design has since changed
very substantially, since allowing a potentially large number of
compression methods that could be added and dropped on a running
system proved too problematic given some of the architectural issues
mentioned above; the choice of which specific compression method to
add first is now different; and a lot of the code has been heavily
refactored.  More recently, Justin Pryzby helped quite a bit with
testing and reviewing and this version also includes some code
contributions from him. Other design input and review from Tomas
Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander
Korotkov, and me.

Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain
Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
This commit is contained in:
Robert Haas
2021-03-19 15:10:38 -04:00
parent e589c4890b
commit bbe0a81db6
61 changed files with 2261 additions and 160 deletions


@@ -240,14 +240,20 @@ detoast_attr_slice(struct varlena *attr,
 		 */
 		if (slicelimit >= 0)
 		{
-			int32		max_size;
+			int32		max_size = VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer);
 
 			/*
 			 * Determine maximum amount of compressed data needed for a prefix
 			 * of a given length (after decompression).
+			 *
+			 * At least for now, if it's LZ4 data, we'll have to fetch the
+			 * whole thing, because there doesn't seem to be an API call to
+			 * determine how much compressed data we need to be sure of being
+			 * able to decompress the required slice.
 			 */
-			max_size = pglz_maximum_compressed_size(slicelimit,
-													toast_pointer.va_extsize);
+			if (VARATT_EXTERNAL_GET_COMPRESSION(toast_pointer) ==
+				TOAST_PGLZ_COMPRESSION_ID)
+				max_size = pglz_maximum_compressed_size(slicelimit, max_size);
 
 			/*
 			 * Fetch enough compressed slices (compressed marker will get set
@@ -347,7 +353,7 @@ toast_fetch_datum(struct varlena *attr)
 	/* Must copy to access aligned fields */
 	VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);
 
-	attrsize = toast_pointer.va_extsize;
+	attrsize = VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer);
 
 	result = (struct varlena *) palloc(attrsize + VARHDRSZ);
@@ -408,7 +414,7 @@ toast_fetch_datum_slice(struct varlena *attr, int32 sliceoffset,
 	 */
 	Assert(!VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer) || 0 == sliceoffset);
 
-	attrsize = toast_pointer.va_extsize;
+	attrsize = VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer);
 
 	if (sliceoffset >= attrsize)
 	{
@@ -418,8 +424,8 @@ toast_fetch_datum_slice(struct varlena *attr, int32 sliceoffset,
 	/*
 	 * When fetching a prefix of a compressed external datum, account for the
-	 * rawsize tracking amount of raw data, which is stored at the beginning
-	 * as an int32 value).
+	 * space required by va_tcinfo, which is stored at the beginning as an
+	 * int32 value.
 	 */
 	if (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer) && slicelength > 0)
 		slicelength = slicelength + sizeof(int32);
@@ -464,21 +470,24 @@ toast_fetch_datum_slice(struct varlena *attr, int32 sliceoffset,
 static struct varlena *
 toast_decompress_datum(struct varlena *attr)
 {
-	struct varlena *result;
+	ToastCompressionId cmid;
 
 	Assert(VARATT_IS_COMPRESSED(attr));
 
-	result = (struct varlena *)
-		palloc(TOAST_COMPRESS_RAWSIZE(attr) + VARHDRSZ);
-	SET_VARSIZE(result, TOAST_COMPRESS_RAWSIZE(attr) + VARHDRSZ);
-	if (pglz_decompress(TOAST_COMPRESS_RAWDATA(attr),
-						TOAST_COMPRESS_SIZE(attr),
-						VARDATA(result),
-						TOAST_COMPRESS_RAWSIZE(attr), true) < 0)
-		elog(ERROR, "compressed data is corrupted");
-
-	return result;
+	/*
+	 * Fetch the compression method id stored in the compression header and
+	 * decompress the data using the appropriate decompression routine.
+	 */
+	cmid = TOAST_COMPRESS_METHOD(attr);
+	switch (cmid)
+	{
+		case TOAST_PGLZ_COMPRESSION_ID:
+			return pglz_decompress_datum(attr);
+		case TOAST_LZ4_COMPRESSION_ID:
+			return lz4_decompress_datum(attr);
+		default:
+			elog(ERROR, "invalid compression method id %d", cmid);
+	}
 }
@@ -492,22 +501,24 @@ toast_decompress_datum(struct varlena *attr)
 static struct varlena *
 toast_decompress_datum_slice(struct varlena *attr, int32 slicelength)
 {
-	struct varlena *result;
-	int32		rawsize;
+	ToastCompressionId cmid;
 
 	Assert(VARATT_IS_COMPRESSED(attr));
 
-	result = (struct varlena *) palloc(slicelength + VARHDRSZ);
-
-	rawsize = pglz_decompress(TOAST_COMPRESS_RAWDATA(attr),
-							  VARSIZE(attr) - TOAST_COMPRESS_HDRSZ,
-							  VARDATA(result),
-							  slicelength, false);
-	if (rawsize < 0)
-		elog(ERROR, "compressed data is corrupted");
-
-	SET_VARSIZE(result, rawsize + VARHDRSZ);
-	return result;
+	/*
+	 * Fetch the compression method id stored in the compression header and
+	 * decompress the data slice using the appropriate decompression routine.
+	 */
+	cmid = TOAST_COMPRESS_METHOD(attr);
+	switch (cmid)
+	{
+		case TOAST_PGLZ_COMPRESSION_ID:
+			return pglz_decompress_datum_slice(attr, slicelength);
+		case TOAST_LZ4_COMPRESSION_ID:
+			return lz4_decompress_datum_slice(attr, slicelength);
+		default:
+			elog(ERROR, "invalid compression method id %d", cmid);
+	}
 }
 
 /* ----------
@@ -589,7 +600,7 @@ toast_datum_size(Datum value)
 		struct varatt_external toast_pointer;
 
 		VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);
-		result = toast_pointer.va_extsize;
+		result = VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer);
 	}
 	else if (VARATT_IS_EXTERNAL_INDIRECT(attr))
 	{