From 1bd5530a59cd8ddbabc279802d1ede4f8fbd5314 Mon Sep 17 00:00:00 2001
From: David Steele
Date: Tue, 2 May 2023 12:57:12 +0300
Subject: [PATCH] Remove double spaces from comments and documentation.

Double spaces have fallen out of favor in recent years because they no
longer contribute to readability. We have been using single spaces and
editing related paragraphs for some time, but now it seems best to update
the remaining instances to avoid churn in unrelated commits and to make
it clearer what spacing contributors should use.
---
 CONTRIBUTING.md | 4 +-
 doc/README.md | 2 +-
 doc/RELEASE.md | 4 +-
 doc/lib/pgBackRestDoc/Common/DocConfig.pm | 2 +-
 doc/lib/pgBackRestDoc/Common/DocRender.pm | 2 +-
 doc/lib/pgBackRestDoc/Common/Ini.pm | 6 +-
 doc/lib/pgBackRestDoc/Custom/DocConfigData.pm | 4 +-
 .../pgBackRestDoc/Custom/DocCustomRelease.pm | 2 +-
 doc/lib/pgBackRestDoc/ProjectInfo.pm | 4 +-
 doc/manifest.xml | 15 +-
 doc/resource/fake-cert/README.md | 2 +-
 doc/resource/git-history.cache | 704 +++++++++---------
 doc/xml/coding.xml | 26 +-
 doc/xml/contributing.xml | 10 +-
 doc/xml/documentation.xml | 6 +-
 doc/xml/faq.xml | 2 +-
 doc/xml/index.xml | 16 +-
 doc/xml/metric.xml | 2 +-
 doc/xml/release.xml | 476 ++++++------
 doc/xml/user-guide.xml | 152 ++--
 src/Makefile.in | 6 +-
 src/build/common/regExp.h | 4 +-
 src/build/configure.ac | 2 +-
 src/build/error/error.yaml | 6 +-
 src/command/archive/common.c | 8 +-
 src/command/archive/common.h | 4 +-
 src/command/archive/get/get.c | 19 +-
 src/command/archive/push/push.c | 16 +-
 src/command/backup/backup.c | 38 +-
 src/command/backup/common.c | 4 +-
 src/command/backup/file.c | 4 +-
 src/command/command.c | 6 +-
 src/command/expire/expire.c | 6 +-
 src/command/remote/remote.c | 4 +-
 src/command/repo/get.c | 6 +-
 src/command/repo/ls.c | 6 +-
 src/command/restore/restore.c | 50 +-
 src/common/assert.h | 4 +-
 src/common/compress/helper.h | 8 +-
 src/common/compress/lz4/compress.c | 8 +-
 src/common/crypto/cipherBlock.c | 16 +-
 src/common/error/error.c | 2 +-
 src/common/error/error.h | 4 +-
 src/common/exec.c | 10 +-
 src/common/fork.c | 2 +-
 src/common/io/filter/buffer.h | 2 +-
 src/common/io/filter/filter.c | 4 +-
 src/common/io/filter/filter.h | 4 +-
 src/common/io/filter/filter.intern.h | 26 +-
 src/common/io/filter/group.c | 22 +-
 src/common/io/filter/group.h | 8 +-
 src/common/io/filter/sink.h | 2 +-
 src/common/io/filter/size.h | 2 +-
 src/common/io/http/client.h | 2 +-
 src/common/io/http/header.c | 4 +-
 src/common/io/http/header.h | 2 +-
 src/common/io/http/request.c | 6 +-
 src/common/io/http/response.c | 8 +-
 src/common/io/io.c | 4 +-
 src/common/io/read.c | 2 +-
 src/common/io/read.h | 8 +-
 src/common/io/write.h | 6 +-
 src/common/lock.c | 2 +-
 src/common/lock.h | 2 +-
 src/common/log.h | 4 +-
 src/common/macro.h | 4 +-
 src/common/memContext.c | 2 +-
 src/common/memContext.h | 24 +-
 src/common/stackTrace.c | 2 +-
 src/common/type/buffer.h | 4 +-
 src/common/type/pack.c | 4 +-
 src/common/type/pack.h | 2 +-
 src/common/type/string.c | 4 +-
 src/common/type/string.h | 4 +-
 src/common/type/variant.c | 2 +-
 src/common/type/variant.h | 2 +-
 src/common/type/xml.c | 4 +-
 src/common/user.h | 14 +-
 src/config/config.h | 20 +-
 src/config/exec.h | 2 +-
 src/config/parse.c | 4 +-
 src/configure | 4 +-
 src/db/db.c | 12 +-
 src/db/db.h | 2 +-
 src/info/info.h | 6 +-
 src/info/manifest.c | 60 +-
 src/info/manifest.h | 6 +-
 src/postgres/client.c | 2 +-
 src/postgres/client.h | 4 +-
 .../interface/pageChecksum.vendor.c.inc | 6 +-
 src/postgres/interface/static.vendor.h | 10 +-
 src/postgres/interface/version.vendor.h | 10 +-
 src/protocol/helper.c | 2 +-
 src/protocol/parallel.c | 2 +-
 src/protocol/parallel.h | 2 +-
 src/protocol/server.c | 2 +-
 src/storage/helper.c | 2 +-
 src/storage/posix/read.c | 6 +-
 src/storage/s3/storage.c | 4 +-
 src/storage/storage.c | 4 +-
 src/storage/storage.h | 12 +-
 src/storage/storage.intern.h | 10 +-
 src/storage/write.c | 2 +-
 src/version.h | 4 +-
 test/certificate/README.md | 6 +-
 test/container.yaml | 4 +-
 test/define.yaml | 10 +-
 test/lib/pgBackRestTest/Common/BuildTest.pm | 2 +-
 .../lib/pgBackRestTest/Common/CoverageTest.pm | 4 +-
 test/lib/pgBackRestTest/Common/Io/Buffered.pm | 4 +-
 test/lib/pgBackRestTest/Common/ListTest.pm | 4 +-
 test/lib/pgBackRestTest/Common/RunTest.pm | 4 +-
 .../lib/pgBackRestTest/Common/StoragePosix.pm | 2 +-
 test/lib/pgBackRestTest/Common/VmTest.pm | 2 +-
 .../pgBackRestTest/Env/Host/HostBackupTest.pm | 8 +-
 test/lib/pgBackRestTest/Env/Manifest.pm | 12 +-
 .../pgBackRestTest/Module/Mock/MockAllTest.pm | 12 +-
 .../pgBackRestTest/Module/Real/RealAllTest.pm | 24 +-
 test/src/common/harnessFork.h | 4 +-
 test/src/common/harnessLog.c | 2 +-
 test/src/common/harnessLog.h | 2 +-
 test/src/common/harnessPq.h | 8 +-
 test/src/common/harnessTest.h | 4 +-
 test/src/module/command/backupTest.c | 8 +-
 test/src/module/command/restoreTest.c | 2 +-
 test/src/module/common/compressTest.c | 2 +-
 test/src/module/common/ioTest.c | 4 +-
 test/src/module/performance/typeTest.c | 6 +-
 test/src/test.c | 2 +-
 test/test.pl | 6 +-
 130 files changed, 1111 insertions(+), 1113 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 6fc4f115f..576e04b4f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -74,7 +74,7 @@ The example below is not structured like an actual implementation and is intende
 #### Example: hypothetical basic object construction
 ```c
 /*
- * HEADER FILE - see db.h for a complete implementation example
+ * HEADER FILE - see db.h for a complete implementation example
 */
 
 // Typedef the object declared in the C file
@@ -646,7 +646,7 @@ To add an option, add the following to the `` section; if it does n
diff --git a/doc/README.md b/doc/README.md
index 0cab82a02..cbb2be2f1 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -58,7 +58,7 @@ Ubuntu 16.04:
 ```
 RHEL 7:
 ```bash
-./doc.pl --out=html --include=user-guide --no-cache --var=os-type=rhel --var=os-image=centos:7 --var=package=test/package/pgbackrest-2.08-1.el7.x86_64.rpm
+./doc.pl --out=html --include=user-guide
--no-cache --var=os-type=rhel --var=os-image=centos:7 --var=package=test/package/pgbackrest-2.08-1.el7.x86_64.rpm
 ```
 RHEL 8:
 ```bash
diff --git a/doc/RELEASE.md b/doc/RELEASE.md
index 776ef7c8d..b52b91e2e 100644
--- a/doc/RELEASE.md
+++ b/doc/RELEASE.md
@@ -31,7 +31,7 @@ to:
 pgbackrest/test/test.pl --code-count
 ```
-## Build release documentation. Be sure to install latex using the instructions from the Vagrantfile before running this step.
+## Build release documentation. Be sure to install latex using the instructions from the Vagrantfile before running this step.
 ```
 pgbackrest/doc/release.pl --build
 ```
@@ -133,7 +133,7 @@ v2.14: Bug Fix and Improvements
 - Add user guide for Debian.
 ```
-The first line will be the release title and the rest will be the body. The tag field should be updated with the current version so a tag is created from main. **Be sure to select the release commit explicitly rather than auto-tagging the last commit in main!**
+The first line will be the release title and the rest will be the body. The tag field should be updated with the current version so a tag is created from main. **Be sure to select the release commit explicitly rather than auto-tagging the last commit in main!**
 
 ## Push web documentation to main and deploy
 ```
diff --git a/doc/lib/pgBackRestDoc/Common/DocConfig.pm b/doc/lib/pgBackRestDoc/Common/DocConfig.pm
index a0a087086..c9dd44574 100644
--- a/doc/lib/pgBackRestDoc/Common/DocConfig.pm
+++ b/doc/lib/pgBackRestDoc/Common/DocConfig.pm
@@ -354,7 +354,7 @@ sub process
         }
     }
 
-    # If the option did not come from the command also store in global option list. This prevents duplication of commonly
+    # If the option did not come from the command also store in global option list. This prevents duplication of commonly
     # used options.
     if ($strOptionSource ne CONFIG_HELP_SOURCE_COMMAND)
     {
diff --git a/doc/lib/pgBackRestDoc/Common/DocRender.pm b/doc/lib/pgBackRestDoc/Common/DocRender.pm
index 86080eb4c..1f928e265 100644
--- a/doc/lib/pgBackRestDoc/Common/DocRender.pm
+++ b/doc/lib/pgBackRestDoc/Common/DocRender.pm
@@ -475,7 +475,7 @@ sub build
         $oNode->paramSet('depend-default', $strDependPrev);
     }
 
-    # Set log to true if this section has an execute list. This helps reduce the info logging by only showing sections that are
+    # Set log to true if this section has an execute list. This helps reduce the info logging by only showing sections that are
     # likely to take a log time.
     $oNode->paramSet('log', $self->{bExe} && $oNode->nodeList('execute-list', false) > 0 ? true : false);
 
diff --git a/doc/lib/pgBackRestDoc/Common/Ini.pm b/doc/lib/pgBackRestDoc/Common/Ini.pm
index 25aaccb18..e114a00f2 100644
--- a/doc/lib/pgBackRestDoc/Common/Ini.pm
+++ b/doc/lib/pgBackRestDoc/Common/Ini.pm
@@ -520,7 +520,7 @@ sub iniRender
         $bFirst = false;
     }
 
-    # If there is a checksum write it at the end of the file. Having the checksum at the end of the file allows some major
+    # If there is a checksum write it at the end of the file. Having the checksum at the end of the file allows some major
     # performance optimizations which we won't implement in Perl, but will make the C code much more efficient.
     if (!$bRelaxed && defined($oContent->{&INI_SECTION_BACKREST}) && defined($oContent->{&INI_SECTION_BACKREST}{&INI_KEY_CHECKSUM}))
     {
@@ -803,8 +803,8 @@ sub keys
 ####################################################################################################################################
 # test - test a value.
 #
-# Test a value to see if it equals the supplied test value. If no test value is given, tests that the section, key, or subkey
-# is defined.
+# Test a value to see if it equals the supplied test value. If no test value is given, tests that the section, key, or subkey is
+# defined.
#################################################################################################################################### sub test { diff --git a/doc/lib/pgBackRestDoc/Custom/DocConfigData.pm b/doc/lib/pgBackRestDoc/Custom/DocConfigData.pm index 64aff74b4..80a5e5586 100644 --- a/doc/lib/pgBackRestDoc/Custom/DocConfigData.pm +++ b/doc/lib/pgBackRestDoc/Custom/DocConfigData.pm @@ -35,7 +35,7 @@ use constant CFGCMD_VERSION => 'version' #################################################################################################################################### # Command role constants - roles allowed for each command. Commands may have multiple processes that work together to implement -# their functionality. These roles allow each process to know what it is supposed to do. +# their functionality. These roles allow each process to know what it is supposed to do. #################################################################################################################################### # Called directly by the user. This is the main process of the command that may or may not spawn other command roles. use constant CFGCMD_ROLE_MAIN => 'main'; @@ -400,7 +400,7 @@ foreach my $strKey (sort(keys(%{$rhConfigDefine}))) $rhConfigDefine->{$strKey}{&CFGDEF_INTERNAL} = false; } - # All boolean config options can be negated. Boolean command-line options must be marked for negation individually. + # All boolean config options can be negated. Boolean command-line options must be marked for negation individually. 
if ($rhConfigDefine->{$strKey}{&CFGDEF_TYPE} eq CFGDEF_TYPE_BOOLEAN && defined($rhConfigDefine->{$strKey}{&CFGDEF_SECTION})) { $rhConfigDefine->{$strKey}{&CFGDEF_NEGATE} = true; diff --git a/doc/lib/pgBackRestDoc/Custom/DocCustomRelease.pm b/doc/lib/pgBackRestDoc/Custom/DocCustomRelease.pm index 9c8037f60..b9b46f458 100644 --- a/doc/lib/pgBackRestDoc/Custom/DocCustomRelease.pm +++ b/doc/lib/pgBackRestDoc/Custom/DocCustomRelease.pm @@ -268,7 +268,7 @@ sub contributorTextGet } #################################################################################################################################### -# Find a commit by subject prefix. Error if the prefix appears more than once. +# Find a commit by subject prefix. Error if the prefix appears more than once. #################################################################################################################################### sub commitFindSubject { diff --git a/doc/lib/pgBackRestDoc/ProjectInfo.pm b/doc/lib/pgBackRestDoc/ProjectInfo.pm index ebf3b86e8..ec4f29890 100644 --- a/doc/lib/pgBackRestDoc/ProjectInfo.pm +++ b/doc/lib/pgBackRestDoc/ProjectInfo.pm @@ -23,14 +23,14 @@ push @EXPORT, qw(PROJECT_CONF); # Project Version Number # -# Defines the current version of the BackRest executable. The version number is used to track features but does not affect what +# Defines the current version of the BackRest executable. The version number is used to track features but does not affect what # repositories or manifests can be read - that's the job of the format number. #----------------------------------------------------------------------------------------------------------------------------------- push @EXPORT, qw(PROJECT_VERSION); # Repository Format Number # -# Defines format for info and manifest files as well as on-disk structure. If this number changes then the repository will be +# Defines format for info and manifest files as well as on-disk structure. 
If this number changes then the repository will be # invalid unless migration functions are written. #----------------------------------------------------------------------------------------------------------------------------------- push @EXPORT, qw(REPOSITORY_FORMAT); diff --git a/doc/manifest.xml b/doc/manifest.xml index 442369608..cb182d5e0 100644 --- a/doc/manifest.xml +++ b/doc/manifest.xml @@ -1,7 +1,7 @@ - + pgBackRest Reliable PostgreSQL Backup & Restore @@ -57,15 +57,15 @@ $stryMonth[$month] . ' ' . $mday . ', ' . $year; - + 'Copyright &copy; 2015' . '-' . substr('{[release-date]}', length('{[release-date]}') - 4) . - ', The PostgreSQL Global Development Group, <a href="{[github-url-license]}">MIT License</a>. Updated ' . + ', The PostgreSQL Global Development Group, <a href="{[github-url-license]}">MIT License</a>. Updated ' . '{[release-date]}'; - + {[doc-path]}/output/latex/logo {[project]} User Guide @@ -117,10 +117,9 @@ diff --git a/doc/resource/fake-cert/README.md b/doc/resource/fake-cert/README.md index 8667bfee1..ba6277178 100644 --- a/doc/resource/fake-cert/README.md +++ b/doc/resource/fake-cert/README.md @@ -4,7 +4,7 @@ The certificates in this directory are used for documentation generation only an ## pgBackRest CA -Generate a CA that will be used to sign documentation certificates. It can be installed in the documentation containers to make certificates signed by it valid. +Generate a CA that will be used to sign documentation certificates. It can be installed in the documentation containers to make certificates signed by it valid. 
``` cd [pgbackrest-root]/doc/resource/fake-cert diff --git a/doc/resource/git-history.cache b/doc/resource/git-history.cache index 1f54fa09f..b815c467c 100644 --- a/doc/resource/git-history.cache +++ b/doc/resource/git-history.cache @@ -2656,7 +2656,7 @@ "commit": "e6e1122dbcf5e667d683295f7e7e45de4bbf56bd", "date": "2022-02-20 16:45:07 -0600", "subject": "Pass file by reference in manifestFileAdd().", - "body": "Coverity complained that this pass by value was inefficient:\n\nCID 376402: Performance inefficiencies (PASS_BY_VALUE)\nPassing parameter file of type \"ManifestFile\" (size 136 bytes) by value.\n\nThis was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future data, and in any case does no harm now.\n\nAnd as usual it is easier not to fight with Coverity." + "body": "Coverity complained that this pass by value was inefficient:\n\nCID 376402: Performance inefficiencies (PASS_BY_VALUE)\nPassing parameter file of type \"ManifestFile\" (size 136 bytes) by value.\n\nThis was completely intentional since it gives us a copy of the struct that we can change without bothering the caller. However, updating fields is fine and may benefit the caller at some future data, and in any case does no harm now.\n\nAnd as usual it is easier not to fight with Coverity." }, { "commit": "b4897077937ee4571ba719276a44d5db0a75510e", @@ -8887,13 +8887,13 @@ "commit": "f9c86b11a54e5c37ad98afd5a2de76cd96ae1061", "date": "2020-03-23 12:17:34 -0400", "subject": "More improvements to custom coverage report.", - "body": "* Fix a few issues with file names being truncated introduced in 787d3fd6.\n\n* Use function line info from the lcov file to calculate which lines to show for uncovered functions. This is more accurate than what we were doing before and function comment headers are now excluded which reduces clutter in the report." 
+ "body": "* Fix a few issues with file names being truncated introduced in 787d3fd6.\n\n* Use function line info from the lcov file to calculate which lines to show for uncovered functions. This is more accurate than what we were doing before and function comment headers are now excluded which reduces clutter in the report." }, { "commit": "dbb1248bfbb1d469bae030084e6672772702f89d", "date": "2020-03-22 20:44:51 -0400", "subject": "Implement TEST_RESULT_*() macros with functions, mostly.", - "body": "The prior macros had grown over time to be pretty significant pieces of code that required a lot of compile time, though runtime was efficient.\n\nMove most of the macro code into functions to reduce compile time, perhaps at a slight expense to runtime. The overall performance benefit is 10-15% so this seems like a good tradeoff.\n\nAdd TEST_RESULT_UINT_INT() to safely compare uint to int with range checking." + "body": "The prior macros had grown over time to be pretty significant pieces of code that required a lot of compile time, though runtime was efficient.\n\nMove most of the macro code into functions to reduce compile time, perhaps at a slight expense to runtime. The overall performance benefit is 10-15% so this seems like a good tradeoff.\n\nAdd TEST_RESULT_UINT_INT() to safely compare uint to int with range checking." }, { "commit": "d6ffa9ea6d45fcae46abd79a5f6d6123e79c9168", @@ -8934,7 +8934,7 @@ "commit": "4c831d8e83f4ffda02023351c17cb3072b5b6e10", "date": "2020-03-22 13:50:31 -0400", "subject": "Use --clean-only for reproducible builds in contributing documentation.", - "body": "If the work or result directories already contain data then the docs might be generated slightly differently. Doing a clean ensures they will always produce the same output (provided the code does not change)." + "body": "If the work or result directories already contain data then the docs might be generated slightly differently. 
Doing a clean ensures they will always produce the same output (provided the code does not change)." }, { "commit": "06a3f82e912b549add76b65f51075bc389f4e29c", @@ -8968,7 +8968,7 @@ "commit": "56fb39937368463474c96307454051a66c1742c4", "date": "2020-03-21 18:45:58 -0400", "subject": "Build contributing documentation on Travis CI.", - "body": "Building the contributing document has some special requirements because it runs Docker in Docker so the repo path must align on the host and all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.\n\nMount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful." + "body": "Building the contributing document has some special requirements because it runs Docker in Docker so the repo path must align on the host and all Docker containers. Run `pgbackrest/doc/doc.pl` from within the home directory of the user that will do the doc build, e.g. `home/vagrant`. If the repo is not located directly in the home directory, e.g. `/home/vagrant/pgbackrest`, then a symlink may be used, e.g. `ln -s /path/to/repo /home/vagrant/pgbackrest`.\n\nMount the repo in the Vagrantfile at /home/vagrant/pgbackrest but provide a link from the old location at /backrest to make the transition less painful." }, { "commit": "ee2e15bf5524cf8bac3454d409dee250792411d9", @@ -8997,7 +8997,7 @@ "commit": "787d3fd67b3549b9b04a8053e751cda9494d8df5", "date": "2020-03-20 12:54:29 -0400", "subject": "Improve custom coverage report.", - "body": "* Show all uncovered branch parts even when there are more than two parts per branch. 
This is the way gcc9 reports coverage so it needs to work even if it doesn't make as much sense as the old way.\n\n* Show covered branches in functions where coverage is missing. Showing just the uncovered branches can be confusing because it's not always clear how the coverage relates to the code. By showing all branch coverage (+ or -) this correspondence is made easier." + "body": "* Show all uncovered branch parts even when there are more than two parts per branch. This is the way gcc9 reports coverage so it needs to work even if it doesn't make as much sense as the old way.\n\n* Show covered branches in functions where coverage is missing. Showing just the uncovered branches can be confusing because it's not always clear how the coverage relates to the code. By showing all branch coverage (+ or -) this correspondence is made easier." }, { "commit": "8af802900657174e1b24a1bdf9992c9fa78909d9", @@ -9009,7 +9009,7 @@ "commit": "f6e9bb081963932b4a35e6fd6c01ac2a523195f1", "date": "2020-03-19 19:30:09 -0400", "subject": "Remove obsolete -O2 option for Fedora 30 unit test builds.", - "body": "For some reason gcc9 would not do -O0 builds in combination with one of the options that libperl required. Now that libperl is gone this exception is no longer required." + "body": "For some reason gcc9 would not do -O0 builds in combination with one of the options that libperl required. Now that libperl is gone this exception is no longer required." }, { "commit": "2241524c0bdc92e1a04ac18e91a188a2bc1bcbdf", @@ -9066,7 +9066,7 @@ "commit": "f2548f45ce4c1e444e9d3175b2349e0a97b2783a", "date": "2020-03-17 18:16:17 -0400", "subject": "Allow storage reads to be limited by bytes.", - "body": "The current use case is reading files from the PostgreSQL cluster during backup.\n\nA file may grow during backup but we only need to copy the number of bytes that were reported during the manifest build. 
The rest will be rebuilt from the WAL during recovery so copying more is just a waste of space.\n\nLimiting the copy sizes in backup will be part of a future commit." + "body": "The current use case is reading files from the PostgreSQL cluster during backup.\n\nA file may grow during backup but we only need to copy the number of bytes that were reported during the manifest build. The rest will be rebuilt from the WAL during recovery so copying more is just a waste of space.\n\nLimiting the copy sizes in backup will be part of a future commit." }, { "commit": "307e741298cd068e68bcbe8994c871a989bdff01", @@ -9269,7 +9269,7 @@ "commit": "0ba8062f5f04e22edb5700be3d8eb26255ea68d4", "date": "2020-03-12 08:48:45 -0400", "subject": "Get package source files dynamically during package build.", - "body": "The prior method was to build a special container to hold these files which meant they would get stale on development systems. On CI the container was always rebuilt so failures would be seen there even when dev seemed to be working.\n\nInstead get the package source when the package is built to ensure it is as up-to-date as possible.\n\nThis change was prompted by failures on the Ubuntu 12.04 container while getting the package source, probably due to an ancient version of git. Package builds are no longer supported on that platform with the addition of lz4 compression so it didn't seem worth fixing." + "body": "The prior method was to build a special container to hold these files which meant they would get stale on development systems. On CI the container was always rebuilt so failures would be seen there even when dev seemed to be working.\n\nInstead get the package source when the package is built to ensure it is as up-to-date as possible.\n\nThis change was prompted by failures on the Ubuntu 12.04 container while getting the package source, probably due to an ancient version of git. 
Package builds are no longer supported on that platform with the addition of lz4 compression so it didn't seem worth fixing." }, { "commit": "4a5bd002c0071960b537e6d1dc35efe626146956", @@ -9293,7 +9293,7 @@ "commit": "c279a00279e3c5b956f6dc4496454a9c4ccaa487", "date": "2020-03-10 14:45:27 -0400", "subject": "Add lz4 compression support.", - "body": "LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.\n\nNote that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest." + "body": "LZ4 compresses data faster than gzip but at a lower ratio. This can be a good tradeoff in certain scenarios.\n\nNote that setting compress-type=lz4 will make new backups and archive incompatible (unrestorable) with prior versions of pgBackRest." }, { "commit": "cc9d7315dbb4e66cd281dc2156faf529e566ddbc", @@ -9304,7 +9304,7 @@ "commit": "79cfd3aebf4848c325d6eb98f5fe88f7f03805b0", "date": "2020-03-09 17:41:59 -0400", "subject": "Remove LibC.", - "body": "This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.\n\nThe main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the new-introduced repo commands (d3c83453) that allow access to repo storage via the command line.\n\nThe other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.\n\nRemove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. 
This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.\n\nUpdate test.pl and related code to remove LibC builds.\n\nDing, dong, LibC is dead." + "body": "This was the interface between Perl and C introduced in 36a5349b but since f0ef73db has only been used by the Perl integration tests. This is expensive code to maintain just for testing.\n\nThe main dependency was the interface to storage, no matter where it was located, e.g. S3. Replace this with the new-introduced repo commands (d3c83453) that allow access to repo storage via the command line.\n\nThe other dependency was on various cfgOption* functions and CFGOPT_ constants that were convenient but not necessary. Replace these with hard-coded strings in most places and create new constants for commonly used values.\n\nRemove all auto-generated Perl code. This means that the error list will no longer be maintained automatically so copy used errors to Common::Exception.pm. This file will need to be maintained manually going forward but there is not likely to be much churn as the Perl integration tests are being retired.\n\nUpdate test.pl and related code to remove LibC builds.\n\nDing, dong, LibC is dead." }, { "commit": "d3c83453deffa6435ec5c4a932ad62a3d4d2cd95", @@ -9322,7 +9322,7 @@ "commit": "5e1291a29f65d0430966756c4031f6cc307ca208", "date": "2020-03-09 16:41:04 -0400", "subject": "Rename ls command to repo-ls.", - "body": "This command only makes sense for the repository storage since other storage (e.g. pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located lots of places having a common way to list it makes sense.\n\nPrefix with repo- to make the scope of this command clear.\n\nUpdate documentation to reflect this change." + "body": "This command only makes sense for the repository storage since other storage (e.g. 
pg and spool) must be located on a local Posix filesystem and can be listed using standard unix commands. Since the repo storage can be located lots of places having a common way to list it makes sense.\n\nPrefix with repo- to make the scope of this command clear.\n\nUpdate documentation to reflect this change." }, { "commit": "f581edfa509c0879164597eba5ecf9dc287ca08d", @@ -9356,7 +9356,7 @@ "commit": "02aa03d1a266827b7ff5cb739b9598e475777086", "date": "2020-03-06 14:10:09 -0500", "subject": "Remove obsolete methods in pgBackRest::Storage::Storage module.", - "body": "All the methods in this module will need to be implemented via the command-line in order to get rid of LibC, so the first step is to reduce the code in the module as much as possible.\n\nFirst remove storageDb() and use storageTest() instead. Then create storageTest() using pgBackRestTest::Common::Storage which has no dependencies on LibC. Now the only storage using the LibC interface is storageRepo().\n\nRemove all link functions since those operations cannot be performed on a repo unless it is Posix, in which case the LibC interface is not needed. Same for owner().\n\nRemove pathSync() because syncs are not required in the tests. No test data is reused after a crash.\n\nPath create/exists functions should never be explicitly performed on a repo so remove those. File exists can be implemented by calling info() instead.\n\nRemove encryption detection functions which were only used by Backup/Archive::Info reconstruct() which are now obsolete.\n\nRemove all filters except pgBackRest::Storage::Filter::CipherBlock since they are not being used. That also means there are no filters returning results so remove all the result code.\n\nMove hashSize() and pathAbsolute() into pgBackRest::Storage::Base where they can be shared between pgBackRest::Storage::Storage and pgBackRestTest::Common::Storage." 
+ "body": "All the methods in this module will need to be implemented via the command-line in order to get rid of LibC, so the first step is to reduce the code in the module as much as possible.\n\nFirst remove storageDb() and use storageTest() instead. Then create storageTest() using pgBackRestTest::Common::Storage which has no dependencies on LibC. Now the only storage using the LibC interface is storageRepo().\n\nRemove all link functions since those operations cannot be performed on a repo unless it is Posix, in which case the LibC interface is not needed. Same for owner().\n\nRemove pathSync() because syncs are not required in the tests. No test data is reused after a crash.\n\nPath create/exists functions should never be explicitly performed on a repo so remove those. File exists can be implemented by calling info() instead.\n\nRemove encryption detection functions which were only used by Backup/Archive::Info reconstruct() which are now obsolete.\n\nRemove all filters except pgBackRest::Storage::Filter::CipherBlock since they are not being used. That also means there are no filters returning results so remove all the result code.\n\nMove hashSize() and pathAbsolute() into pgBackRest::Storage::Base where they can be shared between pgBackRest::Storage::Storage and pgBackRestTest::Common::Storage." }, { "commit": "00647c7109cab28110d7dabd3da039b391507d50", @@ -9374,19 +9374,19 @@ "commit": "e55443c890181ea63a350275447885331c8254e4", "date": "2020-03-05 16:12:54 -0500", "subject": "Move logic from postgres/pageChecksum to command/backup/pageChecksum().", - "body": "The postgres/pageChecksum module was designed as an interface to the C structs for the Perl code. The new C code can do this directly so no need for an interface.\n\nMove the remaining test for pgPageChecksum() into the postgres/interface test module." + "body": "The postgres/pageChecksum module was designed as an interface to the C structs for the Perl code. 
The new C code can do this directly so no need for an interface.\n\nMove the remaining test for pgPageChecksum() into the postgres/interface test module." }, { "commit": "3796b74dcac29d7e7e7f89b69d3fa92b9d105a17", "date": "2020-03-05 14:23:01 -0500", "subject": "Use stock PostgreSQL page checksum implementation.", - "body": "We were using a customized version which worked fine but was hard to merge with upstream changes. Now this code is maintained much like the types in static.auto.h that we copy and check with each release.\n\nThe goal is to eventually build directly against PostgreSQL (either source or libcommon) and this brings us one step closer." + "body": "We were using a customized version which worked fine but was hard to merge with upstream changes. Now this code is maintained much like the types in static.auto.h that we copy and check with each release.\n\nThe goal is to eventually build directly against PostgreSQL (either source or libcommon) and this brings us one step closer." }, { "commit": "1b647a1a22ae36e2447dc04384ab794b011a74aa", "date": "2020-03-05 14:06:36 -0500", "subject": "Remove invalid page checksum test.", - "body": "All zero pages should not have checksums. Not only is this test invalid but it will not work with the stock page checksum implementation in PostgreSQL, which checks for zero pages. Since we will be using that code verbatim soon this test needs to go." + "body": "All zero pages should not have checksums. Not only is this test invalid but it will not work with the stock page checksum implementation in PostgreSQL, which checks for zero pages. Since we will be using that code verbatim soon this test needs to go." 
}, { "commit": "eb4347f20b85fa86208bb31b24b8b6f383432706", @@ -9398,13 +9398,13 @@ "commit": "77853d3c1387cd6ba42396e7ca29fae0f18ab6ac", "date": "2020-03-05 11:14:53 -0500", "subject": "Remove invalid const in pgPageChecksum() parameter.", - "body": "pgPageChecksum() must modify the page header in order to calculate the checksum. The modification is temporary but make it clear that it happens by removing the const.\n\nAlso make a note about our non-entirely-kosher usage of a const Buffer in the PageChecksum filter. This is safe as currently coded but at the least we need to be aware of what is going on." + "body": "pgPageChecksum() must modify the page header in order to calculate the checksum. The modification is temporary but make it clear that it happens by removing the const.\n\nAlso make a note about our non-entirely-kosher usage of a const Buffer in the PageChecksum filter. This is safe as currently coded but at the least we need to be aware of what is going on." }, { "commit": "4ab8943ca886ec61934cea55182c1123cd4596a7", "date": "2020-03-05 09:14:27 -0500", "subject": "Use PG_PAGE_SIZE_DEFAULT constant instead of pageSize variable.", - "body": "Page size is passed around a lot but in fact it can only have one value, PG_PAGE_SIZE_DEFAULT, which is checked when pg_control is loaded. There may be an argument for supporting multiple page sizes in the future but for now just use the constant to simplify the code.\n\nThere is also a significant performance benefit. Because pageSize was being used in pageChecksumBlock() the main loop was neither unrolled nor vectorized (-funroll-loops -ftree-vectorize) as it is now with a constant loop boundary." + "body": "Page size is passed around a lot but in fact it can only have one value, PG_PAGE_SIZE_DEFAULT, which is checked when pg_control is loaded. 
There may be an argument for supporting multiple page sizes in the future but for now just use the constant to simplify the code.\n\nThere is also a significant performance benefit. Because pageSize was being used in pageChecksumBlock() the main loop was neither unrolled nor vectorized (-funroll-loops -ftree-vectorize) as it is now with a constant loop boundary." }, { "commit": "91f321fb865fe4569f50d3a280d9ad1f502b7cbd", @@ -9422,7 +9422,7 @@ "commit": "9d488822682ec425e5ac4b0946a89eef240fc172", "date": "2020-03-04 13:31:27 -0500", "subject": "Centralize PostgreSQL page header data structures.", - "body": "These data structures were copied a few places (but only once in the core code) so put them in a place where everyone can use them.\n\nTo do this create a new file, static.auto.h, to contain data types and macros that have stayed the same through all the versions of PostgreSQL that we support. This allows us to have single, non-versioned set of headers and code for stable data structures like page headers.\n\nMigrate a few types from version.auto.h that are required for page header structures and pull the remaining types from PostgreSQL directly.\n\nWe had previously renamed xlog to wal so update those where required since we won't be modifying the PostgreSQL names anymore." + "body": "These data structures were copied a few places (but only once in the core code) so put them in a place where everyone can use them.\n\nTo do this create a new file, static.auto.h, to contain data types and macros that have stayed the same through all the versions of PostgreSQL that we support. This allows us to have single, non-versioned set of headers and code for stable data structures like page headers.\n\nMigrate a few types from version.auto.h that are required for page header structures and pull the remaining types from PostgreSQL directly.\n\nWe had previously renamed xlog to wal so update those where required since we won't be modifying the PostgreSQL names anymore." 
}, { "commit": "a88d709962f761a51b21dd60688c5915bbdccddb", @@ -9444,7 +9444,7 @@ "commit": "8ec41efb04ee45854d18ffaf765e4a4800dd3879", "date": "2020-02-28 17:41:34 -0500", "subject": "Improve poor man's regular expression common prefix generator.", - "body": "The S3 driver depends on being able to generate a common prefix to limit the number of results from list commands, which saves on bandwidth.\n\nThe prior implementation could be tricked by an expression like ^ABC|^DEF where there is more than one possible prefix. To fix this disallow any prefix when another ^ anchor is found in the expression. [^ and \\^ are OK since they are not anchors.\n\nNote that this was not an active bug because there are currently no expressions with multiple ^ anchors." + "body": "The S3 driver depends on being able to generate a common prefix to limit the number of results from list commands, which saves on bandwidth.\n\nThe prior implementation could be tricked by an expression like ^ABC|^DEF where there is more than one possible prefix. To fix this disallow any prefix when another ^ anchor is found in the expression. [^ and \\^ are OK since they are not anchors.\n\nNote that this was not an active bug because there are currently no expressions with multiple ^ anchors." }, { "commit": "3bbead548026155c8bb5b212bed80903a3ef6a97", @@ -9473,7 +9473,7 @@ "commit": "7d8c0d29fb5b2663c9c4b6be0b9f697a6c19c0fc", "date": "2020-02-27 14:51:40 -0500", "subject": "Remove compress option from config tests.", - "body": "This option was used for boolean testing but it will soon be deprecated and the semantics changed. To reduce churn it seems easiest to just use other options for testing. This will also be helpful when the option is eventually removed." + "body": "This option was used for boolean testing but it will soon be deprecated and the semantics changed. To reduce churn it seems easiest to just use other options for testing. This will also be helpful when the option is eventually removed." 
}, { "commit": "dbf6255ab8ca9a141f9a7fbd1fda99006cce9e05", @@ -9491,19 +9491,19 @@ "commit": "3f77a83e7367a05d7fee9612159e671910db25e0", "date": "2020-02-27 12:19:40 -0500", "subject": "Remove raw option for gz compression.", - "body": "This was a minor optimization used in protocol layer compression. Even though it was slightly faster, it omitted the crc-32 that is generated during normal compression which could lead to corrupt data after a bad network transmission. This would be caught on restore by our checksum but it seems better to catch an issue like this early.\n\nThe raw option also made the function signature different than future compression formats which may not support raw, or require different code to support raw.\n\nIn general, it doesn't seem worth the extra testing to support a format that has minimal benefit and is seldom used, since protocol compression is only enabled when the transmitted data is uncompressed." + "body": "This was a minor optimization used in protocol layer compression. Even though it was slightly faster, it omitted the crc-32 that is generated during normal compression which could lead to corrupt data after a bad network transmission. This would be caught on restore by our checksum but it seems better to catch an issue like this early.\n\nThe raw option also made the function signature different than future compression formats which may not support raw, or require different code to support raw.\n\nIn general, it doesn't seem worth the extra testing to support a format that has minimal benefit and is seldom used, since protocol compression is only enabled when the transmitted data is uncompressed." 
}, { "commit": "ee351682dae215ccbb2ae9d5c0932a41580635dd", "date": "2020-02-27 12:09:05 -0500", "subject": "Rename \"gzip\" to \"gz\".", - "body": "\"gz\" was used as the extension but \"gzip\" was generally used for function and type naming.\n\nWith a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming." + "body": "\"gz\" was used as the extension but \"gzip\" was generally used for function and type naming.\n\nWith a new compression format on the way, it makes sense to standardize on a single abbreviation to represent a compression format in the code. Since the extension is standard and we must use it, also use the extension for all naming." }, { "commit": "5afd950ed98b7eea0bc0c52851fdbe4fb7637698", "date": "2020-02-26 21:15:39 -0500", "subject": "Improve performance of MEM_CONTEXT*() macros.", - "body": "The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.\n\nThis worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.\n\nInstead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.\n\nOne bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros." 
+ "body": "The prior code used TRY...CATCH blocks to cleanup mem contexts when an error occurred. This included freeing new mem contexts that were still being initialized when the error occurred and ensuring that the prior memory context was restored.\n\nThis worked fine in production but it involved a lot of setjmp()/longjmp() calls that resulted in longer compilation times and sluggish performance under valgrind, profiling, and coverage testing.\n\nInstead maintain a stack of new contexts and context switches that can be used to do cleanup after an error. Normally, the stack is not used for this purpose and pushing/popping is a cheap operation. In the prior implementation most of the TRY...CATCH logic needed to be run even on success.\n\nOne bonus is that the binary is about 8% smaller after this change. Another benefit is that new contexts *must* be explicitly freed/discarded or an error will occur. See info/manifest.c for an example of where this is useful outside the standard macros." }, { "commit": "d68771a4a5f0f349ba31679a27bc7b2ab8b0d736", @@ -9534,7 +9534,7 @@ "commit": "cc743f2e04db05cc3277e43023ebd8e2007ef4ed", "date": "2020-02-21 11:51:39 -0500", "subject": "Skip pg_internal.init temp file during backup.", - "body": "If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.\n\nThis is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file." + "body": "If PostgreSQL crashes it can leave behind a pg_internal.init temp file with the pid as the extension, as discussed in https://www.postgresql.org/message-id/flat/20200131045352.GB2631%40paquier.xyz#7700b9481ef5b0dd5f09cc410b4750f6. 
On restart this file is not cleaned up so it can persist for the lifetime of the cluster or until another process with the same id happens to write pg_internal.init.\n\nThis is arguably a bug in PostgreSQL, but in any case it makes sense not to backup this file." }, { "commit": "48d0f77fe3e6edf7347b49d9e8c67052db00355b", @@ -9561,7 +9561,7 @@ "commit": "6353e9428df1d241b97d02c843f1a737e7c36c85", "date": "2020-02-12 17:18:48 -0700", "subject": "Error when archive-get/archive-push/restore are not run on a PostgreSQL host.", - "body": "This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.\n\nRestore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required." + "body": "This error was lost during the migration to C. The error that occurred instead (generally an SSH auth error) was hard to debug.\n\nRestore the original behavior by throwing an error immediately if pg1-host is configured for any of these commands. reset-pg1-host can be used to suppress the error when required." }, { "commit": "dac8119bf11e74b7d393b601bcfbe447a80e85e4", @@ -9597,13 +9597,13 @@ "commit": "44adf21c834ca8624b47dc6dbc2794bc429463ad", "date": "2020-02-10 21:30:43 -0700", "subject": "Consolidate archive async exec code.", - "body": "Move duplicated code to the common module. This will reduce copy and paste between the get and push modules when changes are made." + "body": "Move duplicated code to the common module. This will reduce copy and paste between the get and push modules when changes are made." }, { "commit": "0eaedc9a6ae98eaf4e733a3f63cb462a5ebcfa05", "date": "2020-02-10 19:17:11 -0700", "subject": "Improve async archive error file removal.", - "body": "2a06df93 removed the error file so an old error would not be reported before the async process had a chance to try again. 
However, if the async process was already running this might lead to a timeout error before reporting the correct error.\n\nInstead, remove the error files once we know that the async process will start, i.e. after the archive lock has been acquired.\n\nThis effectively reverts 2a06df93." + "body": "2a06df93 removed the error file so an old error would not be reported before the async process had a chance to try again. However, if the async process was already running this might lead to a timeout error before reporting the correct error.\n\nInstead, remove the error files once we know that the async process will start, i.e. after the archive lock has been acquired.\n\nThis effectively reverts 2a06df93." }, { "commit": "8cfbc294fcc066fce594c09649865f3d88bb2364", @@ -9619,7 +9619,7 @@ "commit": "71b4cc56cbcf20ce6a39e2155e0d9da11c83aaf8", "date": "2020-02-06 21:11:15 -0800", "subject": "Rename confessOnError to throwOnError.", - "body": "Confess is awfully Perl-ish and was likely copied verbatim during the migration. Rename to what we do now, i.e. throw." + "body": "Confess is awfully Perl-ish and was likely copied verbatim during the migration. Rename to what we do now, i.e. throw." }, { "commit": "2a06df93f379bdc28a6c5084d204cb1e5391dbb3", @@ -9697,7 +9697,7 @@ "commit": "697150eaf875742a4522e2ed8c00dfaf0489c4ac", "date": "2020-01-26 23:07:07 -0700", "subject": "Add more validations to the manifest on backup.", - "body": "Validate that checksums exist for zero size files. This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.\n\nValidate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes." + "body": "Validate that checksums exist for zero size files. 
This means that the checksums for zero size files are explicitly set by backup even though they'll always be the same. Also validate that zero length files have the correct checksum.\n\nValidate that repo size is > 0 if size is > 0. No matter what compression type is used a non-zero amount of data cannot be stored in zero bytes." }, { "commit": "bb45a80d46b738a9b6672fc77158585395b04473", @@ -9713,13 +9713,13 @@ "commit": "7ab07dc580989452d807332bc91d26a42e5a0d3a", "date": "2020-01-26 21:58:59 -0700", "subject": "Validate checksums are set in the manifest on backup/restore.", - "body": "This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.\n\nMore validations are planned but this is the most important one and seems safest for this release." + "body": "This is a modest start but it addresses the specific issue that was caused by the bug fixed in 45ec694a. This validation will produce an immediate error rather than erroring out partway through the restore.\n\nMore validations are planned but this is the most important one and seems safest for this release." }, { "commit": "45ec694af22dda84b124f0df242bbcfdd28726d1", "date": "2020-01-26 13:19:13 -0700", "subject": "Fix missing files corrupting the manifest.", - "body": "If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.\n\nThe issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. 
Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.\n\nWhen process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer." + "body": "If a file was removed by PostgreSQL during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.\n\nThe issue was that removing files from the manifest invalidated the pointers stored in the processing queues. When a file was removed, all the pointers shifted to the next file in the list, causing a file to be unprocessed. Since the unprocessed file was still in the manifest it would be saved with no checksum, causing a failure on restore.\n\nWhen process-max was > 1 then the bug would often not express since the file had already been pulled from the queue and updates to the manifest are done by name rather than by pointer." }, { "commit": "9b47ff2746b74bf41d20fa43d6ed2697b0a99087", @@ -9747,7 +9747,7 @@ "commit": "b134175fc7a98836f49f20d552f7c31138b66b1b", "date": "2020-01-23 14:15:58 -0700", "subject": "Use designated initializers to initialize structs.", - "body": "Previously memNew() used memset() to initialize all struct members to 0, NULL, false, etc. While this appears to work in practice, it is a violation of the C specification. For instance, NULL == 0 must be true but neither NULL nor 0 must be represented with all zero bits.\n\nInstead use designated initializers to initialize structs. These guarantee that struct members will be properly initialized even if they are not specified in the initializer. 
Note that due to a quirk in the C99 specification at least one member must be explicitly initialized even if it needs to be the default value.\n\nSince pre-zeroed memory is no longer required, adjust memAllocInternal()/memReallocInternal() to return raw memory and update dependent functions accordingly. All instances of memset() have been removed except in debug/test code where needed.\n\nAdd memNewPtrArray() to allocate an array of pointers and automatically set all pointers to NULL.\n\nRename memGrowRaw() to the more logical memResize()." + "body": "Previously memNew() used memset() to initialize all struct members to 0, NULL, false, etc. While this appears to work in practice, it is a violation of the C specification. For instance, NULL == 0 must be true but neither NULL nor 0 must be represented with all zero bits.\n\nInstead use designated initializers to initialize structs. These guarantee that struct members will be properly initialized even if they are not specified in the initializer. Note that due to a quirk in the C99 specification at least one member must be explicitly initialized even if it needs to be the default value.\n\nSince pre-zeroed memory is no longer required, adjust memAllocInternal()/memReallocInternal() to return raw memory and update dependent functions accordingly. All instances of memset() have been removed except in debug/test code where needed.\n\nAdd memNewPtrArray() to allocate an array of pointers and automatically set all pointers to NULL.\n\nRename memGrowRaw() to the more logical memResize()." 
}, { "commit": "cf2024beafba82b3899d9c46c7ea64716835a292", @@ -9820,13 +9820,13 @@ "commit": "ec173f12fb9c26553b2021473dd62d73a006835a", "date": "2020-01-17 13:29:49 -0700", "subject": "Add MEM_CONTEXT_PRIOR() block and update current call sites.", - "body": "This macro block encapsulates the common pattern of switching to the prior (formerly called old) mem context to return results from a function.\n\nAlso rename MEM_CONTEXT_OLD() to memContextPrior(). This violates our convention of macros being in all caps but memContextPrior() will become a function very soon so this will reduce churn." + "body": "This macro block encapsulates the common pattern of switching to the prior (formerly called old) mem context to return results from a function.\n\nAlso rename MEM_CONTEXT_OLD() to memContextPrior(). This violates our convention of macros being in all caps but memContextPrior() will become a function very soon so this will reduce churn." }, { "commit": "b5fa9951e36641d0ad751f0c89751e84b5b1df0d", "date": "2020-01-17 13:08:47 -0700", "subject": "Use MEM_CONTEXT_BEGIN() block in varFree().", - "body": "We probably arrived at this unusual construction because of the complexity of getting the mem context. Whether or not this is a good way to store the mem context, it still makes sense to use the standard pattern for switching mem contexts." + "body": "We probably arrived at this unusual construction because of the complexity of getting the mem context. Whether or not this is a good way to store the mem context, it still makes sense to use the standard pattern for switching mem contexts." }, { "commit": "c6d6b7dbefb62578be64133e36836fc936945555", @@ -9850,13 +9850,13 @@ "commit": "8068a610d566cf9b5bcc79a3602a3a8442192bad", "date": "2020-01-16 16:19:45 -0700", "subject": "Use MEM_CONTEXT_NEW_BEGIN() block.", - "body": "This pattern makes more sense. The prior code was probably copy-pasted from code with slightly different requirements." 
+ "body": "This pattern makes more sense. The prior code was probably copy-pasted from code with slightly different requirements." }, { "commit": "4274fcbf6f99f6d8414d2ba095809341bcf56797", "date": "2020-01-16 14:42:01 -0700", "subject": "Add missing semicolon.", - "body": "This worked when the FUNCTION_TEST_RETURN_VOID() macro expanded to nothing because of the final semicolon. If the FUNCTION_TEST_RETURN_VOID() macro expanded to something then there was one semicolon too few." + "body": "This worked when the FUNCTION_TEST_RETURN_VOID() macro expanded to nothing because of the final semicolon. If the FUNCTION_TEST_RETURN_VOID() macro expanded to something then there was one semicolon too few." }, { "commit": "731ffcfb3d77e6a9d17091f9819cf3e21e25d951", @@ -9916,7 +9916,7 @@ "commit": "fe263e87b1822b051ccea3fc0a8fea64f944dd7b", "date": "2020-01-12 11:31:06 -0700", "subject": "Allow path-style URIs in S3 driver.", - "body": "Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.\n\nPath-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations." + "body": "Although path-style URIs have been deprecated by AWS, they may still be used with products like Minio because no additional DNS configuration is required.\n\nPath-style URIs must be explicitly enabled since it is not clear how they can be auto-detected reliably. More importantly, faulty detection could cause regressions in current installations." 
}, { "commit": "069345d33959eb7b2ff0f4ae2b2ec65023ac7186", @@ -9933,7 +9933,7 @@ "commit": "0fe7bb2ec4d6ba10af8f2a6fd736e2c4cbdb2af2", "date": "2020-01-09 12:20:13 -0700", "subject": "Improve code that updates/removes pg options passed to a remote.", - "body": "The prior code was updating/removing hard-coded options but new options are added all the time and there was no indication that this code needed to be updated. For example, dc1e7ca2 added the pg-user option but this code was not updated.\n\nInstead find the options to update/remove dynamically. The new code uses prefixes, which is not perfect, but the patterns for pg options are pretty well established and this seems safer than the existing code." + "body": "The prior code was updating/removing hard-coded options but new options are added all the time and there was no indication that this code needed to be updated. For example, dc1e7ca2 added the pg-user option but this code was not updated.\n\nInstead find the options to update/remove dynamically. The new code uses prefixes, which is not perfect, but the patterns for pg options are pretty well established and this seems safer than the existing code." }, { "commit": "4c8653fc8bc6e8b3070c59ba12af4e5f2131c21c", @@ -9949,7 +9949,7 @@ "commit": "0c5c78e5e16b10f79dcfa023c6276a65ad87481d", "date": "2020-01-09 09:23:15 -0700", "subject": "Make quoting in cfgExeParam() optional.", - "body": "Parameter lists that are passed directly to exec*() do not need quoting when spaces are present. Worse, the quotes will not be stripped and the option value will be garbled.\n\nUnfortunately this still does not fix all issues with quoting since we don't know how it might need to be escaped to work with SSH command configuration. The answer seems to be to pass the options in the protocol layer but that's beyond the scope of this commit." + "body": "Parameter lists that are passed directly to exec*() do not need quoting when spaces are present. 
Worse, the quotes will not be stripped and the option value will be garbled.\n\nUnfortunately this still does not fix all issues with quoting since we don't know how it might need to be escaped to work with SSH command configuration. The answer seems to be to pass the options in the protocol layer but that's beyond the scope of this commit." }, { "commit": "7de5ce23ad387602125f478ceec2c834703080d5", @@ -10086,13 +10086,13 @@ "commit": "620386f034fd97371c467bda4dc0d90f423bf663", "date": "2019-12-17 20:14:45 -0500", "subject": "Remove integration tests that are now covered in the unit tests.", - "body": "Most of these tests are just checking that errors are thrown when required. These are well covered in various unit tests.\n\nThe \"cannot resume\" tests are also well covered in the backup unit tests.\n\nFinally, config warnings are well covered in the config unit tests.\n\nThere is more to be done here, but this accounts for the low-hanging fruit." + "body": "Most of these tests are just checking that errors are thrown when required. These are well covered in various unit tests.\n\nThe \"cannot resume\" tests are also well covered in the backup unit tests.\n\nFinally, config warnings are well covered in the config unit tests.\n\nThere is more to be done here, but this accounts for the low-hanging fruit." }, { "commit": "977ec2e307c5487df9ed6fadebd486a92c75f269", "date": "2019-12-17 15:23:07 -0500", "subject": "Integration test improvements for disk and memory efficiency.", - "body": "Set log-level-file=off when more that one test will run. In this case is it impossible to see the logs anyway since they will be automatically cleaned up after the test. This improves performance pretty dramatically since trace-level logging is expensive. 
If a singe integration test is run then log-level-file is trace by default but can be changed with the --log-level-test-file option.\n\nReduce buffer-size to 64k to save memory during testing and allow more processes to run in parallel.\n\nUpdate log replacement rules so that these options can change without affecting expect logs." + "body": "Set log-level-file=off when more that one test will run. In this case is it impossible to see the logs anyway since they will be automatically cleaned up after the test. This improves performance pretty dramatically since trace-level logging is expensive. If a singe integration test is run then log-level-file is trace by default but can be changed with the --log-level-test-file option.\n\nReduce buffer-size to 64k to save memory during testing and allow more processes to run in parallel.\n\nUpdate log replacement rules so that these options can change without affecting expect logs." }, { "commit": "ccea30b8d8970acf9c1826c5c988d819aabf2d0f", @@ -10104,7 +10104,7 @@ "commit": "6bd280f7bd667305317a7c825d913dac9719c475", "date": "2019-12-14 09:53:50 -0500", "subject": "Don't warn when stop-auto is enabled on PostgreSQL >= 9.6.", - "body": "PostgreSQL >= 9.6 uses non-exclusive backup which has implicit stop-auto since the backup will stop when the connection is terminated.\n\nThe warning was made more verbose in 1f2ce45e but this now seems like a bad idea since there are likely users with mixed version environments where stop-auto is enabled globally. There's no reason to fill their logs with warnings over a harmless option. If anything we should warn when stop-auto is explicitly set to false but this doesn't seem very important either.\n\nRevert to the prior behavior, which is to warn and reset when stop-auto is enabled on PostgreSQL < 9.3." 
+ "body": "PostgreSQL >= 9.6 uses non-exclusive backup which has implicit stop-auto since the backup will stop when the connection is terminated.\n\nThe warning was made more verbose in 1f2ce45e but this now seems like a bad idea since there are likely users with mixed version environments where stop-auto is enabled globally. There's no reason to fill their logs with warnings over a harmless option. If anything we should warn when stop-auto is explicitly set to false but this doesn't seem very important either.\n\nRevert to the prior behavior, which is to warn and reset when stop-auto is enabled on PostgreSQL < 9.3." }, { "commit": "03849840b813fdfdc2e00874fa8e3bbe9ac95338", @@ -10116,13 +10116,13 @@ "commit": "f0ef73db7009cd6e08740d270a6ee7565efc9f8c", "date": "2019-12-13 17:55:41 -0500", "subject": "pgBackRest is now pure C.", - "body": "Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.\n\nRemove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.\n\nRemove \"c\" option that allowed the remote to tell if it was being called from C or Perl.\n\nCode to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.\n\nUpdate build and installation instructions in the user guide.\n\nRemove all Perl unit tests.\n\nRemove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.\n\nRename perlReq to binReq in define.yaml to indicate that the binary is required for a test. 
This had been the actual meaning for quite some time but the key was never renamed." + "body": "Remove embedded Perl from the distributed binary. This includes code, configure, Makefile, and packages. The distributed binary is now pure C.\n\nRemove storagePathEnforceSet() from the C Storage object which allowed Perl to write outside of the storage base directory. Update mock/all and real/all integration tests to use storageLocal() where they were violating this rule.\n\nRemove \"c\" option that allowed the remote to tell if it was being called from C or Perl.\n\nCode to convert options to JSON for passing to Perl (perl/config.c) has been moved to LibC since it is still required for Perl integration tests.\n\nUpdate build and installation instructions in the user guide.\n\nRemove all Perl unit tests.\n\nRemove obsolete Perl code. In particular this included all the Perl protocol code which required modifications to the Perl storage, manifest, and db objects that are still required for integration testing but only run locally. Any remaining Perl code is required for testing, documentation, or code generation.\n\nRename perlReq to binReq in define.yaml to indicate that the binary is required for a test. This had been the actual meaning for quite some time but the key was never renamed." }, { "commit": "1f2ce45e6b613edfb628ac40fd2369c9455692ba", "date": "2019-12-13 17:14:26 -0500", "subject": "The backup command is implemented entirely in C.", - "body": "For the most part this is a direct migration of the Perl code into C except as noted below.\n\nA backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.\n\nThe logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. 
This has never been an issue in the field that we know of, but we found it in testing.\n\nFor online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.\n\nArchive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.\n\nResume will now work even if hardlink settings have been changed." + "body": "For the most part this is a direct migration of the Perl code into C except as noted below.\n\nA backup can now be initiated from a linked directory. The link will not be stored in the manifest or recreated on restore. If a link or directory does not already exist in the restore location then a directory will be created.\n\nThe logic for creating backup labels has been improved and it should no longer be possible to get a backup label earlier than the latest backup even with timezone changes or clock skew. This has never been an issue in the field that we know of, but we found it in testing.\n\nFor online backups all times are fetched from the PostgreSQL primary host (before only copy start was). This doesn't affect backup integrity but it does prevent clock skew between hosts affecting backup duration reporting.\n\nArchive copy now works as expected when the archive and backup have different compression settings, i.e. when one is compressed and the other is not. This was a long-standing bug in the Perl code.\n\nResume will now work even if hardlink settings have been changed." 
}, { "commit": "e206093beb43981f11b532f05897c383f93d8f63", @@ -10190,7 +10190,7 @@ "commit": "8c840c28a65dae3cca581691e88e51839cd64bca", "date": "2019-12-11 08:48:46 -0500", "subject": "Fix segfault on unexpected EOF in gzip decompression.", - "body": "If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.\n\nChange the existing assertion to an error to catch this condition in production gracefully." + "body": "If the compressed stream terminated early then the decompression process would get a flush request (NULL input buffer) since the filter was not marked as done. This could happen on a zero-length or truncated (i.e. invalid) compressed file.\n\nChange the existing assertion to an error to catch this condition in production gracefully." }, { "commit": "c933f12f9c39d7c076f84252c081c3430be72e55", @@ -10208,7 +10208,7 @@ "commit": "d7d663c2b93fd6a3bdfb1c6a4bd1feae0cd8cc63", "date": "2019-12-10 13:02:36 -0500", "subject": "Make buildPutDiffers() work with empty files.", - "body": "If the file was empty the timestamp was updated. If the file is empty and there is no content then file should not be saved." + "body": "If the file was empty the timestamp was updated. If the file is empty and there is no content then the file should not be saved." }, { "commit": "800d2972b00bb381c92cdadead9a1c04141853be", @@ -10220,7 +10220,7 @@ "commit": "471d54a738887dd5c99afced36420cf6fb5c631b", "date": "2019-12-09 17:55:20 -0500", "subject": "Add stringz module to define some commonly used strings.", - "body": "This module will eventually contain various useful zero-terminated string functions.\n\nFor now, using NULL_Z instead of strPtr(NULL_STR) avoids a strict aliasing warning on RHEL 6. 
This is likely a compiler issue, but adding these constants seems like a good idea anyway and we are not going to get a fix in a gcc that old." + "body": "This module will eventually contain various useful zero-terminated string functions.\n\nFor now, using NULL_Z instead of strPtr(NULL_STR) avoids a strict aliasing warning on RHEL 6. This is likely a compiler issue, but adding these constants seems like a good idea anyway and we are not going to get a fix in a gcc that old." }, { "commit": "ca33545630a073579ef763294d897dd06820f5a5", @@ -10274,7 +10274,7 @@ "commit": "35a262951a1ed74431cdfa92c36ee2c9ae867c1a", "date": "2019-12-07 17:33:34 -0500", "subject": "Pq test harness usability and error reporting improvements.", - "body": "Pq script errors are now printed in test output in case they are being masked by a later error.\n\nOnce a script error occurs, the same error will be thrown forever rather than throwing a new error on the next item in the script.\n\nHRNPQ_MACRO_CLOSE() is not required in scripts unless harnessPqScriptStrictSet(true) is called. Most higher-level tests should not need to run in strict mode.\n\nThe command/check test seems to require strict mode but there's no apparent reason why it should. This would be a good thing to look into at some point." + "body": "Pq script errors are now printed in test output in case they are being masked by a later error.\n\nOnce a script error occurs, the same error will be thrown forever rather than throwing a new error on the next item in the script.\n\nHRNPQ_MACRO_CLOSE() is not required in scripts unless harnessPqScriptStrictSet(true) is called. Most higher-level tests should not need to run in strict mode.\n\nThe command/check test seems to require strict mode but there's no apparent reason why it should. This would be a good thing to look into at some point." 
}, { "commit": "d6479ddd0efda9bb442f4bbfe937daa0ab46ac72", @@ -10298,7 +10298,7 @@ "commit": "1b3770e248f4a5b59ba16f6a4c6cba7cf23de883", "date": "2019-12-07 09:48:33 -0500", "subject": "Recopy during backup when resumed file is missing or corrupt.", - "body": "A recopy would occur if the size or checksum was invalid but on error the backup would terminate.\n\nInstead, recopy the resumed file on any error. If the error is systemic (e.g. network failure) then it should show up again during the recopy." + "body": "A recopy would occur if the size or checksum was invalid but on error the backup would terminate.\n\nInstead, recopy the resumed file on any error. If the error is systemic (e.g. network failure) then it should show up again during the recopy." }, { "commit": "d3f717c89208ef132e75b634ad2e91efbda25a04", @@ -10401,7 +10401,7 @@ "commit": "686b6f91da4cfdbc8708b0f08ced460c10e4326c", "date": "2019-11-28 08:27:21 -0500", "subject": "Set archive-check option in manifest correctly when offline.", - "body": "Archive check does not run when in offline backup mode but the option was set to true in the manifest. It's harmless since these options are informational only but it could cause confusion when debugging." + "body": "Archive check does not run when in offline backup mode but the option was set to true in the manifest. It's harmless since these options are informational only but it could cause confusion when debugging." }, { "commit": "5506e5de27b1a61015f6e18d5582ca81f696e30f", @@ -10454,7 +10454,7 @@ "commit": "01aefc563dbabc7c7b1b7dbed27716692898597f", "date": "2019-11-25 07:37:09 -0500", "subject": "Update Perl page checksum expression.", - "body": "This expression determines which files contain page checksums but it was also including the directory above the relation directories. 
In a real PostgreSQL installation this not a problem because these directories don't contain any files.\n\nHowever, our tests place a file in `base` which the Perl code thought should have page checksums while the new C code says no.\n\nUpdate the expression to document the change and avoid churn in the expect logs later." + "body": "This expression determines which files contain page checksums but it was also including the directory above the relation directories. In a real PostgreSQL installation this is not a problem because these directories don't contain any files.\n\nHowever, our tests place a file in `base` which the Perl code thought should have page checksums while the new C code says no.\n\nUpdate the expression to document the change and avoid churn in the expect logs later." }, { "commit": "18e43c5955217838ad55e68e176b83a9187533ec", @@ -10465,7 +10465,7 @@ "commit": "cace54151f3dc9f686b99923f1dbd1c010ab84f4", "date": "2019-11-23 10:32:57 -0500", "subject": "Add hostId to protocolLocalGet().", - "body": "Previously this function was only creating locals that talked to the repository. Backup will need to be able to talk to multiple PostgreSQL hosts." + "body": "Previously this function was only creating locals that talked to the repository. Backup will need to be able to talk to multiple PostgreSQL hosts." }, { "commit": "ab65ffdfacf47b6d182cbf6461d49a76c0cc8b00", @@ -10477,7 +10477,7 @@ "commit": "a4b9440d354c3cf95f7ca3aa68dc691c10c65e21", "date": "2019-11-22 19:25:49 -0500", "subject": "Only install specific lcov version when required.", - "body": "Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.\n\nSince we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.\n\nPostgreSQL minor version releases are also included since all containers have been rebuilt." 
+ "body": "Installing lcov 1.14 everywhere turned out to be a problem just as using 1.13 on Ubuntu 19.04 was.\n\nSince we primarily use Ubuntu 18.04 for coverage testing and reporting, we definitely want to make sure that works. So, revert to using the default packaged lcov except when specified otherwise in VmTest.pm.\n\nPostgreSQL minor version releases are also included since all containers have been rebuilt." }, { "commit": "52a3ba6b6f25443b5ed456969b794f63a809a606", @@ -10523,13 +10523,13 @@ "commit": "c524ec4f95462bfbc1484e139d754280ec05e94c", "date": "2019-11-21 16:06:27 -0500", "subject": "Remove obsolete integration tests from mock/all.", - "body": "The protocol timeout tests have been superceded by unit tests.\n\nThe TEST_BACKUP_RESUME test point was incorrectly included into a number of tests, probably a copy pasto. It didn't hurt anything but it did add 200ms to each test where it appeared.\n\nCatalog and control version tests were redundant. The database version and system id tests covered the important code paths and the C code gets these values from a lookup table.\n\nFinally, fix an incomplete update to the backup.info file while munging for tests." + "body": "The protocol timeout tests have been superseded by unit tests.\n\nThe TEST_BACKUP_RESUME test point was incorrectly included into a number of tests, probably a copy pasto. It didn't hurt anything but it did add 200ms to each test where it appeared.\n\nCatalog and control version tests were redundant. The database version and system id tests covered the important code paths and the C code gets these values from a lookup table.\n\nFinally, fix an incomplete update to the backup.info file while munging for tests." }, { "commit": "53cd530bbf3921c9500963d12393344425574d08", "date": "2019-11-21 12:09:24 -0500", "subject": "Safely initialize manifest object.", - "body": "Using a designated initializer is safer than zeroing the struct. 
It is also better for debugging because Valgrind should be able to detect access to areas that are not initialized due to alignment." + "body": "Using a designated initializer is safer than zeroing the struct. It is also better for debugging because Valgrind should be able to detect access to areas that are not initialized due to alignment." }, { "commit": "270f9496e432109b243e8db5d497805f72c56d91", @@ -10546,7 +10546,7 @@ "commit": "9f71a019c815af58bdd10ca2ceaaf6f1506fb560", "date": "2019-11-21 10:55:03 -0500", "subject": "Allow storageInfo() to operate outside the base storage path.", - "body": "It is occasionally useful to get information about a file outside of the base storage path. storageLocal() can be used in some cases but when the storage is remote is doesn't seem worth creating a separate storage object for adhoc info requests.\n\nstorageInfo() is a read-only operation so this seems pretty safe. The noPathEnforce parameter will make auditing exceptions easy." + "body": "It is occasionally useful to get information about a file outside of the base storage path. storageLocal() can be used in some cases but when the storage is remote it doesn't seem worth creating a separate storage object for adhoc info requests.\n\nstorageInfo() is a read-only operation so this seems pretty safe. The noPathEnforce parameter will make auditing exceptions easy." }, { "commit": "d3b1897625c5b200724fb087cedd975fce07672f", @@ -10564,7 +10564,7 @@ "commit": "cef9f0f37f33fe46ffcb229535a82479700b8de2", "date": "2019-11-21 09:40:15 -0500", "subject": "Process . in strPathAbsolute().", - "body": "A . in a link will always lead to an error since the destination will be inside PGDATA. 
However, it is accepted symlink syntax so it's better to resolve it and get the correct error message.\n\nAlso, we may have other uses for this function in the future." }, { "commit": "a6fc0bf2ca778844448885192f11d17bf933b551", @@ -10626,7 +10626,7 @@ "commit": "26e1da82e7970ac03b8300da5f55ed7da1c2ab96", "date": "2019-11-16 17:32:49 -0500", "subject": "Allow zero-length substrings to be extracted from the end of a string.", - "body": "The previous assert was a bit overzealous and did not allow this case. It's not very common but still occasionally useful." + "body": "The previous assert was a bit overzealous and did not allow this case. It's not very common but still occasionally useful." }, { "commit": "8a3de1e05a427bf40b7742ac476c01ef219f6b46", @@ -10683,7 +10683,7 @@ "commit": "3b879c2cb3cfdbcd03dad1d6cc898052f8d8863d", "date": "2019-11-14 16:48:41 -0500", "subject": "Filter logged command options based on the command definition.", - "body": "Previously, options were being filtered based on what was currently valid. For chained commands (e.g. backup then expire) some options may be valid for the first command but not the second.\n\nFilter based on the command definition rather than what is currently valid to avoid logging options that are not valid for subsequent commands. This reduces the number of options logged and will hopefully help avoid confusion and expect log churn." + "body": "Previously, options were being filtered based on what was currently valid. For chained commands (e.g. backup then expire) some options may be valid for the first command but not the second.\n\nFilter based on the command definition rather than what is currently valid to avoid logging options that are not valid for subsequent commands. This reduces the number of options logged and will hopefully help avoid confusion and expect log churn." 
}, { "commit": "c5b76d213bc24d703e7b973df171a2b01595dabc", @@ -10717,7 +10717,7 @@ "commit": "43171786336733d8e1efe672b801236fd3e2d829", "date": "2019-11-08 17:56:34 -0500", "subject": "Update MinIO to newest release.", - "body": "We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.\n\nIn addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do." + "body": "We had some problems with newer versions so had held off on updating. Those problems appear to have been resolved.\n\nIn addition, the --compat flag is no longer required. Prior versions of MinIO required all parts of a multi-part upload (except the last) to be of equal size. The --compat flag was introduced to restore the default S3 behavior. Now --compat is only required when ETag is being used for MD5 verification, which we don't do." }, { "commit": "edcc7306a39852ddff507d384555230001442bf4", @@ -10729,13 +10729,13 @@ "commit": "eca00e0be00bcd692b10de1b89a700f6525d51a7", "date": "2019-11-07 13:11:01 -0500", "subject": "Add building a development environment to contributing documentation.", - "body": "This documentation shows how to build a development environment on Ubuntu 19.04 and should work for other Debian-based distros.\n\nNote that this document is not included in automated testing due to some unresolved issues with Docker in Docker on Travis CI. We'll address this in the future when we add contributing documentation to the website." 
+ "body": "This documentation shows how to build a development environment on Ubuntu 19.04 and should work for other Debian-based distros.\n\nNote that this document is not included in automated testing due to some unresolved issues with Docker in Docker on Travis CI. We'll address this in the future when we add contributing documentation to the website." }, { "commit": "8b682b75d2ade2e52c75b417fad2eb61150616d1", "date": "2019-11-02 10:35:48 +0100", "subject": "Allow mock integration tests for all VM types.", - "body": "Previously the mock integration tests would be skipped for VMs other than the standard four used in CI. Now VMs outside the standard four will run the same tests as VM4 (currently U18)." + "body": "Previously the mock integration tests would be skipped for VMs other than the standard four used in CI. Now VMs outside the standard four will run the same tests as VM4 (currently U18)." }, { "commit": "b3e5d88304860fbcb566059d08cbf85e2c3cd95a", @@ -10746,7 +10746,7 @@ "commit": "7168e0744018d7d27668d59bfe9b6deafe923efa", "date": "2019-10-30 14:55:25 +0100", "subject": "Use getcwd() to construct path when WAL path is relative.", - "body": "Using pg1-path, as we were doing previously, could lead to WAL being copied to/from unexpected places. PostgreSQL sets the current working directory to PGDATA so we can use that to resolve relative paths." + "body": "Using pg1-path, as we were doing previously, could lead to WAL being copied to/from unexpected places. PostgreSQL sets the current working directory to PGDATA so we can use that to resolve relative paths." }, { "commit": "e06db21e35f701b62c625b9f25436459b8fd5c26", @@ -10774,7 +10774,7 @@ "commit": "b4aeb217e665a166cb75577e07f80b01bc0345d1", "date": "2019-10-15 17:19:42 +0200", "subject": "Allow parameters to be passed to travis.pl.", - "body": "This makes configuring tests easier.\n\nAlso add a parameter for tests that require sudo. This should be retired at some point but some tests still require it." 
+ "body": "This makes configuring tests easier.\n\nAlso add a parameter for tests that require sudo. This should be retired at some point but some tests still require it." }, { "commit": "f3b2189659a7f0e8e660c1be42175f0ff82cffb0", @@ -10798,7 +10798,7 @@ "commit": "64c6102a1536ba94f4957fe950038f31525748b5", "date": "2019-10-12 14:47:01 -0400", "subject": "Update packages required for Travis-CI builds.", - "body": "These packages are expected on the arm64 build even though we are using the same os image as amd64. It appears the arm64 image is slimmer." + "body": "These packages are expected on the arm64 build even though we are using the same os image as amd64. It appears the arm64 image is slimmer." }, { "commit": "35eef2b8676115036b9d68b14223629dd1e1b773", @@ -10826,19 +10826,19 @@ "commit": "93656db186e6da286bd704dc65235fcc209bb19d", "date": "2019-10-12 11:24:21 -0400", "subject": "Update lcov to 1.14.", - "body": "1.13 is not compatible with gcc 8 which is what ships with newer distributions. Build from source to get a more recent version.\n\n1.13 is not compatible with gcc 9 so we'll need to address that at a later date." + "body": "1.13 is not compatible with gcc 8 which is what ships with newer distributions. Build from source to get a more recent version.\n\n1.13 is not compatible with gcc 9 so we'll need to address that at a later date." }, { "commit": "11c7c8fabb1e3eb9dadce2231fbb9cb3d76d553f", "date": "2019-10-12 09:45:18 -0400", "subject": "Remove pgbackrest test user.", - "body": "This user was created before we tested in containers to ensure isolation between the pg and repo hosts which were then just directories. The downside is that this resulted in a lot of sudos to set the pgbackrest user and to remove files which did not belong to the main test user.\n\nContainers provide isolation without needing separate users so we can now safely remove the pgbackrest user. 
This allows us to remove most sudos, except where they are explicitly needed in tests.\n\nWhile we're at it, remove the code that installed the Perl C library (which also required sudo) and simply add the build path to @INC instead." + "body": "This user was created before we tested in containers to ensure isolation between the pg and repo hosts which were then just directories. The downside is that this resulted in a lot of sudos to set the pgbackrest user and to remove files which did not belong to the main test user.\n\nContainers provide isolation without needing separate users so we can now safely remove the pgbackrest user. This allows us to remove most sudos, except where they are explicitly needed in tests.\n\nWhile we're at it, remove the code that installed the Perl C library (which also required sudo) and simply add the build path to @INC instead." }, { "commit": "6f0e7f00af3e83d071713a671a6f155cd1fa3b88", "date": "2019-10-12 09:26:19 -0400", "subject": "Fix recovery test failing in PostgreSQL 12.0.", - "body": "This test was not creating recovery.signal when testing with --type=preserve. The preserve recovery type only keeps existing files and does not create any.\n\nRC1 was just ignoring recovery.signal and going right into recovery. Weirdly, 12.0 used restore_command to do crash recovery which made the problem harder to diagnose, but this has now been fixed in PostgreSQL and should be released in 12.1." + "body": "This test was not creating recovery.signal when testing with --type=preserve. The preserve recovery type only keeps existing files and does not create any.\n\nRC1 was just ignoring recovery.signal and going right into recovery. Weirdly, 12.0 used restore_command to do crash recovery which made the problem harder to diagnose, but this has now been fixed in PostgreSQL and should be released in 12.1." 
}, { "commit": "59a4a0c1b1726812126a13e57bee29689b0a8fdc", @@ -10972,7 +10972,7 @@ "commit": "4e4d1f414a153232079f2e2e9ce203ffc5a2362c", "date": "2019-10-08 16:04:27 -0400", "subject": "Add infoBackupLoadFileReconstruct() to InfoBackup object.", - "body": "Check the backup.info file against the backup path. Add any backups that are missing and remove any backups that no longer exist.\n\nIt's important to run this before backup or expire to be sure we are using the most up-to-date list of backups." + "body": "Check the backup.info file against the backup path. Add any backups that are missing and remove any backups that no longer exist.\n\nIt's important to run this before backup or expire to be sure we are using the most up-to-date list of backups." }, { "commit": "b2825b82c7b8cf140969dd4e9fd04552e921bbfc", @@ -11001,7 +11001,7 @@ "commit": "45881c74aeff4bb25559ec0254fa7fc1960d9cab", "date": "2019-10-08 12:06:30 -0400", "subject": "Allow most unit tests to run outside of a container.", - "body": "Three major changes were required to get this working:\n\n1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.\n\n1) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.\n\n2) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.\n\nAdd infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.\n\nSince new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up to doc tests by VM and run an abbreviated unit test suite on co6 and co7." 
+ "body": "Three major changes were required to get this working:\n\n1) Provide the path to pgbackrest in the build directory when running outside a container. Tests in a container will continue to install and run against /usr/bin/pgbackrest.\n\n2) Set a per-test lock path so tests don't conflict on the default /tmp/pgbackrest path. Also set a per-test log-path while we are at it.\n\n3) Use localhost instead of a custom host for TLS test connections. Tests in containers will continue to update /etc/hosts and use the custom host.\n\nAdd infrastructure and update harnessCfgLoad*() to get the correct exe and paths loaded for testing.\n\nSince new tests are required to verify that running outside a container works, also rework the tests in Travis CI to provide coverage within a reasonable amount of time. Mainly, break up the doc tests by VM and run an abbreviated unit test suite on co6 and co7." }, { "commit": "77b0c6c993a0e6ff45a6a99f343c3709a016d152", @@ -11033,7 +11033,7 @@ "commit": "29e132f5e928298e9be135d76674f2016b122029", "date": "2019-10-01 13:20:43 -0400", "subject": "PostgreSQL 12 support.", - "body": "Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.\n\nA comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.\n\nrecovery.signal and standby.signal are automatically created based on the recovery settings." + "body": "Recovery settings are now written into postgresql.auto.conf instead of recovery.conf. Existing recovery_target* settings will be commented out to help avoid conflicts.\n\nA comment is added before recovery settings to identify them as written by pgBackRest since it is unclear how, in general, old settings will be removed.\n\nrecovery.signal and standby.signal are automatically created based on the recovery settings." 
}, { "commit": "6be7e6fde54f2dc3edbed802efc3f5cb1e050fd2", @@ -11049,7 +11049,7 @@ "commit": "f96c54c4ba0e526cda4cf9bed5cc1231b1d819b5", "date": "2019-09-30 12:39:38 -0400", "subject": "Add info command set option for detailed text output.", - "body": "The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.\n\nThis information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release." + "body": "The additional details include databases that can be used for selective restore and a list of tablespaces and symlinks with their default destinations.\n\nThis information is not included in the JSON output because it requires reading the manifest which is too IO intensive to do for all manifests. We plan to include this information for JSON in a future release." }, { "commit": "33ec5a3aacdfe0dd258463c0cbcd43f0d155152e", @@ -11067,7 +11067,7 @@ "commit": "f1ba428fb037eac7b6c6ea5b9f617d1ad4579026", "date": "2019-09-28 14:02:12 -0400", "subject": "Add performance test capability in C with scaling.", - "body": "Scaling allows the starting values to be increased from the command-line without code changes.\n\nAlso suppress valgrind and assertions when running performance testing. Optimization is left at -O0 because we should not be depending on compiler optimizations to make our code performant, and it makes profiling more informative." + "body": "Scaling allows the starting values to be increased from the command-line without code changes.\n\nAlso suppress valgrind and assertions when running performance testing. Optimization is left at -O0 because we should not be depending on compiler optimizations to make our code performant, and it makes profiling more informative." 
}, { "commit": "004ff99a2d918fa3a5079ce9d7ff2f5b120176e3", @@ -11079,7 +11079,7 @@ "commit": "cb62bebadf62613b770f79cd4fd43e20ec7db10c", "date": "2019-09-28 10:08:20 -0400", "subject": "Use bsearch() on sorted lists rather than an iterative method.", - "body": "bsearch() is far more efficient than an iterative approach except in the most trivial cases.\n\nFor now insert will reset the sort order to none and the list will need to be resorted before bsearch() can be used. This is necessary because item pointers are not stable after a sort, i.e. they can move around. Until lists are stable it's not a good idea to surprise the caller by mixing up their pointers on insert." + "body": "bsearch() is far more efficient than an iterative approach except in the most trivial cases.\n\nFor now insert will reset the sort order to none and the list will need to be resorted before bsearch() can be used. This is necessary because item pointers are not stable after a sort, i.e. they can move around. Until lists are stable it's not a good idea to surprise the caller by mixing up their pointers on insert." }, { "commit": "d3d2a7cd8606be9696957eb052ca6569db1a8167", @@ -11105,7 +11105,7 @@ "commit": "d82102d6ef5445e5c96e47a095210f36a618b800", "date": "2019-09-27 13:04:36 -0400", "subject": "Add explicit promotes to recovery integration tests.", - "body": "PostgreSQL 12 will shutdown in these cases which seems to be the correct action (according to the documentation) when hot_standby = off, but older versions are promoting instead. Set target_action explicitly so all versions will behave the same way.\n\nThis does beg the question of whether the PostgreSQL 12 behavior is wrong (though it matches the docs) or the previous versions are." + "body": "PostgreSQL 12 will shutdown in these cases which seems to be the correct action (according to the documentation) when hot_standby = off, but older versions are promoting instead. 
Set target_action explicitly so all versions will behave the same way.\n\nThis does beg the question of whether the PostgreSQL 12 behavior is wrong (though it matches the docs) or the previous versions are." }, { "commit": "833d0da0d96036ada9e2397e2b1484d2032c25fa", @@ -11129,7 +11129,7 @@ "commit": "03a7bda511b293cf40fd8324951ace1252d40ac2", "date": "2019-09-27 09:19:12 -0400", "subject": "Refactor recovery file generation.", - "body": "Separate the generation of recovery values and formatting them into recovery.conf format. This is generally a good idea, but also makes the code ready to deal with a different recovery file in PostgreSQL 12.\n\nAlso move the recovery file logic out of cmdRestore() into restoreRecoveryWrite()." + "body": "Separate the generation of recovery values and formatting them into recovery.conf format. This is generally a good idea, but also makes the code ready to deal with a different recovery file in PostgreSQL 12.\n\nAlso move the recovery file logic out of cmdRestore() into restoreRecoveryWrite()." }, { "commit": "cf1e96e827d154a62c0b52ff5531dd687695bf73", @@ -11153,7 +11153,7 @@ "commit": "451ae397bec3f3bc070c4db674cc5df61bd63498", "date": "2019-09-26 07:52:02 -0400", "subject": "The restore command is implemented entirely in C.", - "body": "For the most part this is a direct migration of the Perl code into C.\n\nThere is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.\n\nThe C code works like this:\n\nIf a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. 
In that case the file ownership will need to be updated by a privileged user before the restore can be retried.\n\nIf a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name." + "body": "For the most part this is a direct migration of the Perl code into C.\n\nThere is one important behavioral change with regard to how file permissions are handled. The Perl code tried to set ownership as it was in the manifest even when running as an unprivileged user. This usually just led to errors and frustration.\n\nThe C code works like this:\n\nIf a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.\n\nIf a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the PostgreSQL data directory will be used and finally root if the data directory user/group cannot be mapped to a name." 
}, { "commit": "92e48c856ac87838c479c2a454fe007cf026d4bd", @@ -11181,13 +11181,13 @@ "commit": "71349c89aecdb3b158d1638d0310cdba6c96362a", "date": "2019-09-23 17:56:17 -0400", "subject": "Add TEST_TITLE() macro.", - "body": "This macro displays a title for each test. A test frequently has multiple parts and it was hard to tell which subparts went together. We used ad hoc indentation to do this.\n\nAnything that is a not a title is automatically indented so manually indenting is not longer needed. This should make the tests and the test output easier to read." + "body": "This macro displays a title for each test. A test frequently has multiple parts and it was hard to tell which subparts went together. We used ad hoc indentation to do this.\n\nAnything that is a not a title is automatically indented so manually indenting is not longer needed. This should make the tests and the test output easier to read." }, { "commit": "2fd2fe509f3039ef28766b865f3b66cc524dee04", "date": "2019-09-23 17:20:47 -0400", "subject": "Add TEST_RESULT_LOG*() and TEST_SYSTEM*() macros.", - "body": "These macros encapsulate the functionality provided by direct calls to harnessLogResult() and system(). They both have _FMT() variants.\n\nThe primary advantage is that {[path]}, {[user]}, and {[group]} will be replaced with the test path, user, and group respectively. This saves a log of strNewFmt() calls and makes the tests less noisy." + "body": "These macros encapsulate the functionality provided by direct calls to harnessLogResult() and system(). They both have _FMT() variants.\n\nThe primary advantage is that {[path]}, {[user]}, and {[group]} will be replaced with the test path, user, and group respectively. This saves a log of strNewFmt() calls and makes the tests less noisy." 
}, { "commit": "d3a7055ee5d290da0196ba266051846c107ddede", @@ -11205,13 +11205,13 @@ "commit": "c969137021da5dc03e5f95d36080f4e4fd08522b", "date": "2019-09-23 13:50:46 -0400", "subject": "Migrate backup manifest load/save to C.", - "body": "The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes,\ntimestamps, etc. A list of databases is also included for selective restore.\n\nThe purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that\nnothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.\n\nFor now, migrate enough functionality to implement the restore command." + "body": "The backup manifest stores a complete list of all files, links, and paths in a backup along with metadata such as checksums, sizes,\ntimestamps, etc. A list of databases is also included for selective restore.\n\nThe purpose of the manifest is to allow the restore command to confidently reconstruct the PostgreSQL data directory and ensure that\nnothing is missing or corrupt. It is also useful for reporting, e.g. size of backup, backup time, etc.\n\nFor now, migrate enough functionality to implement the restore command." }, { "commit": "5b64c93e8b1de010176b8d3927f80cc8039f4dbc", "date": "2019-09-20 17:50:49 -0400", "subject": "Add local option for cfgExecParam().", - "body": "cfgExecParam() was originally written to provide options for remote processes. Remotes processes do not have access to the local config so it was necessary to pass every non-default option.\n\nLocal processes on the other hand, e.g. archive-get, archive-get-async, archive-push-async, and local, do have access to the local config and therefore don't need every parameter to be passed on the command-line. 
The previous way was not wrong, but it was overly verbose and did not align with the way Perl had worked.\n\nUpdate cfgExecParam() to accept a local option which excludes options from the command line which can be read from local configs." + "body": "cfgExecParam() was originally written to provide options for remote processes. Remotes processes do not have access to the local config so it was necessary to pass every non-default option.\n\nLocal processes on the other hand, e.g. archive-get, archive-get-async, archive-push-async, and local, do have access to the local config and therefore don't need every parameter to be passed on the command-line. The previous way was not wrong, but it was overly verbose and did not align with the way Perl had worked.\n\nUpdate cfgExecParam() to accept a local option which excludes options from the command line which can be read from local configs." }, { "commit": "3f18040aab5707c9be9b51fe37f3464d222fceb5", @@ -11253,7 +11253,7 @@ "commit": "60d93df503c234730109c87114068dde4203804e", "date": "2019-09-18 07:15:16 -0400", "subject": "Use a callback to feed jobs to ProtocolParallel.", - "body": "Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.\n\nInstead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed." + "body": "Loading jobs in advance uses a lot of memory in the case that there are millions of jobs to be performed. We haven't seen this yet, but with backup and restore on the horizon it will become the norm.\n\nInstead, use a callback so that jobs are only created as they are needed and can be freed as soon as they are completed." 
}, { "commit": "ce1c7b02520f2e37dfcfb283471fb4ec263b5d50", @@ -11301,7 +11301,7 @@ "commit": "15d04ca19c0bd1aafe9d4389e8429bcc392e842d", "date": "2019-09-12 16:29:50 -0400", "subject": "Add recursion and json output to the ls command.", - "body": "These features finally make the ls command practical.\n\nCurrently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful." + "body": "These features finally make the ls command practical.\n\nCurrently the JSON contains only name, type, and size. We may add more fields in the future, but these seem like the minimum needed to be useful." }, { "commit": "e45baa1830ac5251fc4284d094c3e12920e50862", @@ -11319,13 +11319,13 @@ "commit": "f809d2f008a4d8d769b72a68ffff9ef75ab9de22", "date": "2019-09-12 15:16:42 -0400", "subject": "Ignore apt-get update errors in Travis CI.", - "body": "Broken vendor packages have been causing builds to break due to an error on apt-get update.\n\nIgnore errors and proceed directory to apt-get install. It's possible that we'll try to reference an expired package version and get an error anyway, but that seems better than a guaranteed hard error." + "body": "Broken vendor packages have been causing builds to break due to an error on apt-get update.\n\nIgnore errors and proceed directory to apt-get install. It's possible that we'll try to reference an expired package version and get an error anyway, but that seems better than a guaranteed hard error." }, { "commit": "506c10f7f270806aca8acecaa303bf4458463720", "date": "2019-09-12 12:04:25 -0400", "subject": "Sort and find improvements to List and StringList objects.", - "body": "Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.\n\nUpdate insert and add functions to return the item added rather than the list. 
This is more useful in the core code, though numerous updates to the tests were required." + "body": "Push the responsibility for sort and find down to the List object by introducing a general comparator function that can be used for both sorting and finding.\n\nUpdate insert and add functions to return the item added rather than the list. This is more useful in the core code, though numerous updates to the tests were required." }, { "commit": "e4a071ce033ae53dafb35eb9ccd4daba1030064c", @@ -11348,13 +11348,13 @@ "commit": "f4f21d0df7217bcb583189d0fc4ecaac3faed146", "date": "2019-09-10 13:02:05 -0400", "subject": "Add groupIdFromName() and userIdFromName() to user module.", - "body": "Update StorageWritePosix to use the new functions.\n\nA side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.\n\nIn general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial." + "body": "Update StorageWritePosix to use the new functions.\n\nA side effect is that storageWritePosixOpen() will no longer error when the user/group name does not exist. It will simply retain the original user/group, i.e. the user that executed the restore.\n\nIn general this is a feature since completing a restore is more important than setting permissions exactly from the source host. However, some notification of this omission to the user would be beneficial." }, { "commit": "f8d0574759340f0260f06e377da412041987f582", "date": "2019-09-10 12:29:36 -0400", "subject": "Increase process timeout and emit occasional warnings.", - "body": "Travis will timeout after 10 minutes with no output. 
Emit a warning every 5 minutes to keep Travis alive and increase the total timeout to 20 minutes.\n\nDocumentation builds have been timing out a lot recently so hopefully this will help." + "body": "Travis will timeout after 10 minutes with no output. Emit a warning every 5 minutes to keep Travis alive and increase the total timeout to 20 minutes.\n\nDocumentation builds have been timing out a lot recently so hopefully this will help." }, { "commit": "e043c6b1bc388422aee9255077544c3644b8f69d", @@ -11383,7 +11383,7 @@ "commit": "0a96764cb8daa615121069e840268e216d1983ff", "date": "2019-09-07 18:04:39 -0400", "subject": "Remove most references to PostgreSQL control and catalog versions.", - "body": "The control and catalog versions were stored a variety of places in the optimistic hope that they would be useful. In fact they never were.\n\nWe can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.\n\nAdd functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0." + "body": "The control and catalog versions were stored a variety of places in the optimistic hope that they would be useful. In fact they never were.\n\nWe can't remove them from the backup.info and backup.manifest files due to backwards compatibility concerns, but we can at least avoid loading and storing them in C structures.\n\nAdd functions to the PostgreSQL interface which will return the control and catalog versions for any supported version of PostgreSQL to allow backwards compatibility for backup.info and backup.manifest. These functions will be useful in other ways, e.g. generating the tablespace identifier in PostgreSQL >= 9.0." 
}, { "commit": "843a602080e13ea0f205720892f7bfee58e075fb", @@ -11423,13 +11423,13 @@ "commit": "5c314df098ee2295af400fb0b6b4f0cccd96fb69", "date": "2019-09-05 19:53:00 -0400", "subject": "Rename infoManifest module to manifest.", - "body": "The manifest is not an info file so if anything it should be called backupManifest. But that seems too long for such a commonly used object so manifest seems better.\n\nNote that unlike Perl there is no storage manifest method so this stands as the only manifest in the C code, as befits its importance." + "body": "The manifest is not an info file so if anything it should be called backupManifest. But that seems too long for such a commonly used object so manifest seems better.\n\nNote that unlike Perl there is no storage manifest method so this stands as the only manifest in the C code, as befits its importance." }, { "commit": "8df7d68c8dca2c61b665bbbd1f26800c0dca3ae8", "date": "2019-09-03 18:28:53 -0400", "subject": "Fix sudo missed in \"Build pgBackRest as an unprivileged user\".", - "body": "286a106a updated the documentation to build pgBackRest as an unprivileged user, but the wget command was missed. This command is not actually run, just displayed, because the release is not yet available when the documentation is built.\n\nUpdate the wget command to run as the local user." + "body": "286a106a updated the documentation to build pgBackRest as an unprivileged user, but the wget command was missed. This command is not actually run, just displayed, because the release is not yet available when the documentation is built.\n\nUpdate the wget command to run as the local user." 
}, { "commit": "005684bf1f55206f122e1d0fcb4164cc84875ff5", @@ -11451,7 +11451,7 @@ "commit": "7d8068f27b6b695f9616fd985517692236dda47d", "date": "2019-09-03 12:30:45 -0400", "subject": "Don't decode manifest data when it is generated on a remote.", - "body": "Decoding a manifest from the JSON provided by C to the hash required by Perl is an expensive process. If manifest() was called on a remote it was being decoded into a hash and then immediately re-encoded into JSON for transmission over the protocol layer.\n\nInstead, provide a function for the remote to get the raw JSON which can be transmitted as is and decoded in the calling process instead.\n\nThis makes remote manifest calls as fast as they were before 2.16, but local calls must still pay the decoding penalty and are therefore slower. This will continue to be true until the Perl storage interface is retired at the end of the C migration.\n\nNote that for reasonable numbers of tables there is no detectable difference. The case in question involved 250K tables with a 10 minute decode time (which was being doubled) on a fast workstation." + "body": "Decoding a manifest from the JSON provided by C to the hash required by Perl is an expensive process. If manifest() was called on a remote it was being decoded into a hash and then immediately re-encoded into JSON for transmission over the protocol layer.\n\nInstead, provide a function for the remote to get the raw JSON which can be transmitted as is and decoded in the calling process instead.\n\nThis makes remote manifest calls as fast as they were before 2.16, but local calls must still pay the decoding penalty and are therefore slower. This will continue to be true until the Perl storage interface is retired at the end of the C migration.\n\nNote that for reasonable numbers of tables there is no detectable difference. The case in question involved 250K tables with a 10 minute decode time (which was being doubled) on a fast workstation." 
}, { "commit": "1e55b876206c73d0ce32424814d100ef82d8da05", @@ -11462,7 +11462,7 @@ "commit": "7ade3fc1c31f09425547d1869e82863582b1b137", "date": "2019-09-02 21:09:43 -0400", "subject": "Move constants from the infoManifest module to the infoBackup module.", - "body": "These constants should be kept separate because the implementation of any info file might change in the future and only the interface should be expected to remain consistent.\n\nIn any case, infoBackup requires Variant constants while infoManifest uses String constants so they are not shareable. Modern compilers should combine the underlying const char * constants." + "body": "These constants should be kept separate because the implementation of any info file might change in the future and only the interface should be expected to remain consistent.\n\nIn any case, infoBackup requires Variant constants while infoManifest uses String constants so they are not shareable. Modern compilers should combine the underlying const char * constants." }, { "commit": "3a28b68b8bde996252ad8941107744c74a062453", @@ -11534,7 +11534,7 @@ "commit": "01c2669b9764491ec3ca96ffd0de6676bef9e5dc", "date": "2019-08-23 07:47:54 -0400", "subject": "Fix exclusions for special files.", - "body": "Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.\n\nWorse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.\n\nSpecial file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude." 
+ "body": "Prior to 2.16 the Perl manifest code would skip any file that began with a dot. This was not intentional but it allowed PostgreSQL socket files to be located in the data directory. The new C code in 2.16 did not have this unintentional exclusion so socket files in the data directory caused errors.\n\nWorse, the file type error was being thrown before the exclusion check so there was really no way around the issue except to move the socket files out of the data directory.\n\nSpecial file types (e.g. socket, pipe) will now be automatically skipped and a warning logged to notify the user of the exclusion. The warning can be suppressed with an explicit --exclude." }, { "commit": "2862f480cd1712ee9c188c28d79f2f9ed1bb9c6b", @@ -11552,13 +11552,13 @@ "commit": "f88012cef3acc29fb5311c73ea1a84c1409d88ee", "date": "2019-08-22 10:18:34 -0400", "subject": "Fix regexp to ignore ./.. directories in the Posix driver.", - "body": "In versions <= 2.15 the old regexp caused any file or directory beginning with . to be ignored during a backup. This has caused behavioral differences in 2.16 because the new C code correctly excludes ./.. directories.\n\nThis Perl code is only used for testing now, but it should still match the output of the C functions." + "body": "In versions <= 2.15 the old regexp caused any file or directory beginning with . to be ignored during a backup. This has caused behavioral differences in 2.16 because the new C code correctly excludes ./.. directories.\n\nThis Perl code is only used for testing now, but it should still match the output of the C functions." }, { "commit": "c002a2ce2fb7a1bc4bbeaa5dee2f3e7d719ccc24", "date": "2019-08-21 19:45:48 -0400", "subject": "Move info file checksum to the end of the file.", - "body": "Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. 
The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.\n\nInstead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.\n\nThere are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient." + "body": "Putting the checksum at the beginning of the file made it impossible to stream the file out when saving. The entire file had to be held in memory while it was checksummed so the checksum could be written at the beginning.\n\nInstead place the checksum at the end. This does not break the existing Perl or C code since the read is not order dependent.\n\nThere are no plans to improve the Perl code to take advantage of this change, but it will make the C implementation more efficient." }, { "commit": "c733319063bbc9d67ba99b96c178f1d9f863b8fe", @@ -11581,7 +11581,7 @@ "commit": "fa640f22add9af88c2ddd584812d8fde93c8f8c5", "date": "2019-08-21 15:12:00 -0400", "subject": "Allow Info* objects to be created from scratch in C.", - "body": "Previously, info files (e.g. archive.info, backup.info) were created in Perl and only loaded in C.\n\nThe upcoming stanza commands in C need to create these files so refactor the Info* objects to allow new, empty objects to be created. Also, add functions needed to initialize each Info* object to a valid state." + "body": "Previously, info files (e.g. archive.info, backup.info) were created in Perl and only loaded in C.\n\nThe upcoming stanza commands in C need to create these files so refactor the Info* objects to allow new, empty objects to be created. Also, add functions needed to initialize each Info* object to a valid state." 
}, { "commit": "aa6f7eb862f71bf0b8f64d3c5e3d67903b048214", @@ -11603,7 +11603,7 @@ "commit": "27e823581201374df831abc4cf299ce4da29b910", "date": "2019-08-21 11:41:36 -0400", "subject": "Add repoIsLocalVerify() to verify repository locality.", - "body": "Some commands can only be run on a host where the repository is local. This function centralizes the check and error." + "body": "Some commands can only be run on a host where the repository is local. This function centralizes the check and error." }, { "commit": "6a09d9294d0ef13158fbc96c9d38a28ac7a6f150", @@ -11615,7 +11615,7 @@ "commit": "286a106ae4932ae8213d62528de934afa99c246e", "date": "2019-08-20 09:46:29 -0400", "subject": "Build pgBackRest as an unprivileged user.", - "body": "pgBackRest was being built by root in the documentation which is definitely not best practice.\n\nInstead build as the unprivileged default container user. Sudo privileges are still required to install." + "body": "pgBackRest was being built by root in the documentation which is definitely not best practice.\n\nInstead build as the unprivileged default container user. Sudo privileges are still required to install." }, { "commit": "6b5366a663f8c408e56bb2f8e5b27821898a91cc", @@ -11626,25 +11626,25 @@ "commit": "f6aef6e466ccba368fb53cc00450b501f35782fc", "date": "2019-08-19 21:45:54 -0400", "subject": "Properly reset conflicting pg-* options for the remote protocol.", - "body": "The pg1-socket-path and pg1-port options were not being reset when options from a higher index were being pushed down for processing by a remote. Since remotes only talk to one cluster they always use the options in index 1. This requires moving options from the original index to 1 before starting the remote. All options already set on index 1 must be removed if they are not being overwritten." + "body": "The pg1-socket-path and pg1-port options were not being reset when options from a higher index were being pushed down for processing by a remote. 
Since remotes only talk to one cluster they always use the options in index 1. This requires moving options from the original index to 1 before starting the remote. All options already set on index 1 must be removed if they are not being overwritten." }, { "commit": "9eaeb33c882b79a1cade3ce65b5d1b51b6978b05", "date": "2019-08-19 21:36:01 -0400", "subject": "Improve slow manifest build for very large quantities of tables/segments.", - "body": "storagePosixInfoList() processed each directory in a single memory context. If the directory contained hundreds of thousands of files processing became very slow due to the number of allocations.\n\nInstead, reset the memory context every thousand files to minimize the number of allocations active at once, improving both speed and memory consumption." + "body": "storagePosixInfoList() processed each directory in a single memory context. If the directory contained hundreds of thousands of files processing became very slow due to the number of allocations.\n\nInstead, reset the memory context every thousand files to minimize the number of allocations active at once, improving both speed and memory consumption." }, { "commit": "d411321d28d2c94f3376b82984981624fe30e287", "date": "2019-08-19 21:16:10 -0400", "subject": "Add reset to temp memory contexts to save memory and processing time.", - "body": "Processing large datasets in a memory context can lead to high memory usage and long allocation times. Add a new MEM_CONTEXT_TEMP_RESET_BEGIN() macro that allows temp allocations to be automatically freed after N iterations." + "body": "Processing large datasets in a memory context can lead to high memory usage and long allocation times. Add a new MEM_CONTEXT_TEMP_RESET_BEGIN() macro that allows temp allocations to be automatically freed after N iterations." 
}, { "commit": "7d97d49f41e961dcaa0d54b56538d3c5a0b8f6ce", "date": "2019-08-18 20:46:34 -0400", "subject": "Add MostCommonValue object.", - "body": "Calculate the most common value in a list of variants. If there is a tie then the first value passed to mcvUpdate() wins.\n\nmcvResult() can be called multiple times because it does not end processing, but there is a cost to calculating the result each time\nsince it is not stored." + "body": "Calculate the most common value in a list of variants. If there is a tie then the first value passed to mcvUpdate() wins.\n\nmcvResult() can be called multiple times because it does not end processing, but there is a cost to calculating the result each time\nsince it is not stored." }, { "commit": "8aa1e552b00cb3a9cf5665812e6de782f0805675", @@ -11677,13 +11677,13 @@ "commit": "8fc1d3883b2d96d4f518ff2bfa810b082cc48b38", "date": "2019-08-17 17:43:56 -0400", "subject": "Fix expire not immediately writing into separate file after backup.", - "body": "Logging stayed in the backup log until the Perl code started. Fix this so it logs to the correct file and will still work after the Perl code is removed." + "body": "Logging stayed in the backup log until the Perl code started. Fix this so it logs to the correct file and will still work after the Perl code is removed." }, { "commit": "41b6795a374391d66dcfb2df0521f1d7acf84b3e", "date": "2019-08-17 14:15:37 -0400", "subject": "Create log directories/files with 0750/0640 mode.", - "body": "The log directories/files were being created with a mix of modes depending on whether they were created in C or Perl. In particular, the C code was creating log files with the execute bit set for the user and group which was just odd.\n\nStandardize on 750/640 for both code paths." + "body": "The log directories/files were being created with a mix of modes depending on whether they were created in C or Perl. 
In particular, the C code was creating log files with the execute bit set for the user and group which was just odd.\n\nStandardize on 750/640 for both code paths." }, { "commit": "bc5385142c75f5fa390531a8ff1bd660dd1380ef", @@ -11694,7 +11694,7 @@ "commit": "382ed9282504b2adced37275937d2414632bb968", "date": "2019-08-09 15:17:18 -0400", "subject": "The start/stop commands are implemented entirely in C.", - "body": "The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed." + "body": "The Perl versions remain because they are still being used by the Perl stanza commands. Once the stanza commands are migrated they can be removed." }, { "commit": "fe196cb0dffacb5d2d1488e3b592cc8ba3f39a5e", @@ -11711,7 +11711,7 @@ "commit": "e9517dcec05c75a03fa73029250de4f5b21fdfd3", "date": "2019-08-08 18:47:02 -0400", "subject": "Add hash constants for zero-length data.", - "body": "No need to calculate a hash when the data length is known to be zero. Use one of these constants instead." + "body": "No need to calculate a hash when the data length is known to be zero. Use one of these constants instead." }, { "commit": "56c24b7669f1e636a8221106547469065eb7d161", @@ -11739,7 +11739,7 @@ "commit": "289b47902ba1f49a1baea9fdce53eec52d5358bb", "date": "2019-08-08 10:50:25 -0400", "subject": "Allow NULLs in strEq().", - "body": "Bring this function more in line with the way varEq() works. NULL == NULL but NULL != NOT NULL." + "body": "Bring this function more in line with the way varEq() works. NULL == NULL but NULL != NOT NULL." 
}, { "commit": "feec674b6ff88bdfa62b20dd706d724c79ca1bcd", @@ -11772,13 +11772,13 @@ "commit": "f9e1f3a79823ee1bd6656e4f7a8fb23e735b8ccf", "date": "2019-08-01 14:28:30 -0400", "subject": "Retry S3 RequestTimeTooSkewed errors instead of immediately terminating.", - "body": "The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.\n\nWe're not sure if pgBackRest it to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance." + "body": "The cause of this error seems to be that a failed request takes so long that a subsequent retry at the http level uses outdated headers.\n\nWe're not sure if pgBackRest it to blame here (in one case a kernel downgrade fixed it, in another case an incorrect network driver was the problem) so add retries to hopefully deal with the issue if it is not too persistent. If SSL_write() has long delays before reporting an error then this will obviously affect backup performance." }, { "commit": "2eb3c9f95f6e57072b65da81cb07a1268ebbf38e", "date": "2019-08-01 09:58:24 -0400", "subject": "Improve error handling for SSL_write().", - "body": "Error codes were not being caught for SSL_write() so it was hard to see exactly what was happening in error cases. Report errors to aid in debugging.\n\nAlso add a retry for SSL_ERROR_WANT_READ. Even though we have not been able to reproduce this case it is required by SSL_write() so go ahead and implement it." + "body": "Error codes were not being caught for SSL_write() so it was hard to see exactly what was happening in error cases. Report errors to aid in debugging.\n\nAlso add a retry for SSL_ERROR_WANT_READ. 
Even though we have not been able to reproduce this case it is required by SSL_write() so go ahead and implement it." }, { "commit": "89c67287bcb8bc7884b5dd37703b0d7786c68b95", @@ -11790,7 +11790,7 @@ "commit": "893ae24284e506c551fbc28bec8719881a856e2a", "date": "2019-07-31 19:58:57 -0400", "subject": "Add timeout to walSegmentFind().", - "body": "Keep trying to locate the WAL segment until timeout. This is useful for the check and backup commands which must wait for segments to arrive in the archive." + "body": "Keep trying to locate the WAL segment until timeout. This is useful for the check and backup commands which must wait for segments to arrive in the archive." }, { "commit": "03b28da1cac8f39c7f57d2c15a83ad667a51b997", @@ -11818,7 +11818,7 @@ "commit": "f8b0676fd6edbe915bc23d2f082b4862004ca151", "date": "2019-07-25 20:15:06 -0400", "subject": "Allow modules to be included for testing without requiring coverage.", - "body": "Sometimes it is useful to get at the internals of a module that is not being tested for coverage in order to provide coverage for another module that is being tested. The include directive allows this.\n\nUpdate modules that had previously been added to coverage that only need to be included." + "body": "Sometimes it is useful to get at the internals of a module that is not being tested for coverage in order to provide coverage for another module that is being tested. The include directive allows this.\n\nUpdate modules that had previously been added to coverage that only need to be included." }, { "commit": "554d98746a96d5a93e3449d62c81c9b7a7c84e9d", @@ -11836,13 +11836,13 @@ "commit": "415542b4a3589ff7b25dbd97d1041a9b1ff87815", "date": "2019-07-25 14:50:02 -0400", "subject": "Add PostgreSQL query client.", - "body": "This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.\n\nTesting is performed using a shim that can use scripted responses to test all aspects of the client code. 
The shim will be very useful for testing backup scenarios on complex topologies." + "body": "This direct interface to libpq allows simple queries to be run against PostgreSQL and supports timeouts.\n\nTesting is performed using a shim that can use scripted responses to test all aspects of the client code. The shim will be very useful for testing backup scenarios on complex topologies." }, { "commit": "59f135340d4f76c522c8b24ecb23d56b57b2b0f8", "date": "2019-07-25 14:34:16 -0400", "subject": "The local command for backup is implemented entirely in C.", - "body": "The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C." + "body": "The local process is now entirely migrated to C. Since all major I/O operations are performed in the local process, the vast majority of I/O is now performed in C." }, { "commit": "54ec8f151e4164c3f363f417bece7c4b1533dfd6", @@ -11865,7 +11865,7 @@ "commit": "38ba458616f450232dc1090306e651561c9b3076", "date": "2019-07-18 08:42:42 -0400", "subject": "Add IoSink filter.", - "body": "Discard all data passed to the filter. Useful for calculating size/checksum on a remote system when no data needs to be returned.\n\nUpdate ioReadDrain() to automatically use the IoSink filter." + "body": "Discard all data passed to the filter. Useful for calculating size/checksum on a remote system when no data needs to be returned.\n\nUpdate ioReadDrain() to automatically use the IoSink filter." }, { "commit": "d1dd6add4853a77d2ebbf27c326bbb4670b78955", @@ -11877,7 +11877,7 @@ "commit": "3bdba4933d45451a3fa6f5dafcb79fd976bffef6", "date": "2019-07-17 16:49:42 -0400", "subject": "Fix incorrect handling of transfer-encoding response to HEAD request.", - "body": "The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. 
In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.\n\nIgnore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression." + "body": "The HTTP server can use either content-length or transfer-encoding to indicate that there is content in the response. HEAD requests do not include content but return all the same headers as GET. In the HEAD case we were ignoring content-length but not transfer-encoding which led to unexpected eof errors on AWS S3. Our test server, minio, uses content-length so this was not caught in integration testing.\n\nIgnore all content for HEAD requests (no matter how it is reported) and add a unit test for transfer-encoding to prevent a regression." }, { "commit": "6f981c53bb63d3a644a8573cb9edb162c443d03b", @@ -11900,19 +11900,19 @@ "commit": "30f55a3c2a139ffd59547fd8e422c3fae3feecd3", "date": "2019-07-15 17:36:24 -0400", "subject": "Add compressed storage feature.", - "body": "This feature denotes storage that can compress files so that they take up less space than what was written. Currently this includes the Posix and CIFS drivers. The stored size of the file will be rechecked after write to determine if the reported size is different. This check would be wasted on object stores such as S3, and they might not report the file as existing immediately after write.\n\nAlso add tests to each storage driver to check features." + "body": "This feature denotes storage that can compress files so that they take up less space than what was written. Currently this includes the Posix and CIFS drivers. The stored size of the file will be rechecked after write to determine if the reported size is different. 
This check would be wasted on object stores such as S3, and they might not report the file as existing immediately after write.\n\nAlso add tests to each storage driver to check features." }, { "commit": "3e1062825dde7cab506225a3b0658552b44de7ce", "date": "2019-07-15 16:49:46 -0400", "subject": "Allow multiple filters to be pushed to the remote and return results.", - "body": "Previously only a single filter could be pushed to the remote since order was not being maintained. Now the filters are strictly ordered.\n\nResults are returned from the remote and set in the local IoFilterGroup so they can be retrieved.\n\nExpand remote filter support to include all filters." + "body": "Previously only a single filter could be pushed to the remote since order was not being maintained. Now the filters are strictly ordered.\n\nResults are returned from the remote and set in the local IoFilterGroup so they can be retrieved.\n\nExpand remote filter support to include all filters." }, { "commit": "d5654375a5152764f76335eb5265eb994d1df6bd", "date": "2019-07-15 08:44:41 -0400", "subject": "Add ioReadDrain().", - "body": "Read all data from an IoRead object and discard it. This is handy for calculating size, hash, etc. when the output is not needed.\n\nUpdate code where a loop was used before." + "body": "Read all data from an IoRead object and discard it. This is handy for calculating size, hash, etc. when the output is not needed.\n\nUpdate code where a loop was used before." }, { "commit": "cdb75ac8b38810e2e7c6f5d0161443650f94bb31", @@ -11934,13 +11934,13 @@ "commit": "e10577d0b0c8054a08513b069e2f2647ddd51fb7", "date": "2019-07-11 09:13:56 -0400", "subject": "Fix incorrect offline upper bound for ignoring page checksum errors.", - "body": "For offline backups the upper bound was being set to 0x0000FFFF0000FFFF rather than UINT64_MAX. 
This meant that page checksum errors might be ignored for databases with a lot of past WAL in offline mode.\n\nOnline mode is not affected since the upper bound is retrieved from pg_start_backup()." + "body": "For offline backups the upper bound was being set to 0x0000FFFF0000FFFF rather than UINT64_MAX. This meant that page checksum errors might be ignored for databases with a lot of past WAL in offline mode.\n\nOnline mode is not affected since the upper bound is retrieved from pg_start_backup()." }, { "commit": "2fd0ebb78aabe882d4b080d4014d7ecabbded3da", "date": "2019-07-10 15:08:35 -0400", "subject": "Fix links broken by non-standard version.", - "body": "Using version 2.15.1 fixed the duplicate tarball problem but broke the auto-generated links. Fix them manually since this should not be a common problem." + "body": "Using version 2.15.1 fixed the duplicate tarball problem but broke the auto-generated links. Fix them manually since this should not be a common problem." }, { "commit": "6a89c1526e1ecc2940ed2bf42258ceb9ef85f0e0", @@ -11991,25 +11991,25 @@ "commit": "9836578520e3fb5c038230ebfb123001a7eb32fa", "date": "2019-07-05 16:55:17 -0400", "subject": "Remove perl critic and coverage.", - "body": "No new Perl code is being developed, so these tools are just taking up time and making migrations to newer platforms harder. There are only a few Perl tests remaining with full coverage so the coverage tool does not warn of loss of coverage in most cases.\n\nRemove both tools and associated libraries." + "body": "No new Perl code is being developed, so these tools are just taking up time and making migrations to newer platforms harder. There are only a few Perl tests remaining with full coverage so the coverage tool does not warn of loss of coverage in most cases.\n\nRemove both tools and associated libraries." 
}, { "commit": "fc2101352206a87f8471fd37455d9cd990fce95d", "date": "2019-07-05 16:25:28 -0400", "subject": "Fix scoping violations exposed by optimizations in gcc 9.", - "body": "gcc < 9 makes all compound literals function scope, even though the C spec requires them to be invalid outside the current scope. Since the compiler and valgrind were not enforcing this we had a few violations which caused problems in gcc >= 9.\n\nEven though we are not quite ready to support gcc 9 officially, fix the scoping violations that currently exist in the codebase." + "body": "gcc < 9 makes all compound literals function scope, even though the C spec requires them to be invalid outside the current scope. Since the compiler and valgrind were not enforcing this we had a few violations which caused problems in gcc >= 9.\n\nEven though we are not quite ready to support gcc 9 officially, fix the scoping violations that currently exist in the codebase." }, { "commit": "1708f1d1514b3f2afb69664317df705a1fccfaf7", "date": "2019-07-02 22:20:35 -0400", "subject": "Use minio for integration testing.", - "body": "ScalityS3 has not received any maintenance in years and is slow to start which is bad for testing. Replace it with minio which starts quickly and ships as a single executable or a tiny container.\n\nMinio has stricter limits on allowable characters but should still provide enough coverage to show that our encoding is working correctly.\n\nThis commit also includes the upgrade to openssl 1.1.1 in the Ubuntu 18.04 container." + "body": "ScalityS3 has not received any maintenance in years and is slow to start which is bad for testing. Replace it with minio which starts quickly and ships as a single executable or a tiny container.\n\nMinio has stricter limits on allowable characters but should still provide enough coverage to show that our encoding is working correctly.\n\nThis commit also includes the upgrade to openssl 1.1.1 in the Ubuntu 18.04 container." 
}, { "commit": "b9b21315ead6610bb41ee19794763555a7265261", "date": "2019-07-02 22:09:12 -0400", "subject": "Updates for openssl 1.1.1.", - "body": "Some HTTP error tests were failing after the upgrade to openssl 1.1.1, though the rest of the unit and integration tests worked fine. This seemed to be related to the very small messages used in the error testing, but it pointed to an issue with the code not being fully compliant, made worse by auto-retry being enabled by default.\n\nDisable auto-retry and implement better error handling to bring the code in line with openssl recommendations.\n\nThere's no evidence this is a problem in the field, but having all the tests pass seems like a good idea and the new code is certainly more robust.\n\nCoverage will be complete in the next commit when openssl 1.1.1 is introduced." + "body": "Some HTTP error tests were failing after the upgrade to openssl 1.1.1, though the rest of the unit and integration tests worked fine. This seemed to be related to the very small messages used in the error testing, but it pointed to an issue with the code not being fully compliant, made worse by auto-retry being enabled by default.\n\nDisable auto-retry and implement better error handling to bring the code in line with openssl recommendations.\n\nThere's no evidence this is a problem in the field, but having all the tests pass seems like a good idea and the new code is certainly more robust.\n\nCoverage will be complete in the next commit when openssl 1.1.1 is introduced." }, { "commit": "c55009d0f9ebfea746ab270be167dc2d78976ce9", @@ -12043,7 +12043,7 @@ "commit": "4815752ccc46ff742f67b369bc75ad3efcf11204", "date": "2019-06-26 08:24:58 -0400", "subject": "Add Perl interface to C storage layer.", - "body": "Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. 
Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.\n\nThe goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired." + "body": "Maintaining the storage layer/drivers in two languages is burdensome. Since the integration tests require the Perl storage layer/drivers we'll need them even after the core code is migrated to C. Create an interface layer so the Perl code can be removed and new storage drivers/features introduced without adding Perl equivalents.\n\nThe goal is to move the integration tests to C so this interface will eventually be removed. That being the case, the interface was designed for maximum compatibility to ease the transition. The result looks a bit hacky but we'll improve it as needed until it can be retired." }, { "commit": "bd6c0941e9e3aa2dab84f7404da1f9dc60cd693b", @@ -12065,7 +12065,7 @@ "commit": "51fcaee43edf022aea0f94b76d254f2de8b6e1d1", "date": "2019-06-25 07:58:38 -0400", "subject": "Add host-repo-path variable internal replacement.", - "body": "This variable needs to be replaced right before being used without being added to the cache since the host repo path will vary from system to system.\n\nThis is frankly a bit of a hack to get the documentation to build in the Debian packages for the upcoming release. We'll need to come up with something more flexible going forward." + "body": "This variable needs to be replaced right before being used without being added to the cache since the host repo path will vary from system to system.\n\nThis is frankly a bit of a hack to get the documentation to build in the Debian packages for the upcoming release. We'll need to come up with something more flexible going forward." 
}, { "commit": "5cbe2dee855577ccd14051cad35f30376db21d50", @@ -12077,7 +12077,7 @@ "commit": "d7f12f268a370e388c6a6375666831ecd8fad72c", "date": "2019-06-24 19:27:13 -0400", "subject": "Redact secure options in the help command.", - "body": "Secure options could show up in the help as \"current\". While the user must have permissions to see the source of the options (e.g. environment, config file) it's still not a good idea to display them in an unexpected context.\n\nInstead show secure options as in the help command." + "body": "Secure options could show up in the help as \"current\". While the user must have permissions to see the source of the options (e.g. environment, config file) it's still not a good idea to display them in an unexpected context.\n\nInstead show secure options as in the help command." }, { "commit": "c22e10e4a938b444ac7912efc3b751829401360f", @@ -12089,19 +12089,19 @@ "commit": "b498188f01f8d2ccd4d0ce2cce3af2e5069d9ac3", "date": "2019-06-24 11:59:44 -0400", "subject": "Error on db history mismatch when expiring.", - "body": "Amend commit 434cd832 to error when the db history in archive.info and backup.info do not match.\n\nThe Perl code would attempt to reconcile the history by matching on system id and version but we are not planning to migrate that code to C. It's possible that there are users with mismatches but if so they should have been getting errors from info for the last six months. It's easy enough to manually fix these files if there are any mismatches in the field." + "body": "Amend commit 434cd832 to error when the db history in archive.info and backup.info do not match.\n\nThe Perl code would attempt to reconcile the history by matching on system id and version but we are not planning to migrate that code to C. It's possible that there are users with mismatches but if so they should have been getting errors from info for the last six months. 
It's easy enough to manually fix these files if there are any mismatches in the field." }, { "commit": "039e515a319216035187c89efccf97143d4cac03", "date": "2019-06-24 10:20:47 -0400", "subject": "Allow protocol compression when read/writing remote files.", - "body": "If the file is compressible (i.e. not encrypted or already compressed) it can be marked as such in storageNewRead()/storageNewWrite(). If the file is being read from/written to a remote it will be compressed in transit using gzip.\n\nSimplify filter group handling by having the IoRead/IoWrite objects create the filter group automatically. This removes the need for a lot of NULL checking and has a negligible effect on performance since a filter group needs to be created eventually unless the source file is missing.\n\nAllow filters to be created using a VariantList so filter parameters can be passed to the remote." + "body": "If the file is compressible (i.e. not encrypted or already compressed) it can be marked as such in storageNewRead()/storageNewWrite(). If the file is being read from/written to a remote it will be compressed in transit using gzip.\n\nSimplify filter group handling by having the IoRead/IoWrite objects create the filter group automatically. This removes the need for a lot of NULL checking and has a negligible effect on performance since a filter group needs to be created eventually unless the source file is missing.\n\nAllow filters to be created using a VariantList so filter parameters can be passed to the remote." }, { "commit": "62715ebf2d8b0585c35cd2ee14d6aabf9cc0f1f8", "date": "2019-06-19 17:49:38 -0400", "subject": "Fix archive retention expiring too aggressively.", - "body": "The problem expressed when repo1-archive-retention-type was set to diff. In this case repo1-archive-retention ended up being effectively equal to one, which meant PITR recovery was only possible from the last backup. 
WAL required for consistency was still preserved for all backups.\n\nThis issue is not present in the C migration committed at 434cd832, which was written before this bug was reported. Even so, we wanted to note this issue in the release notes in case any other users have been affected.\n\nFixed by Cynthia Shang.\nReported by Mohamad El-Rifai." + "body": "The problem expressed when repo1-archive-retention-type was set to diff. In this case repo1-archive-retention ended up being effectively equal to one, which meant PITR recovery was only possible from the last backup. WAL required for consistency was still preserved for all backups.\n\nThis issue is not present in the C migration committed at 434cd832, which was written before this bug was reported. Even so, we wanted to note this issue in the release notes in case any other users have been affected.\n\nFixed by Cynthia Shang.\nReported by Mohamad El-Rifai." }, { "commit": "a7d64bab7abe56132ad1c83eb6bfcca1166e0000", @@ -12142,7 +12142,7 @@ "commit": "0a96a2895d7a1e4c5eec402d3742f1d1e25cc126", "date": "2019-06-17 09:16:44 -0400", "subject": "Add storage layer for tests and documentation.", - "body": "The tests and documentation have been using the core storage layer but soon that will depend entirely on the C library, creating a bootstrap problem (i.e. the storage layer will be needed to build the C library).\n\nCreate a simplified Posix storage layer to be used by documentation and the parts of the test code that build and execute the actual tests. The actual tests will still use the core storage driver so they can interact with any type of storage." + "body": "The tests and documentation have been using the core storage layer but soon that will depend entirely on the C library, creating a bootstrap problem (i.e. the storage layer will be needed to build the C library).\n\nCreate a simplified Posix storage layer to be used by documentation and the parts of the test code that build and execute the actual tests. 
The actual tests will still use the core storage driver so they can interact with any type of storage." }, { "commit": "ceafd8e19d416106f50c4ff440bf1cf7fcb5553f", @@ -12172,13 +12172,13 @@ "commit": "f05fbc54a8f00a56937439fb88eafe8e239781e8", "date": "2019-06-14 08:04:28 -0400", "subject": "Fix filters not processing when there is no input.", - "body": "Some filters (e.g. encryption and compression) produce output even if there is no input. Since the filter group was marked as \"done\" initially, processing would not run when there was zero input and that resulted in zero output.\n\nAll filters start not done so start the filter group the same way." + "body": "Some filters (e.g. encryption and compression) produce output even if there is no input. Since the filter group was marked as \"done\" initially, processing would not run when there was zero input and that resulted in zero output.\n\nAll filters start not done so start the filter group the same way." }, { "commit": "9ba95e993ba3dbf73f8967a875608980e6380633", "date": "2019-06-13 17:58:33 -0400", "subject": "Use retries to wait for test S3 server to start.", - "body": "The prior method of tailing the docker log no longer seems reliable. Instead, keep retrying the make bucket command until it works and show the error if it times out." + "body": "The prior method of tailing the docker log no longer seems reliable. Instead, keep retrying the make bucket command until it works and show the error if it times out." }, { "commit": "b9233f7412e44c8daf41db05bd9699fa088ece5d", @@ -12195,19 +12195,19 @@ "commit": "fdd375b63d3962845efbb38a7020d852143b97fd", "date": "2019-06-11 16:26:32 -0400", "subject": "Integrate S3 storage driver with HTTP client cache.", - "body": "This allows copying from one S3 object to another. 
We generally try to avoid doing this but there are a few cases where it is needed and the tests do it quite a bit.\n\nOne thing to look out for here is that reads require the http client to be explicitly released by calling httpClientDone().  This means than clients could grow if they are not released properly.  The http statistics will hopefully alert us if this is happening." + "body": "This allows copying from one S3 object to another. We generally try to avoid doing this but there are a few cases where it is needed and the tests do it quite a bit.\n\nOne thing to look out for here is that reads require the http client to be explicitly released by calling httpClientDone(). This means that clients could grow if they are not released properly. The http statistics will hopefully alert us if this is happening." }, { "commit": "ced42d6511e9b5735a2281dd0e1faec41870fd37", "date": "2019-06-11 10:48:22 -0400", "subject": "Add HTTP client cache.", - "body": "This cache manages multiple http clients and returns one to the caller that is not busy.  It is the responsibility of the caller to indicate when they are done with a client.  If returnContent is set then the client will automatically be marked done.\n\nAlso add special handing for HEAD requests to recognize that content-length is informational only and no content is expected." + "body": "This cache manages multiple http clients and returns one to the caller that is not busy. It is the responsibility of the caller to indicate when they are done with a client. If returnContent is set then the client will automatically be marked done.\n\nAlso add special handling for HEAD requests to recognize that content-length is informational only and no content is expected." }, { "commit": "6e809e578fbd24269205de25cc6e9c84cefc5647", "date": "2019-06-11 10:34:42 -0400", "subject": "Add tag to specify minio version to use for documentation build.", - "body": "The new minio major release broke the build. 
We'll need to figure that out but for now use the last major version, which is known to work." + "body": "The new minio major release broke the build. We'll need to figure that out but for now use the last major version, which is known to work." }, { "commit": "7f2f535460e3499c1811b5beca9130f84b78c982", @@ -12248,7 +12248,7 @@ "commit": "6ff3325c7744a8f6f0e1e965fa673040a2f40a71", "date": "2019-06-05 11:43:17 -0400", "subject": "Enforce requiring repo-cipher-pass at config parse time.", - "body": "This was not enforced at parse time because repo1-cipher-type could be passed on the command-line even in cases where encryption was not needed by the subprocess.\n\nFilter repo-cipher-type so it is never passed on the command line. If the subprocess does not have access to the passphrase then knowing the encryption type is useless anyway." + "body": "This was not enforced at parse time because repo1-cipher-type could be passed on the command-line even in cases where encryption was not needed by the subprocess.\n\nFilter repo-cipher-type so it is never passed on the command line. If the subprocess does not have access to the passphrase then knowing the encryption type is useless anyway." }, { "commit": "d7bd0c58cdd9a434aa5893db16e6bcf8425e26b9", @@ -12271,7 +12271,7 @@ "commit": "4b91259de8d7feec6d33d154032169aa32c879e3", "date": "2019-06-04 12:56:04 -0400", "subject": "Make working with filter groups less restrictive.", - "body": "Filter groups could not be manipulated once they had been assigned to an IO object. Now they can be freely manipulated up to the time the IO object is opened.\n\nAlso, move the filter group into the IO object's context so they don't need to be tracked separately." + "body": "Filter groups could not be manipulated once they had been assigned to an IO object. Now they can be freely manipulated up to the time the IO object is opened.\n\nAlso, move the filter group into the IO object's context so they don't need to be tracked separately." 
}, { "commit": "92e04ea9f4e0eb3f6cd88a72497a026cbd348280", @@ -12283,13 +12283,13 @@ "commit": "44eb21ea935fdaa4e96d4637c7b62cb5a73d3e77", "date": "2019-06-04 10:05:27 -0400", "subject": "Use HEAD to check if a file exists on S3.", - "body": "The previous implementation searched for the file in a list which worked but was not optimal. For arbitrary bucket structures it would also produce a false negative if a match was not found in the first 1000 entries. This was not an issue for our repo structure since the max hits on exists calls is two but it seems worth fixing to avoid future complications." + "body": "The previous implementation searched for the file in a list which worked but was not optimal. For arbitrary bucket structures it would also produce a false negative if a match was not found in the first 1000 entries. This was not an issue for our repo structure since the max hits on exists calls is two but it seems worth fixing to avoid future complications." }, { "commit": "15b8e3b6af7327179d8bdd2e0c6833ed28005b0b", "date": "2019-06-04 09:39:08 -0400", "subject": "Make C S3 requests use the same host logic as Perl.", - "body": "The C code was passing the host (if specified) with the request which could force the server into path-style URLs, which are not supported.\n\nInstead, use the Perl logic of always passing bucket.endpoint in the request no matter what host is used for the HTTPS connection.\n\nIt's an open question whether we should support path-style URLs but since we don't it's useless to tell the server otherwise. Note that Amazon S3 has deprecated path-style URLs and they are no longer supported on newly created buckets." 
+ "body": "The C code was passing the host (if specified) with the request which could force the server into path-style URLs, which are not supported.\n\nInstead, use the Perl logic of always passing bucket.endpoint in the request no matter what host is used for the HTTPS connection.\n\nIt's an open question whether we should support path-style URLs but since we don't it's useless to tell the server otherwise. Note that Amazon S3 has deprecated path-style URLs and they are no longer supported on newly created buckets." }, { "commit": "5f92c36b30072edb71b12f79163c37c67d9806bc", @@ -12318,7 +12318,7 @@ "commit": "388ba0458c37ab4e8b82df0c0fa2b6a4d7462ecb", "date": "2019-05-31 18:37:31 -0400", "subject": "Fix build.flags being removed on each build.", - "body": "This was being removed by rsync which forced a full build even when a partial should have been fine. Rewrite the file after the rsync so it is preserved." + "body": "This was being removed by rsync which forced a full build even when a partial should have been fine. Rewrite the file after the rsync so it is preserved." }, { "commit": "6cba50c3f23a7a7822ae0fd03ff3b5a3f6d8a32a", @@ -12339,13 +12339,13 @@ "commit": "64260b2e9878944116dc78812a3a21633c5f3d15", "date": "2019-05-29 08:38:45 -0400", "subject": "Build all docs with S3 using --var=s3-all=y", - "body": "Force repo-type=s3 for all tests. This is not currently the default for any OS builds." + "body": "Force repo-type=s3 for all tests. This is not currently the default for any OS builds." }, { "commit": "404284b90ff0f67fe29fa5a7b2831ebf51d459aa", "date": "2019-05-28 12:18:05 -0400", "subject": "Add internal flag for commands.", - "body": "Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.\n\nUpdate help for ls now that it is exposed." 
+ "body": "Allow commands to be skipped by default in the command help but still work if help is requested for the command directly. There may be other uses for the flag in the future.\n\nUpdate help for ls now that it is exposed." }, { "commit": "20e5b92f366848ec1464e2272c6af2c29ac7b36d", @@ -12363,13 +12363,13 @@ "commit": "3e1b06acaa84399abcfaa8c684f437b63aa38de5", "date": "2019-05-27 07:37:20 -0400", "subject": "Use minio as local S3 emulator in documentation.", - "body": "The documentation was relying on a ScalityS3 container built for testing which wasn't very transparent. Instead, use the stock minio container and configure it in the documentation.\n\nAlso, install certificates and CA so that TLS verification can be enabled." + "body": "The documentation was relying on a ScalityS3 container built for testing which wasn't very transparent. Instead, use the stock minio container and configure it in the documentation.\n\nAlso, install certificates and CA so that TLS verification can be enabled." }, { "commit": "a474ba54c5c9c7bdba6ffa3e92671bb9565889f6", "date": "2019-05-26 12:41:15 -0400", "subject": "Refactoring path support in the storage module.", - "body": "Not all storage types support paths as a physical thing that must be created/destroyed. Add a feature to determine which drivers use paths and simplify the driver API as much as possible given that knowledge and by implementing as much path logic as possible in the Storage object.\n\nRemove the ignoreMissing parameter from pathSync() since it is not used and makes little sense.\n\nCreate a standard list of error messages for the drivers to use and apply them where the code was modified -- there is plenty of work still to be done here." + "body": "Not all storage types support paths as a physical thing that must be created/destroyed. 
Add a feature to determine which drivers use paths and simplify the driver API as much as possible given that knowledge and by implementing as much path logic as possible in the Storage object.\n\nRemove the ignoreMissing parameter from pathSync() since it is not used and makes little sense.\n\nCreate a standard list of error messages for the drivers to use and apply them where the code was modified -- there is plenty of work still to be done here." }, { "commit": "38f28bd52081405321939fec66046bd9ada35c23", @@ -12421,7 +12421,7 @@ "commit": "39cb6248314e21530924886e6e669ac395daeeb1", "date": "2019-05-24 07:45:03 -0400", "subject": "Add missing menus to the new user guides.", - "body": "Since the CentOS 6/7 user guides were generated as a single page they did not get menus. Generate the entire site for each user guide so menus are included." + "body": "Since the CentOS 6/7 user guides were generated as a single page they did not get menus. Generate the entire site for each user guide so menus are included." }, { "commit": "04f8b4ea52f89d1542b25d2fb0ba28a43ddaba6d", @@ -12432,13 +12432,13 @@ "commit": "ec9622cde883c649c1346cbc0b9057e7f3fcb787", "date": "2019-05-22 18:54:49 -0400", "subject": "Use the git log to ease release note management.", - "body": "The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.\n\nCurrently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.\n\nThe git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.\n\nThere's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes." 
+ "body": "The release notes are generally a direct reflection of the git log. So, ease the burden of maintaining the release notes by using the git log to determine what needs to be added.\n\nCurrently only non-dev items are required to be matched to a git commit but the goal is to account for all commits.\n\nThe git history cache is generated from the git log but can be modified to correct typos and match the release notes as they evolve. The commit hash is used to identify commits that have already been added to the cache.\n\nThere's plenty more to do here. For instance, links to the commits for each release item should be added to the release notes." }, { "commit": "86482c7db943375d48fcee0e243ca96bdf50d35c", "date": "2019-05-22 18:23:44 -0400", "subject": "Reduce log level for all expect tests to detail.", - "body": "The C code is designed to be efficient rather than deterministic at the debug log level. As we move more testing from integration to unit tests it makes less sense to try and maintain the expect logs at this log level.\n\nMost of the expect logs have already been moved to detail level but mock/all still had tests at debug level. Change the logging defaults in the config file and remove as many references to log-level-console as possible." + "body": "The C code is designed to be efficient rather than deterministic at the debug log level. As we move more testing from integration to unit tests it makes less sense to try and maintain the expect logs at this log level.\n\nMost of the expect logs have already been moved to detail level but mock/all still had tests at debug level. Change the logging defaults in the config file and remove as many references to log-level-console as possible." 
}, { "commit": "e4cc008b982d47ac526962e310c18626e2aefbc2", @@ -12472,7 +12472,7 @@ "commit": "936b8a289c4884dd22f7ff0d3d624d6e70980512", "date": "2019-05-21 10:37:30 -0400", "subject": "Allow separate paragraphs in release items.", - "body": "The first paragraph should match the first line of the commit message as closely as possible. The following paragraphs add more information.\n\nRelease items have been updated back to 2.01." + "body": "The first paragraph should match the first line of the commit message as closely as possible. The following paragraphs add more information.\n\nRelease items have been updated back to 2.01." }, { "commit": "e3fe3434b4428398ffbee5359f0e0cdec8e55bcb", @@ -12527,7 +12527,7 @@ "commit": "c51274d1b6a85aeb8ecf1dfffcdc68a503ac3de9", "date": "2019-05-16 08:32:02 -0400", "subject": "Add user guides for CentOS/RHEL 6/7.", - "body": "It would be better if the documentation could be generated on multiple operating systems all in one go, but the doc system currently does not allow vars to be changed once they are set.\n\nThe solution is to run the docs for each required OS and stitch the documentation together. It's not pretty but it works and the automation in release.pl should at least make it easy to use." + "body": "It would be better if the documentation could be generated on multiple operating systems all in one go, but the doc system currently does not allow vars to be changed once they are set.\n\nThe solution is to run the docs for each required OS and stitch the documentation together. It's not pretty but it works and the automation in release.pl should at least make it easy to use." 
}, { "commit": "bc7b42e71811e1b3231c1386496efbdf8338e086", @@ -12584,7 +12584,7 @@ "commit": "15a33bf74be84bda822e96c91e37479419b451a9", "date": "2019-05-13 17:10:41 -0400", "subject": "Error on multiple option alternate names and simplify help command.", - "body": "There are currently no options with multiple alternate (deprecated) names so the code to render them in the help command could not be covered.\n\nRemove the uncovered code and add an error when multiple alternate names are configured. It's not clear that the current code was handling this correctly, so it will need to be reviewed if it comes up again." + "body": "There are currently no options with multiple alternate (deprecated) names so the code to render them in the help command could not be covered.\n\nRemove the uncovered code and add an error when multiple alternate names are configured. It's not clear that the current code was handling this correctly, so it will need to be reviewed if it comes up again." }, { "commit": "2d2bec842a2c424d5a36c98dc3a465da1696caf4", @@ -12601,19 +12601,19 @@ "commit": "31d0fe5f50a1a061c3ffbf7d194fafffee6a34b7", "date": "2019-05-11 18:20:57 -0400", "subject": "Improve log performance, simplify macros, rename logWill() to logAny().", - "body": "Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.\n\nMove IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). If the message has no chance of being logged there's no reason to call logInternal().\n\nRename logWill() to logAny() because it seems more intuitive." + "body": "Pre-calculate the value used by logAny() to improve performance and make it more likely to be inlined.\n\nMove IF_LOG_ANY() into LOG_INTERNAL() to simplify the macros and improve performance of LOG() and LOG_PID(). 
If the message has no chance of being logged there's no reason to call logInternal().\n\nRename logWill() to logAny() because it seems more intuitive." }, { "commit": "87f36e814ea95696870711018c050725e8e7269f", "date": "2019-05-11 14:51:51 -0400", "subject": "Improve macros and coverage rules that were hiding missing coverage.", - "body": "The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.\n\nSome macros with internal branches accepted parameters that might contain conditionals. This made it impossible to tell which branches belonged to which, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases, the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.\n\nAlso tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.\n\nFinally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change." + "body": "The branch coverage exclusion rules were overly broad and included functions that ended in a capital letter, which disabled all coverage for the statement. Improve matching so that all characters in the name must be upper-case for a match.\n\nSome macros with internal branches accepted parameters that might contain conditionals. 
This made it impossible to tell which branches belonged to which, and in any case an overzealous exclusion rule was ignoring all branches in such cases. Add the DEBUG_COVERAGE flag to build a modified version of the macros without any internal branches to be used for coverage testing. In most cases, the branches were optimizations (like checking logWill()) that improve production performance but are not needed for testing. In other cases, a parameter needed to be added to the underlying function to handle the branch during coverage testing.\n\nAlso tweak the coverage rules so that macros without conditionals are automatically excluded from branch coverage as long as they are not themselves a parameter.\n\nFinally, update tests and code where missing coverage was exposed by these changes. Some code was updated to remove existing coverage exclusions when it was a simple change." }, { "commit": "f819a32cdf94bc799a4902ab6aaa559cd11d4ef8", "date": "2019-05-11 07:57:49 -0400", "subject": "Improve efficiency of FUNCTION_LOG*() macros.", - "body": "Call stackTraceTestStop()/stackTraceTestStart() once per block instead of with every param call. This was done to be cautious but is not necessary and slows down development.\n\nThese functions were never built into production so had no impact there." + "body": "Call stackTraceTestStop()/stackTraceTestStart() once per block instead of with every param call. This was done to be cautious but is not necessary and slows down development.\n\nThese functions were never built into production so had no impact there." }, { "commit": "7e2f6a6a4365b48fc89672b5d7de4c1d8937caa6", @@ -12624,19 +12624,19 @@ "commit": "f0f105ddeca7b3434d9c3bb78e25247b3c6b7584", "date": "2019-05-09 12:10:46 -0400", "subject": "Improve filter's notion of \"done\" to optimize filter processing.", - "body": "Filters had different ideas about what \"done\" meant and this added complication to the group filter processing. 
For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.\n\nImprove the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was \"really\" done.\n\nAlso improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers." + "body": "Filters had different ideas about what \"done\" meant and this added complication to the group filter processing. For example, gzip decompression would detect end of stream and mark the filter as done before it had been flushed.\n\nImprove the IoFilter interface to give a consistent definition of done across all filters, i.e. no filter can be done until it has started flushing no matter what the underlying driver reports. This removes quite a bit of tricky logic in the processing loop which tried to determine when a filter was \"really\" done.\n\nAlso improve management of the input buffers by pointing directly to the prior output buffer (or the caller's input) to eliminate loops that set/cleared these buffers." }, { "commit": "d5fac35fe3efe79f2b39a0e1b5ad3bb1cb1dd173", "date": "2019-05-09 09:53:24 -0400", "subject": "Improve zero-length content handling in HttpClient object.", - "body": "If content was zero-length then the IO object was not created. This put the burden on the caller to test that the IO object existed before checking eof.\n\nInstead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.\n\nAlso add an explicit test for zero-length files in S3 and a few assertions." + "body": "If content was zero-length then the IO object was not created. 
This put the burden on the caller to test that the IO object existed before checking eof.\n\nInstead, create an IO object even if it will immediately return eof. This has little cost and makes the calling code simpler.\n\nAlso add an explicit test for zero-length files in S3 and a few assertions." }, { "commit": "15531151d7b3c07f51c442cb8951a5781bcc39d4", "date": "2019-05-09 08:55:48 -0400", "subject": "Add --c option to request a C remote.", - "body": "The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.\n\nInstead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C." + "body": "The rules for when a C remote is required are getting complicated and will get worse when restoreFile() is migrated.\n\nInstead, set the --c option when a C remote is required. This option will be removed when the remote is entirely implemented in C." }, { "commit": "c99c7c458b0a04d1f1a637be70bb4f8e8011feb0", @@ -12654,19 +12654,19 @@ "commit": "f1eea2312104ce7fa87119d6ef50a554b154e954", "date": "2019-05-03 18:52:54 -0400", "subject": "Add macros for object free functions.", - "body": "Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.\n\nCreate a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.\n\nAdd common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions." + "body": "Most of the *Free() functions are pretty generic so add macros to make creating them as easy as possible.\n\nCreate a distinction between *Free() functions that the caller uses to free memory and callbacks that free third-party resources. 
There are a number of cases where a driver needs to free resources but does not need a normal *Free() because it is handled by the interface.\n\nAdd common/object.h for macros that make object maintenance easier. This pattern can also be used for many more object functions." }, { "commit": "7ae96949f18af90a9b07f1dd712f939ca7a1ec41", "date": "2019-05-03 18:09:58 -0400", "subject": "Various MemContext callback improvements.", - "body": "Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.\n\nFree all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true." + "body": "Rename memContextCallback() to memContextCallbackSet() to be more consistent with other parts of the code.\n\nFree all context memory when an exception is thrown from a callback. Previously only the child contexts would be freed and this resulted in some allocations being lost. In practice this is probably not a big deal since the process will likely terminate shortly, but there may well be cases where that is not true." }, { "commit": "4a20d44c6b118143f1c1607bacac0d4e268f6ffe", "date": "2019-05-03 17:49:57 -0400", "subject": "Add common/macro.h for general-purpose macros.", - "body": "Add GLUE() macro which is useful for creating identifiers.\n\nMove MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name." + "body": "Add GLUE() macro which is useful for creating identifiers.\n\nMove MACRO_TO_STR() here and rename it STRINGIFY(). This appears to be the standard name for this type of macro and it is also an awesome name." 
}, { "commit": "32ca27a20b13941cfab27672d6f383a0ba44fa20", @@ -12678,7 +12678,7 @@ "commit": "8c712d89ebe10e3d318952a0b4db8cd09f8a580c", "date": "2019-05-02 17:52:24 -0400", "subject": "Improve type safety of interfaces and drivers.", - "body": "The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.\n\nUse void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.\n\nSince we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods and the vast majority of those functions were not being used. Move functions that are still required to .intern.h.\n\nRemove the special \"C\" crypto functions that were used in libc and instead use the standard interface." + "body": "The function pointer casting used when creating drivers made changing interfaces difficult and led to slightly divergent driver implementations. Unit testing caught production-level errors but there were a lot of small issues and the process was harder than it should have been.\n\nUse void pointers instead so that no casts are required. Introduce the THIS_VOID and THIS() macros to make dealing with void pointers a little safer.\n\nSince we don't want to expose void pointers in header files, driver functions have been removed from the headers and the various driver objects return their interface type. This cuts down on accessor methods and the vast majority of those functions were not being used. Move functions that are still required to .intern.h.\n\nRemove the special \"C\" crypto functions that were used in libc and instead use the standard interface." 
}, { "commit": "28359eea83206f6eeddd3fbc91a52cd42399e529", @@ -12694,19 +12694,19 @@ "commit": "498017bcf02560e101703e89b1119d0b48944835", "date": "2019-05-02 12:43:09 -0400", "subject": "Various Buffer improvements.", - "body": "Add bufDup() and bufNewUsedC().\n\nArrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.\n\nFix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.\n\nUse a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments." + "body": "Add bufDup() and bufNewUsedC().\n\nArrange bufNewC() params to match bufNewUsedC() since they have always seemed backward.\n\nFix bufHex() to only render the used portion of the buffer and fix some places where used was not being set correctly.\n\nUse a union to make macro assignments for all legal values without casting. This is much more likely to catch bad assignments." }, { "commit": "59234f249e73a4038ee1ba5d2e80f8acc11640ce", "date": "2019-04-29 18:36:57 -0400", "subject": "Use THROW_ON_SYS_ERROR*() to improve code coverage.", - "body": "There is only one instance in the core code where this helps. It is mostly helpful in the tests.\n\nThere is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit." + "body": "There is only one instance in the core code where this helps. It is mostly helpful in the tests.\n\nThere is an argument to be made that only THROW_SYS_ERROR*() variants should be used in the core code to improve test coverage. If so, that will be the subject of a future commit." }, { "commit": "683b096e187605c5cf16a727cd2c413540e4d150", "date": "2019-04-29 18:03:32 -0400", "subject": "Don't append strerror() to error message when errno is 0.", - "body": "Some functions (e.g. 
getpwnam()/getgrnam()) will return an error but not set errno. In this case there's no use in appending strerror(), which will be \"Success\". This is confusing since an error has just been reported.\n\nAt least in the examples above, an error with no errno set just means \"missing\" and our current error message already conveys that." + "body": "Some functions (e.g. getpwnam()/getgrnam()) will return an error but not set errno. In this case there's no use in appending strerror(), which will be \"Success\". This is confusing since an error has just been reported.\n\nAt least in the examples above, an error with no errno set just means \"missing\" and our current error message already conveys that." }, { "commit": "6ad44db9a0a30ad933a6ca8012de52558b7b9c02", @@ -12728,7 +12728,7 @@ "commit": "d0c296bd5b67966fd35fd9f70c25788dd6de0c34", "date": "2019-04-29 16:10:27 -0400", "subject": "Fix segfault when process-max > 8 for archive-push/archive-get.", - "body": "The remote list was at most 9 (based on pg[1-8]-* max index) so anything over 8 wrote into unallocated memory.\n\nThe remote for the main process is (currently) stored in position zero so do the same for remotes started from locals, since there should only be one. The main process will need to start more remotes in the future which is why there is extra space." + "body": "The remote list was at most 9 (based on pg[1-8]-* max index) so anything over 8 wrote into unallocated memory.\n\nThe remote for the main process is (currently) stored in position zero so do the same for remotes started from locals, since there should only be one. The main process will need to start more remotes in the future which is why there is extra space." 
}, { "commit": "c935b1c9e83d6f14644de3aa5f36011234e69423", @@ -12757,13 +12757,13 @@ "commit": "027c2638719dffa9ba99250085c403e89a2a8a9a", "date": "2019-04-26 08:08:23 -0400", "subject": "Add configure script for improved multi-platform support.", - "body": "Use autoconf to provide a basic configure script. WITH_BACKTRACE is yet to be migrated to configure and the unit tests still use a custom Makefile.\n\nEach C file must include \"build.auto.conf\" before all other includes and defines. This is enforced by test.pl for includes, but it won't detect incorrect define ordering.\n\nUpdate packages to call configure and use standard flags to pass options." + "body": "Use autoconf to provide a basic configure script. WITH_BACKTRACE is yet to be migrated to configure and the unit tests still use a custom Makefile.\n\nEach C file must include \"build.auto.conf\" before all other includes and defines. This is enforced by test.pl for includes, but it won't detect incorrect define ordering.\n\nUpdate packages to call configure and use standard flags to pass options." }, { "commit": "3505559a808855600016dc73c5aec3843e51bfaf", "date": "2019-04-24 13:23:32 -0400", "subject": "Update test containers with PostgreSQL minor releases and liblz4.", - "body": "Update RHEL repos that have changed upstream. Remove PostgreSQL 9.3 since the RHEL6/7 packages have disappeared.\n\nRemove PostgreSQL versions from U12 that are still getting minor updates so the container does not need to be rebuilt.\n\nLZ4 is included for future development, but this seems like a good time to add it to the containers." + "body": "Update RHEL repos that have changed upstream. Remove PostgreSQL 9.3 since the RHEL6/7 packages have disappeared.\n\nRemove PostgreSQL versions from U12 that are still getting minor updates so the container does not need to be rebuilt.\n\nLZ4 is included for future development, but this seems like a good time to add it to the containers." 
}, { "commit": "1ae8a6a71665c12bec6e20741df29ba4792bf8f8", @@ -12821,7 +12821,7 @@ "commit": "f100ea0ff44e173299e8f691115b9e7ce68d422b", "date": "2019-04-22 17:52:23 -0400", "subject": "Add constant for maximum buffer sizes required by cvt*() functions.", - "body": "Also update Variant to use cvt*() in all cases. Variant was written before these functions were available and not all cases were updated." + "body": "Also update Variant to use cvt*() in all cases. Variant was written before these functions were available and not all cases were updated." }, { "commit": "f5739051eba3dae6cbee6d1fc0f72c36705c3be1", @@ -12855,19 +12855,19 @@ "commit": "e7255be108b6b85bb736934b19abdef32cbd39e8", "date": "2019-04-20 11:25:04 -0400", "subject": "Only process next filter in IoFilterGroup when input buffer is full or flushing.", - "body": "This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.\n\nHowever, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way." + "body": "This greatly reduces calls to filter processing, which is a performance benefit, but also makes the trace logs smaller and easier to read.\n\nHowever, this means that ioWriteFlush() will no longer work with filters since a full flush of IoFilterGroup would require an expensive reset. Currently ioWriteFlush() is not used in this scenario so for now just add an assert to ensure it stays that way." }, { "commit": "e513c52c0973772465a81a5db2ba18d0f956d686", "date": "2019-04-20 08:16:17 -0400", "subject": "Add macros to create constant Buffer objects.", - "body": "These are more efficient than creating buffers in place when needed.\n\nAfter replacement discovered that bufNewStr() and BufNewZ() were not being used in the core code so removed them. 
This required using the macros in tests which is not the usual pattern." + "body": "These are more efficient than creating buffers in place when needed.\n\nAfter replacement discovered that bufNewStr() and BufNewZ() were not being used in the core code so removed them. This required using the macros in tests which is not the usual pattern." }, { "commit": "c9168028c6c43d535a5663cb17ce93945951cb7e", "date": "2019-04-19 14:38:11 -0400", "subject": "Improve performance of non-blocking reads by using maximum buffer size.", - "body": "Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.\n\nInstead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size." + "body": "Since the introduction of blocking read drivers (e.g. IoHandleRead, TlsClient) the non-blocking drivers have used the same rules for determining maximum buffer size, i.e. read only as much as requested. This is necessary so the blocking drivers don't get stuck waiting for data that might not be coming.\n\nInstead mark blocking drivers so IoRead knows how much buffer to allow for the read. The non-blocking drivers can now request the maximum number of bytes allowed by buffer-size." }, { "commit": "0c866f52c69ee2bd6af20e67af78bad94c4048af", @@ -12900,13 +12900,13 @@ "commit": "7390952d8e99649077a6d4b577a483e6c38a3ecd", "date": "2019-04-18 21:24:10 -0400", "subject": "Harden IO filters against zero input and optimize zero output case.", - "body": "Add production checks to ensure no filter gets a zero-size input buffer.\n\nAlso, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input." 
+ "body": "Add production checks to ensure no filter gets a zero-size input buffer.\n\nAlso, optimize the case where a filter returns no output. There's no sense in running downstream filters if they have no new input." }, { "commit": "2d73de1d360e0b7bc66b4db9e6b3a0e441927230", "date": "2019-04-18 21:21:35 -0400", "subject": "Fix zero-length reads causing problems for IO filters that did not expect them.", - "body": "The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.\n\nIn particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.\n\nSince this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted." + "body": "The IoRead object was passing zero-length buffers into the filter processing code but not all the filters were happy about getting them.\n\nIn particular, the gzip compression filter failed if it was given no input directly after it had flushed all of its buffers. This made the problem rather intermittent even though a zero-length buffer was being passed to the filter at the end of every file. It also explains why tweaking compress-level or buffer-size allowed the file to go through.\n\nSince this error was happening after all processing had completed, there does not appear to be any risk that successfully processed files were corrupted." 
}, { "commit": "670fa88a98c7e12c4e2e948d92476918d3048f7e", @@ -12924,7 +12924,7 @@ "commit": "b960919cf7322f03ee4f9681f0cb06da6b4c52de", "date": "2019-04-18 10:36:21 -0400", "subject": "Fix reliability of error reporting from local/remote processes.", - "body": "Asserts were only only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.\n\nInstead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled." + "body": "Asserts were only reported on stderr rather than being returned through the protocol layer. This did not appear to be very reliable.\n\nInstead, report the assert through the protocol layer like any other error. Add a stack trace if an assert error or debug logging is enabled." }, { "commit": "281d2848b92bf7dba36018127cb9238662d7875b", @@ -12994,7 +12994,7 @@ "commit": "df12cbb1625ed61c74d43ae3a308445f943b0570", "date": "2019-04-10 17:48:34 -0400", "subject": "Fix C code to recognize host:port option format like Perl does.", - "body": "This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.\n\nThis is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however." + "body": "This was not an intentional feature in Perl, but it works, so it makes sense to implement the same syntax in C.\n\nThis is a break from other places where a -port option is explicitly supplied, so it may make sense to support both styles going forward. This commit does not address that, however."
}, { "commit": "3aa521fed0838021209b4be36849feb2c7e40a2e", @@ -13005,7 +13005,7 @@ "commit": "25cea0bd0a22de2f83a2ded786e2facc86ff4b10", "date": "2019-04-09 11:08:27 -0400", "subject": "Add process id to C archive-get and archive-push logging.", - "body": "This was missed in the original migration. There was no functional issue, but logging the process ids is useful for debugging." + "body": "This was missed in the original migration. There was no functional issue, but logging the process ids is useful for debugging." }, { "commit": "8c202c77dac74d919962f69acb279b583b4272ff", @@ -13017,19 +13017,19 @@ "commit": "4ace7edbd9e849d5a7a5d8b20e08c0c3e2098a02", "date": "2019-04-09 10:54:36 -0400", "subject": "Allow process id in C logging", - "body": "The default process id in C logging has always been zero. This should have been updated when multi-processing was introduced in C, but it was missed." + "body": "The default process id in C logging has always been zero. This should have been updated when multi-processing was introduced in C, but it was missed." }, { "commit": "6099729e922893312e001a3394db43d0c2b341ad", "date": "2019-04-08 19:38:06 -0400", "subject": "Improve error message when an S3 bucket name contains dots.", - "body": "The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.\n\nHowever, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible." + "body": "The Perl lib we have been using for TLS allows dots in wildcards, but this is forbidden by RFC-2818. The new TLS implementation in C forbids this pattern, just as PostgreSQL and curl do.\n\nHowever, this does present a problem for users who have been using bucket names with dots in older versions of pgBackRest. Since this limitation exists for security reasons there appears to be no option but to take a hard line and do our best to notify the user of the issue as clearly as possible." }, { "commit": "21c83eea59f5fb4d3c366a1e9782d26356801675", "date": "2019-04-08 17:21:20 -0400", "subject": "Fix issues when log-level-file=off is set for the archive-get command.", - "body": "This problem was not specific to archive-get, but that was the only place it was expressing in the last release. The new archive-push was also affected.\n\nThe issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.\n\nIf logging was enabled (the default) that increased all the file descriptors by one and everything worked.\n\nFix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case." + "body": "This problem was not specific to archive-get, but that was the only place it was expressing in the last release. The new archive-push was also affected.\n\nThe issue was with daemon processes that had closed all their file descriptors. When exec'ing and setting up pipes to communicate with a child process the dup2() function created file descriptors that overlapped with the first descriptor (stdout) that was being duped into. This descriptor was subsequently closed and wackiness ensued.\n\nIf logging was enabled (the default) that increased all the file descriptors by one and everything worked.\n\nFix this by checking if the file descriptor to be closed is the same one being dup'd into. This solution may not be generally applicable but it works fine in this case." }, { "commit": "8ac422dca95c8761fc57450ed74061ff286ff0c3", @@ -13106,7 +13106,7 @@ "commit": "251dbede8ff8a7206986106b8df8b08257cd7ebd", "date": "2019-03-27 21:14:06 +0000", "subject": "Add locking capability to the remote command.", - "body": "When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.\n\nInstead of the custom logic used in Perl, make remote locking part of the command configuration.\n\nThis also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date." + "body": "When a repository server is configured, commands that modify the repository acquire a remote lock as well as a local lock for extra protection against multiple writers.\n\nInstead of the custom logic used in Perl, make remote locking part of the command configuration.\n\nThis also means that the C remote needs the stanza since it is used to construct the lock name. We may need to revisit this at a later date." 
+ "body": "The test harness was not being built with warnings which caused some wackiness with an improperly structured switch. Just use the same warnings as the code being tested.\n\nAlso enable warnings on code that is not directly being tested since other code modules are frequently modified during testing." }, { "commit": "f709334851cfbe254ce9f13266adba4504e9af64", @@ -13169,7 +13169,7 @@ "commit": "8820d695747b40b73ead6d32a408e7d2804bd192", "date": "2019-03-25 08:12:38 +0400", "subject": "Use a single file to handle global errors in async archiving.", - "body": "The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.\n\nAnother issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.\n\nInstead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL." + "body": "The prior behavior on a global error (i.e. not file specific) was to write an individual error file for each WAL file being processed. On retry each of these error files would be removed, and if the error was persistent, they would then be recreated. In a busy environment this could mean tens or hundreds of thousands of files.\n\nAnother issue was that the error files could not be written until a list of WAL files to process had been generated. This was easy enough for archive-get but archive-push requires more processing and any errors that happened when generating the list would only be reported in the pgBackRest log rather than the PostgreSQL log.\n\nInstead write a global.error file that applies to any WAL file that does not have an explicit ok or error file. This reduces churn and allows more errors to be reported directly to PostgreSQL." }, { "commit": "1f6f3f673e49a2dff4dff465d183a2f6928138e6", @@ -13191,7 +13191,7 @@ "commit": "7cf7373761cb83253a502e8d0af4f925c86a7944", "date": "2019-03-21 21:11:36 +0400", "subject": "Refactor PostgreSQL interface to remove most code duplication.", - "body": "Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.\n\nConsolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.\n\nUse macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before." + "body": "Having a copy per version worked well until it was time to add new features or modify existing functions. Then it was necessary to modify every version and try to keep them all in sync.\n\nConsolidate all the PostgreSQL types into a single file using #if for type versions. Many types do not change or change infrequently so this cuts down on duplication. In addition, it is far easier to see what has changed when a new version is added.\n\nUse macros to write the interface functions. There is still duplication here since some changes require a new copy of the macro, but it is far less than before." 
}, { "commit": "e938a89250bd6a6512d4d6b5217a9750c848ca49", @@ -13220,7 +13220,7 @@ "commit": "856a369b863fb134ec249b7036eea70ba56d89ac", "date": "2019-03-17 22:00:54 +0400", "subject": "Add file write to the S3 storage driver.", - "body": "Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.\n\nNote that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove." + "body": "Now that repositories are writable the storage drivers that don't yet support file writes need to be updated to do so.\n\nNote that the part size for multi-part upload has not been defined as a proper constant. This will become an option in the near future so it doesn't seem worth creating a constant that we might then forget to remove." }, { "commit": "7193738288e7c25bba526f5133fad9d4cf70d4c0", @@ -13255,13 +13255,13 @@ "commit": "66c2f4cd2e43f05d65e06f3c89094fbdf702d60d", "date": "2019-03-16 15:27:38 +0400", "subject": "Make notion of current PostgreSQL info ID in C align with Perl.", - "body": "The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.\n\nInstead, load db-id from the db section explicitly as the Perl code does.\n\nThis did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version." + "body": "The C code was assuming that the current PostgreSQL version in archive.info/backup.info was the most recent item in the history, but this is not always the case with some stanza-upgrade scenarios. If a cluster is restored from before the upgrade and stanza-upgrade is run again, it will revert db-id to the original history item.\n\nInstead, load db-id from the db section explicitly as the Perl code does.\n\nThis did not affect archive-get since it does a reverse scan through the history versions and does not rely on the current version." }, { "commit": "b2b2cf0511b326c695e825bee0c5f5faf648224c", "date": "2019-03-16 15:00:02 +0400", "subject": "Fix issues with remote/local command logging options.", - "body": "Logging was being enable on local/remote processes even if --log-subprocess was not specified, so fix that.\n\nAlso, make sure that stderr is enabled at error level as it was on Perl. This helps expose error information for debugging.\n\nFor remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden." + "body": "Logging was being enable on local/remote processes even if --log-subprocess was not specified, so fix that.\n\nAlso, make sure that stderr is enabled at error level as it was on Perl. This helps expose error information for debugging.\n\nFor remotes, suppress log and lock paths since these are not applicable on remote hosts. These options should be set in the local config if they need to be overridden." }, { "commit": "d377e926c806faa8d61744e27eacfa7bf610c445", @@ -13334,7 +13334,7 @@ "commit": "982b47c5ecec475703860f52a0eabe7d501a96e2", "date": "2019-03-14 13:28:33 +0400", "subject": "Add CIFS storage driver.", - "body": "This driver borrows heavily from the Posix driver.\n\nAt this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS diver will also omit link support.\n\nWith the addition of this driver repository storage is now writable." 
+ "body": "This driver borrows heavily from the Posix driver.\n\nAt this point the only difference is that CIFS does not allow explicit directory fsyncs so they need to be suppressed. At some point the CIFS diver will also omit link support.\n\nWith the addition of this driver repository storage is now writable." }, { "commit": "941dbb47313b25339d438e7a57975bcd88c65168", @@ -13378,7 +13378,7 @@ "commit": "21f56f64ebe0af34b61793efad1730707a8e9f39", "date": "2019-03-10 10:38:12 +0200", "subject": "Add hints when unable to find a WAL segment in the archive.", - "body": "When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.\n\nThese hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case." + "body": "When this error happens in the context of a backup it can be a bit mystifying as to why the backup is failing. Add some hints to get the user started.\n\nThese hints will appear any time a WAL segment can't be found, which makes the hint about the check command redundant when the user is actually running the check command, but it doesn't seem worth trying to exclude the hint in that case." }, { "commit": "bc9fb0f59ae4d9cc843fa8ac2d9d434cf850910d", @@ -13438,7 +13438,7 @@ "commit": "90709dfd213b1eb1dab3daff0101768f61be4b5f", "date": "2019-03-01 14:57:01 +0200", "subject": "Improve performance of context and memory allocations in MemContext module.", - "body": "Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.\n\nFor both types of allocations track an index for the lowest free position. After an allocation of the free position, a sequential search will be required for the next allocation but this is still far better than doing a scan for every allocation.\n\nWith a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.\n\nBefore:\n\n % cumulative self\n time seconds seconds name\n 65.11 331.37 331.37 memContextAlloc\n 16.19 413.78 82.40 memContextCurrent\n 14.74 488.81 75.03 memContextTop\n 2.65 502.29 13.48 memContextNewIndex\n 1.18 508.31 6.02 memFind\n\nAfter:\n\n % cumulative self\n time seconds seconds name\n 94.69 2.14 2.14 memFind\n\nFinding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently." + "body": "Allocations required a sequential scan through the allocation list for both contexts and memory. This was very inefficient since for the most part individual memory allocations are seldom freed directly, rather they are freed when their context is freed.\n\nFor both types of allocations track an index for the lowest free position. After an allocation of the free position, a sequential search will be required for the next allocation but this is still far better than doing a scan for every allocation.\n\nWith a moderately-sized dataset (500 history entries in backup.info), there is a 237X performance improvement when combined with the f74e88bb refactor.\n\nBefore:\n\n % cumulative self\n time seconds seconds name\n 65.11 331.37 331.37 memContextAlloc\n 16.19 413.78 82.40 memContextCurrent\n 14.74 488.81 75.03 memContextTop\n 2.65 502.29 13.48 memContextNewIndex\n 1.18 508.31 6.02 memFind\n\nAfter:\n\n % cumulative self\n time seconds seconds name\n 94.69 2.14 2.14 memFind\n\nFinding memory allocations in order to free or resize them is the next bottleneck, but this does not seem to be a major issue presently." 
}, { "commit": "f74e88bba9c7a7912f0b1fc322823f1b527042f8", @@ -13462,13 +13462,13 @@ "commit": "cb3b4fa24bbe271a517b50a3522bc5075d8fe6c7", "date": "2019-02-28 14:33:29 +0200", "subject": "Enable socket keep-alive on older Perl versions.", - "body": "The prior method depended on IO:Socket:SSL to push the keep-alive options down to the socket but it only worked for recent versions of the module.\n\nInstead, create the socket directly using IO::Socket::IP if available or IO:Socket:INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO:Socket:SSL." + "body": "The prior method depended on IO:Socket:SSL to push the keep-alive options down to the socket but it only worked for recent versions of the module.\n\nInstead, create the socket directly using IO::Socket::IP if available or IO:Socket:INET as a fallback. The keep-alive option is set directly on the socket before it is passed to IO:Socket:SSL." }, { "commit": "0913523096cf9178ffa46595171dc16ab815148b", "date": "2019-02-28 09:51:19 +0200", "subject": "Cleanup local/remote protocol interaction from 9367cc46.", - "body": "The command option was not being set correctly when a remote was started from a local. It was being set as 'local' rather than the command that the local was running as.\n\nAlso automatically select the remote protocol id based on whether it is started from a local (use the local protocol id) or from the main process (use 0).\n\nThese were not live issues but could cause strange behaviors as new features are added that might be hard to diagnose." + "body": "The command option was not being set correctly when a remote was started from a local. It was being set as 'local' rather than the command that the local was running as.\n\nAlso automatically select the remote protocol id based on whether it is started from a local (use the local protocol id) or from the main process (use 0).\n\nThese were not live issues but could cause strange behaviors as new features are added that might be hard to diagnose." }, { "commit": "db4b447be89878496f0a50905b2f51c1306b9de5", @@ -13545,7 +13545,7 @@ "commit": "d489eb87f7f7da58dd02812e5088f507e1cab491", "date": "2019-02-23 15:59:39 +0200", "subject": "Create test matrix for mock/archive to increase coverage and reduce tests.", - "body": "The same test configurations are run on all four test VMs, which seems a real waste of resources.\n\nVary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once." + "body": "The same test configurations are run on all four test VMs, which seems a real waste of resources.\n\nVary the tests per VM to increase coverage while reducing the total number of tests. Be sure to include each major feature (remote, s3, encryption) in each VM at least once." }, { "commit": "4a7588e604d3b39858f09e8c879e973bd4432ded", @@ -13563,19 +13563,19 @@ "commit": "a9cbf23f4cacb18d85e7a6f02cff198790a9e21f", "date": "2019-02-23 07:28:27 +0200", "subject": "Improve error when hostname cannot be found in a certificate.", - "body": "Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues." + "body": "Update error message with the hostname and more detail about what went wrong. Hopefully this will help in diagnosing certificate/hostname issues." 
}, { "commit": "1f66bda02ec9ae92403af2feeaf61505b3482932", "date": "2019-02-22 12:02:26 +0200", "subject": "Fix non-compliant JSON for options passed from C to Perl.", - "body": "We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not very compliant which led to issues with \\n, \", etc. inside strings.\n\nWe have a fully-compliant JSON generator now so use that instead." + "body": "We have been using a hacked-up JSON generator to pass options from C to Perl since the C binary was introduced. This generator was not very compliant which led to issues with \\n, \", etc. inside strings.\n\nWe have a fully-compliant JSON generator now so use that instead." }, { "commit": "70c30dfb619a3ec2ea7d465d5b8e2014518b723d", "date": "2019-02-22 11:40:30 +0200", "subject": "Disable test-level stack trace by default.", - "body": "Detailed stack traces for low-level functions (e.g. strCat, bufMove) can be very useful for debugging but leaving them on for all tests has become quite burdensome in terms of time. Complex operations like generating JSON on a large KevValue can lead to timeouts even with generous values.\n\nAdd a new param, --debug-trace, to enable test-level stack trace, but leave it off by default." + "body": "Detailed stack traces for low-level functions (e.g. strCat, bufMove) can be very useful for debugging but leaving them on for all tests has become quite burdensome in terms of time. Complex operations like generating JSON on a large KevValue can lead to timeouts even with generous values.\n\nAdd a new param, --debug-trace, to enable test-level stack trace, but leave it off by default." 
}, { "commit": "ae86e6d5b2f6f86bc3ec1a4b1378d9b49e89799e", @@ -13592,13 +13592,13 @@ "commit": "e14c0eeb65a998a419e5a2e6c0ce833e8d7d03dc", "date": "2019-02-21 16:20:46 +0200", "subject": "Use driver for remote protocol introduced in da628be8.", - "body": "The remote protocol was calling into the Storage object but this required some translation which will get more awkward as time goes by.\n\nInstead, call directly into the local driver so the communication is directly driver to driver. This still requires resolving the path and may eventually have more duplication with the Storage object methods but it seems the right thing to do." + "body": "The remote protocol was calling into the Storage object but this required some translation which will get more awkward as time goes by.\n\nInstead, call directly into the local driver so the communication is directly driver to driver. This still requires resolving the path and may eventually have more duplication with the Storage object methods but it seems the right thing to do." }, { "commit": "b1eb8af7d5093a26a2e221a428540e5e7b356eed", "date": "2019-02-21 15:40:21 +0200", "subject": "Resolve storage path expressions before passing to remote.", - "body": "Expressions such as require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote then that remote will only work correctly for that one stanza.\n\nInstead, resolved the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally." + "body": "Expressions such as require a stanza name in order to be resolved correctly. However, if the stanza name is passed to the remote then that remote will only work correctly for that one stanza.\n\nInstead, resolved the expressions locally but still pass a relative path to the remote. That way, a storage path that is only configured on the remote does not need to be known locally." 
}, { "commit": "b4d4680f8c3126d1c2016d30aa2a1b77aac085c0", @@ -13610,7 +13610,7 @@ "commit": "be6a3f131e746304c6a30c39fcab4fe4b44dcb7f", "date": "2019-02-21 14:26:06 +0200", "subject": "Improve null-handling of strToLog().", - "body": "NULL was returning {\"(null)\"} which was comprehensible but not very pretty. Instead return null on NULL." + "body": "NULL was returning {\"(null)\"} which was comprehensible but not very pretty. Instead return null on NULL." }, { "commit": "1fd89f05afd8d8614d7b38a006eda4418f6324ce", @@ -13622,7 +13622,7 @@ "commit": "80df1114bdc97a930421f491bb7f41d9aeab6069", "date": "2019-02-21 12:09:12 +0200", "subject": "Fix info command missing WAL min/max when stanza specified.", - "body": "This issue was a result of STORAGE_REPO_PATH prepending an extra stanza when the stanza was specified on the command line.\n\nThe tests missed this because by some strange coincidence the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.\n\nFixed by Stefan Fercot." + "body": "This issue was a result of STORAGE_REPO_PATH prepending an extra stanza when the stanza was specified on the command line.\n\nThe tests missed this because by some strange coincidence the WAL dirs were empty for each test that specified a stanza. Add new tests to prevent a regression.\n\nFixed by Stefan Fercot." }, { "commit": "1519f5b04540d5a431044f83a45b604ad8757397", @@ -13645,7 +13645,7 @@ "commit": "71bc5697b1a35c8460af7885b888b1d43bd994b1", "date": "2019-02-20 22:23:19 +0200", "subject": "Increase per-call stack trace size to 4096.", - "body": "This was previously 256, which was too small to log protocol parameters. Not only did this truncate important debug information but varying path lengths caused spurious differences in the expect logs." }, { "commit": "73be64ce49aaff9d94f70c47590476c404365ce4", @@ -13669,7 +13669,7 @@ "commit": "d211c2b8b51ae5b796bbb581d21a4a406e3ed972", "date": "2019-02-15 11:52:39 +0200", "subject": "Fix possible truncated WAL segments when an error occurs mid-write.", - "body": "The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.\n\nRewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.\n\nAdd unit tests to prevent regression." + "body": "The file write object destructors called close() and finalized the file even if it was not completely written. This was an issue in both the C and Perl code.\n\nRewrite the destructors to simply free resources (like file handles) rather than calling the close() method. This leaves the temp file in place for filesystems that use temp files.\n\nAdd unit tests to prevent regression." }, { "commit": "2cd204f38037f1465c84bb4e6b55893204ee8f93", @@ -13720,7 +13720,7 @@ "commit": "b29a8dd9c541b755381f2e103abbe0b05fc65190", "date": "2019-02-02 15:03:19 +0200", "subject": "Automatically adjust db-timeout when protocol-timeout is smaller.", - "body": "This already worked in reverse, but this case is needed when a command that only uses protocol-timeout (e.g. info) calls a remote process where protocol-timeout and db-timeout can be set. If protocol-timeout was set to less than the default db-timeout then an error resulted." }, { "commit": "abc613b454f8f1aa9da5e842c5da822d26504163", @@ -13746,7 +13746,7 @@ "commit": "aa3e5b8c72a75a4f5fea93b543a8fbedcd713f02", "date": "2019-01-30 17:03:17 +0200", "subject": "Allow primary gid for the test user to be different from uid.", - "body": "Apparently up until now they have always been the same, which is pretty typical. However, if they were not then ContainerTest.pm was not happy." + "body": "Apparently up until now they have always been the same, which is pretty typical. However, if they were not then ContainerTest.pm was not happy." }, { "commit": "711b3e67cbfc7b404ffe2ce196ffb42159c95040", @@ -13787,7 +13787,7 @@ "commit": "d29aa6128681a3912c1c48ed71cdad9395d0d347", "date": "2019-01-27 11:50:09 +0200", "subject": "Allocate extra space for concatenations in the String object.", - "body": "The string object was reallocating memory with every concatenation which is not very efficient. This is especially true for JSON rendering which does a lot of concatenations.\n\nInstead allocate a pool of extra memory on the first concatenation (50% of size) to be used for future concatenations and reallocate when needed.\n\nAlso add a 1GB size limit to ensure that there are no overflows." + "body": "The string object was reallocating memory with every concatenation which is not very efficient. This is especially true for JSON rendering which does a lot of concatenations.\n\nInstead allocate a pool of extra memory on the first concatenation (50% of size) to be used for future concatenations and reallocate when needed.\n\nAlso add a 1GB size limit to ensure that there are no overflows." 
}, { "commit": "82c2d615b3ecdb851029ad691f5a8061f949535b", @@ -13804,13 +13804,13 @@ "commit": "8f6d324b2c2576456bd0ba7f691297df1f5aba1e", "date": "2019-01-26 16:59:54 +0200", "subject": "Fix issue with multiple async status files causing a hard error.", - "body": "Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.\n\nThere are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried." + "body": "Multiple status files were being created by asynchronous archiving if a high-level error occurred after one or more WAL segments had already been transferred successfully. Error files were being written for every file in the queue regardless of whether it had already succeeded. To fix this, add an option to skip writing error files when an ok file already exists.\n\nThere are other situations where both files might exist (various fsync and filesystem error scenarios) so it seems best to retry in the case that multiple status files are found rather than throwing a hard error (which then means that archiving is completely stuck). In the case of multiple status files, a warning will be logged to alert the user that something unusual is happening and the command will be retried." 
}, { "commit": "f3ae3c4f9d0dd63c583ab27ef1f6aa869db12a3a", "date": "2019-01-26 13:48:46 +0200", "subject": "Include Posix-compliant header for strcasecmp().", - "body": "gcc has apparently merged this function in string.h but Posix specifies that it should be in strings.h. FreeBSD at at least is sticking to the standard.\n\nIn the long run it might be better to implement our own strcasecmp() function but for now just add the header." + "body": "gcc has apparently merged this function in string.h but Posix specifies that it should be in strings.h. FreeBSD at at least is sticking to the standard.\n\nIn the long run it might be better to implement our own strcasecmp() function but for now just add the header." }, { "commit": "1401c023f0cfdd0266144bdd8b181e8194bbd91e", @@ -13844,7 +13844,7 @@ "commit": "db08656537b22c39e5278f9f2770b816278e771a", "date": "2019-01-21 17:41:59 +0200", "subject": "Rename FUNCTION_DEBUG_* and consolidate ASSERT_* macros for consistency.", - "body": "Rename FUNCTION_DEBUG_* macros to FUNCTION_LOG_* to more accurately reflect what they do. Further rename FUNCTION_DEBUG_RESULT* macros to FUNCTION_LOG_RETURN* to make it clearer that they return from the function as well as logging. Leave FUNCTION_TEST_* macros as they are.\n\nConsolidate the various ASSERT* macros into a single ASSERT macro that is always compiled out of production builds. It was difficult to figure out when an assert would be checked with all the different types in play. When ASSERTs are compiled in they will always be checked regardless of the log level -- tying these two concepts together was not a good idea." }, { "commit": "d245f8eb425322f0efcd05ab4c12b3b45cc87138", @@ -13866,13 +13866,13 @@ "commit": "7355248d6b2b1db0a7ca997db0dc2943331325a0", "date": "2019-01-18 22:04:37 +0200", "subject": "Add remote storage objects.", - "body": "This is a partial implementation of remote storage with just enough functionality to get the info command working. The client is written in C but the server is still in Perl, which limits progress until a C server is written." + "body": "This is a partial implementation of remote storage with just enough functionality to get the info command working. The client is written in C but the server is still in Perl, which limits progress until a C server is written." }, { "commit": "88201f37a3b7156dcf6f4aaa659136215485c40d", "date": "2019-01-18 21:32:51 +0200", "subject": "Add ProtocolClient object and helper functions.", - "body": "This is a complete protocol client implementation in C.\n\nCurrently there is no C server implementation so the C client is talking to a Perl server. This won't work very long, though, as the protocol format, even though in JSON, has a lot of language-specific structure. While it would be possible to maintain compatibility between C and Perl it's probably not worth the effort in the long run.\n\nJust as in Perl there are helper functions to make constructing protocol objects easier. Currently only repository remotes are supported." + "body": "This is a complete protocol client implementation in C.\n\nCurrently there is no C server implementation so the C client is talking to a Perl server. This won't work very long, though, as the protocol format, even though in JSON, has a lot of language-specific structure. While it would be possible to maintain compatibility between C and Perl it's probably not worth the effort in the long run.\n\nJust as in Perl there are helper functions to make constructing protocol objects easier. Currently only repository remotes are supported." }, { "commit": "0986db630cf3c931ccb135a86e60ee098fdc2dec", @@ -13901,7 +13901,7 @@ "commit": "ecd56105e688e74ad71f8a1fc0fc3e785989abe5", "date": "2019-01-17 22:08:31 +0200", "subject": "Add IoHandleRead and IoHandleWrite objects.", - "body": "General i/o objects for reading and writing file descriptors, in particular those that can block. In other words, these are not generally to be used with file descriptors for actual files, but rather pipes, sockets, etc." + "body": "General i/o objects for reading and writing file descriptors, in particular those that can block. In other words, these are not generally to be used with file descriptors for actual files, but rather pipes, sockets, etc." }, { "commit": "bf0c41d9d6d5fd53cc4bf190b0c577f506323b5b", @@ -13912,13 +13912,13 @@ "commit": "7d4bbf290cf33de06fca9854f21ec9690cfd51bb", "date": "2019-01-16 22:16:50 +0200", "subject": "Fix difference in cipher type reporting missed in 8304d452.", - "body": "The C code can't get the cipher type from the storage object because the C storage object does not have encryption baked in like the Perl code does.\n\nInstead, check backup.info to see if encryption is enabled. This will need to rethought if another cipher type is added but for now it works fine." + "body": "The C code can't get the cipher type from the storage object because the C storage object does not have encryption baked in like the Perl code does.\n\nInstead, check backup.info to see if encryption is enabled. This will need to rethought if another cipher type is added but for now it works fine." 
}, { "commit": "e68d1e73042342c4cea0881e216e6979a3bcb140", "date": "2019-01-16 19:23:10 +0200", "subject": "Simplify info command text message when no stanzas are present.", - "body": "Replace the repository path with just \"the repository\". The path is not important in this context and it is clearer to state where the stanzas are missing from." + "body": "Replace the repository path with just \"the repository\". The path is not important in this context and it is clearer to state where the stanzas are missing from." }, { "commit": "ef9dc89e080d7b364a00fec138d1961c703dd02a", @@ -13930,7 +13930,7 @@ "commit": "b4146b6bff99c8d82445bfda96b237ef0cff3050", "date": "2019-01-16 18:45:19 +0200", "subject": "Update Perl repo rules to work when stanza is not specified.", - "body": "The C storage object strives to use rules whenever possible instead of generating absolute paths. This change helps the C and Perl storage work together via the protocol layer." + "body": "The C storage object strives to use rules whenever possible instead of generating absolute paths. This change helps the C and Perl storage work together via the protocol layer." }, { "commit": "0014e159443354a46197e58f0631d7363d038cde", @@ -13964,7 +13964,7 @@ "commit": "aab9e38b9a070e8707c0a316784b456ecaeba64e", "date": "2019-01-14 21:34:22 +0200", "subject": "Return UnknownError from errorTypeFromCode() for invalid error codes.", - "body": "The prior behavior was to throw an exception but this was not very helpful when something unexpected happened. Better to at least emit the error message even if the error code is not very helpful." + "body": "The prior behavior was to throw an exception but this was not very helpful when something unexpected happened. Better to at least emit the error message even if the error code is not very helpful." 
}, { "commit": "2b02d37602e3bfc451d598032d0e3793a06e8673", @@ -13975,7 +13975,7 @@ "commit": "8304d452b3683d5d1d59cf7a6cc14801ddc35efd", "date": "2019-01-13 22:44:58 +0200", "subject": "Make the C version of the info command conform to the Perl version.", - "body": "There were some small differences in ordering and how the C version handled missing directories. It may be that the C version is more consistent, but for now it is more important to be compatible with the Perl version.\n\nThese differences were missed because the C info command was not wired into main.c so it was not being tested in regression. This commit does not fix the wiring issue because there will likely be a release soon and it is too big a change to put in at the last moment." + "body": "There were some small differences in ordering and how the C version handled missing directories. It may be that the C version is more consistent, but for now it is more important to be compatible with the Perl version.\n\nThese differences were missed because the C info command was not wired into main.c so it was not being tested in regression. This commit does not fix the wiring issue because there will likely be a release soon and it is too big a change to put in at the last moment." }, { "commit": "f314a1f8aa101b5f73cbb0902af7ebed9804a143", @@ -14004,7 +14004,7 @@ "commit": "7272d6e2475f7fe8dd327206591b83cf4b9f498b", "date": "2019-01-06 17:28:17 +0200", "subject": "Add _DARWIN_C_SOURCE flag to Makefile for MacOS builds.", - "body": "For some reason adding -D_POSIX_C_SOURCE=200112L caused MacOS builds to stop working. Combining both flags seems to work fine for all tested systems." + "body": "For some reason adding -D_POSIX_C_SOURCE=200112L caused MacOS builds to stop working. Combining both flags seems to work fine for all tested systems." 
}, { "commit": "9560baf659cafec2614e90ce72d988635cb945c1", @@ -14093,7 +14093,7 @@ "commit": "72865ca33ba383fba54da8db0bb8595890a12a3e", "date": "2018-12-30 16:40:20 +0200", "subject": "Add admonitions to documentation renderers.", - "body": "Admonitions call out places where the user should take special care.\n\nSupport added for HTML, PDF, Markdown and help text renderers. XML files have been updated accordingly." + "body": "Admonitions call out places where the user should take special care.\n\nSupport added for HTML, PDF, Markdown and help text renderers. XML files have been updated accordingly." }, { "commit": "3dc327fd05882b2dc27e77a7922f5c275a7e2b25", @@ -14129,13 +14129,13 @@ "commit": "35bbb5bd6881e3f405cbd99c3992c32aa5c6d69f", "date": "2018-12-14 18:25:31 -0500", "subject": "Reorder info command text output so most recent backup is output last.", - "body": "After a stanza-upgrade backups for the old cluster are displayed until they expire. Cluster info was output newest to oldest which meant after an upgrade the most recent backup would no longer be output last.\n\nUpdate the text output ordering so the most recent backup is always output last." + "body": "After a stanza-upgrade backups for the old cluster are displayed until they expire. Cluster info was output newest to oldest which meant after an upgrade the most recent backup would no longer be output last.\n\nUpdate the text output ordering so the most recent backup is always output last." }, { "commit": "205525b60780587366b9786750535a55977f0221", "date": "2018-12-13 16:22:34 -0500", "subject": "Migrate local info command to C.", - "body": "The info command will only be executed in C if the repository is local, i.e. not located on a remote repository host. S3 is considered \"local\" in this case.\n\nThis is a direct migration from Perl to integrate as seamlessly with the remaining Perl code as possible. 
It should not be possible to determine if the C version is running unless debug-level logging is enabled." + "body": "The info command will only be executed in C if the repository is local, i.e. not located on a remote repository host. S3 is considered \"local\" in this case.\n\nThis is a direct migration from Perl to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled." }, { "commit": "e6ef40e8a327d94e5111d21dd056eefbe5d19a86", @@ -14159,7 +14159,7 @@ "commit": "df947cfcb22928ac4487c74483a49075e35ea15d", "date": "2018-12-12 13:52:23 -0500", "subject": "Add documentation for building the documentation.", - "body": "A basic primer for building the documentation. Lots that could be added, but it's a start." + "body": "A basic primer for building the documentation. Lots that could be added, but it's a start." }, { "commit": "fdc76742c8e98e630fa4219ba8e9aad79c882e40", @@ -14171,7 +14171,7 @@ "commit": "ee04ebe3142b0f170ef6662db3b6be3e2ba2e32e", "date": "2018-12-12 11:15:09 -0500", "subject": "Fix Centos/RHEL 7 documentation builds.", - "body": "This was caused by a new container version that was released around December 5th. The new version explicitly denies user logons by leaving /var/run/nologin in place after boot.\n\nThe solution is to enable the service that is responsible for removing this file on a successful boot." + "body": "This was caused by a new container version that was released around December 5th. The new version explicitly denies user logons by leaving /var/run/nologin in place after boot.\n\nThe solution is to enable the service that is responsible for removing this file on a successful boot." 
}, { "commit": "2f15a90d18e0cdc0d91e9915b400acfc78ccaf26", @@ -14183,7 +14183,7 @@ "commit": "f0417ee524d1759b1aa5949abcae4769741ce550", "date": "2018-12-10 18:31:49 -0500", "subject": "Use cast to make for loop more readable in InfoPg module.", - "body": "The previous way worked but was a head-scratcher when reading the code. This cast hopefully makes it a bit more obvious what is going on." + "body": "The previous way worked but was a head-scratcher when reading the code. This cast hopefully makes it a bit more obvious what is going on." }, { "commit": "2514d08d0dc6d28a59bd884a751b734c70e2c0c0", @@ -14207,7 +14207,7 @@ "commit": "4f539db8d9c666fa5a9c4e020e387a3b26886b83", "date": "2018-12-10 17:01:33 -0500", "subject": "Allow NULL stanza in storage helper.", - "body": "Some commands (e.g. info) do not take a stanza or the stanza is optional. In that case it is the job of the command to construct the repository path with a stanza as needed.\n\nUpdate helper functions to omit the stanza from the constructed path when it is NULL." + "body": "Some commands (e.g. info) do not take a stanza or the stanza is optional. In that case it is the job of the command to construct the repository path with a stanza as needed.\n\nUpdate helper functions to omit the stanza from the constructed path when it is NULL." }, { "commit": "cbf514e191783937de63b5a276fcefc426e6f430", @@ -14231,7 +14231,7 @@ "commit": "1c5f8f45b68ad87553c2219ffb8e0da0314ad920", "date": "2018-12-07 12:32:10 -0500", "subject": "Add configuration to the standby so it works as a primary when promoted.", - "body": "This code was generated during testing and it seemed a good idea to keep it. It is only a partial solution since the primary also needs additional configuration to be able to fail back and forth." + "body": "This code was generated during testing and it seemed a good idea to keep it. It is only a partial solution since the primary also needs additional configuration to be able to fail back and forth." 
}, { "commit": "495391c7430ee8f4aafc006f6cea829e6a3ca981", @@ -14261,7 +14261,7 @@ "commit": "11181e69b818452a0b01d388ca5ce0879e6c3a3d", "date": "2018-12-06 09:04:01 -0500", "subject": "Disable Centos/RHEL 7 documentation builds.", - "body": "These were introduced in 33fa2ede and ran for a day or so before they started failing consistently on CI. Local builds work fine.\n\nDisable them to free the pipeline for further commits while we determine the issue." + "body": "These were introduced in 33fa2ede and ran for a day or so before they started failing consistently on CI. Local builds work fine.\n\nDisable them to free the pipeline for further commits while we determine the issue." }, { "commit": "e73416e9e39118a3eb7a10d4d3f434ef7cc1c4ba", @@ -14296,7 +14296,7 @@ "commit": "cc6447356ef436582fdd3501311cb8da0f4f0342", "date": "2018-12-05 09:15:45 -0500", "subject": "Fix test binary name for gprof.", - "body": "This got missed in 1f8931f7 when the test binary was renamed.\n\nAlso output call graph along with the flat report. The flat report is generally most useful but it doesn't hurt to have both." + "body": "This got missed in 1f8931f7 when the test binary was renamed.\n\nAlso output call graph along with the flat report. The flat report is generally most useful but it doesn't hurt to have both." }, { "commit": "33fa2ede7d106db6bf95712747bd0a8ed52f1be5", @@ -14308,13 +14308,13 @@ "commit": "baeff9e4f04c7f86f9cbe84efaf4dc66b3a4add4", "date": "2018-12-04 17:33:56 -0500", "subject": "Create common if expressions for testing os-type.", - "body": "These expressions simplify os-type testing. This will be especially true as more OS types are added." + "body": "These expressions simplify os-type testing. This will be especially true as more OS types are added." 
}, { "commit": "9e217d02564d30bcdcb865f87323221aa2969169", "date": "2018-12-04 13:17:55 -0500", "subject": "Documentation may be built with user-specified packages.", - "body": "By default the documentation builds pgBackRest from source, but the documentation is also a good way to smoke-test packages.\n\nAllow a package file to be specified by passing --var=package=/path/to/package.ext. This works for Debian and CentOS 6 builds." + "body": "By default the documentation builds pgBackRest from source, but the documentation is also a good way to smoke-test packages.\n\nAllow a package file to be specified by passing --var=package=/path/to/package.ext. This works for Debian and CentOS 6 builds." }, { "commit": "0db030fa63b74fb6e0a8ad58f1417e39cd1eac9e", @@ -14325,7 +14325,7 @@ "commit": "14190f9e6c290383676a4332ff5c71e9cc83c66a", "date": "2018-12-03 12:41:53 -0500", "subject": "Update URL for Docker install.", - "body": "As usual the old URL started providing a broken version of Docker rather than producing a clear error message. This happens once a year or so." + "body": "As usual the old URL started providing a broken version of Docker rather than producing a clear error message. This happens once a year or so." }, { "commit": "17e611cb883b6086a802c2adfbda384817feaa09", @@ -14349,7 +14349,7 @@ "commit": "64b97fd7ca3410cd6bca3c01bc438bb104191f78", "date": "2018-11-30 10:55:29 -0500", "subject": "Correct archive-get-queue-max to be size type.", - "body": "This somehow was not configured as a size option when it was added. It worked, but queue sizes could not be specified in shorthand, e.g. 128GB.\n\nThis is not a breaking change because currently configured integer values will be read as bytes." + "body": "This somehow was not configured as a size option when it was added. It worked, but queue sizes could not be specified in shorthand, e.g. 128GB.\n\nThis is not a breaking change because currently configured integer values will be read as bytes." 
}, { "commit": "1ad67644dade53595e2f210d0f75cc247b3ea2a5", @@ -14367,7 +14367,7 @@ "commit": "74b72df9dbfbfb00d5b729cb83bef35e77fd73c6", "date": "2018-11-28 18:41:21 -0500", "subject": "Improve error message when info files are missing/corrupt.", - "body": "The previous error message only showed the last error. In addition, some errors were missed (such as directory permission errors) that could prevent the copy from being checked.\n\nShow both errors below a generic \"unable to load\" error. Details are now given explaining exactly why the primary and copy failed.\n\nPreviously if one file could not be loaded a warning would be output. This has been removed because it is not clear what the user should do in this case. Should they do a stanza-create --force? Maybe the best idea is to automatically repair the corrupt file, but on the other hand that might just spread corruption if pgBackRest makes the wrong choice." + "body": "The previous error message only showed the last error. In addition, some errors were missed (such as directory permission errors) that could prevent the copy from being checked.\n\nShow both errors below a generic \"unable to load\" error. Details are now given explaining exactly why the primary and copy failed.\n\nPreviously if one file could not be loaded a warning would be output. This has been removed because it is not clear what the user should do in this case. Should they do a stanza-create --force? Maybe the best idea is to automatically repair the corrupt file, but on the other hand that might just spread corruption if pgBackRest makes the wrong choice." 
}, { "commit": "47687dd13a24a49409f07b99141b508739228791", @@ -14379,7 +14379,7 @@ "commit": "7c2fcb63e4ab62e880ebca935fb67e25fa03d5b3", "date": "2018-11-28 14:56:26 -0500", "subject": "Enable encryption for archive-get command in C.", - "body": "The decryption filter was added in archiveGetFile() and archiveGetCheck() was modified to return the WAL decryption key stored in archive.info. The rest was plumbing.\n\nThe mock/archive/1 integration test added encryption to provide coverage for the new code paths while mock/archive/2 dropped encryption to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior." + "body": "The decryption filter was added in archiveGetFile() and archiveGetCheck() was modified to return the WAL decryption key stored in archive.info. The rest was plumbing.\n\nThe mock/archive/1 integration test added encryption to provide coverage for the new code paths while mock/archive/2 dropped encryption to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior." }, { "commit": "6c23830991558d74ecade1d510fbd50acf5b0f18", @@ -14391,7 +14391,7 @@ "commit": "410a04a58ee59de3055483a2f971c2e42015267e", "date": "2018-11-28 14:20:12 -0500", "subject": "Allow arbitrary InOut filters to be chained in IoFilterGroup.", - "body": "If InOut filters were placed next to each other then the second filter would never get a NULL input signaling it to flush. This arrangement only worked if the second filter had some other indication that it should flush, such as a decompression filter where the flush is indicated in the input stream.\n\nThis is not a live issue because currently no InOut filters are chained together." + "body": "If InOut filters were placed next to each other then the second filter would never get a NULL input signaling it to flush. 
This arrangement only worked if the second filter had some other indication that it should flush, such as a decompression filter where the flush is indicated in the input stream.\n\nThis is not a live issue because currently no InOut filters are chained together." }, { "commit": "838cfa44b76ed885f030ebf75649df7eaace6593", @@ -14403,7 +14403,7 @@ "commit": "3e254f4cff349f2de95b557b77e3bb0baa1450c9", "date": "2018-11-28 12:42:36 -0500", "subject": "Add IoFilter interface to CipherBlock object.", - "body": "This allows CipherBlock to be used as a filter in an IoFilterGroup. The C-style functions used by Perl are now deprecated and should not be used for any new code.\n\nAlso add functions to convert between cipher names and CipherType." + "body": "This allows CipherBlock to be used as a filter in an IoFilterGroup. The C-style functions used by Perl are now deprecated and should not be used for any new code.\n\nAlso add functions to convert between cipher names and CipherType." }, { "commit": "c3a84ccae08cf7506f26a000344ea1ba2517f9a8", @@ -14433,7 +14433,7 @@ "commit": "315aa2c4512c2789889d68d1253366c5c75d405b", "date": "2018-11-25 08:39:41 -0500", "subject": "Conditional compilation of Perl logic in exit.c.", - "body": "This file is the only one to contain Perl logic outside of the perl module. Make the Perl logic conditional to improve reusability." + "body": "This file is the only one to contain Perl logic outside of the perl module. Make the Perl logic conditional to improve reusability." }, { "commit": "78fe642eaeae4f7f73796fdc45b5c2a81e9f3e9f", @@ -14468,7 +14468,7 @@ "commit": "beae37533041e63a71b33757e4df933e197e575f", "date": "2018-11-23 12:18:07 -0500", "subject": "Enable S3 storage for archive-get command in C.", - "body": "The only change required was to remove the filter that prevented S3 storage from being used. 
The archive-get command did not require any modification which demonstrates that the storage interface is working as intended.\n\nThe mock/archive/3 integration test was modified to run S3 storage locally to provide coverage for the new code paths while mock/stanza/3 was modified to run S3 storage remotely to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior." + "body": "The only change required was to remove the filter that prevented S3 storage from being used. The archive-get command did not require any modification which demonstrates that the storage interface is working as intended.\n\nThe mock/archive/3 integration test was modified to run S3 storage locally to provide coverage for the new code paths while mock/stanza/3 was modified to run S3 storage remotely to provide coverage for the existing code paths. This caused some churn in the expect logs but there was no change in behavior." }, { "commit": "b5690e21a4ad453f004087adf434adf13d109a8c", @@ -14498,7 +14498,7 @@ "commit": "ac426bc456425d68e33a7facc8393f26b9d17bb1", "date": "2018-11-21 18:13:37 -0500", "subject": "New test containers with static test certificates.", - "body": "Test certificates were generated dynamically but there are advantages to using static certificates. For example, it possible to use the same certificate between container versions. Mostly, it is easier to document the certificates if they are not buried deep in the container code.\n\nThe new test certificates are initially intended to be used with the C unit tests but they will eventually be used for integration tests as well.\n\nTwo new certificates have been defined. See test/certificate/README.md for details.\n\nThe old dynamic certificates will be retained until they are replaced." + "body": "Test certificates were generated dynamically but there are advantages to using static certificates. 
For example, it possible to use the same certificate between container versions. Mostly, it is easier to document the certificates if they are not buried deep in the container code.\n\nThe new test certificates are initially intended to be used with the C unit tests but they will eventually be used for integration tests as well.\n\nTwo new certificates have been defined. See test/certificate/README.md for details.\n\nThe old dynamic certificates will be retained until they are replaced." }, { "commit": "53e3651ccaaaa1652f53903534974377a2808aec", @@ -14516,7 +14516,7 @@ "commit": "6680130c6f5553e6d0c587dccc57b4a8508c4ab1", "date": "2018-11-20 19:24:53 -0500", "subject": "Require S3 key options except for local/remote commands.", - "body": "S3 key options (repo1-s3-key/repo1-s3-key-secret) were not required which meant that users got an ugly assertion when they were missing rather than a tidy configuration error.\n\nOnly the local/remote commands need them to be optional. This is because local/remote commands get all their options from the command line but secrets cannot be passed on the command line. Instead, secrets are passed to the local/remote commands via the protocol for any operation that needs them.\n\nThe configuration system allows required to be set per command so use that to improve the error messages while not breaking the local/remote commands." + "body": "S3 key options (repo1-s3-key/repo1-s3-key-secret) were not required which meant that users got an ugly assertion when they were missing rather than a tidy configuration error.\n\nOnly the local/remote commands need them to be optional. This is because local/remote commands get all their options from the command line but secrets cannot be passed on the command line. 
Instead, secrets are passed to the local/remote commands via the protocol for any operation that needs them.\n\nThe configuration system allows required to be set per command so use that to improve the error messages while not breaking the local/remote commands." }, { "commit": "f743d4e92418d73717225599e3dbd03a4af7dec9", @@ -14555,7 +14555,7 @@ "commit": "332a68ea8d713f24d1db08e20d5b95bcf32cc74e", "date": "2018-11-16 08:48:02 -0500", "subject": "Fix incorrect config constant introduced in 5e3b7cbe.", - "body": "This commit introduced PGBACKREST_CONFIG_ORIG_PATH_FILE_STR as a String constant for PGBACKREST_CONFIG_ORIG_PATH_FILE but failed to get the value correct.\n\nAlso, no test was added for PGBACKREST_CONFIG_ORIG_PATH_FILE_STR to prevent regressions as there is for PGBACKREST_CONFIG_ORIG_PATH_FILE." + "body": "This commit introduced PGBACKREST_CONFIG_ORIG_PATH_FILE_STR as a String constant for PGBACKREST_CONFIG_ORIG_PATH_FILE but failed to get the value correct.\n\nAlso, no test was added for PGBACKREST_CONFIG_ORIG_PATH_FILE_STR to prevent regressions as there is for PGBACKREST_CONFIG_ORIG_PATH_FILE." }, { "commit": "75f6e45de26cf4ab087dc791f2aa177553584472", @@ -14618,13 +14618,13 @@ "commit": "acb579c4698f856e4c1ceedca751aaeadc9d2d3f", "date": "2018-11-13 10:37:58 -0500", "subject": "Tighten limits on code coverage context selection.", - "body": "If the last } of a function was marked as uncovered then the context selection would overrun into the next function.\n\nStart checking context on the current line to prevent this. Make the same change for start context even though it doesn't seem to have an issue." + "body": "If the last } of a function was marked as uncovered then the context selection would overrun into the next function.\n\nStart checking context on the current line to prevent this. Make the same change for start context even though it doesn't seem to have an issue." 
}, { "commit": "086bc35ddc71b008a5d20f8e5e3d0305508dfdf5", "date": "2018-11-12 21:18:53 -0500", "subject": "Make ioReadLine() read less aggressively.", - "body": "ioReadLine() calls ioRead(), which aggressively tries to fill the output buffer, but this doesn't play well with blocking reads.\n\nGive ioReadLine() an option that tells it to read only what is available. That doesn't mean the function will never block but at least it won't do so by reading too far." + "body": "ioReadLine() calls ioRead(), which aggressively tries to fill the output buffer, but this doesn't play well with blocking reads.\n\nGive ioReadLine() an option that tells it to read only what is available. That doesn't mean the function will never block but at least it won't do so by reading too far." }, { "commit": "bc810e5a87176ee68491e79473325a8a411a34c9", @@ -14694,13 +14694,13 @@ "commit": "8f857a975e22d6a7c0e99a494169f24ed50b6c06", "date": "2018-11-10 09:37:12 -0500", "subject": "Add constant macros to String object.", - "body": "There are many places (and the number is growing) where a zero-terminated string constant must be transformed into a String object to be usable. This pattern wastes time and memory, especially since the created string is generally used in a read-only fashion.\n\nDefine macros to create constant String objects that are initialized at compile time rather than at run time." + "body": "There are many places (and the number is growing) where a zero-terminated string constant must be transformed into a String object to be usable. This pattern wastes time and memory, especially since the created string is generally used in a read-only fashion.\n\nDefine macros to create constant String objects that are initialized at compile time rather than at run time." 
}, { "commit": "df200bee2acf614edf9d008cccde67f02d89f103", "date": "2018-11-09 16:50:22 -0500", "subject": "Add regExpPrefix() to aid in static prefix searches.", - "body": "The storageList() command accepts a regular expression as a filter. This works fine for local filesystems where it is relatively cheap to get a complete list of files and filter them in code. However, for remote filesystems like S3 it can be expensive to fetch a complete list of files only to discard the bulk of them locally.\n\nS3 does not filter on regular expressions but it can accept a static prefix so this function extracts a prefix from a regular expression when possible.\n\nEven a few characters can drastically reduce the amount of data that must be fetched remotely so the function does not try to be too clever. It requires a ^ anchor and stops scanning when the first special character is found." + "body": "The storageList() command accepts a regular expression as a filter. This works fine for local filesystems where it is relatively cheap to get a complete list of files and filter them in code. However, for remote filesystems like S3 it can be expensive to fetch a complete list of files only to discard the bulk of them locally.\n\nS3 does not filter on regular expressions but it can accept a static prefix so this function extracts a prefix from a regular expression when possible.\n\nEven a few characters can drastically reduce the amount of data that must be fetched remotely so the function does not try to be too clever. It requires a ^ anchor and stops scanning when the first special character is found." }, { "commit": "8c504bd2f9e6c14f134b6115f469ec3fe064cb73", @@ -14741,13 +14741,13 @@ "commit": "edb2c6eb26ca77eb3e3e05c006f3036dee2865b7", "date": "2018-11-08 08:37:57 -0500", "subject": "Construct Wait object in milliseconds instead of fractional seconds.", - "body": "The Wait object accepted a double in the constructor for wait time but used TimeMSec internally. 
This was done for compatibility with the Perl code.\n\nInstead, use TimeMSec in the Wait constructor and make changes as needed to calling code.\n\nNote that Perl still uses a double for its Wait object so translation is needed in some places. There are no plans to update the Perl code as it will become obsolete." + "body": "The Wait object accepted a double in the constructor for wait time but used TimeMSec internally. This was done for compatibility with the Perl code.\n\nInstead, use TimeMSec in the Wait constructor and make changes as needed to calling code.\n\nNote that Perl still uses a double for its Wait object so translation is needed in some places. There are no plans to update the Perl code as it will become obsolete." }, { "commit": "a9feaba9e521e39928ff5fdbc025eeff8fc17646", "date": "2018-11-07 08:51:32 -0500", "subject": "Add memContextCallbackClear() to prevent double free() calls.", - "body": "If an object free() method was called manually when a callback was set then the callback would call free() again. This meant that each free() method had to protect against a subsequent call.\n\nInstead, clear the callback (if present) before calling memContextFree(). This is faster (since there is no unnecessary callback) and removes the need for semaphores to protect against a double free()." + "body": "If an object free() method was called manually when a callback was set then the callback would call free() again. This meant that each free() method had to protect against a subsequent call.\n\nInstead, clear the callback (if present) before calling memContextFree(). This is faster (since there is no unnecessary callback) and removes the need for semaphores to protect against a double free()." 
}, { "commit": "48d2795f312224f03dc88e99434865540ca71c7e", @@ -14789,7 +14789,7 @@ "commit": "1f8931f73274163f27ba38aea378ea50488ba557", "date": "2018-11-03 16:34:04 -0400", "subject": "Improve single test run performance.", - "body": "Improve on 7794ab50 by including the build flag files directly into the Makefile as dependencies (even though they are not includes). This simplifies some of the rsync logic and allows make to do what it does best.\n\nAlso split build flag files into test, harness, and build to reduce rebuilds. Test flags are used to build test.c, harness flags are used to build the rest of the files in the test harness, and build flags are used for the files that are not directly involved in testing." + "body": "Improve on 7794ab50 by including the build flag files directly into the Makefile as dependencies (even though they are not includes). This simplifies some of the rsync logic and allows make to do what it does best.\n\nAlso split build flag files into test, harness, and build to reduce rebuilds. Test flags are used to build test.c, harness flags are used to build the rest of the files in the test harness, and build flags are used for the files that are not directly involved in testing." }, { "commit": "7794ab50dc839088fc1e1137edaaf65354faa76a", @@ -14807,7 +14807,7 @@ "commit": "34c63276cd26cd5310169343bb8e17e323feef95", "date": "2018-11-01 11:31:25 -0400", "subject": "Automatically enable backup checksum delta when anomalies (e.g. 
timeline switch) are detected.", - "body": "There are a number of cases where a checksum delta is more appropriate than the default time-based delta:\n\n* Timeline has switched since the prior backup\n* File timestamp is older than recorded in the prior backup\n* File size changed but timestamp did not\n* File timestamp is in the future compared to the start of the backup\n* Online option has changed since the prior backup\n\nA practical example is that checksum delta will be enabled after a failover to standby due to the timeline switch. In this case, timestamps can't be trusted and our recommendation has been to run a full backup, which can impact the retention schedule and requires manual intervention.\n\nNow, a checksum delta will be performed if the backup type is incr/diff. This means more CPU will be used during the backup but the backup size will be smaller and the retention schedule will not be impacted." + "body": "There are a number of cases where a checksum delta is more appropriate than the default time-based delta:\n\n* Timeline has switched since the prior backup\n* File timestamp is older than recorded in the prior backup\n* File size changed but timestamp did not\n* File timestamp is in the future compared to the start of the backup\n* Online option has changed since the prior backup\n\nA practical example is that checksum delta will be enabled after a failover to standby due to the timeline switch. In this case, timestamps can't be trusted and our recommendation has been to run a full backup, which can impact the retention schedule and requires manual intervention.\n\nNow, a checksum delta will be performed if the backup type is incr/diff. This means more CPU will be used during the backup but the backup size will be smaller and the retention schedule will not be impacted." 
}, { "commit": "cca7a4ffd477871d5eab323f1a5f621fbe48a568", @@ -14819,19 +14819,19 @@ "commit": "286f7e501154a05d147d146d25e3dd0ea0d49d5c", "date": "2018-10-27 20:00:00 +0100", "subject": "Fix static WAL segment size used to determine if archive-push-queue-max has been exceeded.", - "body": "This calculation was missed when the WAL segment size was made dynamic in preparation for PostgreSQL 11.\n\nFix the calculation by checking the actual WAL file sizes instead of using an estimate based on WAL segment size. This is more accurate because it takes into account .history and .backup files, which are smaller. Since the calculation is done in the async process the additional processing time should not adversely affect performance.\n\nRemove the PG_WAL_SIZE constant and instead use local constants where the old value is still required. This is only the case for some tests and PostgreSQL 8.3 which does not provide a way to get the WAL segment size from pg_control." + "body": "This calculation was missed when the WAL segment size was made dynamic in preparation for PostgreSQL 11.\n\nFix the calculation by checking the actual WAL file sizes instead of using an estimate based on WAL segment size. This is more accurate because it takes into account .history and .backup files, which are smaller. Since the calculation is done in the async process the additional processing time should not adversely affect performance.\n\nRemove the PG_WAL_SIZE constant and instead use local constants where the old value is still required. This is only the case for some tests and PostgreSQL 8.3 which does not provide a way to get the WAL segment size from pg_control." 
}, { "commit": "41b00dc204a04a87b337aa5bebe7cfd4032d5eae", "date": "2018-10-27 16:57:57 +0100", "subject": "Fix issue with archive-push-queue-max not being honored on connection error.", - "body": "If an error occurred while acquiring a lock on a remote server the error would be reported correctly, but the queue max detection code was not reached. The tests failed to detect this because they fixed the connection before queue max, allowing the ccde to be reached.\n\nMove the queue max code before the lock so it will run even when remote connections are not working. This means that no attempt will be made to transfer WAL once queue max has been exceeded, but it makes it much more likely that the code will be reach without error.\n\nUpdate tests to continue errors up to the point where queue max is exceeded." + "body": "If an error occurred while acquiring a lock on a remote server the error would be reported correctly, but the queue max detection code was not reached. The tests failed to detect this because they fixed the connection before queue max, allowing the ccde to be reached.\n\nMove the queue max code before the lock so it will run even when remote connections are not working. This means that no attempt will be made to transfer WAL once queue max has been exceeded, but it makes it much more likely that the code will be reach without error.\n\nUpdate tests to continue errors up to the point where queue max is exceeded." }, { "commit": "03b9db9aa2e84bf6b23af8331712d1480520ca19", "date": "2018-10-25 14:58:25 +0100", "subject": "Fix error after log file open failure when processing should continue.", - "body": "The C code was warning on failure and continuing but the Perl logging code was never updated with the same feature.\n\nRather than add the feature to Perl, just disable file logging if the log file cannot be opened. Log files are always opened by C first, so this will eliminate the error in Perl." 
+ "body": "The C code was warning on failure and continuing but the Perl logging code was never updated with the same feature.\n\nRather than add the feature to Perl, just disable file logging if the log file cannot be opened. Log files are always opened by C first, so this will eliminate the error in Perl." }, { "commit": "d301720c58145910189829e8b2ceacccd3543c1e", @@ -14849,7 +14849,7 @@ "commit": "070455ce44504c63a5bf2f70093c64ed278fa12e", "date": "2018-10-19 12:31:56 +0200", "subject": "Correct current history item in InfoPg to always be in position 0.", - "body": "The InfoPg object was partially modified in 960ad732 to place the current history item in position 0, but infoPgDataCurrent() didn't get updated correctly.\n\nRemove this->indexCurrent and make the current position always equal 0. Use the new lstInsert() function when adding new history items via infoPgAdd(), but continue to use lstAdd() when loading from a file for efficiency.\n\nThis does not appear to be a live bug because infoPgDataCurrent() and infoPgAdd() are not yet used in any production code. The archive-get command is the only C code using InfoPG and it always looks at the entire list of items rather than just the current item." + "body": "The InfoPg object was partially modified in 960ad732 to place the current history item in position 0, but infoPgDataCurrent() didn't get updated correctly.\n\nRemove this->indexCurrent and make the current position always equal 0. Use the new lstInsert() function when adding new history items via infoPgAdd(), but continue to use lstAdd() when loading from a file for efficiency.\n\nThis does not appear to be a live bug because infoPgDataCurrent() and infoPgAdd() are not yet used in any production code. The archive-get command is the only C code using InfoPG and it always looks at the entire list of items rather than just the current item." 
}, { "commit": "f345db3f7ca15480c5aa70d015150b399f3ce33a", @@ -14877,7 +14877,7 @@ "commit": "2c272c220b39739ff324305593034487df81b466", "date": "2018-10-15 23:23:49 +0100", "subject": "PostgreSQL 11 support.", - "body": "PostgreSQL 11 RC1 support was tested in 9ae3d8c46 when the u18 container was rebuilt. Nothing substantive changed after RC1 so pgBackRest is ready for PostgreSQL 11 GA." + "body": "PostgreSQL 11 RC1 support was tested in 9ae3d8c46 when the u18 container was rebuilt. Nothing substantive changed after RC1 so pgBackRest is ready for PostgreSQL 11 GA." }, { "commit": "9ae3d8c46ac10273ca53c942410f768e27395230", @@ -14889,19 +14889,19 @@ "commit": "98ff8ccc59fa1e344426f3084e0a37498e56ce12", "date": "2018-10-09 15:08:49 +0100", "subject": "Improve documentation in filter.h and filter.internal.h.", - "body": "When the filter interface internals were split out into a new header file the documentation was not moved as it should have been. Additionally some functions which should have been moved were left behind.\n\nMove the documentation and functions to filter.internal.h and add more documentation. Filters are a tricky subject so the more documentation the better.\n\nAlso add documentation for the user-facing filter functions in filter.h." + "body": "When the filter interface internals were split out into a new header file the documentation was not moved as it should have been. Additionally some functions which should have been moved were left behind.\n\nMove the documentation and functions to filter.internal.h and add more documentation. Filters are a tricky subject so the more documentation the better.\n\nAlso add documentation for the user-facing filter functions in filter.h." }, { "commit": "68110d04b24dc633e5b5b6869924dd36a2307135", "date": "2018-10-07 17:50:10 +0100", "subject": "Add ioReadLine()/ioWriteLine() to IoRead/IoWrite objects.", - "body": "Allow a single linefeed-terminated line to be read or written. 
This is useful for various protocol implementations, including HTTP and pgBackRest's protocol.\n\nOn read the maximum line size is limited to buffer-size to prevent runaway memory usage in case a linefeed is not found. This seems fine for HTTP but we may need to revisit this decision when implementing the pgBackRest protocol. Another option would be to increase the minimum buffer size (currently 16KB)." + "body": "Allow a single linefeed-terminated line to be read or written. This is useful for various protocol implementations, including HTTP and pgBackRest's protocol.\n\nOn read the maximum line size is limited to buffer-size to prevent runaway memory usage in case a linefeed is not found. This seems fine for HTTP but we may need to revisit this decision when implementing the pgBackRest protocol. Another option would be to increase the minimum buffer size (currently 16KB)." }, { "commit": "db8dce7adcf277a021ad8766ab91e86fad913eff", "date": "2018-10-02 17:54:43 +0100", "subject": "Disable flapping archive/get unit on CentOS 6.", - "body": "This test has been flapping since 9b9396c7. It seems to be some kind of timing issue since all integration tests pass and this unit passes on all other VMs. It only happens on Travis and is not reproducible in any development environment that we have tried.\n\nFor now, disable the test since the constant flapping is causing major delays in testing and quite a bit of time has been spent trying to identify the root cause. We are actively developing these tests and hope the issue will be identified during the course of normal development.\n\nA number of improvements were made to the tests while searching for this issue. While none of them helped, it makes sense to keep the improvements." + "body": "This test has been flapping since 9b9396c7. It seems to be some kind of timing issue since all integration tests pass and this unit passes on all other VMs. 
It only happens on Travis and is not reproducible in any development environment that we have tried.\n\nFor now, disable the test since the constant flapping is causing major delays in testing and quite a bit of time has been spent trying to identify the root cause. We are actively developing these tests and hope the issue will be identified during the course of normal development.\n\nA number of improvements were made to the tests while searching for this issue. While none of them helped, it makes sense to keep the improvements." }, { "commit": "ed5d7a53de7cd952825e824ae507e275c4648e18", @@ -14918,7 +14918,7 @@ "commit": "5404628148a3039f3933607c7f63cb4620f71761", "date": "2018-09-27 17:48:40 +0100", "subject": "Fix incorrect error message for duplicate options in configuration files.", - "body": "Duplicating a non-multi-value option was not throwing the correct message when the option was a boolean.\n\nThe reason was that the option was being validated as a boolean before the multi-value check was being done. The validation code assumed it was operating on a string but was instead operating on a string list causing an assertion to fail.\n\nSince it's not safe to do the multi-value check so late, move it up to the command-line and configuration file parse phases instead." + "body": "Duplicating a non-multi-value option was not throwing the correct message when the option was a boolean.\n\nThe reason was that the option was being validated as a boolean before the multi-value check was being done. The validation code assumed it was operating on a string but was instead operating on a string list causing an assertion to fail.\n\nSince it's not safe to do the multi-value check so late, move it up to the command-line and configuration file parse phases instead." 
}, { "commit": "be2271f6d312e6497e98df2c792d2455afb461eb", @@ -14947,13 +14947,13 @@ "commit": "51484a008f1172b51c2296974de7bfcd37ab24b8", "date": "2018-09-26 18:46:52 +0100", "subject": "Add bufNewZ() to Buffer object.", - "body": "This constructor creates a Buffer object directly from a zero-terminated string. The old way was to create a String object first, then convert that to a Buffer using bufNewStr().\n\nUpdated in all places that used the old pattern." + "body": "This constructor creates a Buffer object directly from a zero-terminated string. The old way was to create a String object first, then convert that to a Buffer using bufNewStr().\n\nUpdated in all places that used the old pattern." }, { "commit": "d038b9a029f0c981e32a94b8b8632c356ffec7ac", "date": "2018-09-25 10:24:42 +0100", "subject": "Support configurable WAL segment size.", - "body": "PostgreSQL 11 introduces configurable WAL segment sizes, from 1MB to 1GB.\n\nThere are two areas that needed to be updated to support this: building the archive-get queue and checking that WAL has been archived after a backup. Both operations require the WAL segment size to properly build a list.\n\nChecking the archive after a backup is still implemented in Perl and has an active database connection, so just get the WAL segment size from the database.\n\nThe archive-get command does not have a connection to the database, so get the WAL segment size from pg_control instead. This requires a deeper inspection of pg_control than has been done in the past, so it seemed best to copy the relevant data structures from each version of PostgreSQL and build a generic interface layer to address them. While this approach is a bit verbose, it has the advantage of being relatively simple, and can easily be updated for new versions of PostgreSQL.\n\nSince the integration tests generate pg_control files for testing, teach Perl how to generate files with the correct offsets for both 32-bit and 64-bit architectures." 
+ "body": "PostgreSQL 11 introduces configurable WAL segment sizes, from 1MB to 1GB.\n\nThere are two areas that needed to be updated to support this: building the archive-get queue and checking that WAL has been archived after a backup. Both operations require the WAL segment size to properly build a list.\n\nChecking the archive after a backup is still implemented in Perl and has an active database connection, so just get the WAL segment size from the database.\n\nThe archive-get command does not have a connection to the database, so get the WAL segment size from pg_control instead. This requires a deeper inspection of pg_control than has been done in the past, so it seemed best to copy the relevant data structures from each version of PostgreSQL and build a generic interface layer to address them. While this approach is a bit verbose, it has the advantage of being relatively simple, and can easily be updated for new versions of PostgreSQL.\n\nSince the integration tests generate pg_control files for testing, teach Perl how to generate files with the correct offsets for both 32-bit and 64-bit architectures." }, { "commit": "c0b0b4e541fc7cc5077e7f6b9fa646e7654ca9a4", @@ -14976,7 +14976,7 @@ "commit": "880fbb5e578c387e52f83c8eca1040b42f4e21a8", "date": "2018-09-19 11:12:45 -0400", "subject": "Add checksum delta for incremental backups.", - "body": "Use checksums rather than timestamps to determine if files have changed. This is useful in cases where the timestamps may not be trustworthy, e.g. when performing an incremental after failing over to a standby.\n\nIf checksum delta is enabled then checksums will be used for verification of resumed backups, even if they are full. Resumes have always used checksums to verify the files in the repository, enabling delta performs checksums on the database files as well.\n\nNote that the user must manually enable this feature in cases were it would be useful or just keep in enabled all the time. 
A future commit will address automatically enabling the feature in cases where it seems likely to be useful." + "body": "Use checksums rather than timestamps to determine if files have changed. This is useful in cases where the timestamps may not be trustworthy, e.g. when performing an incremental after failing over to a standby.\n\nIf checksum delta is enabled then checksums will be used for verification of resumed backups, even if they are full. Resumes have always used checksums to verify the files in the repository, enabling delta performs checksums on the database files as well.\n\nNote that the user must manually enable this feature in cases where it would be useful or just keep it enabled all the time. A future commit will address automatically enabling the feature in cases where it seems likely to be useful." }, { "commit": "bf0691576a56df0d662488cee19d21e924a3cecb", @@ -15000,31 +15000,31 @@ "commit": "03003562d86f7fdbab21cad9f7a7ee56c3070f93", "date": "2018-09-17 11:45:41 -0400", "subject": "Merge all posix storage tests into a single unit.", - "body": "As we add storage drivers it's important to keep the tests for each completely separate. Rather than have three tests for each driver, standardize on having a single test unit for each driver." + "body": "As we add storage drivers it's important to keep the tests for each completely separate. Rather than have three tests for each driver, standardize on having a single test unit for each driver." }, { "commit": "e55d73304102145c46c32f25dbe103cd4c750d37", "date": "2018-09-17 11:38:10 -0400", "subject": "Add -ftree-coalesce-vars option to unit test compilation.", - "body": "This is a workaround for inefficient handling of many setjmps in gcc >= 4.9. 
Setjmp is used in all error handling, but in the unit tests each test macro contains an error handling block so they add up pretty quickly for large unit tests.\n\nEnabling -ftree-coalesce-vars in affected versions reduces build time and memory requirements by nearly an order of magnitude. Even so, compiles are much slower than gcc <= 4.8.\n\nWe submitted a bug for this at: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316\nWhich was marked as a duplicate of: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63155" + "body": "This is a workaround for inefficient handling of many setjmps in gcc >= 4.9. Setjmp is used in all error handling, but in the unit tests each test macro contains an error handling block so they add up pretty quickly for large unit tests.\n\nEnabling -ftree-coalesce-vars in affected versions reduces build time and memory requirements by nearly an order of magnitude. Even so, compiles are much slower than gcc <= 4.8.\n\nWe submitted a bug for this at: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87316\nWhich was marked as a duplicate of: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63155" }, { "commit": "b5f749b21cbf51e3ba9f3bc2129c337cb6ebec92", "date": "2018-09-16 18:41:30 -0400", "subject": "Add CIFS driver to storage helper for read-only repositories.", - "body": "For read-only repositories the Posix and CIFS drivers behave exactly the same. Since that's all we support in C right now it's valid to treat them as the same thing. An assertion has been added to remind us to add the CIFS driver before allowing the repository to be writable.\n\nMostly we want to make sure that the C code does not blow up when the repository type is CIFS." + "body": "For read-only repositories the Posix and CIFS drivers behave exactly the same. Since that's all we support in C right now it's valid to treat them as the same thing. 
An assertion has been added to remind us to add the CIFS driver before allowing the repository to be writable.\n\nMostly we want to make sure that the C code does not blow up when the repository type is CIFS." }, { "commit": "a6c346cb04d8977d37e52072488d89a7fb3a89b6", "date": "2018-09-16 17:26:04 -0400", "subject": "Clear test directory between test runs.", - "body": "Previously it was the responsibility of the individual tests to clean up after themselves. Now the test harness now does the cleanup automatically.\n\nThis means that some paths/files need to be recreated with each run but that doesn't happen very often.\n\nAn attempt has been made to remove all redundant cleanup code but it's hard to know if everything has been caught. No issues will be caused by anything that was missed, but they will continue to chew up time in the tests." + "body": "Previously it was the responsibility of the individual tests to clean up after themselves. Now the test harness does the cleanup automatically.\n\nThis means that some paths/files need to be recreated with each run but that doesn't happen very often.\n\nAn attempt has been made to remove all redundant cleanup code but it's hard to know if everything has been caught. No issues will be caused by anything that was missed, but they will continue to chew up time in the tests." }, { "commit": "4119ce208d1d6de2b096d0465721e93dcb981d3f", "date": "2018-09-16 15:58:46 -0400", "subject": "Move test expect log out of the regular test directory.", - "body": "Storing the expect log (created by common/harnessLog) in the regular test directory was not ideal. It showed up in tests and made it difficult to clear the test directory between each run.\n\nMove the expect log to a purpose-built directory one level up so it does not interfere with regular testing." + "body": "Storing the expect log (created by common/harnessLog) in the regular test directory was not ideal. 
It showed up in tests and made it difficult to clear the test directory between each run.\n\nMove the expect log to a purpose-built directory one level up so it does not interfere with regular testing." }, { "commit": "8852622fa241836b0a6c5c08d755f2c3d990427d", @@ -15075,7 +15075,7 @@ "commit": "aeb1fa3dfbf4a4b7384f8f2bc5a18a23e741396e", "date": "2018-09-13 19:12:40 -0400", "subject": "Don't perform valgrind when requested.", - "body": "The --no-valgrind flag was not being honored. It's not clear if this flag ever worked, but it does now." + "body": "The --no-valgrind flag was not being honored. It's not clear if this flag ever worked, but it does now." }, { "commit": "fd14ceb3995fca731141f09f7c940819138c63a2", @@ -15087,7 +15087,7 @@ "commit": "ab1762663cdcc3564113c5524dc384991228c0a4", "date": "2018-09-13 17:53:48 -0400", "subject": "Don't use negations in objects below Storage.", - "body": "The Storage object represents some some optional parameters as negated if the default is true. This allows sensible defaults without having to specify most optional parameters.\n\nHowever, there's no need to propagate this down to functions that require all parameters to be passed -- it makes the code and logging more confusing. Rename the parameters and update logic to remove negations." + "body": "The Storage object represents some optional parameters as negated if the default is true. This allows sensible defaults without having to specify most optional parameters.\n\nHowever, there's no need to propagate this down to functions that require all parameters to be passed -- it makes the code and logging more confusing. Rename the parameters and update logic to remove negations." 
}, { "commit": "1fb9fe7026f355f86c44fac375a370458c10fc99", @@ -15098,13 +15098,13 @@ "commit": "5aa458ffaecc42cbf6ee37ee4f985d255e31443e", "date": "2018-09-11 18:32:56 -0400", "subject": "Simplify debug logging by allowing log functions to return String objects.", - "body": "Previously, debug log functions had to handle NULLs and truncate output to the available buffer size. This was verbose for both coding and testing.\n\nInstead, create a function/macro combination that allows log functions to return a simple String object. The wrapper function takes care of the memory context, handles NULLs, and truncates the log string based on the available buffer size." + "body": "Previously, debug log functions had to handle NULLs and truncate output to the available buffer size. This was verbose for both coding and testing.\n\nInstead, create a function/macro combination that allows log functions to return a simple String object. The wrapper function takes care of the memory context, handles NULLs, and truncates the log string based on the available buffer size." }, { "commit": "9b9396c7b7745cc4c765285e24ed131e7f72f343", "date": "2018-09-11 15:42:31 -0400", "subject": "Migrate local, unencrypted, non-S3 archive-get command to C.", - "body": "The archive-get command will only be executed in C if the repository is local, unencrypted, and type posix or cifs. Admittedly a limited use case, but this is just the first step in migrating the archive-get command entirely into C.\n\nThis is a direct migration from the Perl code (including messages) to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled." + "body": "The archive-get command will only be executed in C if the repository is local, unencrypted, and type posix or cifs. 
Admittedly a limited use case, but this is just the first step in migrating the archive-get command entirely into C.\n\nThis is a direct migration from the Perl code (including messages) to integrate as seamlessly with the remaining Perl code as possible. It should not be possible to determine if the C version is running unless debug-level logging is enabled." }, { "commit": "787e7c295ff64b7f862693784fa901d8b046a3d4", @@ -15116,7 +15116,7 @@ "commit": "9e574a37dc4a5c03155a00673a944c2da0a51528", "date": "2018-09-11 12:30:48 -0400", "subject": "Make archive-get info messages consistent between C and Perl implementations.", - "body": "The info messages were spread around and logged differently based on the execution path and in some cases logged nothing at all.\n\nTemporarily track the async server status with a flag so that info messages are not output in the async process. The async process will be refactored as a separate command to be exec'd in a future commit." + "body": "The info messages were spread around and logged differently based on the execution path and in some cases logged nothing at all.\n\nTemporarily track the async server status with a flag so that info messages are not output in the async process. The async process will be refactored as a separate command to be exec'd in a future commit." }, { "commit": "6c1d48b0186f4df40787349d37b35d822e043289", @@ -15151,7 +15151,7 @@ "commit": "f7fc8422f780d18d045f42124dc40aa0920fe416", "date": "2018-09-07 16:50:01 -0700", "subject": "Make Valgrind return an error even when a non-fatal issue is detected.", - "body": "By default Valgrind does not exit with an error code when a non-fatal error is detected, e.g. unfreed memory. Use the --error-exitcode option to enabled this behavior.\n\nUpdate some minor issues discovered in the tests as a result. Luckily, no issues were missed in the core code." + "body": "By default Valgrind does not exit with an error code when a non-fatal error is detected, e.g. 
unfreed memory. Use the --error-exitcode option to enabled this behavior.\n\nUpdate some minor issues discovered in the tests as a result. Luckily, no issues were missed in the core code." + "body": "By default Valgrind does not exit with an error code when a non-fatal error is detected, e.g. unfreed memory. Use the --error-exitcode option to enable this behavior.\n\nUpdate some minor issues discovered in the tests as a result. Luckily, no issues were missed in the core code." }, { "commit": "faaa9a91fda78f8e38d0a479675f08a5b17a7302", @@ -15180,13 +15180,13 @@ "commit": "960ad73298fa9c3b9ea60b24428a10ede9a58828", "date": "2018-09-06 10:12:14 -0700", "subject": "Info objects now parse JSON and use specified storage.", - "body": "Use JSON code now that it is available and remove temporary hacks used to get things working initially.\n\nUse passed storage objects rather than using storageLocal(). All storage objects in C are still local but this won't always be the case.\n\nAlso, move Postgres version conversion functions to postgres/info.c since they have no dependency on the info objects and will likely be useful elsewhere." + "body": "Use JSON code now that it is available and remove temporary hacks used to get things working initially.\n\nUse passed storage objects rather than using storageLocal(). All storage objects in C are still local but this won't always be the case.\n\nAlso, move Postgres version conversion functions to postgres/info.c since they have no dependency on the info objects and will likely be useful elsewhere." }, { "commit": "de1b74da0cf9c4f182e3a4cde52d0b5163111a44", "date": "2018-09-06 09:35:34 -0700", "subject": "Move encryption in mock/archive tests to remote tests.", - "body": "The new archive-get C code can't run (yet) when encryption is enabled. Therefore move the encryption tests so we can test the new C code. We'll move it back when encryption is enabled in C.\n\nAlso, push one WAL segment with compression to test decompression in the C code." + "body": "The new archive-get C code can't run (yet) when encryption is enabled. Therefore move the encryption tests so we can test the new C code. We'll move it back when encryption is enabled in C.\n\nAlso, push one WAL segment with compression to test decompression in the C code." 
}, { "commit": "6361a061812b20efeb89db110ebd592a0d52404f", @@ -15198,13 +15198,13 @@ "commit": "800afeef70464da57af023614b1dfeecce5e112c", "date": "2018-09-04 17:47:23 -0400", "subject": "Posix file functions now differentiate between open and missing errors.", - "body": "The Perl functions do so and the integration tests rely on checking for these errors. This has been exposed as more functionality is moved into C.\n\nPassing the errors types is now a bit complicated so instead use a flag to determine which errors to throw." + "body": "The Perl functions do so and the integration tests rely on checking for these errors. This has been exposed as more functionality is moved into C.\n\nPassing the errors types is now a bit complicated so instead use a flag to determine which errors to throw." }, { "commit": "375ff9f9d283842da3620f22c9c5bdb61249ae9e", "date": "2018-08-31 16:06:40 -0400", "subject": "Ignore all files in a linked tablespace directory except the subdirectory for the current version of PostgreSQL.", - "body": "Previously an error would be generated if other files were present and not owned by the PostgreSQL user. This hasn't been a big deal in practice but it could cause issues.\n\nAlso add tests to make sure the same logic applies with links to files, i.e. all other files in the directory should be ignored. This was actually working correctly, but there were no tests for it before." + "body": "Previously an error would be generated if other files were present and not owned by the PostgreSQL user. This hasn't been a big deal in practice but it could cause issues.\n\nAlso add tests to make sure the same logic applies with links to files, i.e. all other files in the directory should be ignored. This was actually working correctly, but there were no tests for it before." 
}, { "commit": "41746b53cd91c0d7203d855f9186c685ea59af47", @@ -15220,13 +15220,13 @@ "commit": "d41570c37a2820380fa1708f31ed21c09e4065f8", "date": "2018-08-31 11:31:13 -0400", "subject": "Improve log file names for remote processes started by locals.", - "body": "The log-subprocess feature added in 22765670 failed to take into account the naming for remote processes spawned by local processes. Not only was the local command used for the naming of log files but the process id was not pass through. This meant every remote log was named \"[stanza]-local-remote-000\" which is confusing and meant multiple processes were writing to the same log.\n\nInstead, pass the real command and process id to the remote. This required a minor change in locking to ignore locks if process id is greater than 0 since remotes started by locals never lock." + "body": "The log-subprocess feature added in 22765670 failed to take into account the naming for remote processes spawned by local processes. Not only was the local command used for the naming of log files but the process id was not pass through. This meant every remote log was named \"[stanza]-local-remote-000\" which is confusing and meant multiple processes were writing to the same log.\n\nInstead, pass the real command and process id to the remote. This required a minor change in locking to ignore locks if process id is greater than 0 since remotes started by locals never lock." }, { "commit": "c2d0a21d63a3c76bf513ac79c622e694ca9dfe1d", "date": "2018-08-30 18:44:40 -0400", "subject": "Allow secrets to be passed via environment variables.", - "body": "When environment variables were added in d0b9f986 they were classified as cfgSourceParam, but one of the restrictions on this type is that they can't pass secrets because they might be exposed in the process list.\n\nThe solution is to reclassify environment variables as cfgSourceConfig. 
This allows them to handle secrets because they will not pass values to subprocesses as parameters. Instead, each subprocess is expected to check the environment directly during configuration parsing.\n\nIn passing, move the error about secrets being passed on the command-line up to command-line parsing and make the error more generic with respect to the configuration file now that multiple configuration files are allowed." + "body": "When environment variables were added in d0b9f986 they were classified as cfgSourceParam, but one of the restrictions on this type is that they can't pass secrets because they might be exposed in the process list.\n\nThe solution is to reclassify environment variables as cfgSourceConfig. This allows them to handle secrets because they will not pass values to subprocesses as parameters. Instead, each subprocess is expected to check the environment directly during configuration parsing.\n\nIn passing, move the error about secrets being passed on the command-line up to command-line parsing and make the error more generic with respect to the configuration file now that multiple configuration files are allowed." }, { "commit": "70514061fdca620ba99dd0e36bb99dccbca43e1d", @@ -16026,7 +16026,7 @@ "commit": "4744eb93878da6b7aa2173be16928b081c865b1d", "date": "2018-04-11 08:21:09 -0400", "subject": "Add storagePathRemove() and use it in the Perl Posix driver.", - "body": "This implementation should be faster because it does not stat each file. It simply assumes that most directory entries are files so attempts an unlink() first. If the entry is reported by error codes to be a directory then it attempts an rmdir()." + "body": "This implementation should be faster because it does not stat each file. It simply assumes that most directory entries are files so attempts an unlink() first. If the entry is reported by error codes to be a directory then it attempts an rmdir()." 
}, { "commit": "c9ce20d41a0c6edc3087511eaff5e0a80e95c8da", @@ -16443,7 +16443,7 @@ "commit": "d4418e7764bf6b3d3e8b6d2bc204cfd0e00bca13", "date": "2018-02-21 18:15:40 -0500", "subject": "Rename pg-primary and pg-standby variables to pg1 and pg2.", - "body": "It would be better if the hostnames were also pg1 and pg2 to illustrate that primaries and standbys can change hosts, but at this time the configuration ends up being confusing since pg1, pg2, etc. are also used in the option naming. So, for now leave the names as pg-primary and pg-standby to avoid confusion." + "body": "It would be better if the hostnames were also pg1 and pg2 to illustrate that primaries and standbys can change hosts, but at this time the configuration ends up being confusing since pg1, pg2, etc. are also used in the option naming. So, for now leave the names as pg-primary and pg-standby to avoid confusion." }, { "commit": "5eb682a569cdc4f33ea053e3619163c98aefcdd1", @@ -16546,7 +16546,7 @@ "commit": "5f2884cb296a2c71dd9d6432c2c387efc96ceeef", "date": "2018-02-14 16:46:52 -0500", "subject": "Suppress coverage failures for Archive/Push/Async on Travis.", - "body": "The coverage report shows some code as never being run -- but that makes no sense because the tests pass. This may be due to trying to combine the C and Perl coverage reports and overwriting some runs.\n\nSuppress for now with a plan to implement LCOV for the C unit tests." + "body": "The coverage report shows some code as never being run -- but that makes no sense because the tests pass. This may be due to trying to combine the C and Perl coverage reports and overwriting some runs.\n\nSuppress for now with a plan to implement LCOV for the C unit tests." 
}, { "commit": "a907fd7d2d21f01714427d0c769e3973a99f1a45", @@ -17609,7 +17609,7 @@ "commit": "2310e423e98259838826ccc8093e43668fab061a", "date": "2017-06-27 16:47:40 -0400", "subject": "Fixed an issue that prevented tablespaces from being backed up on PostgreSQL ≤ 8.4.", - "body": "The integration tests that were supposed to prevent this regression did not work as intended. They verified the contents of a table in the (supposedly) restored tablespace, deleted the table, and then deleted the tablespace. All of this was deemed sufficient to prove that the tablespace had been restored correctly and was valid.\n\nHowever, PostgreSQL will happily recreate a tablespace on the basis of a single full-page write, at least in the affected versions. Since writes to the test table were replayed from WAL with each recovery, all the tests passed even though the tablespace was missing after the restore.\n\nThe tests have been updated to include direct comparisons against the file system and a new table that is not replayed after a restore because it is created before the backup and never modified again.\n\nVersions ≥ 9.0 were not affected due to numerous synthetic integration tests that verify backups and restores file by file." + "body": "The integration tests that were supposed to prevent this regression did not work as intended. They verified the contents of a table in the (supposedly) restored tablespace, deleted the table, and then deleted the tablespace. All of this was deemed sufficient to prove that the tablespace had been restored correctly and was valid.\n\nHowever, PostgreSQL will happily recreate a tablespace on the basis of a single full-page write, at least in the affected versions. 
Since writes to the test table were replayed from WAL with each recovery, all the tests passed even though the tablespace was missing after the restore.\n\nThe tests have been updated to include direct comparisons against the file system and a new table that is not replayed after a restore because it is created before the backup and never modified again.\n\nVersions ≥ 9.0 were not affected due to numerous synthetic integration tests that verify backups and restores file by file." }, { "commit": "fdabf33604cdc5f253187d559b3d4be52c2d843d", @@ -17842,7 +17842,7 @@ "commit": "5c635e0f0a1f5dc8ab8ef8c65204e81d3495afcb", "date": "2017-04-12 18:36:33 -0400", "subject": "Go back to using static user for documentation.", - "body": "Making this dynamic in commit 5d2e792 broke doc builds from cache. The long-term solution is to create a special user for doc builds but that’s beyond the scope of this release." + "body": "Making this dynamic in commit 5d2e792 broke doc builds from cache. The long-term solution is to create a special user for doc builds but that’s beyond the scope of this release." }, { "commit": "f207dc71238097a0fded50cc87bb7ef2afb006de", @@ -18508,7 +18508,7 @@ "commit": "dbd16d25b97ba2d2cd86bcfb35e1ca41a2323c33", "date": "2016-11-22 17:29:24 -0500", "subject": "Fixed regression in section links introduced in v1.10.", - "body": "This was introduced in an effort to make the html output XHTML 1.0 STRICT compliant because the standard does not allow / characters in anchors.\n\nHowever, the / characters were changed to . in the anchors but not in the links. For now revert the anchors to / so further though can be given to this issue." + "body": "This was introduced in an effort to make the html output XHTML 1.0 STRICT compliant because the standard does not allow / characters in anchors.\n\nHowever, the / characters were changed to . in the anchors but not in the links. For now revert the anchors to / so further though can be given to this issue." 
}, { "commit": "c9b49b0d7e1f0cc612511c1691dcde89ec0cee92", @@ -19103,7 +19103,7 @@ "commit": "bd25223fd6167ef2e1cc4e9788de46371a647572", "date": "2016-06-24 10:54:31 -0400", "subject": "Rename test paths for clarity.", - "body": "This was worked out as part of the test suite refactor [c8f806a] but not committed with it because of the large number of expect logs changes involved. Keeping them separate made it easier to audit the changes in the refactor." + "body": "This was worked out as part of the test suite refactor [c8f806a] but not committed with it because of the large number of expect logs changes involved. Keeping them separate made it easier to audit the changes in the refactor." }, { "commit": "c8f806a293813323affea96c73b1aa177a0ac15a", @@ -19132,7 +19132,7 @@ "commit": "012405a33b01a3f215a0ee610a2648c5f5afb0f6", "date": "2016-06-18 09:55:00 -0400", "subject": "Closed #207: Expire fails with unhandled exception.", - "body": "* Fixed an issue where the expire command would refuse to run when explicitly called from the command line if the db-host option was set. This was not an issue when expire was run after a backup, which is the usual case.\n* Option handling is now far more strict. Previously it was possible for a command to use an option that was not explicitly assigned to it. This was especially true for the backup-host and db-host options which are used to determine locality." + "body": "* Fixed an issue where the expire command would refuse to run when explicitly called from the command line if the db-host option was set. This was not an issue when expire was run after a backup, which is the usual case.\n* Option handling is now far more strict. Previously it was possible for a command to use an option that was not explicitly assigned to it. This was especially true for the backup-host and db-host options which are used to determine locality." 
}, { "commit": "e988b96eceafd1619d2f8c51c94e38523a051f54", @@ -19281,7 +19281,7 @@ "commit": "c8d68bcf2d33b206634d0d537ef6d4bfb84229ba", "date": "2016-05-26 10:34:10 -0400", "subject": "More detailed release notes.", - "body": "Release notes are now broken into sections so that bugs, features, and refactors are clearly delineated. An \"Additional Notes\" section has been added for changes to documentation and the test suite that do not affect the core code." + "body": "Release notes are now broken into sections so that bugs, features, and refactors are clearly delineated. An \"Additional Notes\" section has been added for changes to documentation and the test suite that do not affect the core code." }, { "commit": "0fb8bcbfb7107b83c345dac1858aab02eeeb4302", @@ -19293,7 +19293,7 @@ "commit": "5a85122841ff38bea5b7b3d02a5d8ace40a4e81f", "date": "2016-05-26 09:20:55 -0400", "subject": "Moved change log to website.", - "body": "The change log was the last piece of documentation to be rendered in Markdown only. Wrote a converter so the document can be output by the standard renderers. The change log will now be located on the website and has been renamed to \"Releases\"." + "body": "The change log was the last piece of documentation to be rendered in Markdown only. Wrote a converter so the document can be output by the standard renderers. The change log will now be located on the website and has been renamed to \"Releases\"." }, { "commit": "e2094c3d312a69544cd144e4758508c7966075b8", @@ -19361,7 +19361,7 @@ "commit": "9b5a27f6578b7041d72f8527e71c35aa60c0e8ab", "date": "2016-05-14 10:39:56 -0400", "subject": "Add Manifest->addFile().", - "body": "Some files need to be added to the manifest after the initial build. This is currently done in only one place but usage will expand in the future so the functionality has been encapsulated in addFile()." + "body": "Some files need to be added to the manifest after the initial build. 
This is currently done in only one place but usage will expand in the future so the functionality has been encapsulated in addFile()." }, { "commit": "77b01e980fc82d81585531bd688d8a32bed759d5", @@ -19373,13 +19373,13 @@ "commit": "512d006346a06ef2eaf7af7ac2863a2c1a62f3de", "date": "2016-05-14 10:33:12 -0400", "subject": "Refactor database version identification for archive and backup commands.", - "body": "Added database version constants and changed version identification code to use hash tables instead of if-else. Propagated the db version constants to the rest of the code and in passing fixed some path/filename constants.\n\nAdded new regression tests to check that specific files are never copied." + "body": "Added database version constants and changed version identification code to use hash tables instead of if-else. Propagated the db version constants to the rest of the code and in passing fixed some path/filename constants.\n\nAdded new regression tests to check that specific files are never copied." }, { "commit": "4d9920cc48d871035a9a601095c6700510dd5388", "date": "2016-05-14 10:29:35 -0400", "subject": "Fix null and linefeed handling in Db->executeSql().", - "body": "The join() used was not able to handle nulls and was replaced by a loop. An injudicious trim was removed when the source of extra linefeeds was determined to be an additional loop execution that was not handled correctly." + "body": "The join() used was not able to handle nulls and was replaced by a loop. An injudicious trim was removed when the source of extra linefeeds was determined to be an additional loop execution that was not handled correctly." 
}, { "commit": "0c320e7df77a63e3bbebf43dc931b5884531a772", @@ -19391,13 +19391,13 @@ "commit": "e430f0a05439ff851d3c4fa8b5a19cd1c743b945", "date": "2016-05-11 08:59:34 -0400", "subject": "Added `--db-version=minimal` option as default.", - "body": "This change assigns each version of PostgreSQL to a specific OS version for testing to minimize the number of tests being run. In general, older versions of PostgreSQL are assigned to older OS versions.\n\nThe old behavior can be enabled with `--db-version=all`." + "body": "This change assigns each version of PostgreSQL to a specific OS version for testing to minimize the number of tests being run. In general, older versions of PostgreSQL are assigned to older OS versions.\n\nThe old behavior can be enabled with `--db-version=all`." }, { "commit": "a6a19e3735f5edfa449fe351372b9d4be625d03e", "date": "2016-05-10 18:12:37 -0400", "subject": "Test directories are now located on the host VM rather than in the Docker container.", - "body": "This change allows for easier testing since all files are local on the host VM and can be easily accessed without using `docker exec`. In addition, this change is required to allow multiple Docker containers per test case which is coming soon." + "body": "This change allows for easier testing since all files are local on the host VM and can be easily accessed without using `docker exec`. In addition, this change is required to allow multiple Docker containers per test case which is coming soon." }, { "commit": "60b901948af90126700685b5424f332a490d1ad1", @@ -19455,8 +19455,8 @@ { "commit": "ed20c2eda3790b6a327f05a1c08761cdec38f8eb", "date": "2016-04-16 16:38:44 -0400", - "subject": "Close #172: Unable to unpack Int64 when running on 32-bit OS", - "body": "Added a note to documentation that only 64-bit distributions are supported. It seems unlikely that anybody would be running a production server on anything else these days so we'll wait for a field report before taking further action." 
+ "subject": "Close #172: Unable to unpack Int64 when running on 32-bit OS", + "body": "Added a note to documentation that only 64-bit distributions are supported. It seems unlikely that anybody would be running a production server on anything else these days so we'll wait for a field report before taking further action." }, { "commit": "dee3e86ff8444689f427147e9199070b8bb46ab3", @@ -19494,7 +19494,7 @@ "commit": "885797e4b58a675487a3531ba16908c1d1e9f970", "date": "2016-04-13 19:09:35 -0400", "subject": "Migrated many functions from File.pm to FileCommon.pm.", - "body": "This makes make the migrated file functions available to parts of the code that don't have access to a File object. They still exist as wrappers in the File object to support remote calls." + "body": "This makes make the migrated file functions available to parts of the code that don't have access to a File object. They still exist as wrappers in the File object to support remote calls." }, { "commit": "be8487dbad8a52f8b1f5083e3d7e2e3e47140dce", @@ -19516,7 +19516,7 @@ "commit": "0e4fdda6d8f7da0b737f2ffd3f0d1ad9ec3a33ab", "date": "2016-04-12 15:50:25 -0400", "subject": "Improved error handling when remote closes unexpectedly.", - "body": "In conditions where an error is known to have occurred wait to try and capture the error in the first call that detects the error. Due to timing sometimes the error could be caught later, which worked, but it made the functionality inconsistent in testing." + "body": "In conditions where an error is known to have occurred wait to try and capture the error in the first call that detects the error. Due to timing sometimes the error could be caught later, which worked, but it made the functionality inconsistent in testing." 
}, { "commit": "8adbcccd02d133f9650486019911f41afe80100d", @@ -19579,7 +19579,7 @@ "commit": "0b317d9040d33c46f466155f9873757c11517884", "date": "2016-02-27 10:11:58 -0500", "subject": "Fix minor bug in protocol compression.", - "body": "This erroneous last caused a warning (which threw an error) and masked the error in decompression. It was found when accidentally attempting to decompress an already-decompressed file, so not a big deal in practice which is probably why it hug around for so long." + "body": "This erroneous last caused a warning (which threw an error) and masked the error in decompression. It was found when accidentally attempting to decompress an already-decompressed file, so not a big deal in practice which is probably why it hug around for so long." }, { "commit": "d4c46acf489910ccf248f7e04438a8108647d03a", @@ -19591,7 +19591,7 @@ "commit": "048571e23fc9e5b0756c0f19590fe4b74706a01e", "date": "2016-02-23 09:25:22 -0500", "subject": "Closed #173: Add static source code analysis", - "body": "Perl Critic added and passes on gentle. A policy file has been created with some permanent exceptions and a list of policies to be fixed in approximately the order they should be fixed in." + "body": "Perl Critic added and passes on gentle. A policy file has been created with some permanent exceptions and a list of policies to be fixed in approximately the order they should be fixed in." }, { "commit": "d35ab82a83895049ea542b3a53b2e90f9428d176", @@ -19793,7 +19793,7 @@ "commit": "ba098d7b91c189d8ee7f45a157ba8e5a243eb85b", "date": "2015-12-24 10:32:25 -0500", "subject": "Fixed an issue where longer-running backups/restores would timeout when remote and threaded.", - "body": "Keepalives are now used to make sure the remote for the main process does not timeout while the thread remotes do all the work. The error messages for timeouts was also improved to make debugging easier." 
+ "body": "Keepalives are now used to make sure the remote for the main process does not timeout while the thread remotes do all the work. The error messages for timeouts was also improved to make debugging easier." }, { "commit": "b0a6954671dcee50e16a72f4c44d0eaa84b67c5c", @@ -19960,7 +19960,7 @@ "commit": "d7e3be1ebff5d0d017a7406552acd9b3a97f0bb8", "date": "2015-09-08 18:29:13 -0400", "subject": "Fixed issue #138: Fix --no-start-stop working on running db without --force.", - "body": "Unable to reproduce this anymore. It seems to have been fixed with the last round of config changes. Add regression tests to make sure it doesn't happen again." + "body": "Unable to reproduce this anymore. It seems to have been fixed with the last round of config changes. Add regression tests to make sure it doesn't happen again." }, { "commit": "b17bf31fb6dbe67b6efdcd634a4d29ede1cf9d76", @@ -20065,7 +20065,7 @@ "commit": "12e8a7572c07fc4ea1d44ba944e530c391396047", "date": "2015-08-08 00:51:58 -0400", "subject": "Worked on issue #122: 9.5 Integration.", - "body": "Expiration tests worked differently with checkpoint_segments. Only allow this test < 9.5 until purely synthetic tests are written." + "body": "Expiration tests worked differently with checkpoint_segments. Only allow this test < 9.5 until purely synthetic tests are written." }, { "commit": "2edf5d4bf7b47b4fd3fd446740a82c35be84dbda", @@ -20082,7 +20082,7 @@ "commit": "4e7bd4468a785a00c37b5d29262d86f76f725a78", "date": "2015-08-06 16:36:55 -0400", "subject": "Worked on issue #122: 9.5 Integration.", - "body": "Most tests are working now. What's not working:\n\n1) --target-resume option fails because pause_on_recovery setting was removed. Need to implement to the new 9.5 option and make that work with older versions in a consistent way.\n2) No tests for the new .partial WAL segments that can be generated on timeline switch." + "body": "Most tests are working now. 
What's not working:\n\n1) --target-resume option fails because pause_on_recovery setting was removed. Need to implement to the new 9.5 option and make that work with older versions in a consistent way.\n2) No tests for the new .partial WAL segments that can be generated on timeline switch." }, { "commit": "adb8a009255fdfcd726bf7ded3aef60c80b645cf", @@ -20093,7 +20093,7 @@ "commit": "ca1fd9740aede681866e8a52bcece052de631597", "date": "2015-08-06 00:00:30 -0400", "subject": "Working on issue #117: Refactor expiration tests to be purely synthetic.", - "body": "Split BackupTest.pm into two modules. It was getting ungainly to work on." + "body": "Split BackupTest.pm into two modules. It was getting ungainly to work on." }, { "commit": "8b57188bc1e5e6941bf46e9fc4cfd6d8e5bd7856", @@ -20130,7 +20130,7 @@ "commit": "021afa804638c609a7a8074a9502f4bf05c01688", "date": "2015-08-01 17:26:15 -0400", "subject": "Ensure that info output is terminated by a linefeed.", - "body": "On some systems the JSON->encode() function was adding a linefeed and on others it was not. This was causing regression test failures in in the test logs and may have also been inconvenient for users." + "body": "On some systems the JSON->encode() function was adding a linefeed and on others it was not. This was causing regression test failures in in the test logs and may have also been inconvenient for users." }, { "commit": "1b0f997f59ef67358b7091fe832b9a5b026da0de", @@ -20227,7 +20227,7 @@ "commit": "7248795b91dcc6fe749561afb556f625dfe20497", "date": "2015-06-29 22:07:42 -0400", "subject": "Work on issue #48: Abandon threads and go to processes", - "body": "Replaced IPC::System::Simple and Net::OpenSSH with IPC::Open3 to eliminate CPAN dependency for multiple distros. Using open3 will also be used for local processes so it make sense to switch now." + "body": "Replaced IPC::System::Simple and Net::OpenSSH with IPC::Open3 to eliminate CPAN dependency for multiple distros. 
Using open3 will also be used for local processes so it make sense to switch now." }, { "commit": "c59adfc68dd10e2ab42a686cace1829b6e4d4e2a", @@ -20250,7 +20250,7 @@ "commit": "f210fe99c33fbab478fc3e75cc09bd5e5089446f", "date": "2015-06-22 13:11:07 -0400", "subject": "Implemented issue #109: Move VERSION into source code.", - "body": "Also stopped replacing FORMAT number which explains the large number of test log changes. FORMAT should change very rarely and cause test log failures when it does." + "body": "Also stopped replacing FORMAT number which explains the large number of test log changes. FORMAT should change very rarely and cause test log failures when it does." }, { "commit": "af580168712ac0c370cc5532bb34ab1805823ffa", @@ -20265,7 +20265,7 @@ { "commit": "3f841fcd95c17541d5f04012a72153a8d5cbd8ce", "date": "2015-06-22 09:51:16 -0400", - "subject": "Improved issue #110: 'db-version' is required but not defined.", + "subject": "Improved issue #110: 'db-version' is required but not defined.", "body": "Improved the error message and added hints." }, { @@ -20347,7 +20347,7 @@ { "commit": "b865070eddb4eedbeab70e5de287e250680b3f85", "date": "2015-06-14 10:12:36 -0400", - "subject": "Experimental 9.5 support. Unit tests are not working yet." + "subject": "Experimental 9.5 support. Unit tests are not working yet." }, { "commit": "0b6f81a812a02c34b2f8cc9e5b14139d27dc4895", @@ -20368,7 +20368,7 @@ "commit": "148836fe44e81622d1836f851b8b87e21d32188e", "date": "2015-06-13 18:25:49 -0400", "subject": "Implemented issue #26: Info command.", - "body": "* Includes updating the manifest to format 4. It turns out the manifest and .info files were not very good for providing information. A format update was required anyway so worked through the backlog of changes that would require a format change.\n\n* Multiple database versions are now supported in the archive. 
Doesn't actually work yet but the structure should be good.\n\n* Tests use more constants now that test logs can catch name regressions." + "body": "* Includes updating the manifest to format 4. It turns out the manifest and .info files were not very good for providing information. A format update was required anyway so worked through the backlog of changes that would require a format change.\n\n* Multiple database versions are now supported in the archive. Doesn't actually work yet but the structure should be good.\n\n* Tests use more constants now that test logs can catch name regressions." }, { "commit": "5a73b1a2408fd22f90417fcbf70edd41a6a99f3b", @@ -20404,7 +20404,7 @@ { "commit": "200e3b26fe902ae1c71a5954c72455902676f90a", "date": "2015-05-29 22:11:51 -0400", - "subject": "Prevent log test regexp replacement when multithreaded. Logging should be completely disabled in this case." + "subject": "Prevent log test regexp replacement when multithreaded. Logging should be completely disabled in this case." }, { "commit": "1586e0eb7555a6f2d44925b3572ab2457ff3e069", @@ -20431,7 +20431,7 @@ "commit": "d321ef0b6d5af0c1dce8fd5701edbadffffce5af", "date": "2015-05-29 12:26:31 -0400", "subject": "Implement issue #89: Make confess backtraces log-level dependent.", - "body": "ASSERTs still dump stack traces to the console and file in all cases. ERRORs only dump stack traces to the file when the file log level is DEBUG or TRACE." + "body": "ASSERTs still dump stack traces to the console and file in all cases. ERRORs only dump stack traces to the file when the file log level is DEBUG or TRACE." }, { "commit": "13e4eec629df044c90c64424a6992ff3e99b5173", @@ -20476,7 +20476,7 @@ { "commit": "e9099b99aad5cec0b87668fa90babfc18472e63f", "date": "2015-05-26 10:01:05 -0400", - "subject": "Updated required modules. Minor doc fixes." + "subject": "Updated required modules. Minor doc fixes." 
}, { "commit": "d5335b40e83539bf137ff9051ecac84e6d85a2b6", @@ -20516,7 +20516,7 @@ { "commit": "1ac4b781fd304fcc6aa663afc324cdb2d1fdcb06", "date": "2015-05-07 15:56:56 -0600", - "subject": "Better info logging for restore. Most of the messages were debug before and some important ones were missing." + "subject": "Better info logging for restore. Most of the messages were debug before and some important ones were missing." }, { "commit": "095a9a0b831877b9b5d62b4b37da4caca0edb7df", @@ -20531,7 +20531,7 @@ { "commit": "56588f6fdd30378728372e1dea2a4920c043acb3", "date": "2015-05-05 11:08:48 -0600", - "subject": "Log testing can now be enabled for certain deterministic tests. This works by comparing the generated logs against a previous copy. Currently only enabled for the backup/synthetic tests." + "subject": "Log testing can now be enabled for certain deterministic tests. This works by comparing the generated logs against a previous copy. Currently only enabled for the backup/synthetic tests." }, { "commit": "1d1c7e47d149fc9b3b9b0c1db6238e998d56c804", @@ -20621,8 +20621,8 @@ { "commit": "43d86e64a4ce73d69c4d9c73f32ef5cff23d5344", "date": "2015-04-07 18:36:59 -0400", - "subject": "First pass at tests comparing rsync to backrest. Decent results, but room for improvement.", - "body": "All tests local over SSH with rsync default compression, 4 threads and default compression on backrest. Backrest default is gzip = 6, assuming rsync is the same.\n\nOn a 1GB DB:\n\nrsync time = 32.82\nbackrest time = 19.48\n\nbackrest is 171% faster.\n\nOn a 5GB DB:\n\nrsync time = 171.16\nbackrest time = 86.97\n\nbackrest is 196% faster." + "subject": "First pass at tests comparing rsync to backrest. Decent results, but room for improvement.", + "body": "All tests local over SSH with rsync default compression, 4 threads and default compression on backrest. 
Backrest default is gzip = 6, assuming rsync is the same.\n\nOn a 1GB DB:\n\nrsync time = 32.82\nbackrest time = 19.48\n\nbackrest is 171% faster.\n\nOn a 5GB DB:\n\nrsync time = 171.16\nbackrest time = 86.97\n\nbackrest is 196% faster." }, { "commit": "7081c8b86701c88abfee03b01382364f79a7cae9", @@ -20637,7 +20637,7 @@ { "commit": "3f651a8ce8d489a680e08467b9baccd7edef188b", "date": "2015-04-02 22:07:23 -0400", - "subject": "Unit tests will now work across all installed versions of Postgres. Created a function to list all supported versions. Now used for all version checking." + "subject": "Unit tests will now work across all installed versions of Postgres. Created a function to list all supported versions. Now used for all version checking." }, { "commit": "0b27c749991ac58de400c81c5e0fe4d0e890f957", @@ -20717,7 +20717,7 @@ { "commit": "258fb9c6e24445e8be390b729cae005208decdca", "date": "2015-03-16 14:01:01 -0400", - "subject": "More work on automated docs. Merging this to go back to some feature/bug work for a while." + "subject": "More work on automated docs. Merging this to go back to some feature/bug work for a while." }, { "commit": "882f068254242ed06bceb57e14f5f52e5ed317f8", @@ -20727,12 +20727,12 @@ { "commit": "7675a11dedaeebaeb5f546bb6c1849f989af794a", "date": "2015-03-08 14:05:41 -0400", - "subject": "First pass at building automated docs for markdown/html. This works pretty well, but the config sections of doc.xml still require too much maintenance. With the new Config code, it should be possible to generate those sections automatically." + "subject": "First pass at building automated docs for markdown/html. This works pretty well, but the config sections of doc.xml still require too much maintenance. With the new Config code, it should be possible to generate those sections automatically." 
}, { "commit": "ae6bdecfaf47893346443620a0a57586cf22ebdf", "date": "2015-03-08 13:26:09 -0400", - "subject": "Split command-line parameter processing out into a separate file. This is in preparation allowing all parameters to be specified/overridden on the command line, with pg_backrest.conf being option." + "subject": "Split command-line parameter processing out into a separate file. This is in preparation allowing all parameters to be specified/overridden on the command line, with pg_backrest.conf being option." }, { "commit": "f115e01b71b23f1fc4fc8b98c42993a31b87a3e0", @@ -20752,7 +20752,7 @@ { "commit": "bfb2c05357521718ef115ca40af60ac51f11ed69", "date": "2015-03-04 00:36:54 -0500", - "subject": "Started on new HTML docs. The markdown is not cutting it." + "subject": "Started on new HTML docs. The markdown is not cutting it." }, { "commit": "942b29d5b42a9ad62c96023def0c8cf1a3fc6b19", @@ -20762,7 +20762,7 @@ { "commit": "d19baefdb99e4853f3dec7abaae757b18e117a16", "date": "2015-03-03 22:33:41 -0500", - "subject": "Removed db-timestamp-start and stop from manifest. Better to get these values from" + "subject": "Removed db-timestamp-start and stop from manifest. Better to get these values from" }, { "commit": "7509b01e22352404ef89ded72b8bd413c7f0a726", @@ -20832,7 +20832,7 @@ { "commit": "77bc4238dc1afd854049fd901eb1dce76185371f", "date": "2015-03-01 13:41:35 -0500", - "subject": "ZLib stuff starting to look good. All references removed from File and using binary_xfer for all de/compression." + "subject": "ZLib stuff starting to look good. All references removed from File and using binary_xfer for all de/compression." }, { "commit": "7ede058b456cb708acdcdba93b52a6729adf9380", @@ -20842,7 +20842,7 @@ { "commit": "28326d6b4c4a815882acb1d6e95a28701fd12b2b", "date": "2015-02-28 19:07:29 -0500", - "subject": "File->copy now returns hash and size in all cases, though the local copies are not optimal. They just call hash_size()." 
+ "subject": "File->copy now returns hash and size in all cases, though the local copies are not optimal. They just call hash_size()." }, { "commit": "260a6cb8f1c7a98ad44f1c4ba0d654c6ca49f544", @@ -20862,7 +20862,7 @@ { "commit": "f93c6caec206071c91d814b3d490da5d357ff44f", "date": "2015-02-28 10:23:33 -0500", - "subject": "Backup/restore copy will be run in the main process when thread-max=1. I've resisted this change because it adds complexity, but I have to accept that threads are not stable on all platforms. Or maybe any platform." + "subject": "Backup/restore copy will be run in the main process when thread-max=1. I've resisted this change because it adds complexity, but I have to accept that threads are not stable on all platforms. Or maybe any platform." }, { "commit": "5d10a18b257a92ce520c575a488d8921e8284f5e", @@ -20872,17 +20872,17 @@ { "commit": "d6205d95011e5efb3856f5e62d09571374070492", "date": "2015-02-27 23:31:39 -0500", - "subject": "Looks like all unit tests pass - now for a long test run to see if that is really true. And to see if the old lockup is gone." + "subject": "Looks like all unit tests pass - now for a long test run to see if that is really true. And to see if the old lockup is gone." }, { "commit": "25442655c84cd10d026e23f5ad66aa17e5c96ae5", "date": "2015-02-27 18:42:28 -0500", - "subject": "Hash of compressed file is working. Something still broken in binary_xfer because some 0 length archive files are showing up. Investigating." + "subject": "Hash of compressed file is working. Something still broken in binary_xfer because some 0 length archive files are showing up. Investigating." }, { "commit": "53f783d3fecd725aed2b7e1a203e11e8f8c85e0b", "date": "2015-02-27 16:36:40 -0500", - "subject": "binary_xfer compress/decompression working without threads. All unit tests passing. Hooray." + "subject": "binary_xfer compress/decompression working without threads. All unit tests passing. Hooray." 
}, { "commit": "c18c629878705c01798e099de4f300b26129b679", @@ -20912,7 +20912,7 @@ { "commit": "d2602a5c0762ca7ad1cd275587556f2a280b8098", "date": "2015-02-03 20:33:33 -0500", - "subject": "Tracking down a lockup in the restore threads. It doesn't happen in backup - they are the same except that restore uses the ThreadGroup object. I'm beginning to think that threads and objects don't play together very nicely. Objects in threads seems OK, but threads in objects, not so much." + "subject": "Tracking down a lockup in the restore threads. It doesn't happen in backup - they are the same except that restore uses the ThreadGroup object. I'm beginning to think that threads and objects don't play together very nicely. Objects in threads seems OK, but threads in objects, not so much." }, { "commit": "7bee43372d41df531d6c571140dbb8eed805adf2", @@ -20937,7 +20937,7 @@ { "commit": "7f38461c6814e0e092ee83a3222112aa21c85456", "date": "2015-02-02 18:48:33 -0500", - "subject": "Remove ThreadQueue->end(). Not supported on all platforms." + "subject": "Remove ThreadQueue->end(). Not supported on all platforms." }, { "commit": "bde8943517ad260d87414ba6f8aa03c3e799e790", @@ -20952,7 +20952,7 @@ { "commit": "a6d3b7e1a960052728cc274645ebb011b741ab18", "date": "2015-01-31 23:04:24 -0500", - "subject": "Working on checking restores against the manifest. Current issue is that the manifest does not always record the final size of the file - it may change while the file is being copied. This is fine in principal but makes testing a pain." + "subject": "Working on checking restores against the manifest. Current issue is that the manifest does not always record the final size of the file - it may change while the file is being copied. This is fine in principal but makes testing a pain." 
}, { "commit": "018a2afacadbf9e6a848c1e0ff66f0cd9d732755", @@ -21013,23 +21013,23 @@ { "commit": "11c257296a7632ae760b1ddae66beec2f9e114ca", "date": "2015-01-30 20:16:21 -0500", - "subject": "In the end it was a single non-undefed reference holding up the show. The Backup file should be split into Archive, Backup, Expire, and made into objects. That would cut down on this kind of nastiness." + "subject": "In the end it was a single non-undefed reference holding up the show. The Backup file should be split into Archive, Backup, Expire, and made into objects. That would cut down on this kind of nastiness." }, { "commit": "50e015a8385e347e2ba75fb84f9ab7c5fef7fa70", "date": "2015-01-30 18:58:49 -0500", - "subject": "Revert \"Abortive attempt at cleaning up some thread issues - I realized the issue is in mixing threads and objects too liberally. Trying another approach but want to keep this code for historical and reference purposes.\"", + "subject": "Revert \"Abortive attempt at cleaning up some thread issues - I realized the issue is in mixing threads and objects too liberally. Trying another approach but want to keep this code for historical and reference purposes.\"", "body": "This reverts commit e95631f82ac8c15cb2492bb321703797be54eff6." }, { "commit": "e95631f82ac8c15cb2492bb321703797be54eff6", "date": "2015-01-30 14:55:55 -0500", - "subject": "Abortive attempt at cleaning up some thread issues - I realized the issue is in mixing threads and objects too liberally. Trying another approach but want to keep this code for historical and reference purposes." + "subject": "Abortive attempt at cleaning up some thread issues - I realized the issue is in mixing threads and objects too liberally. Trying another approach but want to keep this code for historical and reference purposes." }, { "commit": "fb934ecce9c204a5ed1d444b8c3876c8393af440", "date": "2015-01-30 14:54:08 -0500", - "subject": "Allow immediate stops when discarding data at end of unit test. 
Makes the shutdowns faster." + "subject": "Allow immediate stops when discarding data at end of unit test. Makes the shutdowns faster." }, { "commit": "19e455afc1c152f212d0de771bb373a03ab5a8a1", @@ -21049,7 +21049,7 @@ { "commit": "139b1cf87242995e41a68dbcb35a10c72da9d592", "date": "2015-01-28 10:29:29 -0500", - "subject": "Fixed small race condition in cleanup - the archiver was recreating paths after they had been deleted. Put in a loop to make sure it gets done." + "subject": "Fixed small race condition in cleanup - the archiver was recreating paths after they had been deleted. Put in a loop to make sure it gets done." }, { "commit": "60550cd45b802b9d1859d9bdd22f5211b9908944", @@ -21064,7 +21064,7 @@ { "commit": "a59bd8c3281237f35da0ad4b7af33953cf53a26b", "date": "2015-01-27 22:59:59 -0500", - "subject": "Restores except for type=none are mostly working. There are some failing unit tests to fix." + "subject": "Restores except for type=none are mostly working. There are some failing unit tests to fix." }, { "commit": "13544d51bf9be1128486b167a3ecb5dc3fb6a09c", @@ -21229,12 +21229,12 @@ { "commit": "d6d57e654e48101fdbd2ef06316f71b395dbdfd1", "date": "2015-01-06 13:08:56 -0500", - "subject": "Fixed the way wait was done after the manifest is created. Previously, waits were done for base and each tablespace which is not very efficient. Now one wait is done after the entire manifest is built. Also storing the exact time that copy began." + "subject": "Fixed the way wait was done after the manifest is created. Previously, waits were done for base and each tablespace which is not very efficient. Now one wait is done after the entire manifest is built. Also storing the exact time that copy began." }, { "commit": "43098086afd9cbf4e487a18fd2c2d6c486e4e224", "date": "2015-01-03 16:49:26 -0500", - "subject": "Implemented timestamp last modified to record the time of the last modified file in the backup. Also added timestamp-db-start and timestamp-db-stop to for more info. 
timestamp-db-start can be used for PITR." + "subject": "Implemented timestamp last modified to record the time of the last modified file in the backup. Also added timestamp-db-start and timestamp-db-stop to for more info. timestamp-db-start can be used for PITR." }, { "commit": "91b06bef475d6ad17036c27337be59db35460f0d", @@ -21249,7 +21249,7 @@ { "commit": "2e080eedb85f7394d40fb14ec6c92b0a3e11aba5", "date": "2015-01-02 14:18:07 -0500", - "subject": "Added an optional delay after manifest build so that files are not copied in the same second that the manifest is built. This can result in (admittedly unlikely) race conditions that can produce an invalid backup. I was also able to reduce the sleep types when waiting for thread termination - so unit test times are improved by almost 100%." + "subject": "Added an optional delay after manifest build so that files are not copied in the same second that the manifest is built. This can result in (admittedly unlikely) race conditions that can produce an invalid backup. I was also able to reduce the sleep types when waiting for thread termination - so unit test times are improved by almost 100%." }, { "commit": "297b22cb2bde9dddf99f557a40f8a6fb6e0807c8", @@ -21259,7 +21259,7 @@ { "commit": "32b37335a14dd1f7ed98200df3869b0443afb2d8", "date": "2014-12-31 19:03:03 -0500", - "subject": "Trying to find realistic conditions where a file can be changed without the timestamp changing between backups. So far, this is the only case I can make work - it looks like adding a 1 second pause after creation of the manifest would cover this case." + "subject": "Trying to find realistic conditions where a file can be changed without the timestamp changing between backups. So far, this is the only case I can make work - it looks like adding a 1 second pause after creation of the manifest would cover this case." 
}, { "commit": "59e901684d0119ec6547356386190bec90837c72", @@ -21514,7 +21514,7 @@ { "commit": "aab5ec2943919c7990d0da202275bd978ce384c1", "date": "2014-09-29 19:39:28 -0400", - "subject": "Converting _ to -. Last one I hope." + "subject": "Converting _ to -. Last one I hope." }, { "commit": "bdbdaf39d35e8bfcc88976179716843571890c23", @@ -21754,7 +21754,7 @@ { "commit": "b48a7e6cc2b4702bf1ad26d652142fbb1c28d931", "date": "2014-08-12 19:17:16 -0400", - "subject": "The backup label (and path name) are now created at the end of the backup instead of the beginning. This makes selecting a backup for PITR much easier." + "subject": "The backup label (and path name) are now created at the end of the backup instead of the beginning. This makes selecting a backup for PITR much easier." }, { "commit": "672c6b2ccb3b5926cd0e6bb0d78524d07bc74db6", @@ -21904,7 +21904,7 @@ { "commit": "c85413ec6837a36ccf3911f07d63f9eb8648e770", "date": "2014-06-29 10:53:39 -0400", - "subject": "Lots of improvements to unit tests. A few bug fixes." + "subject": "Lots of improvements to unit tests. A few bug fixes." }, { "commit": "f9ec149ffe866c1a43b020f1a0e2dfa57b9fdc50", @@ -21914,7 +21914,7 @@ { "commit": "97b9560e5c03eb314282dac3984fc21b5d2a2fad", "date": "2014-06-28 11:47:21 -0400", - "subject": "Fixed binary_xfer() issue. Now seems to work in all cases." + "subject": "Fixed binary_xfer() issue. Now seems to work in all cases." }, { "commit": "9c160a03e36029c8d4eaaf81344f972820a5a2e8", @@ -22184,7 +22184,7 @@ { "commit": "3e12f9230b35bc625cc653b620452090ab9b2260", "date": "2014-05-14 15:07:37 -0400", - "subject": "Working on unit tests for file_copy. Still need to add specific error tests, timestamp, and permissions." + "subject": "Working on unit tests for file_copy. Still need to add specific error tests, timestamp, and permissions." 
}, { "commit": "db40553434035ffc4ab1f61061f429f023bde5b8", @@ -22364,7 +22364,7 @@ { "commit": "571d449717fecee1eac64088bde42606fcddc4eb", "date": "2014-02-27 19:15:00 -0500", - "subject": "Redirect find error output to /dev/null. Sometimes files are removed from the db while find is running. We only want to error if the find process errors." + "subject": "Redirect find error output to /dev/null. Sometimes files are removed from the db while find is running. We only want to error if the find process errors." }, { "commit": "84c4cec257b25fc026f12560d6c0bc5cfb5822ec", @@ -22419,7 +22419,7 @@ { "commit": "0387a8ee09e5fbd66fe8d4dae055f9fea07e6039", "date": "2014-02-21 07:34:17 -0500", - "subject": "replaced process id with thread id. Added use thread to all modules." + "subject": "replaced process id with thread id. Added use thread to all modules." }, { "commit": "ac3ce81621fab54c925eead8c53dee01c797d63e", @@ -22734,12 +22734,12 @@ { "commit": "9d08f2a64494c9528c30531bd95b7306484f229e", "date": "2014-02-06 12:49:54 -0500", - "subject": "More robust config. Retention is read from config." + "subject": "More robust config. Retention is read from config." }, { "commit": "a2b0d7a674a1524f53440be45d333409a3ad1a2b", "date": "2014-02-05 22:26:10 -0500", - "subject": "Backup expiration working again. Other changes." + "subject": "Backup expiration working again. Other changes." }, { "commit": "27986e0c10999660078e7c74b25bc99eb3e028ce", @@ -22874,7 +22874,7 @@ { "commit": "e13a706c0a251baf9f69126653bb2eabb78064b5", "date": "2014-01-29 15:38:57 -0500", - "subject": "More moving. Links now also get references." + "subject": "More moving. Links now also get references." }, { "commit": "54c73b38139127e0acb9a13be6cd7a94d84851ca", @@ -23263,7 +23263,7 @@ { "commit": "bc46aefe61871e27281b9b8c9ffc185c8e2846af", "date": "2013-11-20 22:24:30 -0500", - "subject": "Fixed for OSX. Do not every use TextEditor on code!" + "subject": "Fixed for OSX. Do not ever use TextEditor on code!" 
}, { "commit": "e67821a23096f4788f6bef71e7d4d361b7d9858f", diff --git a/doc/xml/coding.xml b/doc/xml/coding.xml index 2f3ecdf6c..f3d768d33 100644 --- a/doc/xml/coding.xml +++ b/doc/xml/coding.xml @@ -25,7 +25,7 @@
Indentation -

Indentation is four spaces -- no tabs. Only file types that absolutely require tabs (e.g. `Makefile`) may use them.

+

Indentation is four spaces -- no tabs. Only file types that absolutely require tabs (e.g. `Makefile`) may use them.

@@ -96,7 +96,7 @@ typedef struct InlineCommentExample nameIdx - loop variable for iterating through a list of names -

Variable names should be descriptive. Avoid i, j, etc.

+

Variable names should be descriptive. Avoid i, j, etc.

@@ -124,7 +124,7 @@ typedef struct InlineCommentExample

The value should be aligned at column 69 whenever possible.

-

This type of constant should mostly be used for strings. Use enums whenever possible for integer constants.

+

This type of constant should mostly be used for strings. Use enums whenever possible for integer constants.

String Constants

@@ -143,7 +143,7 @@ typedef struct InlineCommentExample STRING_EXTERN(SAMPLE_VALUE_STR, SAMPLE_VALUE); -

Static strings declared in the C file are not required to have a #define if the #define version is not used. Externed strings must always have the #define in the header file.

+

Static strings declared in the C file are not required to have a #define if the #define version is not used. Externed strings must always have the #define in the header file.

Enum Constants

@@ -157,7 +157,7 @@ typedef enum } CipherMode; -

Note the comma after the last element. This reduces diff churn when new elements are added.

+

Note the comma after the last element. This reduces diff churn when new elements are added.
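A hedged sketch of this convention, patterned on the CipherMode fragment above (the member names are assumed for illustration, not taken from the source):

```c
// Trailing comma after the last element: adding a new mode later
// touches only one line in the diff
typedef enum
{
    cipherModeEncrypt,
    cipherModeDecrypt,
} CipherMode;
```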

@@ -212,7 +212,7 @@ typedef enum
Braces -

C allows braces to be excluded for a single statement. However, braces should be used when the control statement (if, while, etc.) spans more than one line or the statement to be executed spans more than one line.

+

C allows braces to be excluded for a single statement. However, braces should be used when the control statement (if, while, etc.) spans more than one line or the statement to be executed spans more than one line.

No braces needed:

@@ -291,14 +291,14 @@ switch (int)
Macros -

Don't use a macro when a function could be used instead. Macros make it hard to measure code coverage.

+

Don't use a macro when a function could be used instead. Macros make it hard to measure code coverage.

Objects -

Object-oriented programming is used extensively. The object pointer is always referred to as this.

+

Object-oriented programming is used extensively. The object pointer is always referred to as this.

An object can expose internal struct members by defining a public struct that contains the members to be exposed and using inline functions to get/set the members.
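A hedged sketch of the pattern (type and accessor names are hypothetical, loosely modeled on the List object):

```c
// The public struct holds only the members callers may see
typedef struct ListPub
{
    unsigned int listSize;                  // List size, exposed read-only
} ListPub;

// The full object embeds the public struct first, then private members
typedef struct List
{
    ListPub pub;                            // Publicly accessible variables
    void *buffer;                           // Private storage
} List;

// Inline getter reads through the embedded public struct
static inline unsigned int
lstSize(const List *const this)
{
    return this->pub.listSize;
}
```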

@@ -340,9 +340,9 @@ struct List
Variadic Functions -

Variadic functions can take a variable number of parameters. While the printf() pattern is variadic, it is not very flexible in terms of optional parameters given in any order.

+

Variadic functions can take a variable number of parameters. While the printf() pattern is variadic, it is not very flexible in terms of optional parameters given in any order.

-

This project implements variadic functions using macros (which are exempt from the normal macro rule of being all caps). A typical variadic function definition:

+

This project implements variadic functions using macros (which are exempt from the normal macro rule of being all caps). A typical variadic function definition:

typedef struct StoragePathCreateParam @@ -374,7 +374,7 @@ storagePathCreateP(storageLocal(), "/tmp/pgbackrest"); storagePathCreateP(storageLocal(), "/tmp/pgbackrest", .errorOnExists = true, .mode = 0777); -

If the majority of functions in a module or object are variadic it is best to provide macros for all functions even if they do not have variable parameters. Do not use the base function when variadic macros exist.

+

If the majority of functions in a module or object are variadic it is best to provide macros for all functions even if they do not have variable parameters. Do not use the base function when variadic macros exist.
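A hedged, self-contained sketch of the pattern (all names here are hypothetical; storagePathCreateP() above follows the same shape):

```c
#include <stdbool.h>

typedef struct PathCreateParam
{
    bool errorOnExists;                     // Optional: error if the path exists
    int mode;                               // Optional: permissions, 0 = default
} PathCreateParam;

// Base function takes the param struct; callers use the macro below instead
static int
pathCreate(const char *path, PathCreateParam param)
{
    (void)path;                             // Unused in this sketch

    // Apply the default mode when none was specified
    return param.mode == 0 ? 0750 : param.mode;
}

// Optional parameters may be given in any order via designated initializers
#define pathCreateP(path, ...)                                              \
    pathCreate(path, (PathCreateParam){__VA_ARGS__})
```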

@@ -390,7 +390,7 @@ storagePathCreateP(storageLocal(), "/tmp/pgbackrest", .errorOnExists = true, .mo
Uncoverable Code -

The uncoverable keyword marks code that can never be covered. For instance, a function that never returns because it always throws an error. Uncoverable code should be rare to non-existent outside the common libraries and test code.

+

The uncoverable keyword marks code that can never be covered. For instance, a function that never returns because it always throws an error. Uncoverable code should be rare to non-existent outside the common libraries and test code.

} // {uncoverable - function throws error so never returns} @@ -403,7 +403,7 @@ storagePathCreateP(storageLocal(), "/tmp/pgbackrest", .errorOnExists = true, .mo
Uncovered Code -

Marks code that is not tested for one reason or another. This should be kept to a minimum and an excuse given for each instance.

+

Marks code that is not tested for one reason or another. This should be kept to a minimum and an excuse given for each instance.

exit(EXIT_FAILURE); // {uncovered - test harness does not support non-zero exit} diff --git a/doc/xml/contributing.xml b/doc/xml/contributing.xml index 3bbf00e85..f40ed89d5 100644 --- a/doc/xml/contributing.xml +++ b/doc/xml/contributing.xml @@ -107,7 +107,7 @@ -

Some unit tests and all the integration tests require Docker. Running in containers allows us to simulate multiple hosts, test on different distributions and versions of <postgres/>, and use sudo without affecting the host system.

+

Some unit tests and all the integration tests require Docker. Running in containers allows us to simulate multiple hosts, test on different distributions and versions of <postgres/>, and use sudo without affecting the host system.

Install Docker @@ -131,7 +131,7 @@ -

This clone of the repository is sufficient for experimentation. For development, create a fork and clone that instead.

+

This clone of the repository is sufficient for experimentation. For development, create a fork and clone that instead.

Clone <backrest/> repository @@ -180,7 +180,7 @@ /* - * HEADER FILE - see db.h for a complete implementation example + * HEADER FILE - see db.h for a complete implementation example */ // Typedef the object declared in the C file @@ -738,7 +738,7 @@ run 8/1 ------------- L2285 no current backups <option id="force" name="Force"> <summary>Force a restore.</summary> - <text>By itself this option forces the <postgres/> data and tablespace paths to be completely overwritten. In combination with <br-option>--delta</br-option> a timestamp/size delta will be performed instead of using checksums.</text> + <text>By itself this option forces the <postgres/> data and tablespace paths to be completely overwritten. In combination with <br-option>--delta</br-option> a timestamp/size delta will be performed instead of using checksums.</text> <example>y</example> </option> @@ -789,7 +789,7 @@ pgbackrest/doc/doc.pl --out=html --no-exe pgbackrest/doc/doc.pl --out=html --include=user-guide --require=/quickstart --var=encrypt=n --no-cache --pre -

The resulting Docker containers can be listed with docker ps and the container can be entered with docker exec doc-pg-primary bash. Additionally, the -u option can be added for entering the container as a specific user (e.g. postgres).

+

The resulting Docker containers can be listed with docker ps and the container can be entered with docker exec doc-pg-primary bash. Additionally, the -u option can be added for entering the container as a specific user (e.g. postgres).

diff --git a/doc/xml/documentation.xml b/doc/xml/documentation.xml index 28019ee1c..c78251715 100644 --- a/doc/xml/documentation.xml +++ b/doc/xml/documentation.xml @@ -21,7 +21,7 @@ ./doc.pl --out=html --include=user-guide --var=os-type=rhel -

Documentation generation will build a cache of all executed statements and use the cache to build the documentation quickly if no executed statements have changed. This makes proofing text-only edits very fast, but sometimes it is useful to do a full build without using the cache:

+

Documentation generation will build a cache of all executed statements and use the cache to build the documentation quickly if no executed statements have changed. This makes proofing text-only edits very fast, but sometimes it is useful to do a full build without using the cache:

./doc.pl --out=html --include=user-guide --var=os-type=rhel --no-cache @@ -65,9 +65,9 @@ sudo usermod -aG docker testdoc
Building with Packages -

A user-specified package can be used when building the documentation. Since the documentation exercises most functionality this is a great way to smoke-test packages.

+

A user-specified package can be used when building the documentation. Since the documentation exercises most functionality this is a great way to smoke-test packages.

-

The package must be located within the repo and the specified path should be relative to the repository base. test/package is a good default path to use.

+

The package must be located within the repo and the specified path should be relative to the repository base. test/package is a good default path to use.

Ubuntu 16.04:

diff --git a/doc/xml/faq.xml b/doc/xml/faq.xml index d2927aa6c..1c507f0e4 100644 --- a/doc/xml/faq.xml +++ b/doc/xml/faq.xml @@ -136,7 +136,7 @@ process-max=1

It is often desirable to restore the latest backup from a production server to a development server. In principle, the instructions are the same as in setting up a hot standby with a few exceptions.

-

NEED TO ELABORATE HERE: Need an example of the restore command - what settings are different? Would they be {[dash]}-target, {[dash]}-target-action=promote, {[dash]}-type=immediate on the command-line? What about in the POSTGRES (e.g. hot_standby = on / wal_level = hot_standby - these would be different, no?) and PGBACKREST (e.g. would recovery-option=standby_mode=on still be set?) config files

+

NEED TO ELABORATE HERE: Need an example of the restore command - what settings are different? Would they be {[dash]}-target, {[dash]}-target-action=promote, {[dash]}-type=immediate on the command-line? What about in the POSTGRES (e.g. hot_standby = on / wal_level = hot_standby - these would be different, no?) and PGBACKREST (e.g. would recovery-option=standby_mode=on still be set?) config files

--> diff --git a/doc/xml/index.xml b/doc/xml/index.xml index d584bb17c..0669fbfbc 100644 --- a/doc/xml/index.xml +++ b/doc/xml/index.xml @@ -33,7 +33,7 @@

<backrest/> aims to be a reliable, easy-to-use backup and restore solution that can seamlessly scale up to the largest databases and workloads by utilizing algorithms that are optimized for database-specific requirements.

-

v{[version-stable]} is the current stable release. Release notes are on the Releases page.

+

v{[version-stable]} is the current stable release. Release notes are on the Releases page.

Please find us on GitHub and give us a star if you like <backrest/>!

@@ -92,7 +92,7 @@
Page Checksums -

<postgres/> has supported page-level checksums since 9.3. If page checksums are enabled <backrest/> will validate the checksums for every file that is copied during a backup. All page checksums are validated during a full backup and checksums in files that have changed are validated during differential and incremental backups.

+

<postgres/> has supported page-level checksums since 9.3. If page checksums are enabled <backrest/> will validate the checksums for every file that is copied during a backup. All page checksums are validated during a full backup and checksums in files that have changed are validated during differential and incremental backups.

Validation failures do not stop the backup process, but warnings with details of exactly which pages have failed validation are output to the console and file log.

@@ -126,22 +126,22 @@
Parallel, Asynchronous WAL Push & Get -

Dedicated commands are included for pushing WAL to the archive and getting WAL from the archive. Both commands support parallelism to accelerate processing and run asynchronously to provide the fastest possible response time to <postgres/>.

+

Dedicated commands are included for pushing WAL to the archive and getting WAL from the archive. Both commands support parallelism to accelerate processing and run asynchronously to provide the fastest possible response time to <postgres/>.

-

WAL push automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical, otherwise an error is raised. Asynchronous WAL push allows transfer to be offloaded to another process which compresses WAL segments in parallel for maximum throughput. This can be a critical feature for databases with extremely high write volume.

+

WAL push automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical, otherwise an error is raised. Asynchronous WAL push allows transfer to be offloaded to another process which compresses WAL segments in parallel for maximum throughput. This can be a critical feature for databases with extremely high write volume.

-

Asynchronous WAL get maintains a local queue of WAL segments that are decompressed and ready for replay. This reduces the time needed to provide WAL to <postgres/> which maximizes replay speed. Higher-latency connections and storage (such as S3) benefit the most.

+

Asynchronous WAL get maintains a local queue of WAL segments that are decompressed and ready for replay. This reduces the time needed to provide WAL to <postgres/> which maximizes replay speed. Higher-latency connections and storage (such as S3) benefit the most.

-

The push and get commands both ensure that the database and repository match by comparing versions and system identifiers. This virtually eliminates the possibility of misconfiguring the WAL archive location.

+

The push and get commands both ensure that the database and repository match by comparing versions and system identifiers. This virtually eliminates the possibility of misconfiguring the WAL archive location.

diff --git a/doc/xml/metric.xml b/doc/xml/metric.xml index 1aa50c14d..7688d1329 100644 --- a/doc/xml/metric.xml +++ b/doc/xml/metric.xml @@ -11,7 +11,7 @@

Function/line coverage is complete with no exceptions.

-

Branch coverage excludes branches inside macros and assert() calls. Macros have their own unit tests so they do not need to be tested everywhere they appear. Asserts are not expected to have complete branch coverage since they test cases that should always be true.

+

Branch coverage excludes branches inside macros and assert() calls. Macros have their own unit tests so they do not need to be tested everywhere they appear. Asserts are not expected to have complete branch coverage since they test cases that should always be true.

diff --git a/doc/xml/release.xml b/doc/xml/release.xml index f95f8eb67..2050cbd44 100644 --- a/doc/xml/release.xml +++ b/doc/xml/release.xml @@ -5,7 +5,7 @@ -

<backrest/> release numbers consist of two parts, major and minor. A major release may break compatibility with the prior major release, but v2 releases are fully compatible with v1 repositories and will accept all v1 options. Minor releases can include bug fixes and features but do not change the repository format and strive to avoid changing options and naming.

+

<backrest/> release numbers consist of two parts, major and minor. A major release may break compatibility with the prior major release, but v2 releases are fully compatible with v1 repositories and will accept all v1 options. Minor releases can include bug fixes and features but do not change the repository format and strive to avoid changing options and naming.

Documentation for the v1 release can be found here.

@@ -4388,7 +4388,7 @@

Fix resume when the resumable backup was created by Perl.

-

In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.

+

In this case the resumable backup should be ignored, but the C code was not able to load the partial manifest written by Perl since the format differs slightly. Add validations to catch this case and continue gracefully.

@@ -4451,7 +4451,7 @@

Fix missing files corrupting the manifest.

-

If a file was removed by <postgres/> during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.

+

If a file was removed by <postgres/> during the backup (or was missing from the standby) then the next file might not be copied and updated in the manifest. If this happened then the backup would error when restored.

@@ -4489,7 +4489,7 @@

Fix error in timeline conversion.

-

The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was &ge; 0xA.

+

The timeline is required to verify WAL segments in the archive after a backup. The conversion was performed base 10 instead of 16, which led to errors when the timeline was &ge; 0xA.
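To illustrate the class of bug (the helper name is hypothetical, not pgBackRest's actual function): the first eight characters of a WAL segment name are the timeline in hexadecimal, so parsing them base 10 misreads any timeline at or above 0xA.

```c
#include <stdlib.h>

// Extract the timeline from a 24-character WAL segment name,
// e.g. "0000000A00000001000000FE" -> timeline 10
static unsigned int
walTimelineFromName(const char *walSegmentName)
{
    char timelinePart[9] = {0};

    // First 8 hex digits are the timeline
    for (int i = 0; i < 8; i++)
        timelinePart[i] = walSegmentName[i];

    // Base 16, not base 10 -- base 10 would stop at or misparse hex digits
    return (unsigned int)strtoul(timelinePart, NULL, 16);
}
```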

@@ -4528,7 +4528,7 @@

Add pg-user option.

-

Specifies the database user name when connecting to <postgres/>. If not specified <backrest/> will connect with the local OS user or PGUSER, which was the previous behavior.

+

Specifies the database user name when connecting to <postgres/>. If not specified <backrest/> will connect with the local OS user or PGUSER, which was the previous behavior.

@@ -4585,7 +4585,7 @@

Fix archive-push/archive-get when PGDATA is symlinked.

-

These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the path then chdir() to the path and make sure the next cwd() matches the result from the first call.

+

These commands tried to use cwd() as PGDATA but this would disagree with the path configured in pgBackRest if PGDATA was symlinked. If cwd() does not match the path then chdir() to the path and make sure the next cwd() matches the result from the first call.

@@ -4628,7 +4628,7 @@

Fix remote timeout in delta restore.

-

When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout.

+

When performing a delta restore on a largely unchanged cluster the remote could timeout if no files were fetched from the repository within protocol-timeout. Add keep-alives to prevent remote timeout.

@@ -5024,7 +5024,7 @@

Rename repo-s3-verify-ssl option to repo-s3-verify-tls.

-

The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.

+

The new name is preferred because pgBackRest does not support any SSL protocol versions (they are all considered to be insecure). The old name will continue to be accepted.

@@ -5105,7 +5105,7 @@
-Filter improvements.  Only process next filter in IoFilterGroup when input buffer is full or flushing.  Improve filter's notion of done to optimize filter processing.
+Filter improvements. Only process next filter in IoFilterGroup when input buffer is full or flushing. Improve filter's notion of done to optimize filter processing.
@@ -5387,7 +5387,7 @@
-IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818.  This security fix is required for compliant hostname verification.
+IMPORTANT NOTE: The new TLS/SSL implementation forbids dots in S3 bucket names per RFC-2818. This security fix is required for compliant hostname verification.
@@ -5446,7 +5446,7 @@
-CryptoHash improvements and fixes.  Fix incorrect buffer size used in cryptoHashOne().  Add missing const to cryptoHashOne() and cryptoHashOneStr().  Add hash size constants.  Extern hash type constant.
+CryptoHash improvements and fixes. Fix incorrect buffer size used in cryptoHashOne(). Add missing const to cryptoHashOne() and cryptoHashOneStr(). Add hash size constants. Extern hash type constant.
@@ -5482,7 +5482,7 @@
-Logging improvements.  Allow three-digit process IDs in logging.  Allow process id in C logging.
+Logging improvements. Allow three-digit process IDs in logging. Allow process id in C logging.
@@ -5704,7 +5704,7 @@
-MemContext improvements.  Improve performance of context and memory allocations.  Use contextTop/contextCurrent instead of memContextTop()/memContextCurrent().  Don't make a copy of the context name.
+MemContext improvements. Improve performance of context and memory allocations. Use contextTop/contextCurrent instead of memContextTop()/memContextCurrent(). Don't make a copy of the context name.
@@ -5836,7 +5836,7 @@
-JSON improvements.  Optimize parser implementation.  Make the renderer more null tolerant.
+JSON improvements. Optimize parser implementation. Make the renderer more null tolerant.
@@ -5930,7 +5930,7 @@
-Function log macro improvements.  Rename FUNCTION_DEBUG_* and consolidate ASSERT_* macros for consistency.  Improve CONST and P/PP type macro handling.  Move MACRO_TO_STR() to common/debug.h.  Remove unused type parameter from FUNCTION_TEST_RETURN().
+Function log macro improvements. Rename FUNCTION_DEBUG_* and consolidate ASSERT_* macros for consistency. Improve CONST and P/PP type macro handling. Move MACRO_TO_STR() to common/debug.h. Remove unused type parameter from FUNCTION_TEST_RETURN().
@@ -5946,7 +5946,7 @@
-JSON improvements.  Allow empty arrays in JSON parser.  Fix null output in JSON renderer.  Fix escaping in JSON string parser/renderer.
+JSON improvements. Allow empty arrays in JSON parser. Fix null output in JSON renderer. Fix escaping in JSON string parser/renderer.
@@ -6166,7 +6166,7 @@
-Storage helper improvements.  Allow NULL stanza in storage helper.  Add path expression for repository backup.
+Storage helper improvements. Allow NULL stanza in storage helper. Add path expression for repository backup.
@@ -6174,7 +6174,7 @@
-Info module improvements.  Rename constants in Info module for consistency.  Remove #define statements in the InfoPg module to conform with newly-adopted coding standards.  Use cast to make for loop more readable in InfoPg module.  Add infoArchiveIdHistoryMatch() to the InfoArchive object.
+Info module improvements. Rename constants in Info module for consistency. Remove #define statements in the InfoPg module to conform with newly-adopted coding standards. Use cast to make for loop more readable in InfoPg module. Add infoArchiveIdHistoryMatch() to the InfoArchive object.
@@ -6566,11 +6566,11 @@
-New test containers.  Add libxml2 library needed for S3 development.  Include new minor version upgrades for .  Remove 11 beta/rc repository.
+New test containers. Add libxml2 library needed for S3 development. Include new minor version upgrades for . Remove 11 beta/rc repository.
-Test speed improvements.  Mount tmpfs in Vagrantfile instead test.pl.  Preserve contents of C unit test build directory between test.pl executions.  Improve efficiency of code generation.
+Test speed improvements. Mount tmpfs in Vagrantfile instead test.pl. Preserve contents of C unit test build directory between test.pl executions. Improve efficiency of code generation.
@@ -6673,7 +6673,7 @@
-Storage refactoring.  Posix file functions now differentiate between open and missing errors.  Don't use negations in objects below Storage.  Rename posix driver files/functions for consistency.  Full abstraction of storage driver interface.  Merge protocol storage helper into storage helper.  Add CIFS driver to storage helper for read-only repositories.
+Storage refactoring. Posix file functions now differentiate between open and missing errors. Don't use negations in objects below Storage. Rename posix driver files/functions for consistency. Full abstraction of storage driver interface. Merge protocol storage helper into storage helper. Add CIFS driver to storage helper for read-only repositories.
@@ -6802,7 +6802,7 @@
-Make Valgrind return an error even when a non-fatal issue is detected.  Update some minor issues discovered in the tests as a result.
+Make Valgrind return an error even when a non-fatal issue is detected. Update some minor issues discovered in the tests as a result.
@@ -6904,7 +6904,7 @@
 Exclude temporary and unlogged relation (table/index) files from backup.
-Implemented using the same logic as the patches adding this feature to , 8694cc96 and 920a5e50.  Temporary relation exclusion is enabled in &ge; 9.0.  Unlogged relation exclusion is enabled in &ge; 9.1, where the feature was introduced.
+Implemented using the same logic as the patches adding this feature to , 8694cc96 and 920a5e50. Temporary relation exclusion is enabled in &ge; 9.0. Unlogged relation exclusion is enabled in &ge; 9.1, where the feature was introduced.
@@ -6954,7 +6954,7 @@
-Validate configuration options in a single pass.  By pre-calculating and storing the option dependencies in parse.auto.c validation can be completed in a single pass, which is both simpler and faster.
+Validate configuration options in a single pass. By pre-calculating and storing the option dependencies in parse.auto.c validation can be completed in a single pass, which is both simpler and faster.
@@ -6967,7 +6967,7 @@
-Improve performance of string to int conversion.  Use strtoll() instead of sprintf() for conversion.  Also use available integer min/max constants rather than hard-coded values.
+Improve performance of string to int conversion. Use strtoll() instead of sprintf() for conversion. Also use available integer min/max constants rather than hard-coded values.
@@ -6992,7 +6992,7 @@
-Allow Buffer object used size to be different than allocated size.  Add functions to manage used size and remaining size and update automatically when possible.
+Allow Buffer object used size to be different than allocated size. Add functions to manage used size and remaining size and update automatically when possible.
@@ -7000,11 +7000,11 @@
-Abstract IO layer out of the storage layer.  This allows the routines to be used for IO objects that do not have a storage representation.  Implement buffer read and write IO objects.  Implement filters and update cryptoHash to use the new interface.  Implement size and buffer filters.
+Abstract IO layer out of the storage layer. This allows the routines to be used for IO objects that do not have a storage representation. Implement buffer read and write IO objects. Implement filters and update cryptoHash to use the new interface. Implement size and buffer filters.
-storageFileRead() accepts a buffer for output rather than creating one.  This is more efficient overall and allows the caller to specify how many bytes will be read on each call.  Reads are appended if the buffer already contains data but the buffer size will never increase.
+storageFileRead() accepts a buffer for output rather than creating one. This is more efficient overall and allows the caller to specify how many bytes will be read on each call. Reads are appended if the buffer already contains data but the buffer size will never increase.
@@ -7021,7 +7021,7 @@
-Manifest improvements.  Require catalog version when instantiating a Manifest object (and not loading it from disk).  Prevent manifest from being built more than once.  Limit manifest build recursion (i.e. links followed) to sixteen levels to detect link loops.
+Manifest improvements. Require catalog version when instantiating a Manifest object (and not loading it from disk). Prevent manifest from being built more than once. Limit manifest build recursion (i.e. links followed) to sixteen levels to detect link loops.
@@ -7097,17 +7097,17 @@
 Stop trying to arrange contributors in release.xml by last/first name.
-Contributor names have always been presented in the release notes exactly as given, but we tried to assign internal IDs based on last/first name which can be hard to determine and ultimately doesn't make sense.  Inspired by Christophe's PostgresOpen 2017 talk, Human Beings Do Not Have a Primary Key.
+Contributor names have always been presented in the release notes exactly as given, but we tried to assign internal IDs based on last/first name which can be hard to determine and ultimately doesn't make sense. Inspired by Christophe's PostgresOpen 2017 talk, Human Beings Do Not Have a Primary Key.
-Allow containers to be defined in a document.  The defined containers are built before the document build begins which allows them to be reused.
+Allow containers to be defined in a document. The defined containers are built before the document build begins which allows them to be reused.
-Move most host setup to containers defined in the documentation.  This includes installation which had previously been included in the documentation.  This way produces faster builds and there is no need for us to document installation.
+Move most host setup to containers defined in the documentation. This includes installation which had previously been included in the documentation. This way produces faster builds and there is no need for us to document installation.
@@ -7123,15 +7123,15 @@
-Use pre-built images from Docker Hub when the container definition has not changed.  Downloading an image is quite a bit faster than building a new image from scratch and saves minutes per test run in CI.
+Use pre-built images from Docker Hub when the container definition has not changed. Downloading an image is quite a bit faster than building a new image from scratch and saves minutes per test run in CI.
-Refactor the common/log tests to not depend on common/harnessLog.  common/harnessLog was not ideally suited for general testing and made all the tests quite awkward.  Instead, move all code used to test the common/log module into the logTest module and repurpose common/harnessLog to do log expect testing for all other tests in a cleaner way.  Add a few exceptions for config testing since the log levels are reset by default in config/parse.
+Refactor the common/log tests to not depend on common/harnessLog. common/harnessLog was not ideally suited for general testing and made all the tests quite awkward. Instead, move all code used to test the common/log module into the logTest module and repurpose common/harnessLog to do log expect testing for all other tests in a cleaner way. Add a few exceptions for config testing since the log levels are reset by default in config/parse.
-Add --log-level-test option.  This allows setting the test log level independently from the general test harness setting, but current only works for the C tests.  It is useful for seeing log output from functions on the console while a test is running.
+Add --log-level-test option. This allows setting the test log level independently from the general test harness setting, but current only works for the C tests. It is useful for seeing log output from functions on the console while a test is running.
@@ -7216,7 +7216,7 @@
-Split log levels into separate header file.  Many modules that use debug.h do not need to do logging so this reduces dependencies for those modules.
+Split log levels into separate header file. Many modules that use debug.h do not need to do logging so this reduces dependencies for those modules.
@@ -7271,11 +7271,11 @@
-Build containers from scratch for more accurate testing.  Use a prebuilt s3 server container.
+Build containers from scratch for more accurate testing. Use a prebuilt s3 server container.
-Document generator improvements.  Allow parameters to be passed when a container is created.  Allow /etc/hosts update to be skipped (for containers without bash).  Allow environment load to be skipped.  Allow bash wrapping to be skipped.  Allow forcing a command to run as a user without sudo.  Allow an entire execute list to be hidden.
+Document generator improvements. Allow parameters to be passed when a container is created. Allow /etc/hosts update to be skipped (for containers without bash). Allow environment load to be skipped. Allow bash wrapping to be skipped. Allow forcing a command to run as a user without sudo. Allow an entire execute list to be hidden.
@@ -7349,11 +7349,11 @@
-Add stack trace macros to all functions.  Low-level functions only include stack trace in test builds while higher-level functions ship with stack trace built-in.  Stack traces include all parameters passed to the function but production builds only create the parameter list when the log level is set high enough, i.e. debug or trace depending on the function.
+Add stack trace macros to all functions. Low-level functions only include stack trace in test builds while higher-level functions ship with stack trace built-in. Stack traces include all parameters passed to the function but production builds only create the parameter list when the log level is set high enough, i.e. debug or trace depending on the function.
-Build libc using links rather than referencing the C files in src directly.  The C library builds with different options which should not be reused for the C binary or vice versa.
+Build libc using links rather than referencing the C files in src directly. The C library builds with different options which should not be reused for the C binary or vice versa.
@@ -7361,7 +7361,7 @@
-Test harness improvements.  Allow more than one test to provide coverage for the same module.  Add option to disable valgrind.  Add option to disabled coverage.  Add option to disable debug build.  Add option to disable compiler optimization.  Add --dev-test mode.
+Test harness improvements. Allow more than one test to provide coverage for the same module. Add option to disable valgrind. Add option to disabled coverage. Add option to disable debug build. Add option to disable compiler optimization. Add --dev-test mode.
@@ -7369,7 +7369,7 @@
-Set log-timestamp=n for integration tests.  This means less filtering of logs needs to be done and new timestamps can be added without adding new filters.
+Set log-timestamp=n for integration tests. This means less filtering of logs needs to be done and new timestamps can be added without adding new filters.
@@ -7456,7 +7456,7 @@
 Make backup/restore path sync more efficient.
-Scanning the entire directory can be very expensive if there are a lot of small tables.  The backup manifest contains the path list so use it to perform syncs instead of scanning the backup/restore path.
+Scanning the entire directory can be very expensive if there are a lot of small tables. The backup manifest contains the path list so use it to perform syncs instead of scanning the backup/restore path.
@@ -7472,19 +7472,19 @@
-Make backup.history sync more efficient.  Only the backup.history/[year] directory was being synced, so check if the backup.history is newly created and sync it as well.
+Make backup.history sync more efficient. Only the backup.history/[year] directory was being synced, so check if the backup.history is newly created and sync it as well.
-Move async forking and more error handling to C.  The Perl process was exiting directly when called but that interfered with proper locking for the forked async process.  Now Perl returns results to the C process which handles all errors, including signals.
+Move async forking and more error handling to C. The Perl process was exiting directly when called but that interfered with proper locking for the forked async process. Now Perl returns results to the C process which handles all errors, including signals.
-Improved lock implementation written in C.  Now only two types of locks can be taken: archive and backup.  Most commands use one or the other but the stanza-* commands acquire both locks.  This provides better protection than the old command-based locking scheme.
+Improved lock implementation written in C. Now only two types of locks can be taken: archive and backup. Most commands use one or the other but the stanza-* commands acquire both locks. This provides better protection than the old command-based locking scheme.
-Storage object improvements.  Convert all functions to variadic functions.  Enforce read-only storage.  Add storageLocalWrite() helper function.  Add storageCopy(), storageExists(), storageMove(), storageNewRead()/storageNewWrite(), storagePathCreate(), storagePathRemove(), storagePathSync(), and storageRemove().  Add StorageFileRead and StorageFileWrite objects.  Abstract Posix driver code into a separate module.  Call storagePathRemove() from the Perl Posix driver.
+Storage object improvements. Convert all functions to variadic functions. Enforce read-only storage. Add storageLocalWrite() helper function. Add storageCopy(), storageExists(), storageMove(), storageNewRead()/storageNewWrite(), storagePathCreate(), storagePathRemove(), storagePathSync(), and storageRemove(). Add StorageFileRead and StorageFileWrite objects. Abstract Posix driver code into a separate module. Call storagePathRemove() from the Perl Posix driver.
@@ -7493,11 +7493,11 @@
-Improve String and StringList objects.  Add strUpper(), strLower(), strLstExists(), strLstExistsZ(), strChr(), strSub(), strSubN(), and strTrunc().
+Improve String and StringList objects. Add strUpper(), strLower(), strLstExists(), strLstExistsZ(), strChr(), strSub(), strSubN(), and strTrunc().
-Improve Buffer object.  Add bufNewC(), bufEq() and bufCat().  Only reallocate buffer when the size has changed.
+Improve Buffer object. Add bufNewC(), bufEq() and bufCat(). Only reallocate buffer when the size has changed.
@@ -7509,15 +7509,15 @@
-Error handling improvements.  Add THROWP_* macro variants for error handling.  These macros allow an ErrorType pointer to be passed and are required for functions that may return different errors based on a parameter.  Add _FMT variants for all THROW macros so format types are checked by the compiler.
+Error handling improvements. Add THROWP_* macro variants for error handling. These macros allow an ErrorType pointer to be passed and are required for functions that may return different errors based on a parameter. Add _FMT variants for all THROW macros so format types are checked by the compiler.
-Split cfgLoad() into multiple functions to make testing easier.  Mainly this helps with unit tests that need to do log expect testing.
+Split cfgLoad() into multiple functions to make testing easier. Mainly this helps with unit tests that need to do log expect testing.
-Allow MemContext objects to be copied to a new parent.  This makes it easier to create objects and then copy them to another context when they are complete without having to worry about freeing them on error.  Update List, StringList, and Buffer to allow moves.  Update Ini and Storage to take advantage of moves.
+Allow MemContext objects to be copied to a new parent. This makes it easier to create objects and then copy them to another context when they are complete without having to worry about freeing them on error. Update List, StringList, and Buffer to allow moves. Update Ini and Storage to take advantage of moves.
@@ -7525,11 +7525,11 @@
-Refactor usec to msec in common/time.c.  The implementation provides usec resolution but this is not needed in practice and it makes the interface more complicated due to the extra zeros.
+Refactor usec to msec in common/time.c. The implementation provides usec resolution but this is not needed in practice and it makes the interface more complicated due to the extra zeros.
-Replace THROW_ON_SYS_ERROR() with THROW_SYS_ERROR().  The former macro was hiding missing branch coverage for critical error handling.
+Replace THROW_ON_SYS_ERROR() with THROW_SYS_ERROR(). The former macro was hiding missing branch coverage for critical error handling.
@@ -7541,11 +7541,11 @@
-Split debug and assert code into separate headers.  Assert can be used earlier because it only depends on the error-handler and not logging.  Add ASSERT() macro which is preserved in production builds.
+Split debug and assert code into separate headers. Assert can be used earlier because it only depends on the error-handler and not logging. Add ASSERT() macro which is preserved in production builds.
-Cleanup C types.  Remove typec.h.  Order all typdefs above local includes.
+Cleanup C types. Remove typec.h. Order all typdefs above local includes.
@@ -7600,7 +7600,7 @@
-Document build improvements.  Perform apt-get update to ensure packages are up to date before installing.  Add -p to the repository mkdir so it won't fail if the directory already exists, handy for testing packages.
+Document build improvements. Perform apt-get update to ensure packages are up to date before installing. Add -p to the repository mkdir so it won't fail if the directory already exists, handy for testing packages.
@@ -7610,21 +7610,21 @@
 Use lcov for C unit test coverage reporting.
-Switch from Devel::Cover because it would not report on branch coverage for reports converted from gcov.  Incomplete branch coverage for a module now generates an error.  Coverage of unit tests is not displayed in the report unless they are incomplete for either statement or branch coverage.
+Switch from Devel::Cover because it would not report on branch coverage for reports converted from gcov. Incomplete branch coverage for a module now generates an error. Coverage of unit tests is not displayed in the report unless they are incomplete for either statement or branch coverage.
-Move test definitions to test/define.yaml.  The location is better because it is no longer buried in the Perl test libs.  Also, the data can be easily accessed from C.
+Move test definitions to test/define.yaml. The location is better because it is no longer buried in the Perl test libs. Also, the data can be easily accessed from C.
-Move help/version integration tests to mock/all.  Help and version are covered by unit tests, so we really just to need to make sure there is output when called from the command line.
+Move help/version integration tests to mock/all. Help and version are covered by unit tests, so we really just to need to make sure there is output when called from the command line.
-Move archive-stop and expire tests to the mock module.  These are mock integration tests so they should be grouped with the other mock integration tests.
+Move archive-stop and expire tests to the mock module. These are mock integration tests so they should be grouped with the other mock integration tests.
@@ -7632,7 +7632,7 @@
-Add HARNESS_FORK macros for tests that require fork().  A standard pattern for tests makes fork() easier to use and should help prevent some common mistakes.
+Add HARNESS_FORK macros for tests that require fork(). A standard pattern for tests makes fork() easier to use and should help prevent some common mistakes.
@@ -7640,7 +7640,7 @@
-Generate code counts for all source files.  The source files are also classified by type and purpose.
+Generate code counts for all source files. The source files are also classified by type and purpose.
@@ -7648,11 +7648,11 @@
-Improve logic for smart builds to include version changes.  Skip version checks when testing in --dev mode.
+Improve logic for smart builds to include version changes. Skip version checks when testing in --dev mode.
-Use pip 9.03 in test VMs.  pip 10 drops support for Python 2.6 which is still used by the older test VMs.
+Use pip 9.03 in test VMs. pip 10 drops support for Python 2.6 which is still used by the older test VMs.
@@ -7664,7 +7664,7 @@
-Divide tests into three types (unit, integration, performance).  Many options that were set per test can instead be inferred from the types, i.e. container, c, expect, and individual.
+Divide tests into three types (unit, integration, performance). Many options that were set per test can instead be inferred from the types, i.e. container, c, expect, and individual.
@@ -7692,7 +7692,7 @@
 Immediately error when a secure option (e.g. repo1-s3-key) is passed on the command line.
-Since would not pass secure options on to sub-processes an obscure error was thrown.  The new error is much clearer and provides hints about how to fix the problem.  Update command documentation to omit secure options that cannot be specified on the command-line.
+Since would not pass secure options on to sub-processes an obscure error was thrown. The new error is much clearer and provides hints about how to fix the problem. Update command documentation to omit secure options that cannot be specified on the command-line.
@@ -7743,11 +7743,11 @@
-Improve Perl configuration.  Set config before Main::main() call to avoid secrets being exposed in a stack trace.  Move logic for setting defaults to C.
+Improve Perl configuration. Set config before Main::main() call to avoid secrets being exposed in a stack trace. Move logic for setting defaults to C.
-Improve logging.  Move command begin to C except when it must be called after another command in Perl (e.g. expire after backup).  Command begin logs correctly for complex data types like hash and list.  Specify which commands will log to file immediately and set the default log level for log messages that are common to all commands.  File logging is initiated from C.
+Improve logging. Move command begin to C except when it must be called after another command in Perl (e.g. expire after backup). Command begin logs correctly for complex data types like hash and list. Specify which commands will log to file immediately and set the default log level for log messages that are common to all commands. File logging is initiated from C.
@@ -7775,7 +7775,7 @@
-Improve debugging.  Add ASSERT_DEBUG() macro for debugging and replace all current assert() calls except in tests that can't use the debug code.  Replace remaining NDEBUG blocks with the more granular DEBUG_UNIT.  Remove some debug memset() calls in MemContext since valgrind is more useful for these checks.
+Improve debugging. Add ASSERT_DEBUG() macro for debugging and replace all current assert() calls except in tests that can't use the debug code. Replace remaining NDEBUG blocks with the more granular DEBUG_UNIT. Remove some debug memset() calls in MemContext since valgrind is more useful for these checks.
@@ -7791,11 +7791,11 @@
-Check int size in common/type.h.  This ensures that integers are at least 32-bits without having to run the test suite.
+Check int size in common/type.h. This ensures that integers are at least 32-bits without having to run the test suite.
-Improve conversion of C exceptions to Exception objects.  Colons in the message would prevent all of the message from being loaded into the Exception object.
+Improve conversion of C exceptions to Exception objects. Colons in the message would prevent all of the message from being loaded into the Exception object.
@@ -7839,7 +7839,7 @@
-Build performance improvements.  Improve bin and libc build performance.  Improve code generation performance.
+Build performance improvements. Improve bin and libc build performance. Improve code generation performance.
@@ -7859,7 +7859,7 @@
-Remove --smart from --expect tests.  This ensures that new binaries are built before running the tests.
+Remove --smart from --expect tests. This ensures that new binaries are built before running the tests.
@@ -7891,15 +7891,15 @@
-Improve performance of HTTPS client.  Buffering now takes the pending bytes on the socket into account (when present) rather than relying entirely on select().  In some instances the final bytes would not be flushed until the connection was closed.
+Improve performance of HTTPS client. Buffering now takes the pending bytes on the socket into account (when present) rather than relying entirely on select(). In some instances the final bytes would not be flushed until the connection was closed.
-Improve S3 delete performance.  The constant S3_BATCH_MAX had been replaced with a hard-coded value of 2, probably during testing.
+Improve S3 delete performance. The constant S3_BATCH_MAX had been replaced with a hard-coded value of 2, probably during testing.
-Allow any non-command-line option to be reset to default on the command-line.  This allows options in pgbackrest.conf to be reset to default which reduces the need to write new configuration files for specific needs.
+Allow any non-command-line option to be reset to default on the command-line. This allows options in pgbackrest.conf to be reset to default which reduces the need to write new configuration files for specific needs.
@@ -7911,7 +7911,7 @@
-Rename db-* options to pg-* and backup-* options to repo-* to improve consistency.  repo-* options are now indexed although currently only one is allowed.
+Rename db-* options to pg-* and backup-* options to repo-* to improve consistency. repo-* options are now indexed although currently only one is allowed.
@@ -7945,15 +7945,15 @@
-Improve MemContext module.  Add temporary context blocks and refactor allocation arrays to include allocation size.
+Improve MemContext module. Add temporary context blocks and refactor allocation arrays to include allocation size.
-Improve error module.  Add functions to convert error codes to C errors and handle system errors.
+Improve error module. Add functions to convert error codes to C errors and handle system errors.
-Create a master list of errors in build/error.yaml.  The C and Perl errors lists are created automatically by Build.pm so they stay up to date.
+Create a master list of errors in build/error.yaml. The C and Perl errors lists are created automatically by Build.pm so they stay up to date.
@@ -7961,15 +7961,15 @@
-Add 30 second wait loop to lockAcquire() when fail on no lock enabled.  This should help prevent processes that are shutting down from interfering with processes that are starting up.
+Add 30 second wait loop to lockAcquire() when fail on no lock enabled. This should help prevent processes that are shutting down from interfering with processes that are starting up.
-Replace cfgCommandTotal()/cfgOptionTotal() functions with constants.  The constants are applicable in more cases and allow the compiler to optimize certain loops more efficiently.
+Replace cfgCommandTotal()/cfgOptionTotal() functions with constants. The constants are applicable in more cases and allow the compiler to optimize certain loops more efficiently.
-Cleanup usage of internal options.  Apply internal to options that need to be read to determine locality but should not appear in the help.
+Cleanup usage of internal options. Apply internal to options that need to be read to determine locality but should not appear in the help.
@@ -8013,11 +8013,11 @@
-Improve section source feature to not require a title or content.  The title will be pulled from the source document.
+Improve section source feature to not require a title or content. The title will be pulled from the source document.
-Allow code blocks to have a type.  Currently this is only rendered in Markdown.
+Allow code blocks to have a type. Currently this is only rendered in Markdown.
@@ -8029,11 +8029,11 @@
-PDF rendering improvements.  Check both doc-path and bin-path for logo.  Allow PDF to be output to a location other than the output directory.  Use PDF-specific version variable for more flexible formatting.  Allow sections to be excluded from table of contents.  More flexible replacements for titles and footers.  Fill is now the default for table columns.  Column width is specified as a percentage rather that using latex-specific notation.  Fix missing variable replace for code-block title.
+PDF rendering improvements. Check both doc-path and bin-path for logo. Allow PDF to be output to a location other than the output directory. Use PDF-specific version variable for more flexible formatting. Allow sections to be excluded from table of contents. More flexible replacements for titles and footers. Fill is now the default for table columns. Column width is specified as a percentage rather that using latex-specific notation. Fix missing variable replace for code-block title.
-Add id param for hosts created with host-add.  The host-*-ip variable is created from the id param so the name param can be changed without affecting the host-*-ip variable.  If id is not specified then it is copied from name.
+Add id param for hosts created with host-add. The host-*-ip variable is created from the id param so the name param can be changed without affecting the host-*-ip variable. If id is not specified then it is copied from name.
@@ -8053,11 +8053,11 @@
-Improve speed of C unit tests.  Preserve object files between tests and use a Makefile to avoid rebuilding object files.
+Improve speed of C unit tests. Preserve object files between tests and use a Makefile to avoid rebuilding object files.
-Report coverage errors via the console.  This helps with debugging coverage issues on remote services like Travis.
+Report coverage errors via the console. This helps with debugging coverage issues on remote services like Travis.
@@ -8089,7 +8089,7 @@
-Fix critical bug in resume that resulted in inconsistent backups.  A regression in v0.82 removed the timestamp comparison when deciding which files from the aborted backup to keep on resume.  See note above for more details.
+Fix critical bug in resume that resulted in inconsistent backups. A regression in v0.82 removed the timestamp comparison when deciding which files from the aborted backup to keep on resume. See note above for more details.
@@ -8097,7 +8097,7 @@
-Fix non-compliant ISO-8601 timestamp format in S3 authorization headers.  AWS and some gateways were tolerant of space rather than zero-padded hours while others were not.
+Fix non-compliant ISO-8601 timestamp format in S3 authorization headers. AWS and some gateways were tolerant of space rather than zero-padded hours while others were not.
@@ -8153,25 +8153,25 @@
-Improve the HTTP client to set content-length to 0 when not specified by the server.  S3 (and gateways) always set content-length or transfer-encoding but HTTP 1.1 does not require it and proxies (e.g. HAProxy) may not include either.

+

Improve the HTTP client to set content-length to 0 when not specified by the server. S3 (and gateways) always set content-length or transfer-encoding but HTTP 1.1 does not require it and proxies (e.g. HAProxy) may not include either.

-

Improve performance of HTTPS client. Buffering now takes the pending bytes on the socket into account (when present) rather than relying entirely on select(). In some instances the final bytes would not be flushed until the connection was closed.

+

Improve performance of HTTPS client. Buffering now takes the pending bytes on the socket into account (when present) rather than relying entirely on select(). In some instances the final bytes would not be flushed until the connection was closed.

-

Improve S3 delete performance. The constant S3_BATCH_MAX had been replaced with a hard-coded value of 2, probably during testing.

+

Improve S3 delete performance. The constant S3_BATCH_MAX had been replaced with a hard-coded value of 2, probably during testing.

-

Make backup/restore path sync more efficient. Scanning the entire directory can be very expensive if there are a lot of small tables. The backup manifest contains the path list so use it to perform syncs instead of scanning the backup/restore path. Remove recursive path sync functionality since it is no longer used.

+

Make backup/restore path sync more efficient. Scanning the entire directory can be very expensive if there are a lot of small tables. The backup manifest contains the path list so use it to perform syncs instead of scanning the backup/restore path. Remove recursive path sync functionality since it is no longer used.

-

Make backup.history sync more efficient. Only the backup.history/[year] directory was being synced, so check if the backup.history is newly created and sync it as well.

+

Make backup.history sync more efficient. Only the backup.history/[year] directory was being synced, so check if the backup.history is newly created and sync it as well.

@@ -8223,7 +8223,7 @@ -

Disable package build tests since v1 will no longer be packaged. Users installing packages should update to v2. v1 builds are intended for users installing from source.

+

Disable package build tests since v1 will no longer be packaged. Users installing packages should update to v2. v1 builds are intended for users installing from source.

@@ -8263,7 +8263,7 @@ -

Ensure latest db-id is selected when matching archive.info to backup.info. This provides correct matching in the event there are system-id and db-version duplicates (e.g. after reverting a pg_upgrade).

+

Ensure latest db-id is selected when matching archive.info to backup.info. This provides correct matching in the event there are system-id and db-version duplicates (e.g. after reverting a pg_upgrade).

@@ -8349,7 +8349,7 @@ -

Fixed an issue that suppressed locality errors for backup and restore. When a backup host is present, backups should only be allowed on the backup host and restores should only be allowed on the database host unless an alternate configuration is created that ignores the remote host.

+

Fixed an issue that suppressed locality errors for backup and restore. When a backup host is present, backups should only be allowed on the backup host and restores should only be allowed on the database host unless an alternate configuration is created that ignores the remote host.

@@ -8357,11 +8357,11 @@ -

Fixed an issue where WAL was not expired on PostgreSQL 10. This was caused by a faulty regex that expected all major versions to be X.X.

+

Fixed an issue where WAL was not expired on PostgreSQL 10. This was caused by a faulty regex that expected all major versions to be X.X.

-

Fixed an issue where the --no-config option was not passed to child processes. This meant the child processes would still read the local config file and possibly cause unexpected behaviors.

+

Fixed an issue where the --no-config option was not passed to child processes. This meant the child processes would still read the local config file and possibly cause unexpected behaviors.

@@ -8402,11 +8402,11 @@ -

Split refactor sections into improvements and development in the release notes. Many development notes are not relevant to users and simply clutter the release notes, so they are no longer shown on the website.

+

Split refactor sections into improvements and development in the release notes. Many development notes are not relevant to users and simply clutter the release notes, so they are no longer shown on the website.

-

Allow internal options that do not show up in the documentation. Used for test options initially but other use cases are on the horizon.

+

Allow internal options that do not show up in the documentation. Used for test options initially but other use cases are on the horizon.

@@ -8422,7 +8422,7 @@
-

Move restore test infrastructure to HostBackup.pm. Required to test restores on the backup server, a fairly common scenario. Improve the restore function to accept optional parameters rather than a long list of parameters. In passing, clean up extraneous use of strType and strComment variables.

+

Move restore test infrastructure to HostBackup.pm. Required to test restores on the backup server, a fairly common scenario. Improve the restore function to accept optional parameters rather than a long list of parameters. In passing, clean up extraneous use of strType and strComment variables.

@@ -8457,7 +8457,7 @@ -

Fixed an issue retrieving WAL for old database versions. After a stanza-upgrade it should still be possible to restore backups from the previous version and perform recovery with archive-get. However, archive-get only checked the most recent db version/id and failed. Also clean up some issues when the same db version/id appears multiple times in the history.

+

Fixed an issue retrieving WAL for old database versions. After a stanza-upgrade it should still be possible to restore backups from the previous version and perform recovery with archive-get. However, archive-get only checked the most recent db version/id and failed. Also clean up some issues when the same db version/id appears multiple times in the history.

@@ -8465,7 +8465,7 @@ -

Fixed an issue where invalid backup groups were not being set correctly on restore. If the backup cannot map a group to a name, it stores the group in the manifest as false, then uses either the owner of $PGDATA to set the group during restore or, failing that, the group of the current user. This logic was not working correctly because the selected group was overwriting the user on restore, leaving the group undefined and the user incorrectly set to the group.

+

Fixed an issue where invalid backup groups were not being set correctly on restore. If the backup cannot map a group to a name, it stores the group in the manifest as false, then uses either the owner of $PGDATA to set the group during restore or, failing that, the group of the current user. This logic was not working correctly because the selected group was overwriting the user on restore, leaving the group undefined and the user incorrectly set to the group.

@@ -8473,7 +8473,7 @@ -

Fixed an issue passing parameters to remotes. When more than one db was specified, the path, port, and socket path for db1 were passed no matter which db was actually being addressed.

+

Fixed an issue passing parameters to remotes. When more than one db was specified, the path, port, and socket path for db1 were passed no matter which db was actually being addressed.

@@ -8490,7 +8490,7 @@ -

Disable gzip filter when --compress-level-network=0. The filter was used with compress level set to 0 which added overhead without any benefit.

+

Disable gzip filter when --compress-level-network=0. The filter was used with compress level set to 0 which added overhead without any benefit.

@@ -8500,15 +8500,15 @@ -

Refactor protocol param generation into a new function. This allows the code to be tested more precisely and doesn't require executing a remote process.

+

Refactor protocol param generation into a new function. This allows the code to be tested more precisely and doesn't require executing a remote process.

-

Add list type for options. The hash type was being used for lists with an additional flag (`value-hash`) to indicate that it was not really a hash.

+

Add list type for options. The hash type was being used for lists with an additional flag (`value-hash`) to indicate that it was not really a hash.

-

Remove configurable option hints. db-path was the only option with a hint so the feature seemed wasteful. All missing stanza options now output the same hint without needing configuration.

+

Remove configurable option hints. db-path was the only option with a hint so the feature seemed wasteful. All missing stanza options now output the same hint without needing configuration.

@@ -8524,11 +8524,11 @@ -

Simplify try..catch..finally names. Also wrap in a do...while loop to make sure that no random else is attached to the main if block.

+

Simplify try..catch..finally names. Also wrap in a do...while loop to make sure that no random else is attached to the main if block.

-

Improve base64 implementation. Different encoded strings could be generated based on compiler optimizations. Even though decoding was still successful the encoded strings did not match the standard.

+

Improve base64 implementation. Different encoded strings could be generated based on compiler optimizations. Even though decoding was still successful the encoded strings did not match the standard.

@@ -8583,19 +8583,19 @@ -

Only check expect logs on CentOS 7. Variations in distros cause false negatives in tests but don't add much value.

+

Only check expect logs on CentOS 7. Variations in distros cause false negatives in tests but don't add much value.

-

Fix flapping protocol timeout test. It only matters that the correct error code is returned, so disable logging to prevent message ordering from failing the expect test.

+

Fix flapping protocol timeout test. It only matters that the correct error code is returned, so disable logging to prevent message ordering from failing the expect test.

-

Designate a single distro (Ubuntu 16.04) for coverage testing. Running coverage testing on multiple distros takes time but doesn't add significant value. Also ensure that the distro designated to run coverage tests is one of the default test distros. For C tests, enable optimizations on the distros that don't do coverage testing.

+

Designate a single distro (Ubuntu 16.04) for coverage testing. Running coverage testing on multiple distros takes time but doesn't add significant value. Also ensure that the distro designated to run coverage tests is one of the default test distros. For C tests, enable optimizations on the distros that don't do coverage testing.

-

Automate generation of WAL and pg_control test files. The existing static files would not work with 32-bit or big-endian systems so create functions to generate these files dynamically rather than creating a bunch of new static files.

+

Automate generation of WAL and pg_control test files. The existing static files would not work with 32-bit or big-endian systems so create functions to generate these files dynamically rather than creating a bunch of new static files.

@@ -8625,7 +8625,7 @@ -

Remove error when overlapping timelines are detected. Overlapping timelines are valid in many Point-in-Time-Recovery (PITR) scenarios.

+

Remove error when overlapping timelines are detected. Overlapping timelines are valid in many Point-in-Time-Recovery (PITR) scenarios.

@@ -8644,17 +8644,17 @@ -

Improve performance of list requests on S3. Any beginning literal portion of a filter expression is used to generate a search prefix which often helps keep the request small enough to avoid rate limiting.

+

Improve performance of list requests on S3. Any beginning literal portion of a filter expression is used to generate a search prefix which often helps keep the request small enough to avoid rate limiting.

-

Improve protocol error handling. In particular, stop errors are no longer reported as unexpected.

+

Improve protocol error handling. In particular, stop errors are no longer reported as unexpected.

-

Allow functions with sensitive options to be logged at debug level with redactions. Previously, functions with sensitive options had to be logged at trace level to avoid exposing them. Trace level logging may still expose secrets so use with caution.

+

Allow functions with sensitive options to be logged at debug level with redactions. Previously, functions with sensitive options had to be logged at trace level to avoid exposing them. Trace level logging may still expose secrets so use with caution.

@@ -9060,7 +9060,7 @@ -

IMPORTANT NOTE: 8.3 and 8.4 installations utilizing tablespaces should upgrade immediately from any v1 release and run a full backup. A bug prevented tablespaces from being backed up on these versions only. &ge; 9.0 is not affected.

+

IMPORTANT NOTE: 8.3 and 8.4 installations utilizing tablespaces should upgrade immediately from any v1 release and run a full backup. A bug prevented tablespaces from being backed up on these versions only. &ge; 9.0 is not affected.

@@ -9184,7 +9184,7 @@
-

Add deprecated state for containers. Deprecated containers may only be used to build packages.

+

Add deprecated state for containers. Deprecated containers may only be used to build packages.

@@ -9196,7 +9196,7 @@ -

Remove process-max option. Parallelism is now tested in a more targeted manner and the high level option is no longer needed.

+

Remove process-max option. Parallelism is now tested in a more targeted manner and the high level option is no longer needed.

@@ -9352,11 +9352,11 @@ -

Simplify locking scheme. Now, only the master process will hold write locks (for archive-push and backup commands) and not all local and remote worker processes as before.

+

Simplify locking scheme. Now, only the master process will hold write locks (for archive-push and backup commands) and not all local and remote worker processes as before.

-

Do not set timestamps of files in the backup directories to match timestamps in the cluster directory. This was originally done to enable backup resume, but that process is now implemented with checksums.

+

Do not set timestamps of files in the backup directories to match timestamps in the cluster directory. This was originally done to enable backup resume, but that process is now implemented with checksums.

@@ -9382,7 +9382,7 @@ -

The backup and restore commands no longer copy via temp files. In both cases the files are checksummed on resume so there's no danger of partial copies.

+

The backup and restore commands no longer copy via temp files. In both cases the files are checksummed on resume so there's no danger of partial copies.

@@ -9414,7 +9414,7 @@ -

Ignore clock skew in container libc/package builds using make. It is common for containers to have clock skew so the build process takes care of this issue independently.

+

Ignore clock skew in container libc/package builds using make. It is common for containers to have clock skew so the build process takes care of this issue independently.

@@ -9659,9 +9659,9 @@

IMPORTANT NOTE: The new implementation of asynchronous archiving no longer copies WAL to a separate queue. If there is any WAL left over in the old queue after upgrading to 1.13, it will be abandoned and not pushed to the repository.

-

To prevent this outcome, stop archiving by setting archive_command = false. Next, drain the async queue by running pgbackrest --stanza=[stanza-name] archive-push and wait for the process to complete. Check that the queue in [spool-path]/archive/[stanza-name]/out is empty. Finally, install 1.13 and restore the original archive_command.

+

To prevent this outcome, stop archiving by setting archive_command = false. Next, drain the async queue by running pgbackrest --stanza=[stanza-name] archive-push and wait for the process to complete. Check that the queue in [spool-path]/archive/[stanza-name]/out is empty. Finally, install 1.13 and restore the original archive_command.

-

IMPORTANT NOTE: The stanza-create command is no longer optional and must be executed before backup or archiving can be performed on a new stanza. Pre-existing stanzas do not require stanza-create to be executed.

+

IMPORTANT NOTE: The stanza-create command is no longer optional and must be executed before backup or archiving can be performed on a new stanza. Pre-existing stanzas do not require stanza-create to be executed.

@@ -9758,11 +9758,11 @@
-

Remove --lock option. This option was introduced before the lock directory could be located outside the repository and is now obsolete.

+

Remove --lock option. This option was introduced before the lock directory could be located outside the repository and is now obsolete.

-

Added --log-timestamp option to allow timestamps to be suppressed in logging. This is primarily used to avoid filters in the automated documentation.

+

Added --log-timestamp option to allow timestamps to be suppressed in logging. This is primarily used to avoid filters in the automated documentation.

@@ -9878,7 +9878,7 @@ -

Split test modules into separate files to make the code more maintainable. Tests are dynamically loaded by name rather than requiring an if-else block.

+

Split test modules into separate files to make the code more maintainable. Tests are dynamically loaded by name rather than requiring an if-else block.

@@ -9899,7 +9899,7 @@ -

IMPORTANT NOTE: In prior releases it was possible to specify options on the command-line that were invalid for the current command without getting an error. An error will now be generated for invalid options so it is important to carefully check command-line options in your environment to prevent disruption.

+

IMPORTANT NOTE: In prior releases it was possible to specify options on the command-line that were invalid for the current command without getting an error. An error will now be generated for invalid options so it is important to carefully check command-line options in your environment to prevent disruption.

@@ -9908,11 +9908,11 @@ -

Fixed an issue where options that were invalid for the specified command could be provided on the command-line without generating an error. The options were ignored and did not cause any change in behavior, but it did lead to some confusion. Invalid options will now generate an error.

+

Fixed an issue where options that were invalid for the specified command could be provided on the command-line without generating an error. The options were ignored and did not cause any change in behavior, but it did lead to some confusion. Invalid options will now generate an error.

-

Fixed an issue where internal symlinks were not being created for tablespaces in the repository. This issue was only apparent when trying to bring up clusters in-place manually using filesystem snapshots and did not affect normal backup and restore.

+

Fixed an issue where internal symlinks were not being created for tablespaces in the repository. This issue was only apparent when trying to bring up clusters in-place manually using filesystem snapshots and did not affect normal backup and restore.

@@ -9938,11 +9938,11 @@ -

Added the --checksum-page option to allow pgBackRest to validate page checksums in data files when checksums are enabled on >= 9.3. Note that this functionality requires a C library which may not initially be available in OS packages. The option will automatically be enabled when the library is present and checksums are enabled on the cluster.

+

Added the --checksum-page option to allow pgBackRest to validate page checksums in data files when checksums are enabled on >= 9.3. Note that this functionality requires a C library which may not initially be available in OS packages. The option will automatically be enabled when the library is present and checksums are enabled on the cluster.

-

Added the --repo-link option to allow internal symlinks to be suppressed when the repository is located on a filesystem that does not support symlinks. This does not affect any functionality, but the convenience link latest will not be created and neither will internal tablespace symlinks, which will affect the ability to bring up clusters in-place manually using filesystem snapshots.

+

Added the --repo-link option to allow internal symlinks to be suppressed when the repository is located on a filesystem that does not support symlinks. This does not affect any functionality, but the convenience link latest will not be created and neither will internal tablespace symlinks, which will affect the ability to bring up clusters in-place manually using filesystem snapshots.

@@ -9960,7 +9960,7 @@ -

For simplicity, the pg_control file is now copied with the rest of the files instead of by itself at the end of the process. The backup command does not require this behavior and the restore copies to a temporary file which is renamed at the end of the restore.

+

For simplicity, the pg_control file is now copied with the rest of the files instead of by itself at the end of the process. The backup command does not require this behavior and the restore copies to a temporary file which is renamed at the end of the restore.

@@ -10108,7 +10108,7 @@ -

Fixed an issue where asynchronous archiving was transferring one file per execution instead of transferring files in batches. This regression was introduced in v1.09 and affected efficiency only, all WAL segments were correctly archived in asynchronous mode.

+

Fixed an issue where asynchronous archiving was transferring one file per execution instead of transferring files in batches. This regression was introduced in v1.09 and affected efficiency only, all WAL segments were correctly archived in asynchronous mode.

@@ -10212,7 +10212,7 @@ -

Fixed an issue where the async archiver would not be started if archive-push did not have enough space to queue a new WAL segment. This meant that the queue would never be cleared without manual intervention (such as calling archive-push directly). PostgreSQL now receives errors when there is not enough space to store new WAL segments, but the async process will still be started so that space is eventually freed.

+

Fixed an issue where the async archiver would not be started if archive-push did not have enough space to queue a new WAL segment. This meant that the queue would never be cleared without manual intervention (such as calling archive-push directly). PostgreSQL now receives errors when there is not enough space to store new WAL segments, but the async process will still be started so that space is eventually freed.

@@ -10242,7 +10242,7 @@ -

Added the log-level-stderr option to control whether console log messages are sent to stderr or stdout. By default this is set to warn which represents a change in behavior from previous versions, even though it may be more intuitive. Setting log-level-stderr=off will preserve the old behavior.

+

Added the log-level-stderr option to control whether console log messages are sent to stderr or stdout. By default this is set to warn which represents a change in behavior from previous versions, even though it may be more intuitive. Setting log-level-stderr=off will preserve the old behavior.

@@ -10422,7 +10422,7 @@ -

Experimental support for non-exclusive backups in 9.6 rc1. Changes to the control/catalog/WAL versions in subsequent release candidates may break compatibility but will be updated with each release to keep pace.

+

Experimental support for non-exclusive backups in 9.6 rc1. Changes to the control/catalog/WAL versions in subsequent release candidates may break compatibility but will be updated with each release to keep pace.

@@ -10508,19 +10508,19 @@ -

Backup from a standby cluster. A connection to the primary cluster is still required to start/stop the backup and copy files that are not replicated, but the vast majority of files are copied from the standby in order to reduce load on the primary.

+

Backup from a standby cluster. A connection to the primary cluster is still required to start/stop the backup and copy files that are not replicated, but the vast majority of files are copied from the standby in order to reduce load on the primary.

-

More flexible configuration for databases. Master and standby can both be configured on the backup server and will automatically determine which is the primary. This means no configuration changes for backup are required after failing over from a primary to standby when a separate backup server is used.

+

More flexible configuration for databases. Master and standby can both be configured on the backup server and will automatically determine which is the primary. This means no configuration changes for backup are required after failing over from a primary to standby when a separate backup server is used.

-

Exclude directories during backup that are cleaned, recreated, or zeroed by PostgreSQL at startup. These include pgsql_tmp and pg_stat_tmp. The postgresql.auto.conf.tmp file is now excluded in addition to files that were already excluded: backup_label.old, postmaster.opts, postmaster.pid, recovery.conf, recovery.done.

+

Exclude directories during backup that are cleaned, recreated, or zeroed by PostgreSQL at startup. These include pgsql_tmp and pg_stat_tmp. The postgresql.auto.conf.tmp file is now excluded in addition to files that were already excluded: backup_label.old, postmaster.opts, postmaster.pid, recovery.conf, recovery.done.

-

Experimental support for non-exclusive backups in 9.6 beta4. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

+

Experimental support for non-exclusive backups in 9.6 beta4. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

@@ -10582,7 +10582,7 @@ -

Fixed an issue where tablespace paths that had $PGDATA as a substring would be identified as subdirectories of $PGDATA even when they were not. Also hardened relative path checking a bit.

+

Fixed an issue where tablespace paths that had $PGDATA as a substring would be identified as subdirectories of $PGDATA even when they were not. Also hardened relative path checking a bit.

@@ -10654,7 +10654,7 @@
-

Fixed an issue where the contents of pg_xlog were being backed up if the directory was symlinked. This didn't cause any issues during restore but was a waste of space.

+

Fixed an issue where the contents of pg_xlog were being backed up if the directory was symlinked. This didn't cause any issues during restore but was a waste of space.

@@ -10664,7 +10664,7 @@ -

Experimental support for non-exclusive backups in 9.6 beta3. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

+

Experimental support for non-exclusive backups in 9.6 beta3. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

@@ -10678,7 +10678,7 @@
-

All remote types now take locks. The exceptions date to when the test harness and pgBackRest were running in the same VM and no longer apply.

+

All remote types now take locks. The exceptions date to when the test harness and pgBackRest were running in the same VM and no longer apply.

@@ -10726,7 +10726,7 @@ -

Added release.pl to make releases reproducible. For now this only includes building and deploying documentation.

+

Added release.pl to make releases reproducible. For now this only includes building and deploying documentation.

@@ -10770,7 +10770,7 @@ -

Fixed an issue where keep-alives could be starved out by lots of small files during multi-threaded backup. They were also completely absent from single/multi-threaded backup resume and restore checksumming.

+

Fixed an issue where keep-alives could be starved out by lots of small files during multi-threaded backup. They were also completely absent from single/multi-threaded backup resume and restore checksumming.

@@ -10778,7 +10778,7 @@ -

Fixed an issue where the expire command would refuse to run when explicitly called from the command line if the db-host option was set. This was not an issue when expire was run automatically after a backup.

+

Fixed an issue where the expire command would refuse to run when explicitly called from the command line if the db-host option was set. This was not an issue when expire was run automatically after a backup.

@@ -10798,35 +10798,35 @@ -

Added the protocol-timeout option. Previously protocol-timeout was set as db-timeout + 30 seconds.

+

Added the protocol-timeout option. Previously protocol-timeout was set as db-timeout + 30 seconds.

-

Failure to shut down remotes at the end of the backup no longer throws an exception. Instead a warning is generated that recommends a higher protocol-timeout.

+

Failure to shut down remotes at the end of the backup no longer throws an exception. Instead a warning is generated that recommends a higher protocol-timeout.

-

Experimental support for non-exclusive backups in 9.6 beta2. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

+

Experimental support for non-exclusive backups in 9.6 beta2. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

-

Improved handling of users/groups captured during backup that do not exist on the restore host. Also explicitly handle the case where user/group is not mapped to a name.

+

Improved handling of users/groups captured during backup that do not exist on the restore host. Also explicitly handle the case where user/group is not mapped to a name.

-

Option handling is now far more strict. Previously it was possible for a command to use an option that was not explicitly assigned to it. This was especially true for the backup-host and db-host options which are used to determine locality.

+

Option handling is now far more strict. Previously it was possible for a command to use an option that was not explicitly assigned to it. This was especially true for the backup-host and db-host options which are used to determine locality.

-

The pg_xlogfile_name() function is no longer used to construct WAL filenames from LSNs. While this function is convenient it is not available on a standby. Instead, the archive is searched for the LSN in order to find the timeline. If due to some misadventure the LSN appears on multiple timelines then an error will be thrown, whereas before this condition would have passed unnoticed.

+

The pg_xlogfile_name() function is no longer used to construct WAL filenames from LSNs. While this function is convenient it is not available on a standby. Instead, the archive is searched for the LSN in order to find the timeline. If due to some misadventure the LSN appears on multiple timelines then an error will be thrown, whereas before this condition would have passed unnoticed.

-

Changed version variable to a constant. It had originally been designed to play nice with a specific packaging tool but that tool was never used.

+

Changed version variable to a constant. It had originally been designed to play nice with a specific packaging tool but that tool was never used.

@@ -10878,7 +10878,7 @@
-

Allow hidden options to be added to a command. This allows certain commands (like apt-get) to be forced during the build without making that a part of the documentation.

Major refactor of the test suite to make it more modular and object-oriented. Multiple Docker containers can now be created for a single test to simulate more realistic environments. Test paths have been renamed for clarity.

Greatly reduced the quantity of Docker containers built by default. Containers are only built for versions specified in db-minimal and those required to build documentation. Additional containers can be built with --db-version=all or by specifying a version, e.g. --db-version=9.4.

Release notes are now broken into sections so that bugs, features, and refactors are clearly delineated. An Additional Notes section has been added for changes to documentation and the test suite that do not affect the core code.

The change log was the last piece of documentation to be rendered in Markdown only. Wrote a converter so the document can be output by the standard renderers. The change log will now be located on the website and has been renamed to Releases.

Added an execution cache so that documentation can be generated without setting up the full container environment. This is useful for packaging, keeps the documentation consistent for a release, and speeds up generation when no changes are made in the execution list.

Remove function constants and pass strings directly to logDebugParam(). The function names were only used once so creating constants for them was wasteful.

Upgraded doc/test VM to Ubuntu 16.04. This will help catch Perl errors in the doc code since it is not run across multiple distributions like the core and test code. It is also to be hoped that a newer kernel will make Docker more stable.

Allow selective restore of databases from a cluster backup. This feature can result in major space and time savings when only specific databases are restored. Unrestored databases will not be accessible but must be manually dropped before they will be removed from the shared catalogue.

Experimental support for non-exclusive backups in PostgreSQL 9.6 beta1. Changes to the control/catalog/WAL versions in subsequent betas may break compatibility but will be updated with each release to keep pace.

IMPORTANT NOTE: This flag day release breaks compatibility with older versions of pgBackRest. The manifest format, on-disk structure, configuration scheme, and the exe/path names have all changed. You must create a new repository to hold backups for this version of pgBackRest and keep your older repository for a time in case you need to do a restore. Restores from the prior repository will require the prior version of pgBackRest, but because of name changes it is possible to have 1.00 and a prior version installed at the same time. See the notes below for more detailed information on what has changed.

Implemented a new configuration scheme which should be far simpler to use. See the User Guide and Configuration Reference for details but for a simple configuration all options can now be placed in the stanza section. Options that are shared between stanzas can be placed in the [global] section. More complex configurations can still make use of command sections though this should be a rare use case.

The repo-path option now always refers to the repository where backups and archive are stored, whether local or remote, so the repo-remote-path option has been removed. The new spool-path option can be used to define a location for queueing WAL segments when archiving asynchronously. A local repository is no longer required.

The default configuration filename is now pgbackrest.conf instead of pg_backrest.conf. This was done for consistency with other naming changes but also to prevent old config files from being loaded accidentally when migrating to 1.00.

Lock files are now stored in /tmp/pgbackrest by default. These days /run/pgbackrest is the preferred location but that would require init scripts which are not part of this release. The lock-path option can be used to configure the lock directory.

Log files are now stored in /var/log/pgbackrest by default and no longer have the date appended so they can be managed with logrotate. The log-path option can be used to configure the log directory.

All files and directories linked from PGDATA are now included in the backup. By default links will be restored directly into PGDATA as files or directories. The {[dash]}-link-all option can be used to restore all links to their original locations. The {[dash]}-link-map option can be used to remap a link to a new location.

Fixed an issue where the master process was passing {[dash]}-repo-remote-path instead of {[dash]}-repo-path to the remote and causing the lock files to be created in the default repository directory (/var/lib/backup), generally ending in failure. This was only an issue when {[dash]}-repo-remote-path was defined on the command line rather than in pg_backrest.conf.

IMPORTANT BUG FIX FOR TABLESPACES: A change to the repository format was accidentally introduced in 0.90, which means the on-disk backup was no longer a valid cluster when the backup contained tablespaces. This only affected users who directly copied the backups to restore clusters rather than using the restore command. However, the fix breaks compatibility with older backups that contain tablespaces no matter how they are being restored (pgBackRest will throw errors and refuse to restore). New full backups should be taken immediately after installing version 0.91 for any clusters that contain tablespaces. If older backups need to be restored then use a version of pgBackRest that matches the backup version.

The retention-archive option can now be safely set to less than backup retention (retention-full or retention-diff) without also specifying archive-copy=n. The WAL required to make the backups that fall outside of archive retention consistent will be preserved in the archive. However, in this case PITR will not be possible for the backups that fall outside of archive retention.

When backing up and restoring tablespaces, pgBackRest only operates on the subdirectory created for the version of PostgreSQL being run against. Since multiple versions can live in a tablespace (especially during a binary upgrade), this prevents too many files from being copied during a backup and other versions possibly being wiped out during a restore. This only applies to PostgreSQL >= 9.0 &mdash; prior versions of PostgreSQL could not share a tablespace directory.

Added checks for the {[dash]}-delta and {[dash]}-force restore options to ensure that the destination is a valid $PGDATA directory. pgBackRest will check for the presence of PG_VERSION or backup.manifest (left over from an aborted restore). If neither file is found then {[dash]}-delta and {[dash]}-force will be disabled, but the restore will proceed unless there are files in the $PGDATA directory (or any tablespace directories), in which case the operation will be aborted.

Fixed an issue where document generation failed because some OSs are not tolerant of having multiple installed versions of PostgreSQL. A separate VM is now created for each version. Also added a sleep after the database starts during document generation to ensure the database is running before the next command runs.

Fixed an issue where longer-running backups/restores would time out when remote and threaded. Keepalives are now used to make sure the remote for the main process does not time out while the thread remotes do all the work. The error message for timeouts was also improved to make debugging easier.

Allow restores to be performed on a read-only repository by using {[dash]}-no-lock and {[dash]}-log-level-file=off. The {[dash]}-no-lock option can only be used with restores.

The dev branch has been renamed to master and, for the time being, the master branch has been renamed to release, though it will probably be removed at some point {[dash]}- thus ends the gitflow experiment for pgBackRest. It is recommended that any forks get re-forked and clones get re-cloned.

Symlinks are no longer created in backup directories in the repository. These symlinks could point virtually anywhere and potentially be dangerous. Symlinks are still recreated during a restore.

Added better messaging for backup expiration. Full and differential backup expirations are logged on a single line along with a list of all dependent backups expired.

Added a new user guide that covers basics and some advanced topics including PITR. Much more to come, but it's a start.

Fixed an issue where a resume would fail if temp files were left in the root backup directory when the backup failed. This scenario was likely if the backup process got terminated during the copy phase.

Experimental support for PostgreSQL 9.5 beta1. This may break when the control version or WAL magic changes in future versions but will be updated in each release to keep pace. All regression tests pass except for {[dash]}-target-resume tests (this functionality has changed in 9.5) and there is no testing yet for .partial WAL segments.

Experimental support for PostgreSQL 9.5 alpha2. This may break when the control version or WAL magic changes in future versions but will be updated in each release to keep pace. All regression tests pass except for {[dash]}-target-resume tests (this functionality has changed in 9.5) and there is no testing yet for .partial WAL segments.

Expiration tests are now synthetic rather than based on actual backups. This will allow development of more advanced expiration features.

Fixed an issue that caused the formatted timestamp for both the oldest and newest backups to be reported as the current time by the info command. Only text output was affected {[dash]}- json output reported the correct epoch values.

The repository is now created and updated with consistent directory and file modes. By default umask is set to 0000 but this can be disabled with the neutral-umask setting.

Remove the pg_control file at the beginning of the restore and copy it back at the very end. This prevents the possibility that a partial restore can be started by PostgreSQL.

Added checks to be sure the db-path setting is consistent with db-port by comparing the data_directory as reported by the cluster against the db-path setting and the version as reported by the cluster against the value read from pg_control. The db-socket-path setting is checked to be sure it is an absolute path.

Experimental support for PostgreSQL 9.5 alpha1. This may break when the control version or WAL magic changes in future versions but will be updated in each release to keep pace. All regression tests pass except for {[dash]}-target-resume tests (this functionality has changed in 9.5) and there is no testing yet for .partial WAL segments.

Now using Perl DBI and DBD::Pg for connections to PostgreSQL rather than psql. The cmd-psql and cmd-psql-option settings have been removed and replaced with db-port and db-socket-path. Follow the instructions in the Installation Guide to install DBD::Pg on your operating system.

Split most of README.md out into USERGUIDE.md and CHANGELOG.md because it was becoming unwieldy. Changed most references to database in the user guide to database cluster for clarity.

Removed dependency on CPAN packages for multi-threaded operation. While it might not be a bad idea to update the threads and Thread::Queue packages, it is no longer necessary.

Modified wait backoff to use a Fibonacci rather than geometric sequence. This will make wait time grow less aggressively while still giving reasonable values.

IMPORTANT NOTE: This flag day release breaks compatibility with older versions of pgBackRest. The manifest format, on-disk structure, and the binary names have all changed. You must create a new repository to hold backups for this version of pgBackRest and keep your older repository for a time in case you need to do a restore. The pg_backrest.conf file has not changed, but you'll need to change any references to pg_backrest.pl in cron (or elsewhere) to pg_backrest (without the .pl extension).

Logging now uses unbuffered output. This should make log files that are being written by multiple threads less chaotic.

Experimental support for PostgreSQL 9.5. This may break when the control version or WAL magic changes but will be updated in each release.

More efficient file ordering for backup. Files are copied in descending size order so a single thread does not end up copying a large file at the end. This had already been implemented for restore.

Fixed an issue where archive-copy would fail on an incr/diff backup when hardlink=n. In this case the pg_xlog path does not already exist and must be created.

Fixed an issue in async archiving where archive-push was not properly returning 0 when archive-max-mb was reached and moved the async check after transfer to avoid having to remove the stop file twice. Also added unit tests for this case and improved error messages to make it clearer to the user what went wrong.

Fixed a locking issue that could allow multiple operations of the same type against a single stanza. This appeared to be benign in terms of data integrity but caused spurious errors while archiving and could lead to errors in backup/restore.

Allow duplicate WAL segments to be archived when the checksum matches. This is necessary for some recovery scenarios.

Better resume support. Resumed files are checked to be sure they have not been modified, and the manifest is saved more often to preserve checksums as the backup progresses. More unit tests to verify each resume case.

Resume is now optional. Use the resume setting or {[dash]}-no-resume from the command line to disable.

More info messages during restore. Previously, most of the restore messages were debug level so not a lot was output in the log.

Added the tablespace setting to allow tablespaces to be restored into the pg_tblspc path. This produces compact restores that are convenient for development, staging, etc. Currently these restores cannot be backed up as pgBackRest expects only links in the pg_tblspc path.

Fixed a buffering error that could occur on large, highly-compressible files when copying to an uncompressed remote destination. The error was detected in the decompression code and resulted in a failed backup rather than corruption so it should not affect successful backups made with previous versions.

Pushing duplicate WAL now generates an error. This worked before only if checksums were disabled.

Database System IDs are used to make sure that all WAL in an archive matches up. This should help prevent misconfigurations that send WAL from multiple clusters to the same archive.

Fixed broken checksums and now they work with normal and resumed backups. Finally realized that checksums and checksum deltas should be functionally separated and this simplified a number of things. Issue #28 has been created for checksum deltas.

De/compression is now performed without threads and checksum/size is calculated in stream. That means file checksums are no longer optional.

Added option {[dash]}-no-start-stop to allow backups when Postgres is shut down. If postmaster.pid is present then {[dash]}-force is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created). This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.

Link latest always points to the last backup. This has been added for convenience and to make restores simpler.

Removed dependency on Moose. It wasn't being used extensively and makes for longer startup times.

Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.

Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.

Found and squashed a nasty bug where file_copy() was defaulted to ignore errors. There was also an issue in file_exists() that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.

Worked on improving error handling in the File object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).

If an archive directory which should be empty could not be deleted, backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).

Added RequestTTY=yes to ssh sessions. Hoping this will prevent random lockups.

Removed master_stderr_discard option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.

Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.

No restore functionality, but the backup directories are consistent data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.

Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.

Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% thread-safe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.

Checksums are lost on any resumed backup. Only the final backup will record checksum on multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.

The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.

Absolutely no documentation (outside the code). Well, excepting these release notes.

diff --git a/doc/xml/user-guide.xml b/doc/xml/user-guide.xml

pgBackRest supports locating repositories in Azure-compatible object stores. The container used to store the repository must be created in advance &mdash; pgBackRest will not do it automatically. The repository can be located in the container root (/) but it's usually best to place it in a subpath so object store logs or other data can also be stored in the container without conflicts.

Do not enable hierarchical namespace as this will cause errors during expire.

pgBackRest supports locating repositories in S3-compatible object stores. The bucket used to store the repository must be created in advance &mdash; pgBackRest will not do it automatically. The repository can be located in the bucket root (/) but it's usually best to place it in a subpath so object store logs or other data can also be stored in the bucket without conflicts.

Configure <proper>S3</proper>

The region and endpoint will need to be configured to where the bucket is located. The values given here are for the {[s3-region]} region.

Configuration information and documentation for PostgreSQL can be found in the Manual.

A somewhat novel approach is taken to documentation in this user guide. Each command is run on a virtual machine when the documentation is built from the XML source. This means you can have a high confidence that the commands work correctly in the order presented. Output is captured and displayed below the command when appropriate. If the output is not included it is because it was deemed not relevant or was considered a distraction from the narrative.

All commands are intended to be run as an unprivileged user that has sudo privileges for both the root and postgres users. It's also possible to run the commands directly as their respective users without modification and in that case the sudo commands can be stripped off.

Restore

A restore is the act of copying a backup to a system where it will be started as a live database cluster. A restore requires the backup files and one or more WAL segments in order to work correctly.

WAL is the mechanism that PostgreSQL uses to ensure that no committed changes are lost. Transactions are written sequentially to the WAL and a transaction is considered to be committed when those writes are flushed to disk. Afterwards, a background process writes the changes into the main database cluster files (also known as the heap). In the event of a crash, the WAL is replayed to make the database consistent.

WAL is conceptually infinite but in practice is broken up into individual 16MB files called segments. WAL segments follow the naming convention 0000000100000A1E000000FE where the first 8 hexadecimal digits represent the timeline and the next 16 digits are the logical sequence number (LSN).

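The naming convention above can be sketched in code. This is an illustrative helper, not part of pgBackRest: the first 8 hex digits are the timeline and the remaining 16 digits split into high and low halves of the segment's position in the WAL stream.

```python
# Decode a PostgreSQL WAL segment file name (illustrative helper only).
def parse_wal_segment(name: str) -> tuple[int, int, int]:
    assert len(name) == 24, "WAL segment names are 24 hex digits"
    timeline = int(name[0:8], 16)    # first 8 digits: timeline
    log_high = int(name[8:16], 16)   # next 8 digits: high half of position
    seg_low = int(name[16:24], 16)   # last 8 digits: low half of position
    return timeline, log_high, seg_low

print(parse_wal_segment("0000000100000A1E000000FE"))  # → (1, 2590, 254)
```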
Upgrading {[project]} from v1 to v2

Upgrading from v1 to v2 is fairly straight-forward. The repository format has not changed and all non-deprecated options from v1 are accepted, so for most installations it is simply a matter of installing the new version.

However, there are a few caveats:

The deprecated thread-max option is no longer valid. Use process-max instead.

The deprecated archive-max-mb option is no longer valid. This has been replaced with the archive-push-queue-max option which has different semantics.

The default for the backup-user option has changed from backrest to pgbackrest.

In v2.02 the default location of the configuration file has changed from /etc/pgbackrest.conf to /etc/pgbackrest/pgbackrest.conf. If /etc/pgbackrest/pgbackrest.conf does not exist, the /etc/pgbackrest.conf file will be loaded instead, if it exists.

Many option names have changed to improve consistency although the old names from v1 are still accepted. In general, db-* options have been renamed to pg-* and backup-*/retention-* options have been renamed to repo-* when appropriate.

PostgreSQL and repository options must be indexed when using the new names introduced in v2, e.g. pg1-host, pg1-path, repo1-path, repo1-type, etc.

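The rename pattern described above can be sketched as a simple lookup. The thread-max entry is stated in the text; the other entries are illustrative examples of the db-* to pg-* and indexed-name conventions, not an exhaustive or authoritative list (the real mapping lives in pgBackRest itself).

```python
# Illustrative sketch of the v1 -> v2 option rename pattern (not exhaustive).
V1_TO_V2 = {
    "thread-max": "process-max",  # deprecated name, stated in the text
    "db-path": "pg1-path",        # db-* options became indexed pg-* options
    "db-host": "pg1-host",
    "repo-path": "repo1-path",    # repository options are indexed as well
}

def translate(option: str) -> str:
    # Fall back to the original name since v1 names are still accepted.
    return V1_TO_V2.get(option, option)

print(translate("thread-max"))  # → process-max
```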
Build

{[user-guide-os]} packages for pgBackRest are available at apt.postgresql.org. If they are not provided for your distribution/version it is easy to download the source and install manually.

{[user-guide-os]} packages for pgBackRest are available from Crunchy Data or yum.postgresql.org, but it is also easy to download the source and install manually.

When building from source it is best to use a build host rather than building on production. Many of the tools required for the build should generally not be installed in production. pgBackRest consists of a single executable so it is easy to copy to a new host once it is built.

Download version <id>{[version]}</id> of <backrest/> to <path>{[build-path]}</path> path

pgBackRest should now be properly installed but it is best to check. If any dependencies were missed then you will get an error when running pgBackRest from the command line.

Make sure the installation worked

The name 'demo' describes the purpose of this cluster accurately so that will also make a good stanza name.

pgBackRest needs to know where the base data directory for the cluster is located. The path can be requested from PostgreSQL directly but in a recovery scenario the PostgreSQL process will not be available. During backups the value supplied to pgBackRest will be compared against the path that PostgreSQL is running on and they must be equal or the backup will return an error. Make sure that pg-path is exactly equal to data_directory in postgresql.conf.

By default {[user-guide-os]} stores clusters in {[pg-path-default]} so it is easy to determine the correct path for the data directory.

pgBackRest configuration files follow the Windows INI convention. Sections are denoted by text in brackets and key/value pairs are contained in each section. Lines beginning with # are ignored and can be used as comments.

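Because the files follow the INI convention, Python's configparser can read one for illustration. pgBackRest has its own parser, and the paths and option values below are made up for the example.

```python
# Reading a pgbackrest.conf-style INI file with the standard library
# (illustrative; the section and option values here are invented).
import configparser

sample = """
[global]
repo1-path=/var/lib/pgbackrest

# comments start with a hash mark
[demo]
pg1-path=/var/lib/postgresql/data
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg["demo"]["pg1-path"])  # → /var/lib/postgresql/data
```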
There are multiple ways the configuration files can be loaded:

For this demonstration the repository will be stored on the same host as the PostgreSQL server. This is the simplest configuration and is useful in cases where traditional backup software is employed to back up the database host.

{[host-pg1]}

Configure Archiving

Backing up a running cluster requires WAL archiving to be enabled. Note that at least one WAL segment will be created during the backup process even if no explicit writes are made to the cluster.

Configure archive settings

When archiving a WAL segment is expected to take more than 60 seconds (the default) to reach the repository, then the archive-timeout option should be increased. Note that this option is not the same as the PostgreSQL archive_timeout option which is used to force a WAL segment switch; useful for databases where there are long periods of inactivity. For more information on the archive_timeout option, see Write Ahead Log.

The archive-push command can be configured with its own options. For example, a lower compression level may be set to speed archiving without affecting the compression used for backups.

Config <cmd>archive-push</cmd> to use a lower compression level

Configure Repository Encryption

The repository will be configured with a cipher type and key to demonstrate encryption. Encryption is always performed client-side even if the repository type (e.g. S3 or other object store) supports encryption.

It is important to use a long, random passphrase for the cipher key. A good way to generate one is to run: openssl rand -base64 48.

Configure <backrest/> repository encryption

By default pgBackRest will attempt to perform an incremental backup. However, an incremental backup must be based on a full backup and since no full backup existed pgBackRest ran a full backup instead.

The type option can be used to specify a full or differential backup.

This time there was no warning because a full backup already existed. While incremental backups can be based on a full or differential backup, differential backups must be based on a full backup. A full backup can be performed by running the backup command with {[dash]}-type=full.

During an online backup pgBackRest waits for WAL segments that are required for backup consistency to be archived. This wait time is governed by the archive-timeout option which defaults to 60 seconds. If archiving an individual segment is known to take longer then this option should be increased.

Restore a Backup

Backups can protect you from a number of disaster scenarios, the most common of which are hardware failure and data corruption. The easiest way to simulate data corruption is to remove an important cluster file.

Stop the {[postgres-cluster-demo]} cluster and delete the <file>pg_control</file> file

Monitoring

Monitoring is an important part of any production system. There are many tools available and pgBackRest can be monitored on any of them with a little work.

pgBackRest can output information about the repository in JSON format which includes a list of all backups for each stanza and WAL archive info.

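Consuming that JSON output can be sketched as follows. The sample document here is hand-written and heavily abbreviated; the exact field names and structure of the real `pgbackrest info --output=json` document should be checked against the command's actual output, so treat the shape below as an assumption.

```python
# Sketch of walking an abbreviated, hand-written info-style JSON document
# (the field names and nesting are assumptions for illustration).
import json

sample = '[{"name": "demo", "backup": [{"label": "20230501-000001F", "type": "full"}]}]'

for stanza in json.loads(sample):
    for backup in stanza["backup"]:
        print(stanza["name"], backup["label"], backup["type"])
```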
In <postgres/>

The PostgreSQL COPY command allows pgBackRest info to be loaded into a table. The following example wraps that logic in a function that can be used to perform real-time queries.

Load <backrest/> info function for <postgres/>

This syntax requires jq v1.5.

jq may round large numbers such as system identifiers. Test your queries carefully.
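The rounding warning above comes from jq storing numbers as 64-bit floats, which are exact only up to 2**53; a 64-bit PostgreSQL system identifier can exceed that range. The effect can be demonstrated with a float round-trip (the identifier value is made up):

```python
# 64-bit floats are exact only up to 2**53, so a large system identifier
# does not survive a float round-trip (value below is invented).
system_id = 6569239123849665679
print(system_id == int(float(system_id)))  # → False: precision lost
print(2**53 == int(float(2**53)))          # → True: still exact at 2**53
```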
Retention

Generally it is best to retain as many backups as possible to provide a greater window for Point-in-Time Recovery, but practical concerns such as disk space must also be considered. Retention options remove older backups once they are no longer needed.

Full Backup Retention

The repo1-retention-full-type determines how the option repo1-retention-full is interpreted; either as the count of full backups to be retained or how many days to retain full backups. New backups must be completed before expiration will occur &mdash; that means if repo1-retention-full-type=count and repo1-retention-full=2 then there will be three full backups stored before the oldest one is expired, or if repo1-retention-full-type=time and repo1-retention-full=20 then there must be one full backup that is at least 20 days old before expiration can occur.

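The count-based rule above can be sketched in code: expiration runs only after a new full backup completes, so with repo1-retention-full=2 the repository briefly holds three full backups. The helper below is hypothetical, not pgBackRest's implementation.

```python
# Hypothetical sketch of count-based full backup expiration.
def expire_full(full_backups: list[str], retention: int) -> list[str]:
    # Keep only the newest `retention` full backups.
    return full_backups[-retention:]

backups = ["F1", "F2"]   # already at the retention limit
backups.append("F3")     # a new full backup completes...
print(len(backups))      # → 3 full backups exist momentarily
backups = expire_full(backups, retention=2)
print(backups)           # → ['F2', 'F3']
```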
Configure <br-option>repo1-retention-full</br-option>

Archive is expired because WAL segments were generated before the oldest backup. These are not useful for recovery &mdash; only WAL segments generated after a backup can be used to recover that backup.

Perform a full backup

Differential Backup Retention

Set repo1-retention-diff to the number of differential backups required. Differentials only rely on the prior full backup so it is possible to create a rolling set of differentials for the last day or more. This allows quick restores to recent points-in-time but reduces overall space consumption.

Configure <br-option>repo1-retention-diff</br-option>

With repo1-retention-diff=1, two differentials will need to be performed before one is expired. An incremental backup is added to demonstrate incremental expiration. Incremental backups cannot be expired independently &mdash; they are always expired with their related full or differential backup.

Perform differential and incremental backups

Archive Retention

Although pgBackRest automatically removes archived WAL segments when expiring backups (the default expires WAL for full backups based on the repo1-retention-full option), it may be useful to expire archive more aggressively to save disk space. Note that full backups are treated as differential backups for the purpose of differential archive retention.

Expiring archive will never remove WAL segments that are required to make a backup consistent. However, since Point-in-Time-Recovery (PITR) only works on a continuous WAL stream, care should be taken when aggressively expiring archive outside of the normal backup expiration process. To determine what will be expired without actually expiring anything, the dry-run option can be provided on the command line with the expire command.

File Ownership

If a restore is run as a non-root user (the typical scenario) then all files restored will belong to the user/group executing pgBackRest. If existing files are not owned by the executing user/group then an error will result if the ownership cannot be updated to the executing user/group. In that case the file ownership will need to be updated by a privileged user before the restore can be retried.

If a restore is run as the root user then pgBackRest will attempt to recreate the ownership recorded in the manifest when the backup was made. Only user/group names are stored in the manifest so the same names must exist on the restore host for this to work. If the user/group name cannot be found locally then the user/group of the data directory will be used and finally root if the data directory user/group cannot be mapped to a name.

Delta Option

Restore a Backup in Quick Start required the database cluster directory to be cleaned before the restore could be performed. The delta option allows pgBackRest to automatically determine which files in the database cluster directory can be preserved and which ones need to be restored from the backup &mdash; it also removes files not present in the backup manifest so it will dispose of divergent changes. This is accomplished by calculating a SHA-1 cryptographic hash for each file in the database cluster directory. If the SHA-1 hash does not match the hash stored in the backup then that file will be restored. This operation is very efficient when combined with the process-max option. Since the PostgreSQL server is shut down during the restore, a larger number of processes can be used than might be desirable during a backup when the server is running.

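The delta decision above can be sketched as follows: hash the existing file and restore it only when the hash differs from the one recorded for the backup. The function and manifest shape are illustrative, not pgBackRest's actual implementation.

```python
# Illustrative sketch of the delta-restore decision: compare a file's SHA-1
# against the hash recorded in the backup (names here are hypothetical).
import hashlib

def needs_restore(file_bytes: bytes, recorded_sha1: str) -> bool:
    return hashlib.sha1(file_bytes).hexdigest() != recorded_sha1

data = b"cluster file contents"
recorded = hashlib.sha1(data).hexdigest()
print(needs_restore(data, recorded))         # → False: file can be preserved
print(needs_restore(b"diverged", recorded))  # → True: file must be restored
```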
+

Restore a Backup in Quick Start required the database cluster directory to be cleaned before the restore could be performed. The delta option allows to automatically determine which files in the database cluster directory can be preserved and which ones need to be restored from the backup &mdash; it also removes files not present in the backup manifest so it will dispose of divergent changes. This is accomplished by calculating a SHA-1 cryptographic hash for each file in the database cluster directory. If the SHA-1 hash does not match the hash stored in the backup then that file will be restored. This operation is very efficient when combined with the process-max option. Since the server is shut down during the restore, a larger number of processes can be used than might be desirable during a backup when the server is running.

Stop the {[postgres-cluster-demo]} cluster, perform delta restore
Restore Selected Databases

There may be cases where it is desirable to selectively restore specific databases from a cluster backup. This could be done for performance reasons or to move selected databases to a machine that does not have enough space to restore the entire cluster backup.


To demonstrate this feature, two databases are created: test1 and test2.


One of the main reasons to use selective restore is to save space. The size of the test1 database is shown here so it can be compared with the disk utilization after a selective restore.


Show space used by test1 database

Stop the cluster and restore only the test2 database. Built-in databases (template0, template1, and postgres) are always restored.
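Selection is made with the db-include option. A hypothetical invocation (the stanza name demo and a configured repository are assumptions for illustration; it requires a live <backrest/> installation):

```shell
# Restore only test2; built-in databases are always restored.
# --delta reuses files already present in the cluster directory.
sudo -u postgres pgbackrest --stanza=demo --delta --db-include=test2 restore
```

The db-include option may be passed more than once to restore several databases.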


Recovery may error unless --type=immediate is specified. This is because after consistency is reached <postgres/> will flag zeroed pages as errors, even for a full-page write. For <postgres/> &ge; 13 the ignore_invalid_pages setting may be used to ignore invalid pages. In this case it is important to check the logs after recovery to ensure that no invalid pages were reported in the selected databases.

The test1 database, despite successful recovery, is not accessible. This is because the entire database was restored as sparse, zeroed files. <postgres/> can successfully apply WAL on the zeroed files, but the database as a whole will not be valid because key files contain no data. This is purposeful, to prevent the database from being accidentally used when it might contain partial data that was applied during WAL replay.


Attempting to connect to the test1 database will produce an error

Since the test1 database is restored with sparse, zeroed files it will only require as much space as the amount of WAL that is written during recovery. While the amount of WAL generated during a backup and applied during recovery can be significant it will generally be a small fraction of the total database size, especially for large databases where this feature is most likely to be useful.
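A quick demonstration (assuming GNU coreutils and a sparse-capable filesystem) of why sparse, zeroed files consume almost no space: the filesystem allocates blocks only for data that is actually written.

```shell
# Create a 1GiB file without writing any data blocks
truncate -s 1G sparse.dat
apparent=$(stat -c %s sparse.dat)                # apparent size in bytes
allocated=$(( $(stat -c %b sparse.dat) * 512 ))  # bytes actually allocated
echo "apparent: $apparent, allocated: $allocated"
```

The apparent size is the full 1GiB while the allocated size stays near zero; WAL applied during recovery then fills in only the pages it touches.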


It is clear that the test1 database uses far less disk space during the selective restore than it would have if the entire database had been restored.


It is important to represent the time as reckoned by <postgres/> and to include timezone offsets. This reduces the possibility of unintended timezone conversions and an unexpected recovery result.
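The target time should therefore carry an explicit UTC offset. On the database host the authoritative value comes from <postgres/> itself (e.g. psql -Atc "select current_timestamp"); for illustration, the same shape can be produced from the shell:

```shell
# Capture a timestamp with an explicit UTC offset, e.g. 2023-05-02 12:57:12+0300
target_time=$(date +"%Y-%m-%d %H:%M:%S%z")
echo "$target_time"
# Hypothetical restore using this target (stanza name "demo" assumed):
#   pgbackrest --stanza=demo --type=time "--target=$target_time" restore
```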


Get the time from <postgres/>

Now that the time has been recorded, the table is dropped. In practice, finding the exact time that the table was dropped is much harder than in this example. It may not be possible to find the exact time, but some forensic work should be able to get you close.


Drop the important table

The <postgres/> log also contains valuable information. It will indicate the time and transaction where the recovery stopped and also give the time of the last transaction to be applied.


Examine the <postgres/> log output

This example was rigged to give the correct result. If a backup after the required time is chosen then <postgres/> will not be able to recover the lost table. <postgres/> can only play forward, not backward. To demonstrate this the important table must be dropped (again).


Drop the important table (again)

Looking at the log output, it's not obvious that recovery failed to restore the table. The key is to look for the presence of the recovery stopping before... and last completed transaction... log messages. If they are not present, then the recovery to the specified point-in-time was not successful.


Examine the <postgres/> log output to discover the recovery was not successful
Dedicated Repository Host

The configuration described in Quickstart is suitable for simple installations, but for enterprise configurations it is more typical to have a dedicated repository host where the backups and WAL archive files are stored. This separates the backups and WAL archive from the database server so that database host failures have less impact. It is still a good idea to employ traditional backup software to back up the repository host.


On <postgres/> hosts, pg1-path is required to be the path of the local PostgreSQL cluster and no pg1-host should be configured. When configuring a repository host, the pgbackrest configuration file must have the pg-host option configured to connect to the primary and standby (if any) hosts. The repository host has the only pgbackrest configuration that should be aware of more than one host. Order does not matter, e.g. pg1-path/pg1-host, pg2-path/pg2-host can be primary or standby.
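As a sketch, a repository-host pgbackrest.conf might look like the following (the stanza name, host names, and paths are assumptions for illustration):

```ini
[demo]
pg1-host=pg1
pg1-path=/var/lib/postgresql/15/demo
pg2-host=pg2
pg2-path=/var/lib/postgresql/15/demo

[global]
repo1-path=/var/lib/pgbackrest
```

Either pgN-host/pgN-path pair may point at the primary or a standby; the role is detected at runtime.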


The {[br-user]} user is created to own the repository. Any user can own the repository but it is best not to use postgres (if it exists) to avoid confusion.


Create <user>{[br-user]}</user> user

{[pg-home-path]}

ssh has been configured to only allow <backrest/> to be run via passwordless ssh. This enhances security in the event that one of the service accounts is hijacked.
Starting and Stopping

Sometimes it is useful to prevent <backrest/> from running on a system. For example, when failing over from a primary to a standby it's best to prevent <backrest/> from running on the old primary in case <postgres/> gets restarted or can't be completely killed. This will also prevent <backrest/> from running on cron.


Stop the <backrest/> services

Specify the --force option to terminate any processes that are currently running. If <backrest/> is already stopped then stopping again will generate a warning.


Stop the <backrest/> services again
Replication

Replication allows multiple copies of a cluster (called standbys) to be created from a single primary. The standbys are useful for balancing reads and for providing redundancy in case the primary host fails.



The hot_standby setting must be enabled before starting <postgres/> to allow read-only connections on {[host-pg2]}. Otherwise, connection attempts will be refused. The rest of the configuration is in case the standby is promoted to a primary.


Configure <postgres/>

The <postgres/> log gives valuable information about the recovery. Note especially that the cluster has entered standby mode and is ready to accept read-only connections.


Examine the <postgres/> log output for log messages indicating success

So, what went wrong? Since <backrest/> is pulling WAL segments from the archive to perform replication, changes won't be seen on the standby until the WAL segment that contains those changes is pushed from {[host-pg1]}.


This can be done manually by calling {[pg-switch-wal]}() which pushes the current WAL segment to the archive (a new WAL segment is created to contain further changes).

Streaming Replication

Instead of relying solely on the WAL archive, streaming replication makes a direct connection to the primary and applies changes as soon as they are made on the primary. This results in much less lag between the primary and standby.


Streaming replication requires a user with the replication privilege.


The pg_hba.conf file must be updated to allow the standby to connect as the replication user. Be sure to replace the IP address below with the actual IP address of your {[host-pg2]}. A reload will be required after modifying the pg_hba.conf file.
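A sketch of such an entry (the replication user name and standby address are assumptions; adjust the METHOD to your authentication setup):

```
# TYPE  DATABASE     USER        ADDRESS          METHOD
host    replication  replicator  192.168.0.3/32   md5
```

The reload can be performed with pg_ctl reload or SELECT pg_reload_conf().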


Create <file>pg_hba.conf</file> entry for replication user

The primary_conninfo setting has been written into the {[pg-recovery-file-demo]} file because it was configured as a recovery-option in {[project-exe]}.conf. The {[dash]}-type=preserve option can be used with the restore to leave the existing {[pg-recovery-file-demo]} file in place if that behavior is preferred.

By default {[user-guide-os]} stores the postgresql.conf file in the data directory. That means the change made to postgresql.conf was overwritten by the last restore and the hot_standby setting must be enabled again. Other solutions to this problem are to store the postgresql.conf file elsewhere or to enable the hot_standby setting on the {[host-pg1]} host where it will be ignored.


Enable <pg-option>hot_standby</pg-option>
Asynchronous Archiving

Asynchronous archiving is enabled with the archive-async option. This option enables asynchronous operation for both the archive-push and archive-get commands.



A spool path is required. The commands will store transient data here but each command works quite a bit differently so spool path usage is described in detail in each section.


Create the spool directory

The spool path must be configured and asynchronous archiving enabled. Asynchronous archiving automatically confers some benefit by reducing the number of connections made to remote storage, but setting process-max can drastically improve performance by parallelizing operations. Be sure not to set process-max so high that it affects normal database operations.
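A hedged sketch of this configuration (the spool path and process-max values are illustrative, not recommendations):

```ini
[global]
archive-async=y
spool-path=/var/spool/pgbackrest

[global:archive-push]
process-max=4

[global:archive-get]
process-max=2
```

Placing process-max in the command-specific sections keeps it from affecting backup and restore, and allows different degrees of parallelism for push and get.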


Configure the spool path and asynchronous archiving

process-max is configured using command sections so that the option is not used by backup and restore. This also allows different values for archive-push and archive-get.

For demonstration purposes, streaming replication will be broken to force <postgres/> to get WAL using the restore_command.

Archive Push

The asynchronous archive-push command offloads WAL archiving to a separate process (or processes) to improve throughput. It works by looking ahead to see which WAL segments are ready to be archived beyond the request that <postgres/> is currently making via the archive_command. WAL segments are transferred to the archive directly from the pg_xlog/pg_wal directory, and success is only returned by the archive_command when the WAL segment has been safely stored in the archive.



The spool path holds the current status of WAL archiving. Status files written into the spool directory are typically zero length and should consume a minimal amount of space (a few MB at most) and very little IO. All the information in this directory can be recreated so it is not necessary to preserve the spool directory if the cluster is moved to new hardware.


In the original implementation of asynchronous archiving, WAL segments were copied to the spool directory before compression and transfer. The new implementation copies WAL directly from the pg_xlog directory. If asynchronous archiving was utilized in v1.12 or prior, read the v1.13 release notes carefully before upgrading.

The [stanza]-archive-push-async.log file can be used to monitor the activity of the asynchronous process. A good way to test this is to quickly push a number of WAL segments.


Test parallel asynchronous archiving
Archive Get

The asynchronous archive-get command maintains a local queue of WAL to improve throughput. If a WAL segment is not found in the queue it is fetched from the repository along with enough consecutive WAL to fill the queue. The maximum size of the queue is defined by archive-get-queue-max. Whenever the queue is less than half full more WAL will be fetched to fill it.



Asynchronous operation is most useful in environments that generate a lot of WAL or have a high latency connection to the repository storage (i.e., S3 or other object stores). In the case of a high latency connection it may be a good idea to increase process-max.
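For a high-latency object store, the queue size and parallelism might be raised together; a sketch (values are assumptions, not tuned recommendations):

```ini
[global:archive-get]
process-max=4
archive-get-queue-max=1GB
```

A larger queue lets more WAL be prefetched per round trip, at the cost of local spool space.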


The [stanza]-archive-get-async.log file can be used to monitor the activity of the asynchronous process.

Backup from a Standby

<backrest/> can perform backups on a standby instead of the primary. Standby backups require the {[host-pg2]} host to be configured and the backup-standby option enabled. If more than one standby is configured, then the first running standby found will be used for the backup.
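A sketch of the required configuration on the repository host (the stanza name, host names, user, and path are assumptions for illustration):

```ini
[demo]
pg2-host=pg2
pg2-host-user=postgres
pg2-path=/var/lib/postgresql/15/demo

[global]
backup-standby=y
```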


Configure <br-option>pg2-host</br-option>/<br-option>pg2-host-user</br-option> and <br-option>pg2-path</br-option>

Both the primary and standby databases are required to perform the backup, though the vast majority of the files will be copied from the standby to reduce load on the primary. The database hosts can be configured in any order. <backrest/> will automatically determine which is the primary and which is the standby.


Backup the {[postgres-cluster-demo]} cluster from <host>pg2</host>

This incremental backup shows that most of the files are copied from the {[host-pg2]} host and only a few are copied from the {[host-pg1]} host.


<backrest/> creates a standby backup that is identical to a backup performed on the primary. It does this by starting/stopping the backup on the {[host-pg1]} host, copying only files that are replicated from the {[host-pg2]} host, then copying the remaining few files from the {[host-pg1]} host. This means that logs and statistics from the primary database will be included in the backup.
