* nginx: add --nginx-sleep-seconds
As described in #7422, reloading nginx is an asynchronous process, and
Certbot does not know when it is complete. In an environment where this
reload takes a long time, the nginx plugin responds to the ACME challenge
before the nginx server is ready to serve it, causing the challenge to fail.
Following the discussion in PR #7740, this commit introduces a new flag,
--nginx-sleep-seconds, which can be used to increase the duration that
Certbot waits for nginx to reload from its previously hard-coded value of
1 second.
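A minimal sketch of the idea, assuming the flag value is wired through to the reload helper (names follow the plugin loosely and are not an exact copy of its code):
```python
# Rough sketch: wait a configurable amount of time after asking nginx to
# reload, instead of the previously hard-coded 1 second. `sleep_duration`
# would come from the new --nginx-sleep-seconds flag.
import subprocess
import time

def nginx_restart(nginx_ctl, nginx_conf, sleep_duration=1):
    """Ask nginx to reload its configuration and wait for it to settle."""
    proc = subprocess.run([nginx_ctl, "-c", nginx_conf, "-s", "reload"])
    if proc.returncode != 0:
        # nginx was probably not running yet; start it instead.
        subprocess.run([nginx_ctl, "-c", nginx_conf], check=True)
    # The reload is asynchronous, so give nginx time to become ready
    # before Certbot answers the ACME challenge.
    time.sleep(sleep_duration)
```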
Fixes #7422
* update CHANGELOG
* nginx: update docstring for nginx_restart
Fixes #8169
This PR improves the snap remote-build script by dumping the output of `snapcraft remote-build` when unexpected behavior is detected:
* when all builds for a project finish with a zero status code, and none of them are marked as failed, we expect to have all the associated snap files available locally.
* when some builds are marked as failed, we expect to have a build output for each of them available locally.
In these two situations, if the expectations are not met, the script displays the output of `snapcraft remote-build` itself. I also added error handling to deal gracefully with the absence of an expected build output on the local machine.
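A rough sketch of the check described above (illustrative only; the function and file-name patterns are assumptions, not the script's actual API):
```python
# If the snaps or build logs expected after `snapcraft remote-build` are not
# present locally, dump the captured snapcraft output to help debugging on CI.
import glob
import sys

def check_remote_build_results(project, archs, failed_archs, snapcraft_output):
    missing_snaps = [a for a in archs if a not in failed_archs
                     and not glob.glob(f"{project}_*_{a}.snap")]
    missing_logs = [a for a in failed_archs
                    if not glob.glob(f"{project}_{a}.txt")]
    if missing_snaps or missing_logs:
        print(f"Unexpected remote-build state for {project}, "
              "dumping snapcraft remote-build output:")
        print(snapcraft_output)
        sys.exit(1)
```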
* Improve log dump in snaps remote builds when an unexpected behavior is detected
* Use the manager
* Update tools/snap/build_remote.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #7863.
Connect command is `sudo snap connect certbot-dns-dnsimple:certbot-metadata certbot:certbot-metadata`
Logs are `cat /var/snap/certbot-dns-dnsimple/current/debuglog`
Echoes in the hook are only printed to the terminal when it exits 0; otherwise, check the `debuglog` file mentioned above.
Manual tests include all combinations of connected/unconnected, installed for the first or second time, etc., with passing and failing version checks.
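A minimal sketch of the version gate the post-refresh hook ends up implementing (file path and helper name are illustrative; the real hook is a shell script that delegates the comparison to python3):
```python
# Compare the Certbot version exposed over the certbot-metadata content
# interface with the plugin's own version, and refuse the refresh if Certbot
# is older than the plugin.
import sys
from packaging import version

def refresh_allowed(certbot_version_file, plugin_version):
    with open(certbot_version_file) as f:
        certbot_version = f.read().strip()
    return version.parse(certbot_version) >= version.parse(plugin_version)

if __name__ == "__main__":
    # Hypothetical arguments, for illustration only.
    if not refresh_allowed(sys.argv[1], sys.argv[2]):
        sys.exit("Certbot is too old for this plugin version; refresh refused.")
```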
* Make dnsimple not update if certbot is too old
* create an interface to read cb version
* add missing newline
* fix syntax
* trying to figure out the consumer syntax
* trying to figure out the consumer syntax, again
* only check post first install
* valid setting name
* test for first install differently
* snapctl doesn't error if it fails I guess
* time to do some print debugging
* continue playing with syntax
* once again, fooled by bash int vs string comparisons!
* debugging
* if we use post and pre together we can do this
* is this how content interface syntax works
* it's a directory?
* more debug
* what's that error message again?
* try other syntax
* if it's not documented just guess at syntax
* actually, I think this is the syntax
* oops didn't set for new hook
* test passing information along connection
* interface attributes can only be set during the execution of prepare hooks
* just do it with main connection
* undo last few test changes
* Add some printing to make sure we understand what's going on
* create empty directory to bind to
* put mkdir in the correct part
* let's inspect the environment
* it can't run bash directly.
* perhaps only directories can be shared via the content interface
* update name of folder
* echo to debug log to understand what's going on exactly. we have file access though!
* update grep for new file
* more printing
* echo to the debug log
* ok NOW all print statements are going to the log
* why does echo need two >s
* remove unnecessary extra check, just check if the init file is available
* check if certbot version will be available post-refresh after all
* pre-refresh hook is not necessary to get certbot version
* update mkdir so we don't have to clean each time
* try comparing version numbers in python
* it's python3
* we need different prints for if we succeed or if we fail.
* improve bash syntax
* remove some debugging code
* Remove debug script
* remove spaces for clarity
* consolidate parts and remove more test code
* s/certbot-version/certbot-metadata/g
* use sys.exit instead of exit
* find and save certbot version on the certbot side
* change presence test to new file
* switch to using packaging.version.parse instead of LooseVersion
* switch to requiring certbot version >= plugin version
* add plugin snap changes to generate script
* Add comment to generation file saying not to edit generated files manually
* Create post-refresh hook for all plugins with script
* generate files using new script
* update snapcraft.yaml files for plugins
* bin/sh comes first
* Add packaging to install_requires
* Check that refresh is allowed in integration test
* switch plug and slot names in integration test
* Update tools/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* small bash fixes
* Update snap readme with new instructions
* Run tools/generate_dnsplugins_postrefreshhook.sh
* Update tools/snap/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Snapcraft has a feature named `remote-build`. It allows snaps to be compiled for several architectures on Canonical's dedicated build infrastructure. Compared to the QEMU-enabled Docker approach used currently, remote builds have several advantages:
* the builds are done on the native architecture, making them substantially faster than what can be achieved under QEMU
* it avoids depending on `adferrand/snapcraft` (which could otherwise be addressed by the merge of https://github.com/snapcore/snapcraft/pull/3144, but this will not happen in the short term)
* when everything goes well, all snap builds can run in parallel and be orchestrated by a single Azure Pipelines job, since the heavy work is done remotely.
This PR makes the necessary adjustments to use the remote build feature instead of the QEMU-enabled Docker approach.
One complex task was getting the `certbot` snap to compile on `arm64` and `armhf`. On these architectures the pre-compiled wheel for `cffi` is not available, so it needs to be compiled during the snap build. Sadly, the current version of the python plugin in snapcraft is limited by the fact that `wheel` is not installed in the virtual environment set up to build the python packages, and there is no easy way to change that short of overriding the whole build process.
In the long term, I think I will open a PR on the `snapcraft` Git repository to provide a proper solution. For the short term, I used the ability to pass arguments to the `venv` module to add the `--system-site-packages` flag. With it, the virtual environment can use the system site packages, where `wheel` is available.
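For illustration, this is roughly what the workaround amounts to (a sketch, not snapcraft's actual plugin code): creating the build virtual environment with access to the system site packages so the distribution-provided `wheel` is importable.
```python
# Create a virtual environment that can see the system site packages, where
# `wheel` is already installed, so building `cffi` from source can succeed.
import venv

builder = venv.EnvBuilder(system_site_packages=True, with_pip=True)
builder.create("build-venv")
```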
The other significant additions are in the `tools/snap/build_remote.py` script. While invoking a remote build on a local machine is quite straightforward, it is another story on CI, because we need build auditability and resiliency for these non-interactive runs. In particular, we should avoid inconsistent results in the nightly and release pipelines as much as possible.
So this script wraps the `snapcraft` call in retry logic and improves its logging in the context of parallel builds.
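A hedged sketch of the retry idea (the command-line flags shown and the function name are assumptions for illustration, not the exact contents of `tools/snap/build_remote.py`):
```python
# Retry `snapcraft remote-build` a few times, capturing its output so it can
# be dumped later if the expected artifacts never show up.
import subprocess
import time

MAX_ATTEMPTS = 3

def remote_build(project_dir, archs):
    cmd = ["snapcraft", "remote-build", "--launchpad-accept-public-upload",
           "--build-on", ",".join(archs)]
    for attempt in range(1, MAX_ATTEMPTS + 1):
        proc = subprocess.run(cmd, cwd=project_dir,
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return proc.stdout
        print(f"[{project_dir}] attempt {attempt}/{MAX_ATTEMPTS} failed, retrying")
        time.sleep(60)
    raise RuntimeError(f"snapcraft remote-build kept failing for {project_dir}")
```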
The minor modifications are mainly about ensuring that the plugins can also be built (some of them need `cffi` too, for instance) and simplifying the Azure Pipelines configuration, since all snaps are now retrieved in one go.
Please note that the `test-` branches still run only the `amd64` builds. I noticed that `arm64` and `armhf` builds tend to be very slow to start (up to 40 minutes), while `amd64` builds wait at most 10 minutes, and usually only about 30 seconds when the overall load on Canonical's side is low.
For this to work on the `certbot/certbot` repository, one secure file needs to be added, because `snapcraft` must be authenticated against Launchpad with credentials that allow remote builds. To do so, from a local machine that has this capability, extract the existing file at `$HOME/.local/share/snapcraft/provider/launchpad/credentials` and register it as a secure file in Azure Pipelines under the name `snapcraftRemoteBuildCredentials`.
* Define scripts
* Setup pipeline to use remote builds
* Focus on packaging builds
* Set credentials
* Setup git
* Launch all builds in parallel
* Add dev dependencies to build cffi and cryptography
* Convert to a python logic
* Reorganize the pipeline
* Handle the fact that snap builds may be taken from cache
* Generate constraints
* Exit code
* Check existence
* Try to handle better non zero exit code
* Add --system-site-packages to get wheel in the venv
* Add executable permissions
* Troubleshoot
* Dynamic display, take the maximum timeout for snap build job
* Allow retries if the remote build does not start
* Trigger only amd64 builds for test branches
* Exit properly
* Update snapcraft.yaml
* Fix snap run
* Set secured file name
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Move order in deps
* Reactivate all builds
* Use Manager() as a context manager
* Use Pool as a context manager
* Some nice refactorings
* Check snapcraft execution interruption with exit codes
* Use f-string and format expressions
* Start log
* Consistent use of single/double quotes
* Better loop to extract lines
* Retry on build failures
* Few optimizations
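A small sketch of what two of the refactoring commits above refer to: using `multiprocessing.Manager` and `Pool` as context managers to run the per-snap builds in parallel and share their status. This is an illustration; `build_one` is a stand-in for the real build function.
```python
from multiprocessing import Manager, Pool

def build_one(args):
    project, status = args
    # ... run the (retried) snapcraft remote-build for `project` here ...
    status[project] = "done"

def build_all(projects):
    # Both Manager and Pool clean themselves up when used as context managers.
    with Manager() as manager:
        status = manager.dict()
        with Pool(processes=len(projects)) as pool:
            pool.map(build_one, [(project, status) for project in projects])
        return dict(status)
```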
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #8149
This PR adds warnings about the upcoming deprecation of Python 3.5 support in Certbot.
* Add warnings about Python 3.5 deprecation in Certbot
* Update certbot/certbot/__init__.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
«When you add or change DNS zones or records, your changes will now be
reflected at our authoritative nameservers in under 60 seconds. This is
down from the previous “every quarter hour” approach that we had for so
long.» - https://www.linode.com/blog/linode/linode-turns-17/
Fixes #4351
This PR proposes a solution for using third-party plugins without the `pip_package_name:` prefix in the plugin name, plugin-specific flags, and keys in DNS plugin credential files.
A first solution was proposed in #6372, and a more advanced one in #7026. #7026 also added a deprecation warning when the old plugin name `pip_package_name:plugin_name` was used.
However, there were some limitations with #7026, in particular that existing flags of the form `pip_package_name:dns_plugin_option`, or keys like `pip_package_name:key` in DNS plugin credential files, were no longer read. This would have led to silent failures during renewals if the user did not explicitly update the configuration.
I tried to fix that on top of #7026, but the changes needed are complex and create new problems of their own, such as unexpected erasure of values in renewal configurations.
Instead, this PR tries a new approach: when `find_all()` is called, the `PluginsRegistry` in the `certbot._internal.plugins.disco` module registers two plugins for each entry point referring to a third-party plugin:
* one plugin with the name `plugin_name`
* one plugin with the name `pip_package_name:plugin_name` (like before)
This way, every existing configuration (credentials, renewal configuration, CLI flags) continues to work without any change, and new configurations can refer to the new unprefixed plugin name and use the appropriate CLI flags and credentials without the prefix.
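An illustrative sketch of the registration idea (not the actual `certbot._internal.plugins.disco` code; names are simplified):
```python
# Register each third-party entry point under both its short name and the
# legacy "pip_package_name:plugin_name" form, so old configurations keep
# working while new ones can drop the prefix.
def register_third_party(registry, entry_point):
    plugin_cls = entry_point.load()
    short_name = entry_point.name
    prefixed_name = f"{entry_point.dist.key}:{entry_point.name}"
    registry[short_name] = plugin_cls
    # The prefixed entry is kept only for backwards compatibility.
    registry[prefixed_name] = plugin_cls
```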
On top of that, I added the deprecation path from #7026 (thanks @coldfix!):
* the plugin named `pip_package_name:plugin_name` is hidden from `certbot plugins` output
* the help for this plugin is still displayed, and a deprecation warning is displayed in the description
* when invoked, the same deprecation warning is displayed in the terminal
* Support both prefixed and unprefixed third-party plugins
* Adapt tests
* Add deprecation path
* Named parameters
* Add deprecation warning in CLI
* Add a changelog
Fixes #8041
This PR makes Azure Pipelines build the DNS plugin snaps for the 3 architectures during CI.
It reuses the existing logic for building the Certbot snap to deploy a QEMU environment with Docker, and leverages the local PyPI index to speed up the build when installing `cffi` and `cryptography`.
All DNS plugin snaps are built in a single Docker container, to avoid paying the cost of installing the system dependencies on the first start of `snapcraft` more than once, which significantly speeds up the build.
In the end, all `amd64` DNS plugin snaps are built within 6 minutes. For `arm64` and `armhf`, it takes around 40 minutes, which is actually quite fast considering that 14 DNS plugin snaps are built.
However, building for all 3 architectures is still an extremely heavy task, even for Azure Pipelines with its capacity of 10 parallel jobs. That is why I skip the `arm64` and `armhf` builds for `full-test-suite` and run them only for `nightly` and `release`. This means, however, that these builds will not be done for release branches. If this is a problem, I can add a more elaborate suspend condition to trigger the builds in that case.
All snaps are stored in the pipeline artifacts storage, making them available for publication during a `release` pipeline.
The PR is marked as a draft for now, because I am temporarily using `pr_test-suite` to validate the packaging jobs when commits are pushed. Once the PR is ready, I will revert it to the normal configuration (running the standard tests).
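Purely for illustration, a sketch of the single-container approach described above (the image name, mount point, and plugin list are assumptions, not the actual CI configuration):
```python
# Build every DNS plugin snap inside one snapcraft-capable container, so the
# system dependencies snapcraft installs on its first run are installed once.
import subprocess

PLUGINS = ["certbot-dns-cloudflare", "certbot-dns-dnsimple"]  # ...and the rest

def build_dns_plugin_snaps(repo_root, image="adferrand/snapcraft:latest"):
    script = " && ".join(f"cd /certbot/{plugin} && snapcraft"
                         for plugin in PLUGINS)
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{repo_root}:/certbot",
         image, "sh", "-c", script],
        check=True)
```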
* Configure a script to build DNS snaps
* Focus on packaging
* Trigger all architectures
* Add extra index
* Prepare conditional suspend
* Set final suspend logic
* Set final suspend value
* Loop for publication
* Use python3
* Clean before build
* Add a test
* Add test job in Azure
* Preserve env
* Apply normal config for pipelines
* Skip QEMU jobs only for test branches
* Makes snap run tests depends also on the Certbot snap build
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/stages/deploy-stage.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* More accurate way to get the plugin snap name
* Integrate DNS snap tests into certbot-ci
* Fixes
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Clean an `__init__.py` file
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Short PR to improve some things during snap builds:
* clean up snapcraft assets before a build, to avoid some weird errors when two builds are executed consecutively without cleanup
* use `python3` explicitly in `tools/simple_http_server.py`, because on several recent distributions the `python` binary is no longer exposed, only `python2` or `python3`.
If you go to a URL like https://snapcraft.io/certbot/releases and try to move the Certbot snap into the candidate or stable channels, you cannot do so. There is a tooltip which says that revisions with the grade devel cannot be promoted to candidate or stable channels.
The documentation for `grade` can be found at https://snapcraft.io/docs/snapcraft-yaml-reference where it says the value is optional and
> Defines the quality grade of the snap.
> Type: enum
> Can be either devel (i.e. a development version of the snap, so not to be published to the stable or candidate channels) or stable (i.e. a stable release or release candidate, which can be released to all channels)
> Example: [stable or devel]
I'm working on a proposal for our next steps for snaps which involves moving the Certbot snap to the stable channel. I of course won't make those changes without giving others a chance to share their opinion, but I'd like to avoid the situation where we're technically unable to move the Certbot 1.6.0 snap to the stable channel despite wanting to do so.
I started to make the same changes to the DNS plugins, but I personally think it's too soon to propose stable versions of those yet and `grade` is a simple way to ensure we don't accidentally promote something there.
You can see the snap being built and run successfully with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2246&view=results.
Fixes #8071 and fixes https://github.com/certbot/certbot/issues/8110.
This PR migrates every job from Travis to Azure Pipelines.
It essentially converts the Travis jobs to Azure Pipelines with identical functionality (or I made a mistake). The jobs are added to the relevant existing pipelines (`main`, `nightly`, `advanced-test`, `release`). A global refactoring using the templating system greatly reduces the verbosity of the pipeline descriptions.
A feature not present in Travis is added: the `On_Failure` stage. Using the Mattermost API directly, it reports pipeline failures to a Mattermost channel with a link to the failed pipeline, without requiring authentication to Microsoft.
See https://github.com/certbot/certbot/pull/8098#issuecomment-649873641 for the post merge actions to do at the end of this work.
Fixes #7420.
* Set up CentOS 8 test farm tests
* Don't add to apache2_targets until 7273 is resolved
* Start upgrade test from a version that works on centos 8
* remove when possible from targets
* acme: add support for alternative cert. chains
* certbot: add --preferred-chain
* remove support for issuer SKI matching
* show --preferred-chain in "run" help
* warn if no chain matched and it's not a dry-run
* fix existing failing tests
* add unit, integration tests
* bump acme dependency to dev version
* simplify test to avoid py2.7 recursion bug
* add preferred_chain to STR_CONFIG_ITEMS
* reduce preferred_chain warning to info level
* acme: fix some docstrings in .messages
* certbot: fix docstring in crypto_util
* try to fix certbot-nginx acme dep problem
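A hedged illustration of the chain-selection logic these commits describe (attribute and function names are hypothetical, not the acme library's API): pick the first available chain whose topmost certificate's issuer matches `--preferred-chain`, otherwise keep the default chain and report that nothing matched.
```python
def select_preferred_chain(candidate_chains, preferred_issuer_cn):
    for chain in candidate_chains:
        # `issuer_common_name` is a hypothetical attribute for illustration.
        if chain[-1].issuer_common_name == preferred_issuer_cn:
            return chain
    # No chain matched: keep the default chain (and, outside a dry run,
    # report that the preferred chain was not found).
    return candidate_chains[0]
```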
Fixes #8093.
This PR modifies and audits all uses of `subprocess` and `Popen` outside of tests, `certbot-ci/`, `certbot-compatibility-test/`, `letsencrypt-auto-source/`, `tools/`, and `windows-installer/`. Calls to outside programs have their `env` modified to remove the `SNAP` components of paths, if they exist. This includes any calls made from hooks, calls to `apachectl` and `nginx`, and to `openssl` from `ocsp.py`.
For testing manually, rsync flags will look something like:
```
rsync -avzhe ssh root@focal.domain:/home/certbot/certbot/certbot_*_amd64.snap .
rsync -avzhe ssh certbot_*_amd64.snap root@centos7.domain:/root/certbot/
```
With these modifications, `certbot plugins --prepare` now passes on CentOS 7.
If I'm wrong and we package the `openssl` binary, the modifications should be removed from `ocsp.py`, and `env` should be passed into `run_script` rather than set internally in its calls from nginx and apache.
One caveat with this approach is the disconnect between why it's a problem (packaging) and where it's solved (internal to Certbot). I considered a wrapping approach, but we'd still have to audit specific calls. I think the best way to address this is robust testing; specifically, running the snap on other systems.
For hooks, all calls will remove the snap paths if they exist. This is probably fine, because even if the hook intends to call back into certbot, it can do that, it'll just create a new snap.
I'm not sure if we need these modifications for the macOS/Darwin calls, but they can't hurt.
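A rough sketch of what `env_no_snap_for_external_calls` does (simplified; the real `certbot.util` implementation may differ in detail):
```python
import os

def env_no_snap_for_external_calls():
    """Return a copy of os.environ with the snap's own paths stripped out."""
    env = os.environ.copy()
    snap_dir = env.get("SNAP")
    if env.get("SNAP_NAME") != "certbot" or not snap_dir:
        return env  # not running inside the Certbot snap
    for var in ("PATH", "LD_LIBRARY_PATH"):
        if var in env:
            env[var] = ":".join(entry for entry in env[var].split(":")
                                if not entry.startswith(snap_dir))
    return env

# External programs (hooks, apachectl, nginx, openssl) are then invoked with
# this environment, e.g.:
# subprocess.run(["nginx", "-s", "reload"], env=env_no_snap_for_external_calls())
```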
* Add method to plugins util to get env without snap paths
* Use modified environment in Nginx plugin
* Pass through env to certbot.util.run_script
* Use modified environment in Apache plugin
* move env_no_snap_for_external_calls to certbot.util
* Set env internally to run_script, since we use that only to call out
* Add env to mac subprocess calls in certbot.util
* Add env to openssl call in ocsp.py
* Add env for hooks calls in certbot.compat.misc.
* Pass env into execute_command to avoid circular dependency
* Update hook test to assert called with env
* Fix mypy type hint to account for new param
* Change signature to include Optional
* go back to using CERTBOT_PLUGIN_PATH
* no need to modify PYTHONPATH in env
* robustly detect when we're in a snap
* Improve env util fxn docstring
* Update changelog
* Add unit tests for env_no_snap_for_external_calls
* Import compat.os
Fixes #8103.
* Update the DNS plugin generator script to core20 syntax
* Generate new snapcraft.yamls for the DNS plugins
* Update certbot.wrapper to search for python3.8 paths
Fixes #7957
This PR makes snapcraft generate the dependency constraints file itself during the snap build, instead of having an external script do it before calling snapcraft.