Fixes https://github.com/certbot/certbot/issues/8165.
I moved `prerequisites` up to the "Running a local copy of the client" section so that `contributing.html#prerequisites` still links to information about installing Certbot's dependencies.
I left in place all certbot-auto documentation that wasn't explicitly encouraging its use. I think we can rip that out once the script is deprecated.
* Create bootstrap script
* Delete a whole bunch of the bootstrap script
* modify test_tests to use new script
* put python version checking back in
* add x
* call the venv creation from inside the bootstrap
* add targets back
* modify test_apache2 to use new format
* shouldn't need virtualenv on rhel
* readd targets
* Update test_sdists to use new script
* move setting up venv back out of script so it's not run with sudo
* take venv3.py call out of bootstrap in all scripts
* add additional python3-devel pkg name
* fix test_sdists
* enable additional rhel7 repos
* clean up code and comments
* Update tests and instructions to use auto_targets.yaml with test_leauto_upgrades.sh and test_letsencrypt_auto_certonly_standalone.sh
* only install python3-devel.x86_64 for rhel7
* Upgrade python version for debian in test_apache2.sh
* don't run test_tests or test_sdists on debian 9 or ubuntu 16.04
* Add 20.04 and 20.04 arm images to targets.yaml
* use pyenv to upgrade to python3.5
* remove arm64 instance because it's having auth trouble
* correct pyenv usage on ubuntu
* add arm64 target to targets.yaml
* replace debian 9 arm64 with ubuntu 20
* don't try to upgrade a perfectly good python version
* let's just add ubuntu20 to apache2_targets while we're here
* uncomment test_apache2
* move adding python3-devel.x86_64 to bootstrap_os_packages to avoid potential race condition
* no need to specify the arch once extra rhel7 repos enabled
* explicitly specify python3
* don't fail if we can't enable rhel7 extras
* capture python36-devel as well
Fixes https://github.com/certbot/certbot/issues/8022, https://github.com/certbot-docker/certbot-docker/issues/25, and https://github.com/certbot-docker/certbot-docker/issues/20.
This PR builds on https://github.com/certbot/certbot/pull/8192 to set up builds in Azure similar to what we currently have at release time, as well as nightly builds, allowing us to catch problems in these images before a release. It also fully automates our Docker deployments, removing a manual step from our release process. We'll need to update our release instructions once this PR lands.
If you're not familiar with our `certbot-docker` setup, you can read about how these scripts customize the build process on Docker Hub at https://docs.docker.com/docker-hub/builds/advanced/.
You can see the process working properly at:
* Nightly build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=345&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Release build on my fork: https://dev.azure.com/bmw0523/bmw/_build/results?buildId=346&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
* Nightly build on Certbot's Azure setup: https://dev.azure.com/certbot/certbot/_build/results?buildId=2426&view=logs&j=70ac378a-cb1f-50d1-b328-169807afbcfa
The builds on my fork pushed to https://hub.docker.com/u/certbottest. The credentials for this account are in our shared vault in 1Password if you want to play around with this.
While the scripts will (almost?) always be run in CI, I tested them successfully on macOS, Ubuntu 18.04, and Ubuntu 20.04; however, **the scripts do not seem to work when using the Docker snap, at least on Ubuntu 20.04.** They do work with the `docker.io` packages from `apt`. I was able to make things work by no longer setting `DOCKER_BUILDKIT`, but as I described in the code comments, this breaks things on Azure.
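For reference, `DOCKER_BUILDKIT` is the environment variable that opts `docker build` into the BuildKit backend. A minimal sketch of how a script can enable it (the image tag and build context below are hypothetical, not what our scripts actually use):

```python
import os
import subprocess

# Enable BuildKit for this build only by setting DOCKER_BUILDKIT=1 in the
# child process environment; the tag and context path are illustrative.
env = dict(os.environ, DOCKER_BUILDKIT="1")
subprocess.run(["docker", "build", "-t", "certbot/certbot:nightly", "."],
               env=env, check=True)
```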
When writing this PR, I tried to make minimal modifications to our current setup to get the behavior we want. I'm planning to work on splitting the Docker builds into different Azure jobs so this doesn't increase the overall build time, but that isn't trivial, so I figured it would be best done in a separate PR.
* Remove license.
* update build scripts
* write deploy code
* Remove unused READMEs.
* rewrite readme
* Make testing on a fork easier.
* Set up Azure automation.
* fix typo
* Make output more verbose.
* clean up cleanup...everybody everywhere
* separate build and deploy
* Document docker-hub credentials
* Use Docker BuildKit when building.
* Remove unneeded .gitignore files.
* Fix tools/docker/README.md grammar
Co-authored-by: ohemorange <ebportnoy@gmail.com>
* Clarify <TAG> in README.
* no docker snap
* rename docker job
Co-authored-by: Erica Portnoy <ebportnoy@gmail.com>
* Improve github release creation process
* Comment file
* Update tools/create_github_release.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* run chmod +x on tools/create_github_release.py
* Add description of create github release method
* remove references to unnecessary azure credential
* remove unnecessary import
* Add reminders to update other file to definitions in .azure-pipelines
* Raise an error if we fail to fetch the artifact from azure
* Create github release as a draft, upload artifacts, then un-draft, for hooks to be run at the right point
* get the version number from the release
* add new packages to dev3_extras so they're installed by tools/venv3.py
* remove unnecessary import
* fun fact: tempdirs behave differently when used as a context manager
* Move comment to construct.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
It seems my old instruction is no longer required for Gentoo. To be honest, I have no clue since when, but my own Gentoo server isn't even using the workaround currently mentioned in the documentation. So it seems the Apache plugin works just fine without this workaround 🤦
Also, the Gentoo repository has included the nginx plugin for a long time now; I guess my original text is ancient. It also includes *one* of the many DNS plugins, with a different maintainer than the other "main" packages. It currently only has version 0.39.0, so I have no clue whether it's being maintained officially.
* Remove obsolete Gentoo instructions and add packages.
* Capitalize note
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* nginx: add --nginx-sleep-seconds
As described in #7422, reloading nginx is an asynchronous process and
Certbot does not know when it is complete. In an environment where this
reload takes a long time, the nginx plugin suffers from an issue where
it responds to and fails the ACME challenge before the nginx server is
ready to serve it.
Following the discussion in a previous PR #7740, this commit introduces
a new flag, --nginx-sleep-seconds, which may be used to increase the
duration that Certbot will wait for nginx to reload, from its previously
hard-coded value of 1s.
Fixes #7422
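For illustration, a minimal sketch of the wait this flag controls (the function shape here is an assumption for this example, not the plugin's actual `nginx_restart` implementation):

```python
import subprocess
import time

def nginx_restart(nginx_ctl: str, sleep_duration: float = 1.0) -> None:
    # `nginx -s reload` returns immediately; the reload itself is asynchronous.
    subprocess.run([nginx_ctl, "-s", "reload"], check=True)
    # Wait for the new worker processes to start serving. This was previously
    # hard-coded to 1 second; --nginx-sleep-seconds makes it configurable so
    # slow-reloading servers are ready before the ACME challenge is attempted.
    time.sleep(sleep_duration)
```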
* update CHANGELOG
* nginx: update docstring for nginx_restart
Fixes #8169
This PR improves the snap remote build script by dumping the output of `snapcraft remote-build` when unexpected behavior is detected:
* when all builds for a project finish with a zero status code, and none of them are marked as failed, we expect to have all the associated snap files available locally.
* when some builds are marked as failed, we expect to have a build output for each of them available locally.
In these two situations, if the expectations are not met, the script will display the output of `snapcraft remote-build` itself. I also added error handling to deal gracefully with the absence of an expected build output on the local machine.
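A sketch of those checks, under assumed data structures (not the actual `build_remote.py` code):

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Build:
    arch: str
    failed: bool

def dump_output_if_unexpected(builds: List[Build], local_snaps: Set[str],
                              local_logs: Set[str], snapcraft_output: str) -> None:
    if not any(build.failed for build in builds):
        # Every build succeeded: each associated snap file should be local.
        if {build.arch for build in builds} - local_snaps:
            print(snapcraft_output)
    else:
        # Some builds failed: each failure should have a local build output.
        if {build.arch for build in builds if build.failed} - local_logs:
            print(snapcraft_output)

# Example: the armhf build failed but left no build output locally,
# so the raw `snapcraft remote-build` output is dumped.
dump_output_if_unexpected([Build("amd64", False), Build("armhf", True)],
                          {"amd64"}, set(), "<snapcraft remote-build output>")
```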
* Improve log dump in snaps remote builds when an unexpected behavior is detected
* Use the manager
* Update tools/snap/build_remote.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #7863.
The connect command is `sudo snap connect certbot-dns-dnsimple:certbot-metadata certbot:certbot-metadata`.
Logs can be read with `cat /var/snap/certbot-dns-dnsimple/current/debuglog`.
Echoes in the hook are only printed to the terminal when it exits 0; otherwise, check the `debuglog` mentioned above.
Manual tests covered all combinations of connected/unconnected, installed for the first or second time, etc., with passing and failing version checks.
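A sketch of the gate the post-refresh hook implements, with a hypothetical metadata path and plugin version (the real hook lives in the snap; see the commits below):

```python
import sys
from packaging import version

# Hypothetical location of the file shared over the certbot-metadata
# content interface; the real path inside the snap differs.
CERTBOT_VERSION_FILE = "certbot-metadata/certbot-version.txt"

def refresh_allowed(plugin_version: str) -> bool:
    try:
        with open(CERTBOT_VERSION_FILE) as metadata:
            certbot_version = metadata.read().strip()
    except FileNotFoundError:
        # Interface not connected or first install: allow the refresh.
        return True
    # Require certbot to be at least as new as the plugin being refreshed.
    return version.parse(certbot_version) >= version.parse(plugin_version)

if __name__ == "__main__":
    # A post-refresh hook that exits non-zero makes snapd roll the refresh back.
    sys.exit(0 if refresh_allowed("1.7.0") else 1)
```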
* Make dnsimple not update if certbot is too old
* create an interface to read cb version
* add missing newline
* fix syntax
* trying to figure out the consumer syntax
* trying to figure out the consumer syntax, again
* only check post first install
* valid setting name
* test for first install differently
* snapctl doesn't error if it fails I guess
* time to do some print debugging
* continue playing with syntax
* once again, fooled by bash int vs string comparisons!
* debugging
* if we use post and pre together we can do this
* is this how content interface syntax works
* it's a directory?
* more debug
* what's that error message again?
* try other syntax
* if it's not documented just guess at syntax
* actually, I think this is the syntax
* oops didn't set for new hook
* test passing information along connection
* interface attributes can only be set during the execution of prepare hooks
* just do it with main connection
* undo last few test changes
* Add some printing to make sure we understand what's going on
* create empty directory to bind to
* put mkdir in the correct part
* let's inspect the environment
* it can't run bash directly.
* perhaps only directories can be shared via the content interface
* update name of folder
* echo to debug log to understand what's going on exactly. we have file access though!
* update grep for new file
* more printing
* echo to the debug log
* ok NOW all print statements are going to the log
* why does echo need two >s
* remove unnecessary extra check, just check if the init file is available
* check if certbot version will be available post-refresh after all
* pre-refresh hook is not necessary to get certbot version
* update mkdir so we don't have to clean each time
* try comparing version numbers in python
* it's python3
* we need different prints for if we succeed or if we fail.
* improve bash syntax
* remove some debugging code
* Remove debug script
* remove spaces for clarity
* consolidate parts and remove more test code
* s/certbot-version/certbot-metadata/g
* use sys.exit instead of exit
* find and save certbot version on the certbot side
* change presence test to new file
* switch to using packaging.version.parse instead of LooseVersion
* switch to requiring certbot version >= plugin version
* add plugin snap changes to generate script
* Add comment to generation file saying not to edit generated files manually
* Create post-refresh hook for all plugins with script
* generate files using new script
* update snapcraft.yaml files for plugins
* bin/sh comes first
* Add packaging to install_requires
* Check that refresh is allowed in integration test
* switch plug and slot names in integration test
* Update tools/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* small bash fixes
* Update snap readme with new instructions
* Run tools/generate_dnsplugins_postrefreshhook.sh
* Update tools/snap/generate_dnsplugins_postrefreshhook.sh
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Snapcraft has a feature named `remote-build`. It allows compiling snaps for several architectures using Canonical's dedicated build infrastructure. Compared to the QEMU-enabled Docker approach used currently, the remote build has several advantages:
* the builds are done on the native architecture, making them considerably faster than what can be achieved with QEMU
* it avoids depending on `adferrand/snapcraft` (which could otherwise be fixed by the merge of https://github.com/snapcore/snapcraft/pull/3144, but this will not happen in the short term)
* when everything is good, all snap builds can run in parallel, orchestrated by one single Azure Pipeline job, since the heavy tasks are done remotely.
This PR makes the necessary adjustments to use the remote build feature instead of the QEMU-enabled Docker approach.
One complex task was being able to compile the `certbot` snap on `arm64` and `armhf`. Indeed, on these architectures the pre-compiled wheel for `cffi` is not available, so it needs to be compiled during the snap build. Sadly, the current version of the python plugin in snapcraft is limited by the fact that `wheel` is not installed in the virtual environment set up to build the python packages, and there is no easy way to change that except by overriding the whole build process.
In the long term, I think I will open a PR on the `snapcraft` Git repository to provide a proper solution. But for the short term, I used the ability to pass arguments to the `venv` module to add the flag `--system-site-packages`. With it, the virtual environment can use the system site packages, where `wheel` is available.
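For illustration, this is the effect of that flag via the standard `venv` module (equivalent to `python3 -m venv --system-site-packages <dir>`; the path below is hypothetical):

```python
import venv

# Create a virtual environment that can also import system site packages,
# so the system-installed `wheel` is visible inside the build environment.
builder = venv.EnvBuilder(system_site_packages=True, with_pip=True)
builder.create("/tmp/snap-build-venv")
```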
The other significant additions are in the `tools/snap/build_remote.py` script. While invoking a remote build on a local machine is quite straightforward, it is another story in CI, because we need build auditability and resiliency during these non-interactive actions. In particular, we should avoid inconsistent results in the nightly and release pipelines as much as possible.
So this script wraps the `snapcraft` call in retry logic and improves its logs in the context of parallel builds.
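A sketch of the retry wrapper's shape (the flags and retry policy here are illustrative, not the script's exact logic):

```python
import subprocess

def remote_build_with_retries(archs, max_attempts=3):
    cmd = ["snapcraft", "remote-build", "--launchpad-accept-public-upload",
           "--build-on", ",".join(archs)]
    for attempt in range(1, max_attempts + 1):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode == 0:
            return proc.stdout
        # Keep the raw output around so it can be dumped for auditability.
        print(f"remote-build attempt {attempt} failed (exit {proc.returncode})")
    raise RuntimeError(f"remote build failed after {max_attempts} attempts")

# Example usage: remote_build_with_retries(["amd64", "arm64", "armhf"])
```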
As for the minor modifications, they are mainly about ensuring that the plugins can be built (some of them also need `cffi`, for instance) and simplifying the Azure Pipeline, since all snaps are retrieved in one go.
Please note that the `test-` branches still build only the `amd64` architecture. Indeed, I noticed that `arm64` and `armhf` builds tend to be very slow to start (up to 40 minutes), while the `amd64` ones wait at most 10 minutes, and usually only 30 seconds when the overall load on Canonical's side is low.
To make this work on the `certbot/certbot` repository, one secure file needs to be added, because `snapcraft` needs to be authenticated against Launchpad with credentials allowing remote builds. To do so, from a local machine that has this capability, one can extract the existing file at `$HOME/.local/share/snapcraft/provider/launchpad/credentials` and register it as a secure file in Azure Pipelines with the name `snapcraftRemoteBuildCredentials`.
* Define scripts
* Setup pipeline to use remote builds
* Focus on packaging builds
* Set credentials
* Setup git
* Launch all builds in parallel
* Add dev dependencies to build cffi and cryptography
* Convert to a python logic
* Reorganize the pipeline
* Handle the fact that snap builds may be taken from cache
* Generate constraints
* Exit code
* Check existence
* Try to handle better non zero exit code
* Add --system-site-packages to get wheel in the venv
* Add executable permissions
* Troubleshoot
* Dynamic display, take the maximum timeout for snap build job
* Allow retries if the remote build does not start
* Trigger only amd64 builds for test branches
* Exit properly
* Update snapcraft.yaml
* Fix snap run
* Set secured file name
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Move order in deps
* Reactivate all builds
* Use Manager() as a context manager
* Use Pool as a context manager
* Some nice refactorings
* Check snapcraft execution interruption with exit codes
* Use f-string and format expressions
* Start log
* Consistent use of single/double quotes
* Better loop to extract lines
* Retry on build failures
* Few optimizations
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Fixes #8149
This PR adds warnings about the upcoming deprecation of Python 3.5 in Certbot.
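A sketch of the kind of warning added (the exact message and warning category are assumptions, not the actual code):

```python
import sys
import warnings

if sys.version_info[:2] == (3, 5):
    warnings.warn(
        "Python 3.5 support will be dropped in the next release of Certbot; "
        "please upgrade your Python version.",
        PendingDeprecationWarning,
    )
```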
* Add warnings about Python 3.5 deprecation in Certbot
* Update certbot/certbot/__init__.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
«When you add or change DNS zones or records, your changes will now be
reflected at our authoritative nameservers in under 60 seconds. This is
down from the previous “every quarter hour” approach that we had for so
long.» - https://www.linode.com/blog/linode/linode-turns-17/
Fixes#4351
This PR proposes a solution for using third-party plugins without the `pip_package_name:` prefix in the plugin name, plugin-specific flags, and keys in DNS plugin credential files.
A first solution was proposed in #6372, and a more advanced one in #7026. #7026 also added a deprecation warning when the old plugin name `pip_package_name:plugin_name` was used.
However, there were some limitations with #7026, in particular the fact that existing flags of the form `pip_package_name:dns_plugin_option`, or keys like `pip_package_name:key` in DNS plugin credential files, were no longer read. This would have led to silent failures during renewals if the configuration was not explicitly updated by the user.
I tried to fix that based on #7026, but the changes needed are complex and create new problems of their own, like unexpected erasure of values in the renewal configurations.
Instead, I try a new approach in this PR: when `find_all()` is called, the `PluginsRegistry` in the `certbot._internal.plugins.disco` module registers two plugins for a given entry point referring to a third-party plugin:
* one plugin with the name `plugin_name`
* one plugin with the name `pip_package_name:plugin_name` (like before)
This way, every existing configuration continues to work without any change (credentials, renewal configuration, CLI flags), and new configurations can refer to the new plugin name without the prefix and use the appropriate CLI flags and credentials without it.
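Conceptually, the registration looks something like the sketch below; `plugin_entry_point_cls` and its `with_prefix` flag are assumed names for this example, not the actual `disco` code:

```python
def register_both_names(entry_point, plugins, plugin_entry_point_cls):
    plugin_name = entry_point.name                      # e.g. "dns-foo"
    prefixed = f"{entry_point.dist.key}:{plugin_name}"  # e.g. "certbot-dns-foo:dns-foo"
    # New, unprefixed name that configurations should use from now on.
    plugins[plugin_name] = plugin_entry_point_cls(entry_point)
    # Old prefixed name, kept so existing renewal configurations, credential
    # files, and CLI flags keep working; hidden and marked as deprecated.
    plugins[prefixed] = plugin_entry_point_cls(entry_point, with_prefix=True)
```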
On top of that, I added the deprecation path given in #7026 (thanks @coldfix!):
* the plugin named `pip_package_name:plugin_name` is hidden from `certbot plugins` output
* the help for this plugin is still displayed, and a deprecation warning is displayed in the description
* when invoked, the same deprecation warning is displayed in the terminal
* Support both prefixed and unprefixed third party plugins
* Adapt tests
* Add deprecation path
* Named parameters
* Add deprecation warning in CLI
* Add a changelog
Fixes #8041
This PR makes Azure Pipelines build the DNS plugin snaps for the 3 architectures during CI.
It leverages the existing logic for building the Certbot snap to deploy a QEMU environment with Docker, and leverages the local PyPI index to speed up the build when installing `cffi` and `cryptography`.
All DNS plugin snaps are constructed in a single Docker container (sketched after this description), to save the time required to install the system dependencies on the first start of `snapcraft`, which significantly speeds up the build.
In the end, all `amd64` DNS plugin snaps are built within 6 minutes. For `arm64` and `armhf`, it is around 40 minutes: this is actually quite fast, considering that 14 DNS plugin snaps are built.
However, building for all 3 architectures is still an extremely heavy task, even for Azure Pipelines with its 10-parallel-jobs capacity. That is why I have the `arm64` and `armhf` builds skipped for the `full-test-suite` pipeline and let them run only for `nightly` and `release`. This means, however, that these builds will not be done for the release branches. If this is a problem, I can add a more elaborate suspend condition to trigger the builds in that case.
All snaps are stored in the pipeline artifacts storage, making them available for publication during a `release` pipeline.
The PR is marked as a draft for now, because I am temporarily using `pr_test-suite` to validate the packaging jobs when commits are pushed. Once the PR is ready, I will revert it to the normal configuration (running the standard tests).
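A heavily simplified sketch of the "one container for all plugins" idea (the image name, mount paths, and index URL are all assumptions, not the pipeline's actual values):

```python
import subprocess

PLUGINS = ["certbot-dns-dnsimple", "certbot-dns-google"]  # 14 in total

# Run snapcraft for every plugin inside a single container, so snapcraft's
# system dependencies are installed once instead of once per plugin.
script = " && ".join(f"cd /certbot/{plugin} && snapcraft" for plugin in PLUGINS)
subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/path/to/certbot:/certbot",
     "-e", "PIP_EXTRA_INDEX_URL=http://localhost:8080/simple",
     "snapcraft-image", "sh", "-c", script],
    check=True,
)
```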
* Configure a script to build DNS snaps
* Focus on packaging
* Trigger all architectures
* Add extra index
* Prepare conditional suspend
* Set final suspend logic
* Set final suspend value
* Loop for publication
* Use python3
* Clean before build
* Add a test
* Add test job in Azure
* Preserve env
* Apply normal config for pipelines
* Skip QEMU jobs only for test branches
* Make snap run tests also depend on the Certbot snap build
* Update .azure-pipelines/templates/jobs/packaging-jobs.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update .azure-pipelines/templates/stages/deploy-stage.yml
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* More accurate way to get the plugin snap name
* Integrate DNS snap tests into certbot-ci
* Fixes
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Update certbot-ci/snap_integration_tests/conftest.py
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
* Clean an __init__.py file
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
People who are considering running Certbot with Docker are probably doing so because their webserver runs in Docker. These changes to the README should help them understand that doing so will require knowledge of Docker volumes, and that the architectural justification for running Certbot in a separate container is the "one service per container" best practice.