* Fix TTL mismatch leading to HTTP 412
This PR is a follow-up to #8521, where we addressed the
issue of a potential TTL mismatch when executing
a DNS change (transaction = deletion + additions). Say
we have a record `foo.org 30 IN TXT foo-content` with a TTL of 30s;
when creating a challenge or cleaning up, we might need to perform
a deletion operation in the transaction. Currently certbot
would ask the Google API to delete the foo record like this:
`foo.org 60 IN TXT foo-content`, ignoring the record's original
TTL and using 60s instead. This leads to HTTP 412, as Google
expects a perfect match between what we want to delete and what is
currently in the DNS. See also #8523
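To make the fix concrete, here is an illustrative sketch (not the actual plugin code): the deletion half of the change must mirror the live record verbatim, including its TTL, rather than using a hard-coded default.

```python
# Hypothetical sketch: build the deletion part of a DNS change from the
# record currently live in the zone, instead of from a default TTL.
DEFAULT_TTL = 60  # what certbot previously used unconditionally

def build_deletion(existing_rrset):
    """Mirror the live rrset exactly; any difference (e.g. TTL) => HTTP 412."""
    if existing_rrset is None:
        return None  # nothing to delete, skip the deletion step
    return {
        "name": existing_rrset["name"],
        "type": existing_rrset["type"],
        "ttl": existing_rrset["ttl"],        # e.g. 30, not DEFAULT_TTL
        "rrdatas": list(existing_rrset["rrdatas"]),
    }

live = {"name": "foo.org.", "type": "TXT", "ttl": 30,
        "rrdatas": ['"foo-content"']}
deletion = build_deletion(live)
```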
* Remove TTL from default data to avoid confusion
* Refactor tests and add a missing case
This commit adds a test covering the case where we are
deleting a TXT record that contains a single rrdatas entry.
It also refactors a couple of tests.
* Make get_existing_txt_rrset documentation more precise about return value
* Add missing assertions in tests.
* fix linting issues
* Mention fix in changelog
* Explain fix around user impact
* Explain what happens when no records are returned
* Update certbot/CHANGELOG.md
* Update certbot/CHANGELOG.md
* Added note to each DNS documentation index page to mention that plugins need to be installed and are not included as standard.
* Resolved issue with whitespace in doc files
* Changed wording as discussed in PR.
* Changing URL to new wildcard instructions link
* Update certbot-dns-cloudflare/certbot_dns_cloudflare/__init__.py
* update_account: print correct message for -m ""
When -m "" was passed on the CLI, Certbot would print that it updated
the email to '' (an empty string) rather than printing that it removed
the contact details.
This commit also refactors the update_account tests to be a bit more
modern.
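A minimal sketch of the message selection (wording and function name are illustrative, not Certbot's exact strings):

```python
def account_update_message(new_email):
    """Pick the user-facing message after an account contact update."""
    if not new_email:  # -m "" means: remove the contact details
        return ("Any contact information associated with this account "
                "has been removed.")
    return "Your e-mail address was updated to {0}.".format(new_email)

removed_msg = account_update_message("")
updated_msg = account_update_message("user@example.org")
```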
* use addCleanup instead of tearDown in tests
* Fix fetch of existing records from Google DNS
There have been many complaints about the `certbot_dns_google` plugin
failing with:
* HTTP 412 - Precondition not met
* HTTP 409 - Conflict
See #6036. This PR fixes that situation. The bug lies in how we
fetch the TXT records from Google. For a large number of records
the Google API paginates the result, but we ignore the subsequent
pages and assume that if the record is not in the first response then
it doesn't exist. This leads to HTTP 409, HTTP 412, or both.
In this PR we leverage filters on the API to get exactly
the records we are looking for. Apart from fixing the problem stated
above, this has the extra benefit of making the process faster by
reducing the number of API calls, and it doesn't require us to handle
any pagination logic.
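As a sketch of the filtering approach (the call shape below is an assumption about the Cloud DNS v1 `resourceRecordSets.list` API, demonstrated against a fake client rather than the real service):

```python
# Sketch: filtering server-side by record name and type returns just the
# rrset we want, so pagination is no longer a concern.
from unittest import mock

def get_existing_txt_rrset(client, project, zone, record_name):
    """Return the live TXT rrset for record_name, or None if absent."""
    response = client.resourceRecordSets().list(
        project=project,
        managedZone=zone,
        name=record_name,  # exact-match filter
        type="TXT",
    ).execute()
    rrsets = response.get("rrsets", [])
    return rrsets[0] if rrsets else None

# Minimal fake client demonstrating the call shape:
fake = mock.MagicMock()
fake.resourceRecordSets.return_value.list.return_value.execute.return_value = {
    "rrsets": [{"name": "_acme-challenge.foo.org.", "type": "TXT",
                "ttl": 30, "rrdatas": ['"token"']}],
}
found = get_existing_txt_rrset(fake, "my-project", "my-zone",
                               "_acme-challenge.foo.org.")
```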
* Explain changes on CHANGELOG
* Edit AUTHORS.md
* make execute static
* Update certbot/CHANGELOG.md
Be more specific about which plugin this fix is meant for.
Co-authored-by: alexzorin <alex@zor.io>
* Fix if expression to be more python-idiomatic
Co-authored-by: alexzorin <alex@zor.io>
* Sort AUTHORS.md
* Simplify tests
Make rrs_mock modeling simpler and refactor
* Revert "Simplify tests"
This reverts commit 9de9623ba7.
* Reimplement conditional mock
We still want to use a conditional mock, but make it simpler
to understand by using MagicMock.
* Revert "Sort AUTHORS.md"
This reverts commit b3aa35bcf1.
* Add name in AUTHORS.md
Co-authored-by: alexzorin <alex@zor.io>
In 96a05d9, mypy testing was added to certbot-ci, but introduced an
undeclared dependency on acme.magic_typing, resulting in a crash when
run under the integration-external tox environment.
This change uses the typing module in certbot-ci in place of
acme.magic_typing. It is already provided via dev_constraints.
Fixes https://github.com/certbot/certbot/issues/8519.
I left the `certbot-auto` docs in `install.rst` to avoid breaking links and to help propagate information about our changes there. I moved them closer to the bottom of the doc, though, since I think our documentation about OS packages and Docker is more helpful to most people.
* clean up certbot-auto docs
* add more info to changelog
* remove more certbot-auto references
Fixes #8256
First, let's sum up the problem to solve. We disabled the build isolation available in pip>=19 because it could potentially break the certbot build without any control on our side: basically, builds are not reproducible. Indeed, build isolation triggers builds of PEP 517-enabled transitive dependencies (like `cryptography`) with the build dependencies declared in their `pyproject.toml`. For `cryptography` in particular these requirements include `setuptools>=40.6.0`, and quite logically pip will install the latest version of `setuptools` for the build. So when `setuptools` broke with version 50, our build did the same.
But disabling build isolation is not a long-term solution, as more and more projects will migrate to this approach, and it provides a lot of benefits in how dependencies are built.
The ideal solution would be the ability to apply version constraints on our side to the build dependencies, in order to pin `setuptools` for instance, and to decide precisely when we upgrade to a newer version. However, for now pip does not provide a mechanism for that (like a `--build-constraint` flag, or propagation of the existing `--constraint` flag).
That was my conclusion until I saw https://github.com/pypa/pip/issues/9081 and https://github.com/pypa/pip/issues/8439.
Apart from showing that the pip maintainers are working on this issue, https://github.com/pypa/pip/issues/9081 explains how pip works regarding PEP 517, and from it a workaround can be inferred to pin the build dependencies anyway. It turns out that pip invokes itself in each build isolation environment to install the build dependencies. This means that even if some flags (like `--constraint`) are not explicitly passed to the pip sub-call, the global environment remains, in particular the environment variables.
Every pip flag can alternatively be set through an environment variable named after the pattern `PIP_[FLAG_NAME_UPPERCASE]`; for `--constraint`, that is `PIP_CONSTRAINT`. So you can pass a constraint file to the pip sub-call through that mechanism.
I ran some tests with a constraint file pinning `setuptools`: indeed, inside the isolation zone the constraint file was honored, and the pinned version was used to build the dependencies (I tested it with `cryptography`).
Finally, this PR takes advantage of this mechanism by setting `PIP_CONSTRAINT` in `pip_install`, the snap building process, the Dockerfiles, and the Windows installer building process.
I also extracted the requirements of the new `pipstrap.py` so they can be reused in these various build processes.
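A minimal sketch of the mechanism (the constraint file path and helper name are illustrative): exporting `PIP_CONSTRAINT` before invoking pip makes the constraint file visible to the pip sub-call inside the build isolation, even though flags themselves are not forwarded.

```python
# Sketch: pip forwards the environment (but not its command-line flags) to
# the pip sub-call it spawns inside the build isolation, so PIP_CONSTRAINT
# is honored there.
import os
import sys

def build_pip_command(packages, constraints_file):
    """Return the command and environment for a constrained pip install."""
    env = os.environ.copy()
    env["PIP_CONSTRAINT"] = constraints_file  # equivalent to --constraint
    cmd = [sys.executable, "-m", "pip", "install"] + list(packages)
    return cmd, env

cmd, env = build_pip_command(["certbot"],
                             "/opt/certbot/pipstrap_constraints.txt")
# subprocess.check_call(cmd, env=env) would then run the constrained install.
```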
* Use workaround to fix build requirements in build isolation, and re-enable build isolation
* Clean imports in pipstrap
* Externalize pipstrap reqs to be reusable
* Inject pipstrap constraints during pip_install
* Update docker build
* Update snapcraft build
* Prepare installer build
* Fix pipstrap constraints in snap build
* Add back --no-build-cache option in Docker images build
* Update snap/snapcraft.yaml
* Use proper flags with pip
Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
This PR adds a `--timeout` flag to `tools/snap/build_remote.py` in order to fail the process if the execution time reaches the provided timeout. It is set to 5h30 on the relevant Azure job, while the job itself has a 6h timeout managed on the Azure side. This gives slightly better output for these jobs when the snapcraft build stalls for any reason.
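The timeout mechanism can be sketched as a wall-clock deadline around a polling loop (function and variable names here are illustrative, not the actual script's API):

```python
# Sketch: poll the remote build, but give up once the deadline passes.
import time

def wait_for_build(poll, timeout, interval=0.01, clock=time.monotonic):
    """Poll until poll() is truthy; raise if `timeout` seconds elapse first."""
    deadline = clock() + timeout
    while clock() < deadline:
        if poll():
            return True
        time.sleep(interval)
    raise RuntimeError("Build timed out after {0} seconds".format(timeout))

attempts = {"n": 0}
def fake_poll():
    attempts["n"] += 1
    return attempts["n"] >= 3  # the "build" completes on the third poll

finished = wait_for_build(fake_poll, timeout=5)
```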
This PR proposes an alternative configuration for the snap build that avoids the need for `--system-site-packages` when constructing the virtual environment in the snap.
The rationale for `--system-site-packages` was that, by default, snapcraft creates a virtual environment without `wheel` installed in it. However, we need it to build wheels like `cryptography` on ARM architectures. Sadly, there is no way to instruct snapcraft to install some build dependencies in the virtual environment before the build phase itself kicks in, short of overriding that entire phase (which is possible with `parts.override-build`).
The alternative proposed here is to not override the entire build part, but just to add some preparatory steps that run before the main actions handled by the `python` snap plugin. To do so, I take advantage of the `--upgrade` flag available on Python 3's `venv` module. This allows reusing a preexisting virtual environment and upgrading its components. Adding a flag to the `venv` call is possible in snapcraft thanks to the `SNAPCRAFT_PYTHON_VENV_ARGS` environment variable (which is already used to set `--system-site-packages`).
With `SNAPCRAFT_PYTHON_VENV_ARGS` set to `--upgrade`, we configure the build phase as follows:
* create the virtual environment ourselves in the expected place (`SNAPCRAFT_PART_INSTALL`)
* leverage `tools/pipstrap.py` to install `setuptools`, `pip`, and of course, `wheel`
* let the standard build operations kick in with a call to `snapcraftctl build`: at that point the `--upgrade` flag will be appended to the standard virtual environment creation, reusing our crafted venv instead of creating a new one.
This approach also has the advantage of invoking `pipstrap.py` the same way it is done for the other deployable artifacts and for the PR validations, reducing the risk of drift between the various deployment methods.
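The venv-reuse trick underlying the steps above can be sketched in plain Python (this mirrors the mechanism only; `--with-pip` is disabled here to keep the sketch self-contained, whereas the real build installs pip/wheel via pipstrap):

```python
# Sketch: create a venv ourselves, then let the normal tooling "create" it
# again with upgrade=True, which reuses the existing environment instead of
# replacing it. This is what --upgrade in SNAPCRAFT_PYTHON_VENV_ARGS does.
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "venv")

# Step 1: create the virtual environment in the expected place.
venv.EnvBuilder(with_pip=False).create(target)

# Step 2: a pipstrap-like step would install setuptools, pip and wheel here.

# Step 3: the standard build re-runs venv creation with upgrade=True,
# reusing our crafted venv rather than starting from scratch.
venv.EnvBuilder(with_pip=False, upgrade=True).create(target)

upgraded = os.path.exists(os.path.join(target, "pyvenv.cfg"))
```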
Although Certbot is a classic snap, it shouldn't load Python code from
the host system. This change prevents packages from being loaded from the
"user site-packages directory" (PEP 370), i.e. Certbot will no longer
load DNS plugins installed via `pip install --user certbot-dns-*`.
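For reference, the standard CPython mechanism involved (not Certbot-specific code): setting `PYTHONNOUSERSITE` before the interpreter starts keeps the PEP 370 user site-packages directory off `sys.path`, which a child interpreter reports via `sys.flags.no_user_site`.

```python
# Sketch: PYTHONNOUSERSITE disables the user site-packages directory
# (PEP 370) for a child interpreter.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONNOUSERSITE="1")  # honored at interpreter start
flag = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.flags.no_user_site)"],
    env=env,
).decode().strip()
```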
This adds an 'Error parsing credentials file ...' wrapper to any errors
raised inside certbot-dns-google's usage of oauth2client, to make it
obvious to the user where the problem lies.
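The wrapping pattern looks roughly like this (a hypothetical sketch; the loader, exception type, and path are illustrative, not the plugin's real API):

```python
# Sketch: catch any error from the underlying credentials loader and
# re-raise it with the offending file path in the message.
def load_credentials(path, loader):
    try:
        return loader(path)
    except Exception as exc:
        raise ValueError(
            "Error parsing credentials file '{0}': {1}".format(path, exc))

def broken_loader(path):
    raise KeyError("client_email")  # e.g. a malformed JSON key file

try:
    load_credentials("/etc/google-credentials.json", broken_loader)
except ValueError as err:
    message = str(err)
```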
* cli: clean up `certbot renew` summary
- Deduplicate output that was being sent to both stdout and stderr
- Don't use IDisplay.notification to buffer output
- Remove the big "DRY RUN" guards above and below; instead, change the
  language to "renewal" or "simulated renewal"
- Reword "Attempting to renew cert ... produced an unexpected error"
  to be more concise.
* add newline to docstring
Co-authored-by: ohemorange <ebportnoy@gmail.com>
Co-authored-by: ohemorange <ebportnoy@gmail.com>
Fixes https://github.com/certbot/certbot/issues/8495.
To further explain the problem here, `modify_kwargs_for_default_detection` as called in `add` is simplistic and doesn't always work. See https://github.com/certbot/certbot/issues/6164 for one other example.
In this case, we were bitten by the code at d1e7404358/certbot/certbot/_internal/cli/helpful.py (L393-L395)
The action used for deprecated arguments isn't in `ZERO_ARG_ACTIONS` so it assumes that all deprecated flags take one parameter.
Rather than trying to fix this function (which I think can only realistically be fixed by https://github.com/certbot/certbot/issues/4493), I took the approach that was previously used in `HelpfulArgumentParser.add_deprecated_argument` of bypassing this extra logic entirely. I adapted that function to now call `HelpfulArgumentParser.add` as well for consistency and to make testing easier.
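The general technique can be sketched with plain argparse (the action class name is illustrative; the flag name comes from the tests below):

```python
# Sketch: a custom argparse action that accepts a deprecated flag with
# zero arguments and silently ignores it, bypassing any extra kwargs logic.
import argparse

class DeprecatedZeroArgAction(argparse.Action):
    """Accept a deprecated flag taking no value, and discard it."""
    def __init__(self, *args, **kwargs):
        kwargs["nargs"] = 0                  # the flag takes no parameter
        kwargs["help"] = argparse.SUPPRESS   # hide it from --help
        super(DeprecatedZeroArgAction, self).__init__(*args, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        pass  # silently ignore the deprecated flag

parser = argparse.ArgumentParser()
parser.add_argument("--manual-public-ip-logging-ok",
                    action=DeprecatedZeroArgAction)
parser.add_argument("--email")
ns = parser.parse_args(["--manual-public-ip-logging-ok",
                        "--email", "a@b.c"])
```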
* Rename deprecated arg action class
* Skip extra parsing for deprecated arguments
* Add back test of --manual-public-ip-logging-ok
* Add changelog entry
(cherry picked from commit 5f73274390)
* Don't deprecate certbot-auto quite yet
* Remove centos6 test farm tests
* undo changes to test farm test scripts
(cherry picked from commit e5113d5815)
* nginx: fix py2 unicode sandwich
The nginx parser would crash when saving configurations containing
Unicode, because py2's `str` type does not support Unicode.
This change fixes that crash by ensuring that a string type supporting
Unicode is used in both Python 2 and Python 3.
* nginx: add unicode to the integration test config
* update CHANGELOG