* chore(pre-commit): match black version with requirements-dev
* run `make black` against repo
* ci: switch to black 24.4.2
* fix: py312
* fix: flake8 errors
* fix: flake8 conflicts
* chore: add git blame ignore revs file
* storage: make cloudfront_distribution_org_overrides optional (PROJQUAY-5788)
This was causing issues with the config editor, which could not
configure the CloudFront provider because of the required
override param.
When completing a chunked upload, if the chunk list is empty do not attempt to assemble anything.
Using oras to copy an artifact from an outside registry to Quay results in a 5XX error: at some point the upload chunk list is empty, and attempting to complete the chunked upload raises an exception. Skipping the storage write when there are no chunks allows the copy operation to complete successfully.
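A minimal sketch of the guard described above (function and parameter names are illustrative, not Quay's actual code):

    def complete_chunked_upload(storage, upload_id, chunk_list):
        # An empty chunk list means nothing was uploaded; attempting to
        # assemble zero chunks is what raised the 5XX.
        if not chunk_list:
            return
        storage.complete_chunked_upload(upload_id, chunk_list)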
* storage: Add MultiCDN storage provider (PROJQUAY-5048)
This storage provider can route to different underlying sub-providers
based on criteria. Currently supported filters are source_ip and
namespace.
Example Config:
- MultiCDNStorage
- providers:
    TargetName1:
      - ProviderName1
      - ProviderConfig1
    TargetName2:
      - ProviderName2
      - ProviderConfig2
  default_provider: TargetName1
  rules:
    - namespace: test
      continent: APAC
      target: TargetName2
This optimization returns the direct S3 URL for CloudFront storage only
for requests from the same region, so we don't get charged for
cross-region traffic to S3.
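A hypothetical sketch of how the rule matching in the example config could work (function and field names are assumptions, not Quay's implementation):

    def resolve_target(config, namespace, continent):
        # A rule matches when every filter it specifies matches the
        # request; unspecified filters act as wildcards.
        for rule in config["rules"]:
            if rule.get("namespace", namespace) != namespace:
                continue
            if rule.get("continent", continent) != continent:
                continue
            return rule["target"]
        return config["default_provider"]

With the example config, a request for the "test" namespace from APAC routes to TargetName2; everything else falls back to TargetName1.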
* Update peewee types
Also remove tools/sharedimagestorage.py as it doesn't work anymore.
tools/sharedimagestorage.py:3: error: "ModelSelect[ImageStorage]" has no attribute "annotate"
* Remove endpoints/api/test/test_security.py from exclude list
* Format storage/test/test_azure.py
Boto3 behaves unexpectedly when the resource client is not set to use
the correct region. Boto3 can't seem to correctly set the
X-Amz-Credential parameter when generating presigned URLs if the region
name is not explicitly set, and always falls back to us-east-1.
To reproduce this:
- Create a bucket in a region other than us-east-1 (e.g. eu-north-1)
- Create a boto3 client/resource without specifying the region
- Generate a presigned url
This seems to be a DNS issue with AWS that only happens shortly after
a bucket has been created, and resolves itself eventually.
Ref:
- https://github.com/boto/boto3/issues/2989
- https://stackoverflow.com/questions/56517156/s3-presigned-url-works-90-minutes-after-bucket-creation
To work around this, one can specify the bucket endpoint, either
explicitly via endpoint_url, or by setting s3_region, which will be
used to generate the bucket's virtual-hosted address.
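A short boto3 sketch of this workaround (bucket, key, and region names are examples):

    import boto3

    # Pinning the region (or passing endpoint_url) scopes the presigned
    # URL's X-Amz-Credential to the bucket's region instead of the
    # us-east-1 fallback.
    s3 = boto3.client("s3", region_name="eu-north-1")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "path/to/blob"},
        ExpiresIn=3600,
    )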
Currently, blobs left over in the uploads directory from cancelled uploads do not get cleaned up, since they are no longer tracked. This change cleans up the uploads storage directory directly.
Migrate from using boto2 to boto3. Changes include:
- Removes explicit bucket addressing style: Boto3 will initially try virtual-style addressing first then fallback to path-style addressing (https://github.com/boto/boto3/blob/develop/docs/source/guide/configuration.rst)
- GCS workarounds to use boto3:
- Handles CORS config
- Update signed url access key parameter name
- Uses ListBucket V1 API
- On client-side chunk joins, copy using the non-multipart API: use copy_from instead of copy when joining chunks client-side, since copy assumes a multipart upload should be used, which GCS and Rados are not compatible with (they have their own parallel-upload APIs rather than S3's); see the sketch after this list.
- Update RDS healthcheck to use boto3
- Add Werkzeug's LimitedStream and any binary stream (IOBase) to Swift's type assertion
- Allow LimitingStream from util.registry.filelike to seek backward, since it is required by the Swift client in order to retry operations, if it is configured to do so
- Update Swift's implementation to use io instead of _pyio (io is implemented in C rather than pure Python)
- Updates the Azure client to use the recent v12, which provides better
support for large file uploads and access to newer api versions from Azure.
- Increase chunk size when iterating over chunks' streams (for some
reason, read() calls are slower in Python 3 than in Python 2, which
caused timeouts on larger layers; increasing the amount read per
iteration from 4096 bytes is a workaround to get performance similar
to Python 2).
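A hedged illustration of the copy_from vs. copy distinction noted in the list above (bucket and key names are examples):

    import boto3

    s3 = boto3.resource("s3")
    target = s3.Object("my-bucket", "joined/blob")

    # copy_from issues a single CopyObject request, which S3-compatible
    # backends like GCS and Rados accept; Object.copy may switch to S3's
    # multipart-copy API for large objects, which they do not support.
    target.copy_from(CopySource={"Bucket": "my-bucket", "Key": "chunks/0"})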
* Convert all Python2 to Python3 syntax.
* Removes oauth2lib dependency
* Replace mockredis with fakeredis
* byte/str conversions
* Removes __nonzero__, which no longer exists in Python 3 (replaced by __bool__)
* Python3 Dockerfile and related
* [PROJQUAY-98] Replace resumablehashlib with rehash
* PROJQUAY-123 - replace gpgme with python3-gpg
* [PROJQUAY-135] Fix unhashable class error
* Update external dependencies for Python 3
- Move github.com/app-registry/appr to github.com/quay/appr
- github.com/coderanger/supervisor-stdout
- github.com/DevTable/container-cloud-config
- Update to latest mockldap with changes applied from coreos/mockldap
- Update dependencies in requirements.txt and requirements-dev.txt
* Default FLOAT_REPR function to str in json encoder and removes keyword assignment
True and False were not keywords in Python 2, so code could assign to them (and shadow builtins like str); Python 3 makes this a syntax error.
* [PROJQUAY-165] Replace package `bencode` with `bencode.py`
- Bencode is not compatible with Python 3.x and is no longer
maintained. Bencode.py appears to be a drop-in replacement/fork
that is compatible with Python 3.
* Make sure monkey.patch is called before anything else
* Removes anunidecode dependency and replaces it with text_unidecode
* Base64 encode/decode pickle dumps/loads when storing value in DB
Base64 encodes/decodes the serialized values when storing them in the
DB. Also make sure to return a Python3 string instead of a Bytes when
coercing for db, otherwise, Postgres' TEXT field will convert it into
a hex representation when storing the value.
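A sketch of the coercion described above (helper names are hypothetical):

    import base64
    import pickle

    def to_db_value(obj):
        # Return str, not bytes, so Postgres' TEXT column stores readable
        # text rather than a hex representation.
        return base64.b64encode(pickle.dumps(obj)).decode("ascii")

    def from_db_value(raw):
        return pickle.loads(base64.b64decode(raw))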
* Implement __hash__ on Digest class
In Python 3, if a class defines __eq__() but not __hash__(), its
instances are not usable as items in hashable collections (e.g. sets).
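A minimal illustration (not Quay's actual Digest class):

    class Digest:
        def __init__(self, algorithm, hex_digest):
            self.algorithm = algorithm
            self.hex_digest = hex_digest

        def __eq__(self, other):
            if not isinstance(other, Digest):
                return NotImplemented
            return (self.algorithm, self.hex_digest) == (
                other.algorithm,
                other.hex_digest,
            )

        # Defining __eq__ sets __hash__ to None in Python 3, so it must
        # be restored explicitly and kept consistent with __eq__.
        def __hash__(self):
            return hash((self.algorithm, self.hex_digest))

    assert Digest("sha256", "abc") in {Digest("sha256", "abc")}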
* Remove basestring check
* Fix expected message in credentials tests
* Fix usage of Cryptography.Fernet for Python3 (#219)
- Specifically, this addresses the issue where Byte<->String
conversions weren't being applied correctly.
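For reference, a Python 3 usage sketch of cryptography's Fernet with the explicit conversions applied (the plaintext is an example):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # bytes
    f = Fernet(key)

    token = f.encrypt("secret-value".encode("utf-8"))  # bytes in, bytes out
    plaintext = f.decrypt(token).decode("utf-8")       # back to str
    assert plaintext == "secret-value"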
* Fix utils
- tar+stream layer format utils
- filelike util
* Fix storage tests
* Fix endpoint tests
* Fix workers tests
* Fix docker's empty layer bytes
* Fix registry tests
* Appr
* Enable CI for Python 3.6
* Skip buildman tests
Skip buildman tests while it's being rewritten to allow ci to pass.
* Install swig for CI
* Update expected exception type in redis validation test
* Fix gpg signing calls
Fix gpg calls for updated gpg wrapper, and add signing tests.
* Convert / to // for Python3 integer division
* WIP: Update buildman to use asyncio instead of trollius.
This dependency is considered deprecated/abandoned and was only
used as an implementation/backport of asyncio on Python 2.x
This is a work in progress, and is included in the PR just to get the
rest of the tests passing. The builder is actually being rewritten.
* Target Python 3.8
* Removes unused files
- Removes unused files that were added accidentally while rebasing
- Small fixes/cleanup
- TODO tasks comments
* Add TODO to verify rehash backward compat with resumablehashlib
* Revert "[PROJQUAY-135] Fix unhashable class error" and implements __hash__ instead.
This reverts commit 735e38e3c1d072bf50ea864bc7e119a55d3a8976.
Instead, defines __hash__ for the encrypted fields class, using the
parent field's implementation.
* Remove some unused files and imports
Co-authored-by: Kenny Lee Sin Cheong <kenny.lee@redhat.com>
Co-authored-by: Tom McKay <thomasmckay@redhat.com>
This change replaces the metricqueue library with a native Prometheus
client implementation, with the intention of aggregating results with
the Prometheus PushGateway.
This change also adds instrumentation for greenlet context switches.
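A minimal sketch with the official prometheus_client library (gateway address, job, and metric names are illustrative, not Quay's):

    from prometheus_client import CollectorRegistry, Counter, push_to_gateway

    registry = CollectorRegistry()
    requests_total = Counter(
        "quay_requests_total", "Processed requests", registry=registry
    )
    requests_total.inc()

    # Each worker pushes its registry to the PushGateway, which holds
    # the results for Prometheus to scrape.
    push_to_gateway("pushgateway:9091", job="quay-worker", registry=registry)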