Instead of an expensive subquery to find candidates, we use the slab
allocator.
This change also adds a feature flag for disabling the worker if
necessary, and changes the worker's timeout to be much longer.
Fixes https://issues.redhat.com/browse/PROJQUAY-779
pymemcache is apparently not thread safe, so our reuse of the client
was causing the occasional hang on read. This change has us open a new
connection per request, and then close it once complete. We also make
sure to close the DB connection before making the memcache connection,
in case the memcache connection takes a bit of time.
We should investigate switching to pymemcache's PooledClient for
reusing connections (safely) once we verify this change works.
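A minimal sketch of the per-request pattern described above, using pymemcache's base `Client`; the server address, cache key, and the `close_db_connection()` helper are illustrative, not Quay's actual code:

```python
from pymemcache.client.base import Client

MEMCACHE_SERVER = ("127.0.0.1", 11211)  # assumed address


def close_db_connection():
    """Stand-in for whatever closes the thread-local DB handle."""


def cached_lookup(key):
    # Close the DB connection first, so it isn't held open while the
    # memcache round trip is in flight.
    close_db_connection()

    # Client instances are not thread safe, so build one per request...
    client = Client(MEMCACHE_SERVER, connect_timeout=1, timeout=1)
    try:
        return client.get(key)
    finally:
        # ...and always tear it down once the request completes.
        client.close()
```

If this change pans out, `PooledClient` (in the same module) is pymemcache's thread-safe pooled variant mentioned above.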
* Convert all Python 2 syntax to Python 3
* Removes oauth2lib dependency
* Replace mockredis with fakeredis
* byte/str conversions
* Removes uses of __nonzero__, which no longer exists in Python 3 (replaced by __bool__)
* Python3 Dockerfile and related
* [PROJQUAY-98] Replace resumablehashlib with rehash
* PROJQUAY-123 - replace gpgme with python3-gpg
* [PROJQUAY-135] Fix unhashable class error
* Update external dependencies for Python 3
- Move github.com/app-registry/appr to github.com/quay/appr
- github.com/coderanger/supervisor-stdout
- github.com/DevTable/container-cloud-config
- Update to latest mockldap with changes applied from coreos/mockldap
- Update dependencies in requirements.txt and requirements-dev.txt
* Default the FLOAT_REPR function to str in the JSON encoder and remove the keyword assignment
True and False were not keywords in Python 2 (and str is only a builtin name), so assigning to those names was previously legal.
* [PROJQUAY-165] Replace package `bencode` with `bencode.py`
- Bencode is not compatible with Python 3.x and is no longer
maintained. Bencode.py appears to be a drop-in replacement/fork
that is compatible with Python 3.
* Make sure monkey patching is called before anything else
* Removes anunidecode dependency and replaces it with text_unidecode
* Base64 encode/decode pickle dumps/loads when storing values in the DB
Base64 encodes/decodes the serialized values when storing them in the
DB. Also make sure to return a Python 3 string instead of bytes when
coercing for the DB; otherwise, Postgres' TEXT field will convert the
value into a hex representation when storing it (see the sketch after
this list).
* Implement __hash__ on the Digest class
In Python 3, if a class defines __eq__() but not __hash__(), its
__hash__ is set to None, so its instances are not usable in hashable
collections (e.g., sets); see the sketch after this list.
* Remove basestring check
* Fix expected message in credentials tests
* Fix usage of Cryptography.Fernet for Python3 (#219)
- Specifically, this addresses the issue where Byte<->String
conversions weren't being applied correctly.
* Fix utils
- tar+stream layer format utils
- filelike util
* Fix storage tests
* Fix endpoint tests
* Fix workers tests
* Fix Docker's empty layer bytes
* Fix registry tests
* Appr
* Enable CI for Python 3.6
* Skip buildman tests
Skip buildman tests while the builder is being rewritten, to allow CI to pass.
* Install swig for CI
* Update expected exception type in redis validation test
* Fix gpg signing calls
Fix gpg calls for updated gpg wrapper, and add signing tests.
* Convert / to // where integer division is intended, since / is always true (float) division in Python 3 (example after this list)
* WIP: Update buildman to use asyncio instead of trollius.
Trollius is deprecated/abandoned and was only an
implementation/backport of asyncio on Python 2.x.
This is a work in progress, and is included in the PR just to get the
rest of the tests passing. The builder is actually being rewritten.
* Target Python 3.8
* Removes unused files
- Removes unused files that were added accidentally while rebasing
- Small fixes/cleanup
- TODO tasks comments
* Add TODO to verify rehash backward compat with resumablehashlib
* Revert "[PROJQUAY-135] Fix unhashable class error" and implements __hash__ instead.
This reverts commit 735e38e3c1d072bf50ea864bc7e119a55d3a8976.
Instead, defines __hash__ for encryped fields class, using the parent
field's implementation.
* Remove some unused files and imports
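For the "Base64 encode/decode pickle dumps/loads" item above, a minimal sketch of the round trip; the function names are illustrative:

```python
import base64
import pickle


def dumps_for_db(value):
    # pickle produces bytes under Python 3; base64-encode and decode to a
    # str so Postgres' TEXT column stores text rather than converting the
    # value into a hex representation.
    return base64.b64encode(pickle.dumps(value)).decode("ascii")


def loads_from_db(text):
    return pickle.loads(base64.b64decode(text.encode("ascii")))
```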
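For the Digest __hash__ item, the shape of the fix; Digest's real fields may differ from this sketch:

```python
class Digest:
    def __init__(self, algorithm, hex_digest):
        self.algorithm = algorithm
        self.hex_digest = hex_digest

    def __eq__(self, other):
        return (
            isinstance(other, Digest)
            and self.algorithm == other.algorithm
            and self.hex_digest == other.hex_digest
        )

    def __hash__(self):
        # Hash the same fields __eq__ compares, so equal digests hash equal.
        return hash((self.algorithm, self.hex_digest))
```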
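For the / to // conversion item, the behavioral difference in one example:

```python
assert 7 / 2 == 3.5  # / is true division in Python 3, even on ints
assert 7 // 2 == 3   # // truncates, matching Python 2's integer /
```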
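And for the PROJQUAY-135 revert, a sketch of reusing the parent field's hash implementation; the class name and the peewee CharField base are assumptions, not Quay's actual definitions:

```python
from peewee import CharField


class EncryptedCharField(CharField):  # illustrative name
    def __eq__(self, other):
        # Custom comparison logic would live here; defining __eq__ alone
        # sets __hash__ to None and makes instances unhashable.
        return super().__eq__(other)

    # Restore hashability by borrowing the parent field's implementation.
    __hash__ = CharField.__hash__
```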
Co-authored-by: Kenny Lee Sin Cheong <kenny.lee@redhat.com>
Co-authored-by: Tom McKay <thomasmckay@redhat.com>
While a transaction is obviously safer, with the number of tables
and rows referencing these tables now, a transaction potentially
locks up a significant chunk of the database. Since we're already
performing cleanup before calling the delete, including disabling
new data from being written for the User or Repository, deletion
without a transaction should (usually) be sufficient; if it isn't, an
IntegrityError will be raised, and the workers can retry and continue
the GC operation.
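A hedged sketch of that retry behavior, assuming a peewee model; the function name and worker flow are illustrative:

```python
from peewee import IntegrityError


def try_gc_delete(user):
    # Cleanup has already run: new writes for this User/Repository are
    # disabled, so rows referencing it should no longer appear.
    try:
        user.delete_instance(recursive=True)
        return True
    except IntegrityError:
        # A reference slipped in between cleanup and deletion; leave the
        # rows in place and let the GC worker retry on its next pass.
        return False
```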
* Optimize repository lookup queries to meet the expected maximums
We were accidentally looking up more data than strictly allowed.
Adds some additional assertions and testing as well.
Fixes https://issues.redhat.com/browse/PROJQUAY-439
* Change loading of repositories in the repo view to be paginated
We drop the "card" view and switch to a table-only view, but still
load the full set of repositories.
A follow-up change will begin to change the UI to only load additional
repos when requested.
* Change storage GC to process a single row at a time
This should remove the deadlock under the transaction and be much less
heavy on the DB
* Ensure we don't select repositories for GC from those already marked
for deletion or those under to-be-deleted namespaces
* Ensure that GC operations occur under global locks (sketched below),
to prevent concurrent GC of the same repositories, which should reduce
lock contention on the database
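One way to realize the global-lock bullet, sketched here with redis-py's lock; Quay's actual lock implementation, key naming, and timeouts may differ:

```python
import redis

client = redis.Redis(host="localhost", port=6379)  # assumed address


def gc_one_repository(repo_id):
    # blocking_timeout=0 means "don't wait": if another worker holds the
    # lock, skip this repository rather than contending for it.
    lock = client.lock(f"gc:repo:{repo_id}", timeout=300, blocking_timeout=0)
    if not lock.acquire():
        return False
    try:
        perform_gc(repo_id)
        return True
    finally:
        lock.release()


def perform_gc(repo_id):
    """Stand-in for the actual row-at-a-time GC described above."""
```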
Up until now, the `if not found_results` line could throw an UnboundLocalError: the variable was assigned inside a try block that could fail, but was referenced afterward regardless.
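The bug pattern in miniature, with hypothetical names; binding the variable before the try is the fix:

```python
def lookup():
    found_results = None  # bind the name up front, even if the query fails
    try:
        found_results = run_query()  # hypothetical call that may raise
    except Exception:
        pass  # query failed; found_results stays None
    if not found_results:  # previously raised UnboundLocalError on failure
        return []
    return found_results
```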
Add an extra check and return an empty dict if no repos are given.
This is needed because `Tag.repository << [rid for rid in
repository_ids]` will fail on MySQL if the list is empty.
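A sketch of that guard, using the Tag model from the snippet above; MySQL rejects the `IN ()` that an empty list would render to:

```python
def tags_for_repositories(repository_ids):
    if not repository_ids:
        # Avoid Tag.repository << []: peewee renders it as "IN ()",
        # which is a MySQL syntax error.
        return {}
    query = Tag.select().where(Tag.repository << repository_ids)
    return {tag.id: tag for tag in query}
```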
* Implement OCI manifest and index support
* Remove unnecessary data model check in registry protocol fixtures
* Implement OCI testing
* Add migration for adding OCI content types
* Remove unused supports_schema2
* Add OCI_NAMESPACE_WHITELIST and reformat with black
* Catch errors in legacy image population and raise appropriately
* Add support for registration of additional artifact types
This change adds the infrastructure to support artifacts in OCI
manifests, but does not yet register any types (see the sketch after
this list)
* Add a feature flag for enabling experimental Helm support via OCI
See: https://helm.sh/docs/topics/registries/
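A hedged sketch of what registering an additional artifact type could look like; the registry and function names are illustrative, not Quay's actual API, and the Helm media types follow the Helm OCI docs linked above:

```python
# Map of OCI config media type -> allowed layer media types.
_ARTIFACT_TYPES = {}


def register_artifact_type(config_media_type, layer_media_types):
    """Allow OCI manifests whose config blob uses the given media type."""
    _ARTIFACT_TYPES[config_media_type] = set(layer_media_types)


def is_supported_artifact(config_media_type, layer_media_type):
    return layer_media_type in _ARTIFACT_TYPES.get(config_media_type, ())


# Example: Helm charts, gated behind the feature flag described above.
register_artifact_type(
    "application/vnd.cncf.helm.config.v1+json",
    ["application/tar+gzip"],
)
```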
* DBA operator migration generator bug fixes.
Fix a bug where historical migrations had table and index names swapped.
Fix a bug where migrations were always being generated on upgrade and
downgrade.
* Reformat with black
We now cap index name lengths at 64 characters (per MySQL's
restriction). If an index name is *longer* than that, we hash the full
index name and append the short SHA of that hash as a suffix to a
shortened version of the index name. This ensures the index name meets
the length requirement while *also* being stable across generations.
This change also adds a legacy map of long index names to the names we
manually applied, to ensure they don't get regenerated either.
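A sketch of the scheme; the exact split between the readable prefix and the hash suffix is illustrative:

```python
import hashlib

MAX_INDEX_NAME_LENGTH = 64  # MySQL's identifier limit
SHA_SUFFIX_LENGTH = 8       # short SHA appended for uniqueness


def stable_index_name(name):
    if len(name) <= MAX_INDEX_NAME_LENGTH:
        return name
    # Hash the *entire* original name so the suffix is stable across
    # generations, then shorten the readable part to make room for it.
    short_sha = hashlib.sha256(name.encode("utf-8")).hexdigest()
    short_sha = short_sha[:SHA_SUFFIX_LENGTH]
    prefix = name[: MAX_INDEX_NAME_LENGTH - SHA_SUFFIX_LENGTH - 1]
    return f"{prefix}_{short_sha}"
```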
When stored directly, we were encountering unicode errors for the image
metadata on older Docker clients. By serializing to/from JSON, we ensure
the unicode is handled properly.
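A minimal sketch of that round trip; the column name is hypothetical. json.dumps escapes non-ASCII characters by default, so the stored TEXT is plain ASCII regardless of what the client sent:

```python
import json


def store_image_metadata(image_row, metadata):
    # Serialize instead of storing the raw value; ensure_ascii=True (the
    # default) escapes any unicode into \uXXXX sequences.
    image_row.metadata_json = json.dumps(metadata)  # hypothetical column
    image_row.save()


def load_image_metadata(image_row):
    return json.loads(image_row.metadata_json)
```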
Instead, we now simply save all the Image information into the
in-memory session and then construct the manifest directly at the end.
Fixes https://issues.redhat.com/browse/PROJQUAY-513
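A heavily hedged sketch of that flow, with illustrative names only: image info accumulates in the in-memory session as it arrives, and the manifest is synthesized from it at the end rather than re-read from the database:

```python
def record_image(session, image_id, metadata):
    # Keep everything needed for the manifest in the session itself,
    # rather than re-querying image rows later.
    session.setdefault("images", []).append(
        {"id": image_id, "metadata": metadata}
    )


def build_manifest(session):
    # Construct the manifest directly from the accumulated session state.
    return {"layers": session.get("images", [])}
```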