BUGFIX: Multiple Issues
DOCS-676: Resource ARN "*" description misleading
DOCS-677: Site Replication Recovery requires removing failed site first
MINIO-16268: Further simplify erasure set stripe calculation docs to avoid further confusion
@@ -182,17 +182,16 @@ policy elements, see the :aws-docs:`IAM JSON Policy Elements Reference
    ]
 }
 
-- For the ``Statement.Action`` array, specify one or more
-  :ref:`supported S3 API operations <minio-policy-actions>`. MinIO deployments
-  supports a subset of AWS S3 API operations.
+- For the ``Statement.Action`` array, specify one or more :ref:`supported S3 API operations <minio-policy-actions>`.
 
-- For the ``Statement.Resource`` key, you can replace the ``*`` with
-  the specific bucket to which the policy statement should apply.
-  Using ``*`` applies the statement to all resources on the MinIO deployment.
+- For the ``Statement.Resource`` key, specify the bucket or bucket prefix to which to restrict the policy.
+  You can use ``*`` and ``?`` wildcard characters as per the :s3-docs:`S3 Resource Spec <s3-arn-format.html>`.
+
+  The ``*`` wildcard may result in unintended application of a policy to multiple buckets or prefixes based on the pattern match.
+  For example, ``arn:aws:s3:::data*`` would match the buckets ``data``, ``data_private``, and ``data_internal``.
+  Specifying only ``*`` as the resource key applies the policy to all buckets and prefixes on the deployment.
 
-- For the ``Statement.Condition`` key, you can specify one or more
-  :ref:`supported Conditions <minio-policy-conditions>`. MinIO
-  deployments supports a subset of AWS S3 conditions.
+- For the ``Statement.Condition`` key, you can specify one or more :ref:`supported Conditions <minio-policy-conditions>`.
 
 .. _minio-policy-actions:
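The following sketch shows the revised guidance in practice, scoping a policy to the exact bucket ``data`` rather than a ``data*`` pattern. The alias ``myminio``, policy name ``list-data``, and bucket name are placeholders, and the example assumes a recent ``mc`` release where the subcommand is ``mc admin policy create`` (older releases used ``mc admin policy add``):

.. code-block:: shell

   # Sketch only: pin the ARNs to the exact bucket name. A trailing wildcard
   # such as arn:aws:s3:::data* would also match data_private and data_internal.
   cat > list-data.json <<'EOF'
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:ListBucket"],
         "Resource": ["arn:aws:s3:::data", "arn:aws:s3:::data/*"]
       }
     ]
   }
   EOF

   mc admin policy create myminio list-data list-data.json

Note that ``arn:aws:s3:::data`` covers bucket-level actions such as ``s3:ListBucket``, while ``arn:aws:s3:::data/*`` covers the objects within the bucket.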
@@ -42,24 +42,17 @@ Zero-parity deployments depend on the underlying storage for resiliency and availability
 Erasure Sets
 ------------
 
-An *Erasure Set* is a set of drives in a MinIO deployment that support Erasure Coding.
-MinIO evenly distributes object data and parity blocks among the drives in the Erasure Set.
-MinIO randomly and uniformly distributes the data and parity blocks across drives in the erasure set with *no overlap*.
-Each unique object has no more than one data or parity block per drive in the set.
+An *Erasure Set* is a group of drives onto which MinIO writes erasure coded objects.
+MinIO randomly and uniformly distributes the data and parity blocks of a given object across the erasure set drives, where a given drive has no more than one block of either type per object (no overlap).
 
-MinIO automatically calculates the number and size of Erasure Sets ("stripe size") based on the total number of nodes and drives in the :ref:`Server Pool <minio-intro-server-pool>`, where the minimum stripe size is 2 and the maximum stripe size is 16.
-All erasure sets in a given pool have the same stripe size, and MinIO never modifies nor allows modification of stripe size after initial configuration.
-The algorithm for selecting stripe size takes into account the total number of nodes in the deployment, such that the selected stripe allows for uniform distribution of erasure set drives across all nodes in the pool.
+MinIO calculates the number and size of *Erasure Sets* by dividing the total number of drives in the :ref:`Server Pool <minio-intro-server-pool>` into sets consisting of between 4 and 16 drives each.
+
+For clusters, pools, or deployments with more than 16 drives, MinIO divides the drives into multiple erasure sets of the same number of drives.
+For this reason, the total number of drives in a deployment must be divisible evenly by a number between 4 and 16.
+
+For example, 20 drives are divided into two erasure sets of 10 drives each.
+28 drives are divided into 2 erasure sets of 14 drives each.
+40 drives are divided into 4 erasure sets of 10 drives each.
+
+Because numbers such as 17, 19, or 34 cannot be evenly divided by any number between 4 and 16, you cannot have a deployment with such a number of drives.
+Add or remove drives to return to an allowable number of drives.
 
-Use the MinIO `Erasure Coding Calculator <https://min.io/product/erasure-code-calculator>`__ to determine the optimal erasure set size for your preferred MinIO topology.
-Erasure set stripe size dictates the maximum possible :ref:`parity <minio-ec-parity>` of the deployment.
+Use the MinIO `Erasure Coding Calculator <https://min.io/product/erasure-code-calculator>`__ to explore the possible erasure set size and distributions for your planned topology.
 MinIO strongly recommends architecture reviews via |SUBNET| as part of your provisioning and deployment process to ensure long term success and stability.
 As a general guide, plan your topologies to have an even number of nodes and drives where both the nodes and drives have a common denominator of 16.
 
 .. _minio-ec-parity:
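The divisibility rule above can be checked for any planned drive count. The following loop is an illustration only, not a MinIO tool: it lists every layout that satisfies the 4 to 16 drive rule for a hypothetical 40-drive pool. MinIO selects the actual layout automatically at startup.

.. code-block:: shell

   # Illustration only: enumerate layouts where the drive count divides
   # evenly into erasure sets of between 4 and 16 drives.
   drives=40
   for size in $(seq 16 -1 4); do
     if [ $((drives % size)) -eq 0 ]; then
       echo "${drives} drives: $((drives / size)) erasure set(s) of ${size} drives"
     fi
   done

For 40 drives this prints layouts of 10, 8, 5, and 4 drives per set; for a count such as 34 it prints nothing, matching the warning in the text.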
@@ -32,12 +32,20 @@ If a peer site fails, such as due to a major disaster or long power outage, you
 
 The following procedure can restore data in scenarios where :ref:`site replication <minio-site-replication-overview>` was active prior to the site loss.
 
-1. Deploy a new MinIO site using the same ``root`` credentials as used on other deployments in the site replication configuration
-
-   You can use the original hardware, if still available, but you must first wipe any remaining data before creating the new site.
-
-2. Configure the new site with the same Identity Provider (IDp) as the other site(s)
-
-3. :ref:`Expand the existing site replication <minio-expand-site-replication>` by adding the newly deployed site
-
-4. Remove the lost site from the site replication configuration
+1. Remove the failed site from the MinIO site replication configuration using the :mc-cmd:`mc admin replicate remove` command.
+   For example, the following command removes a failed site with :ref:`alias <alias>` ``siteB`` from a site replication configuration that includes the healthy site with alias ``siteA``:
+
+   .. code-block:: shell
+      :class: copyable
+
+      mc admin replicate remove siteA siteB --force
+
+2. Deploy a new MinIO site using the same ``root`` credentials as used on other deployments in the site replication configuration
+
+   You can use the original hardware from the failed site, if still available and functional, but you must first wipe any remaining data before creating the new site.
+   Ensure you have fully remediated any issues that resulted in the original failure state prior to reusing the hardware.
+
+3. Configure the new site with the same Identity Provider (IDp) as the other site(s)
+
+4. :ref:`Expand the existing site replication <minio-expand-site-replication>` by adding the newly deployed site
 
 Site replication healing automatically adds IAM settings, buckets, bucket configurations, and objects from the existing site(s) to the new site with no further action required.
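As a sketch of the final expansion step, assume the replacement site is reachable and assigned the hypothetical alias ``siteC``; the endpoint and credentials below are placeholders, and depending on your ``mc`` release you may need to list every existing peer in the ``add`` command:

.. code-block:: shell

   # Register an alias for the newly deployed replacement site.
   mc alias set siteC https://minio-new.example.net ROOTUSER ROOTPASSWORD

   # Expand the replication configuration from a healthy peer.
   mc admin replicate add siteA siteC

   # Confirm the topology and monitor healing progress.
   mc admin replicate info siteA
   mc admin replicate status siteA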