OPTIMIZATION: Storage, Capacity, and Prerequisites (#1118)
No related issue here, just freewheeling from an internal request. This started with the request to change our recommendation around label/UUID-based drive mounting to a requirement.

Looking at the pages, I feel like our prerequisite and consideration sections are a little long in the tooth, and are at least slightly duplicative of what is on the checklist pages (hardware, software). This is at least a first swing at tidying things up.

I think in a second pass I'll move more of the prerequisites into the Hardware/Software/Security checklist pages, and keep the on-tutorial sections as simple definition lists so that each page flows more easily. We can push users to the details if they want them while keeping the high-level requirements there.

Note this does **not** yet address the new features related to non-sequential hostname support. That has to come later.

---------

Co-authored-by: Eco <41090896+eco-minio@users.noreply.github.com>
Co-authored-by: Andrea Longo <feorlen@users.noreply.github.com>
Co-authored-by: Daryl White <53910321+djwfyi@users.noreply.github.com>
@@ -211,71 +211,45 @@ MinIO **does not** support arbitrary migration of a drive with existing MinIO da

 .. end-local-jbod-single-node-desc

-.. start-local-jbod-desc
+.. start-storage-requirements-desc

-MinIO strongly recommends direct-attached :abbr:`JBOD (Just a Bunch of Disks)` arrays with XFS-formatted disks for best performance.
-
-- Direct-Attached Storage (DAS) has significant performance and consistency advantages over networked storage (NAS, SAN, NFS).
-
-- Deployments using non-XFS filesystems (ext4, btrfs, zfs) tend to have lower performance while exhibiting unexpected or undesired behavior.
-
-- RAID or similar technologies do not provide additional resilience or availability benefits when used with distributed MinIO deployments, and typically reduce system performance.
-
-Ensure all nodes in the |deployment| use the same type (NVMe, SSD, or HDD) of drive with identical capacity (e.g. ``N`` TB). MinIO does not distinguish drive types and does not benefit from mixed storage types. Additionally, MinIO limits the size used per drive to the smallest drive in the deployment. For example, if the deployment has 15 10TB drives and 1 1TB drive, MinIO limits the per-drive capacity to 1TB.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential series of drives when creating the new |deployment|, where all nodes in the |deployment| have an identical set of mounted drives. MinIO also requires that the ordering of physical drives remain constant across restarts, such that a given mount point always points to the same formatted drive. MinIO therefore **strongly recommends** using ``/etc/fstab`` or a similar file-based mount configuration to ensure that drive ordering cannot change after a reboot. For example:
-
-.. code-block:: shell
-
-   $ mkfs.xfs /dev/sdb -L DISK1
-   $ mkfs.xfs /dev/sdc -L DISK2
-   $ mkfs.xfs /dev/sdd -L DISK3
-   $ mkfs.xfs /dev/sde -L DISK4
-
-   $ nano /etc/fstab
-
-     # <file system>  <mount point>  <type>  <options>         <dump>  <pass>
-     LABEL=DISK1      /mnt/disk1     xfs     defaults,noatime  0       2
-     LABEL=DISK2      /mnt/disk2     xfs     defaults,noatime  0       2
-     LABEL=DISK3      /mnt/disk3     xfs     defaults,noatime  0       2
-     LABEL=DISK4      /mnt/disk4     xfs     defaults,noatime  0       2
-
-You can then specify the entire range of drives using the expansion notation ``/mnt/disk{1...4}``. If you want to use a specific subfolder on each drive, specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO **does not** support arbitrary migration of a drive with existing MinIO data to a new mount position, whether intentional or as the result of OS-level behavior.
-
-.. note::
-
-   Cloud environment instances which depend on mounted external storage may encounter boot failure if one or more of the remote file mounts return errors or failure.
-   For example, an AWS ECS instance with mounted persistent EBS volumes may fail to boot with the standard ``/etc/fstab`` configuration if one or more EBS volumes fail to mount.
-
-   You can set the ``nofail`` option to silence error reporting at boot and allow the instance to boot with one or more mount issues.
-
-   You should not use this option on systems which have locally attached disks, as silencing drive errors prevents both MinIO and the OS from responding to those errors in a normal fashion.
-
-.. end-local-jbod-desc
+The following requirements summarize the :ref:`minio-hardware-checklist-storage` section of MinIO's hardware recommendations:
+
+Use Local Storage
+   Direct-Attached Storage (DAS) has significant performance and consistency advantages over networked storage (:abbr:`NAS (Network Attached Storage)`, :abbr:`SAN (Storage Area Network)`, :abbr:`NFS (Network File Storage)`).
+   MinIO strongly recommends flash storage (NVMe, SSD) for primary or "hot" data.
+
+Use XFS-Formatting for Drives
+   MinIO strongly recommends provisioning XFS-formatted drives for storage.
+   MinIO uses XFS as part of internal testing and validation suites, providing additional confidence in performance and behavior at all scales.
+   MinIO does **not** test nor recommend any other filesystem, such as EXT4, BTRFS, or ZFS.
+
+Use Consistent Type of Drive
+   MinIO does not distinguish drive types and does not benefit from mixed storage types.
+   Each :term:`pool` must use the same type of drive (NVMe, SSD).
+
+   For example, deploy a pool consisting of only NVMe drives.
+   If you deploy some drives as SSD or HDD, MinIO treats those drives identically to the NVMe drives.
+   This can result in performance issues, as some drives have differing or worse read/write characteristics and cannot respond at the same rate as the NVMe drives.
+
+Use Consistent Size of Drive
+   MinIO limits the size used per drive to the smallest drive in the deployment.
+
+   For example, deploy a pool consisting of the same number of NVMe drives with identical capacity of ``7.68TiB``.
+   If you deploy one drive with ``3.84TiB``, MinIO treats all drives in the pool as having that smaller capacity.
+
+Configure Sequential Drive Mounting
+   MinIO uses Go expansion notation ``{x...y}`` to denote a sequential series of drives when creating the new |deployment|, where all nodes in the |deployment| have an identical set of mounted drives.
+   Configure drive mounting paths as a sequential series to best support this notation.
+   For example, mount your drives using a pattern of ``/mnt/drive-n``, where ``n`` starts at ``1`` and increments by ``1`` per drive.
+
+Persist Drive Mounting and Mapping Across Reboots
+   Use ``/etc/fstab`` to ensure consistent drive-to-mount mapping across node reboots.
+   Non-Linux operating systems should use the equivalent drive mount management tool.
+
+.. end-storage-requirements-desc

 .. start-nondisruptive-upgrade-desc
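To make the sequential-mount guidance in the storage requirements above concrete, here is a minimal sketch of how a deployment that mounts drives at ``/mnt/drive-1`` through ``/mnt/drive-4`` can reference them with expansion notation; the hostnames and paths are placeholders, not values from this changeset.

.. code-block:: shell

   # Placeholder hostnames and mount paths; substitute your own.
   # Each node exposes the same four sequential mount points.
   minio server https://minio-{1...4}.example.net:9000/mnt/drive-{1...4} \
         --console-address ":9001"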
@@ -8,6 +8,35 @@ This procedure does not require taking downtime and is non-disruptive to ongoing

 This page documents methods for upgrading using the update-then-restart method for both ``systemctl`` and user-managed MinIO deployments.
 Deployments using Ansible, Terraform, or other management tools can use the procedures here as guidance for implementation within the existing automation framework.

+Prerequisites
+-------------
+
+Back Up Cluster Settings First
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations prior to starting the upgrade.
+You can use these snapshots to restore :ref:`bucket <minio-mc-admin-cluster-bucket-import>` and :ref:`IAM <minio-mc-admin-cluster-iam-import>` settings to recover from user or process errors as necessary.
+
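As an illustration of the backup step above, a minimal sketch assuming an alias named ``myminio`` already configured with ``mc alias set``; the alias name is a placeholder.

.. code-block:: shell

   # Snapshot bucket metadata and IAM configuration before upgrading.
   # The exported snapshots can be restored later with the matching import commands.
   mc admin cluster bucket export myminio
   mc admin cluster iam export myminio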
+Check Release Notes
+~~~~~~~~~~~~~~~~~~~
+
+MinIO publishes :minio-git:`Release Notes <minio/releases>` for your reference as part of identifying the changes applied in each release.
+Review the associated release notes between your current MinIO version and the newer release so you have a complete view of any changes.
+
+Pay particular attention to any releases that are *not* backwards compatible.
+You cannot trivially downgrade from any such release.
+
+Test Upgrades Before Applying To Production
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MinIO uses a testing and validation suite as part of all releases.
+However, no testing suite can account for the unique combinations and permutations of hardware, software, and workloads in your production environment.
+
+You should always validate any MinIO upgrades in a lower environment (Dev/QA/Staging) *before* applying those upgrades to Production deployments, or any other environment containing critical data.
+Performing updates to production environments without first validating in lower environments is done at your own risk.
+
+For MinIO deployments that are significantly behind latest stable (6+ months), consider using |SUBNET| for additional support and guidance during the upgrade procedure.
+
 Considerations
 --------------
@@ -27,15 +56,6 @@ For virtualized environments which *require* rolling updates, you should modify

 3. Update the virtual machine/container configuration to use the matching newer MinIO image.
 4. Perform the rolling restart of each machine/container with the updated image.

-Check Release Notes
-~~~~~~~~~~~~~~~~~~~
-
-MinIO publishes :minio-git:`Release Notes <minio/releases>` for your reference as part of identifying the changes applied in each release.
-Review the associated release notes between your current MinIO version and the newer release so you have a complete view of any changes.
-
-Pay particular attention to any releases that are *not* backwards compatible.
-You cannot trivially downgrade from any such release.
-
 .. _minio-upgrade-systemctl:

 Update ``systemctl``-Managed MinIO Deployments
@@ -32,6 +32,12 @@ Production Hardware Recommendations

 The following checklist follows MinIO's `Recommended Configuration <https://min.io/product/reference-hardware?ref-docs>`__ for production deployments.
 The provided guidance is intended as a baseline and cannot replace |subnet| Performance Diagnostics, Architecture Reviews, and direct-to-engineering support.

+MinIO, like any distributed system, benefits from selecting identical configurations for all nodes in a given :term:`server pool`.
+Ensure a consistent selection of hardware (CPU, memory, motherboard, storage adapters) and software (operating system, kernel settings, system services) across pool nodes.
+
+Deployments may exhibit unpredictable performance if nodes have varying hardware or software configurations.
+Workloads that benefit from storing aged data on lower-cost hardware should instead deploy a dedicated "warm" or "cold" MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>` data to that tier.
+
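For illustration only, a quick sketch that compares kernel, CPU, and memory details across pool nodes over SSH; the hostnames are placeholders and the checks are not exhaustive.

.. code-block:: shell

   # Placeholder hostnames; run from a host with SSH access to the pool.
   for host in minio-1.example.net minio-2.example.net minio-3.example.net minio-4.example.net; do
       echo "== ${host} =="
       ssh "${host}" 'uname -r; lscpu | grep "Model name"; free -g | grep Mem'
   done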
 .. admonition:: MinIO does not provide hosted services or hardware sales
    :class: important
@@ -202,16 +208,80 @@ Storage

 .. cond:: not k8s

+   Recommended Storage Mediums
+   +++++++++++++++++++++++++++
+
    MinIO recommends using flash-based storage (NVMe or SSD) for all workload types and scales.
    Workloads that require high performance should prefer NVMe over SSD.

    MinIO deployments using HDD-based storage are best suited as cold-tier targets for :ref:`Object Transition ("Tiering") <minio-lifecycle-management-tiering>` of aged data.
    HDD storage typically does not provide the necessary performance to meet the expectations of modern workloads, and any cost efficiencies at scale are offset by the performance constraints of the medium.

-   Format Drives as XFS
-   ++++++++++++++++++++
+   Use Direct-Attached "Local" Storage (DAS)
+   +++++++++++++++++++++++++++++++++++++++++
+
+   :abbr:`DAS (Direct-Attached Storage)`, such as locally-attached JBOD (Just a Bunch of Disks) arrays, provides significant performance and consistency advantages over networked (NAS, SAN, NFS) storage.
+
+   .. dropdown:: Network File System Volumes Break Consistency Guarantees
+      :class-title: note
+
+      MinIO's strict **read-after-write** and **list-after-write** consistency model requires local drive filesystems.
+      MinIO cannot provide consistency guarantees if the underlying storage volumes are NFS or a similar network-attached storage volume.
+
+   Use XFS-Formatted Drives with Labels
+   ++++++++++++++++++++++++++++++++++++

    Format drives as XFS and present them to MinIO as a :abbr:`JBOD (Just a Bunch of Disks)` array with no RAID or other pooling configurations.
+   Using any other type of backing storage (SAN/NAS, ext4, RAID, LVM) typically results in a reduction in performance, reliability, predictability, and consistency.
+
+   When formatting XFS drives, apply a unique label per drive.
+   For example, the following command formats four drives as XFS and applies a corresponding drive label.
+
+   .. code-block:: shell
+
+      mkfs.xfs /dev/sdb -L MINIODRIVE1
+      mkfs.xfs /dev/sdc -L MINIODRIVE2
+      mkfs.xfs /dev/sdd -L MINIODRIVE3
+      mkfs.xfs /dev/sde -L MINIODRIVE4
+
+   Mount Drives using ``/etc/fstab``
+   +++++++++++++++++++++++++++++++++
+
+   MinIO **requires** that drives maintain their ordering at the mounted position across restarts.
+   MinIO **does not** support arbitrary migration of a drive with existing MinIO data to a new mount position, whether intentional or as the result of OS-level behavior.
+
+   You **must** use ``/etc/fstab`` or a similar mount control system to mount drives at a consistent path.
+   For example:
+
+   .. code-block:: shell
+      :class: copyable
+
+      $ nano /etc/fstab
+
+      # <file system>     <mount point>  <type>  <options>         <dump>  <pass>
+      LABEL=MINIODRIVE1   /mnt/drive-1   xfs     defaults,noatime  0       2
+      LABEL=MINIODRIVE2   /mnt/drive-2   xfs     defaults,noatime  0       2
+      LABEL=MINIODRIVE3   /mnt/drive-3   xfs     defaults,noatime  0       2
+      LABEL=MINIODRIVE4   /mnt/drive-4   xfs     defaults,noatime  0       2
+
+   You can use ``mount -a`` to mount those drives at those paths during initial setup.
+   The Operating System should otherwise mount these drives as part of the node startup process.
+
+   MinIO **strongly recommends** using label-based mounting rules over UUID-based rules.
+   Label-based rules allow swapping an unhealthy or non-working drive with a replacement that has matching format and label.
+   UUID-based rules require editing the ``/etc/fstab`` file to replace mappings with the new drive UUID.
+
+   .. note::
+
+      Cloud environment instances which depend on mounted external storage may encounter boot failure if one or more of the remote file mounts return errors or failure.
+      For example, an AWS ECS instance with mounted persistent EBS volumes may not boot with the standard ``/etc/fstab`` configuration if one or more EBS volumes fail to mount.
+
+      You can set the ``nofail`` option to silence error reporting at boot and allow the instance to boot with one or more mount issues.
+
+      You should not use this option on systems with locally attached disks, as silencing drive errors prevents both MinIO and the OS from responding to those errors in a normal fashion.
+
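As a quick sanity check after editing ``/etc/fstab``, a sketch that mounts the declared filesystems and confirms the label-to-mount mapping; it assumes the ``MINIODRIVE`` labels from the example above.

.. code-block:: shell

   # Mount everything declared in /etc/fstab, then confirm each label landed
   # at its expected mount point.
   mount -a
   lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT | grep MINIODRIVE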
+   Disable XFS Retry On Error
+   ++++++++++++++++++++++++++
+
    MinIO **strongly recommends** disabling `retry-on-error <https://docs.kernel.org/admin-guide/xfs.html?highlight=xfs#error-handling>`__ behavior using the ``max_retries`` configuration for the following error classes:
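For illustration, one way to apply a ``max_retries`` value of ``0`` through the XFS sysfs interface; ``sdb`` is an assumed device name, and the full set of error classes comes from the kernel documentation linked above rather than this sketch.

.. code-block:: shell

   # Assumed device name; repeat for each MinIO drive and for each error class
   # covered by the recommendation (EIO shown as an example).
   echo 0 > /sys/fs/xfs/sdb/error/metadata/EIO/max_retries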
@@ -95,26 +95,14 @@ You can specify the entire range of hostnames using the expansion notation ``min

 .. _deploy-minio-distributed-prereqs-storage:

-Local JBOD Storage with Sequential Mounts
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Storage Requirements
+~~~~~~~~~~~~~~~~~~~~

 .. |deployment| replace:: deployment

 .. include:: /includes/common-installation.rst
-   :start-after: start-local-jbod-desc
-   :end-before: end-local-jbod-desc
-
-.. admonition:: Network File System Volumes Break Consistency Guarantees
-   :class: note
-
-   MinIO's strict **read-after-write** and **list-after-write** consistency model requires local drive filesystems.
-
-   MinIO cannot provide consistency guarantees if the underlying storage volumes are NFS or a similar network-attached storage volume.
-
-   For deployments that *require* using network-attached storage, use NFSv4 for best results.
+   :start-after: start-storage-requirements-desc
+   :end-before: end-storage-requirements-desc

 Time Synchronization
 ~~~~~~~~~~~~~~~~~~~~
@@ -128,67 +116,30 @@ Check the documentation for your operating system for how to set up and maintain

 Considerations
 --------------

-Homogeneous Node Configurations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-MinIO strongly recommends selecting substantially similar hardware configurations for all nodes in the deployment.
-Ensure the hardware (CPU, memory, motherboard, storage adapters) and software (operating system, kernel settings, system services) is consistent across all nodes.
-
-Deployments may exhibit unpredictable performance if nodes have heterogeneous hardware or software configurations.
-Workloads that benefit from storing aged data on lower-cost hardware should instead deploy a dedicated "warm" or "cold" MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>` data to that tier.
-
 Erasure Coding Parity
 ~~~~~~~~~~~~~~~~~~~~~

-MinIO :ref:`erasure coding <minio-erasure-coding>` is a data redundancy and availability feature that allows MinIO deployments to automatically reconstruct objects on-the-fly despite the loss of multiple drives or nodes in the cluster.
-Erasure Coding provides object-level healing with less overhead than adjacent technologies such as RAID or replication.
-Distributed deployments implicitly enable and rely on erasure coding for core functionality.
-
-Erasure Coding splits objects into data and parity blocks, where parity blocks support reconstruction of missing or corrupted data blocks.
-The number of parity blocks in a deployment controls the deployment's relative data redundancy.
-Higher levels of parity allow for higher tolerance of drive loss at the cost of total available storage.
-
-MinIO defaults to ``EC:4``, or 4 parity blocks per :ref:`erasure set <minio-ec-erasure-set>`.
-You can set a custom parity level by setting the appropriate :ref:`MinIO Storage Class environment variable <minio-server-envvar-storage-class>`.
-Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in selecting the appropriate erasure code parity level for your cluster.
+MinIO :ref:`erasure coding <minio-erasure-coding>` is a data redundancy and availability feature that allows MinIO deployments to automatically reconstruct objects on-the-fly despite the loss of multiple drives or nodes in the cluster.
+
+MinIO defaults to ``EC:4``, or 4 parity blocks per :ref:`erasure set <minio-ec-erasure-set>`.
+You can set a custom parity level by setting the appropriate :ref:`MinIO Storage Class environment variable <minio-server-envvar-storage-class>`.
+Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in selecting the appropriate erasure code parity level for your cluster.
+
+.. important::
+
+   While you can change erasure parity settings at any time, objects written with a given parity do **not** automatically update to the new parity settings.
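A minimal sketch of setting a custom parity level through the standard storage class environment variable; the value shown is only an example, and ``systemd``-managed deployments would set it in the service environment file rather than an interactive shell.

.. code-block:: shell

   # Example value only; choose a parity level appropriate for your erasure set size.
   export MINIO_STORAGE_CLASS_STANDARD="EC:3"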
 Capacity-Based Planning
 ~~~~~~~~~~~~~~~~~~~~~~~

-MinIO generally recommends planning capacity such that :ref:`server pool expansion <expand-minio-distributed>` is only required after 2+ years of deployment uptime.
-
-For example, consider an application suite that is estimated to produce 10TB of data per year. The MinIO deployment should provide *at minimum*:
-
-``10TB + 10TB + 10TB = 30TB``
-
-MinIO recommends adding buffer storage to account for potential growth in stored data (e.g. 40TB of total usable storage). As a rule-of-thumb, more capacity initially is preferred over frequent just-in-time expansion to meet capacity requirements.
-
-Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity. Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.
+MinIO recommends planning storage capacity sufficient to store **at least** 2 years of data before reaching 70% usage.
+Performing :ref:`server pool expansion <expand-minio-distributed>` more frequently or on a "just-in-time" basis generally indicates an architecture or planning issue.
+
+For example, consider an application suite expected to produce at least 100 TiB of data per year and a 3 year target before expansion.
+By provisioning ~500 TiB of usable storage up front, the cluster stays safely under the 70% threshold with additional buffer for growth in data storage output per year.
+
+Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity.
+Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.
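A rough back-of-the-envelope check of the example above, treating usable capacity as raw capacity scaled by the data-to-total ratio of the erasure code; the 16-drive, ``EC:4`` figures are assumptions for illustration, and the Erasure Code Calculator remains the authoritative tool.

.. code-block:: shell

   # 16-drive erasure set with EC:4 parity: 12 of every 16 blocks hold data.
   # 500 TiB of usable capacity then needs roughly 667 TiB of raw capacity.
   awk 'BEGIN { usable=500; data=12; total=16; printf "raw TiB needed: %.0f\n", usable * total / data }'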

 Recommended Operating Systems
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -24,28 +24,16 @@ The procedures on this page cover deploying MinIO in a Single-Node Multi-Drive (

 Prerequisites
 -------------

-.. _deploy-minio-standalone-multidrive:
-
-Local JBOD Storage with Sequential Mounts
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Storage Requirements
+~~~~~~~~~~~~~~~~~~~~

 .. |deployment| replace:: deployment

 .. include:: /includes/common-installation.rst
-   :start-after: start-local-jbod-single-node-desc
-   :end-before: end-local-jbod-single-node-desc
-
-.. admonition:: Network File System Volumes Break Consistency Guarantees
-   :class: note
-
-   MinIO's strict **read-after-write** and **list-after-write** consistency model requires local drive filesystems.
-
-   MinIO cannot provide consistency guarantees if the underlying storage volumes are NFS or a similar network-attached storage volume.
-
-   For deployments that *require* using network-attached storage, use NFSv4 for best results.
+   :start-after: start-storage-requirements-desc
+   :end-before: end-storage-requirements-desc
+
+.. _deploy-minio-standalone-multidrive:

 Deploy Single-Node Multi-Drive MinIO
 ------------------------------------
@@ -17,6 +17,10 @@ Expansion does not provide Business Continuity/Disaster Recovery (BC/DR)-grade p

 While each pool is an independent set of servers with distinct :ref:`erasure sets <minio-ec-erasure-set>` for availability, the complete loss of one pool results in MinIO stopping I/O for all pools in the deployment.
 Similarly, an erasure set which loses quorum in one pool represents data loss of objects stored in that set, regardless of the number of other erasure sets or pools.

+The new server pool does **not** need to use the same type or size of hardware and software configuration as any existing server pool, though doing so may allow for simplified cluster management and more predictable performance across pools.
+All drives in the new pool **should** be of the same type and size.
+Review MinIO's :ref:`hardware recommendations <minio-hardware-checklist>` for more complete guidance on selecting an appropriate configuration.
+
 To provide BC-DR grade failover and recovery support for your single or multi-pool MinIO deployments, use :ref:`site replication <minio-site-replication-overview>`.

 The procedure on this page expands an existing :ref:`distributed <deploy-minio-distributed>` MinIO deployment with an additional server pool.
|
|||||||
|
|
||||||
Configuring DNS to support MinIO is out of scope for this procedure.
|
Configuring DNS to support MinIO is out of scope for this procedure.
|
||||||
|
|
||||||
Local JBOD Storage with Sequential Mounts
|
Storage Requirements
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
.. |deployment| replace:: server pool
|
.. |deployment| replace:: server pool
|
||||||
|
|
||||||
.. include:: /includes/common-installation.rst
|
.. include:: /includes/common-installation.rst
|
||||||
:start-after: start-local-jbod-desc
|
:start-after: start-storage-requirements-desc
|
||||||
:end-before: end-local-jbod-desc
|
:end-before: end-storage-requirements-desc
|
||||||
|
|
||||||
.. admonition:: Network File System Volumes Break Consistency Guarantees
|
|
||||||
:class: note
|
|
||||||
|
|
||||||
MinIO's strict **read-after-write** and **list-after-write** consistency
|
|
||||||
model requires local drive filesystems (``xfs``, ``ext4``, etc.).
|
|
||||||
|
|
||||||
MinIO cannot provide consistency guarantees if the underlying storage
|
|
||||||
volumes are NFS or a similar network-attached storage volume.
|
|
||||||
|
|
||||||
For deployments that *require* using network-attached storage, use
|
|
||||||
NFSv4 for best results.
|
|
||||||
|
|
||||||
Minimum Drives for Erasure Code Parity
|
Minimum Drives for Erasure Code Parity
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
@@ -123,6 +115,21 @@ You can use the

 listed value is at least ``2 x EC:N``, the pool supports the deployment's
 erasure parity settings.

+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+Multi-node systems must maintain synchronized time and date to ensure stable internode operations and interactions.
+Make sure all nodes sync to the same time server regularly.
+Operating systems vary in the methods used to synchronize time and date, such as ``ntp``, ``timedatectl``, or ``timesyncd``.
+
+Check the documentation for your operating system for how to set up and maintain accurate and identical system clock times across nodes.
+
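As a quick illustration of the time-check step, a sketch that inspects clock synchronization on each node; the hostnames are placeholders, and the exact commands depend on which time service the operating system uses.

.. code-block:: shell

   # Placeholder hostnames; verify every node reports a synchronized clock.
   for host in minio-1.example.net minio-2.example.net; do
       echo "== ${host} =="
       ssh "${host}" 'timedatectl status | grep -i synchronized; date -u'
   done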
+Back Up Cluster Settings First
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations respectively prior to starting the expansion.
+You can use these snapshots to restore :ref:`bucket <minio-mc-admin-cluster-bucket-import>` and :ref:`IAM <minio-mc-admin-cluster-iam-import>` settings to recover from user or process errors as necessary.
+
 Considerations
 --------------
|
|||||||
|
|
||||||
Likewise, MinIO does not write to pools in a decommissioning process.
|
Likewise, MinIO does not write to pools in a decommissioning process.
|
||||||
|
|
||||||
Homogeneous Node Configurations
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
MinIO strongly recommends selecting substantially similar hardware
|
|
||||||
configurations for all nodes in the new server pool. Ensure the hardware (CPU,
|
|
||||||
memory, motherboard, storage adapters) and software (operating system, kernel
|
|
||||||
settings, system services) is consistent across all nodes in the pool.
|
|
||||||
|
|
||||||
The new pool may exhibit unpredictable performance if nodes have heterogeneous
|
|
||||||
hardware or software configurations. Workloads that benefit from storing aged
|
|
||||||
data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
|
|
||||||
MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>`
|
|
||||||
data to that tier.
|
|
||||||
|
|
||||||
The new server pool does **not** need to be substantially similar in hardware
|
|
||||||
and software configuration to any existing server pool, though this may allow
|
|
||||||
for simplified cluster management and more predictable performance across pools.
|
|
||||||
|
|
||||||
See :ref:`deploy-minio-distributed-recommendations` for more guidance on
|
|
||||||
selecting hardware for MinIO deployments.
|
|
||||||
|
|
||||||
Expansion is Non-Disruptive
|
Expansion is Non-Disruptive
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
@@ -193,45 +179,22 @@ deployment at around same time.

 Capacity-Based Planning
 ~~~~~~~~~~~~~~~~~~~~~~~

-MinIO generally recommends planning capacity such that :ref:`server pool expansion <expand-minio-distributed>` is only required after 2+ years of deployment uptime.
-
-For example, consider an application suite that is estimated to produce 10TB of data per year. The current deployment is running low on free storage and therefore requires expansion to meet the ongoing storage demands of the application. The new server pool should provide *at minimum*
-
-``10TB + 10TB + 10TB = 30TB``
-
-MinIO recommends adding buffer storage to account for potential growth in stored data (e.g. 40TB of total usable storage). The total planned *usable* storage in the deployment would therefore be ~80TB. As a rule-of-thumb, more capacity initially is preferred over frequent just-in-time expansion to meet capacity requirements.
-
-Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity. Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.
+MinIO recommends planning storage capacity sufficient to store **at least** 2 years of data before reaching 70% usage.
+Performing :ref:`server pool expansion <expand-minio-distributed>` more frequently or on a "just-in-time" basis generally indicates an architecture or planning issue.
+
+For example, consider an application suite expected to produce at least 100 TiB of data per year and a 3 year target before expansion.
+The deployment has ~500 TiB of usable storage in the initial server pool, such that the cluster stayed safely under the 70% threshold with some buffer for data growth.
+The new server pool should **ideally** provide at minimum 500 TiB of additional storage to allow for a similar lifespan before further expansion.
+
+Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity.
+Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.

 Recommended Operating Systems
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-This tutorial assumes all hosts running MinIO use a :ref:`recommended Linux operating system <minio-installation-platform-support>` such as RHEL8+ or Ubuntu 18.04+.
-
-For other operating systems such as Windows or OSX, visit `https://min.io/download <https://min.io/download?ref=docs>`__ and select the tab associated to your operating system. Follow the displayed instructions to install the MinIO server binary on each node. Defer to the OS best practices for starting MinIO as a service (e.g. not attached to the terminal/shell session).
-
-Support for running MinIO in distributed mode on Windows hosts is **experimental**. Contact MinIO at hello@min.io if your infrastructure requires deployment onto Windows hosts.
+This tutorial assumes all hosts running MinIO use a :ref:`recommended Linux operating system <minio-installation-platform-support>`.
+
+All hosts in the deployment should run with matching :ref:`software configurations <minio-software-checklists>`.

 .. _expand-minio-distributed-baremetal:
@@ -10,13 +10,6 @@ Upgrade a MinIO Deployment

    :local:
    :depth: 2

-.. admonition:: Test Upgrades In a Lower Environment
-   :class: important
-
-   Your unique deployment topology, workload patterns, or overall environment requires testing of any MinIO upgrades in a lower environment (Dev/QA/Staging) *before* applying those upgrades to Production deployments, or any other environment containing critical data.
-   Performing "blind" updates to production environments is done at your own risk.
-
-   For MinIO deployments that are significantly behind latest stable (6+ months), consider using |SUBNET| for additional support and guidance during the upgrade procedure.
-
 .. cond:: linux