
OPTIMIZATION: Storage, Capacity, and Prerequisites (#1118)

No related issue here, just freewheeling from an internal request.

This started with the request to change our recommendation around
label/uuid-based drive mounting to a requirement.

Looking at the pages, I feel like our pre-req and considerations sections are a
little long in the tooth, and are at least slightly duplicative of what
is on the checklist pages (hardware, software).

This is at least a first swing at tidying things up. I think in a second
pass I'll move more of the pre-reqs into the Hardware/Software/Security
checklist pages, and keep the on-tutorial sections as simple definition lists
so that the page flows more easily. We can push users to the details if
they want them while keeping the high-level requirements there.

Note that this does **not** yet address the new features related to
non-sequential hostname support. That has to come later.

---------

Co-authored-by: Eco <41090896+eco-minio@users.noreply.github.com>
Co-authored-by: Andrea Longo <feorlen@users.noreply.github.com>
Co-authored-by: Daryl White <53910321+djwfyi@users.noreply.github.com>
Author: Ravind Kumar
Date: 2024-02-14 13:09:59 -05:00 (committed by GitHub)
Commit: 3203cf7c3e (parent: 24ee2ef360)

7 changed files with 182 additions and 223 deletions

@@ -211,71 +211,45 @@ MinIO **does not** support arbitrary migration of a drive with existing MinIO data
.. end-local-jbod-single-node-desc
-.. start-local-jbod-desc
-
-MinIO strongly recommends direct-attached :abbr:`JBOD (Just a Bunch of Disks)`
-arrays with XFS-formatted disks for best performance.
-
-- Direct-Attached Storage (DAS) has significant performance and consistency
-  advantages over networked storage (NAS, SAN, NFS).
-
-- Deployments using non-XFS filesystems (ext4, btrfs, zfs) tend to have
-  lower performance while exhibiting unexpected or undesired behavior.
-
-- RAID or similar technologies do not provide additional resilience or
-  availability benefits when used with distributed MinIO deployments, and
-  typically reduce system performance.
-
-Ensure all nodes in the |deployment| use the same type (NVMe, SSD, or HDD) of
-drive with identical capacity (e.g. ``N`` TB). MinIO does not distinguish drive
-types and does not benefit from mixed storage types. Additionally, MinIO limits
-the size used per drive to the smallest drive in the deployment. For example, if
-the deployment has 15 10TB drives and 1 1TB drive, MinIO limits the per-drive
-capacity to 1TB.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
-series of drives when creating the new |deployment|, where all nodes in the
-|deployment| have an identical set of mounted drives. MinIO also
-requires that the ordering of physical drives remain constant across restarts,
-such that a given mount point always points to the same formatted drive. MinIO
-therefore **strongly recommends** using ``/etc/fstab`` or a similar file-based
-mount configuration to ensure that drive ordering cannot change after a reboot.
-
-For example:
-
-.. code-block:: shell
-
-   $ mkfs.xfs /dev/sdb -L DISK1
-   $ mkfs.xfs /dev/sdc -L DISK2
-   $ mkfs.xfs /dev/sdd -L DISK3
-   $ mkfs.xfs /dev/sde -L DISK4
-
-   $ nano /etc/fstab
-
-   # <file system>  <mount point>  <type>  <options>         <dump>  <pass>
-   LABEL=DISK1      /mnt/disk1     xfs     defaults,noatime  0       2
-   LABEL=DISK2      /mnt/disk2     xfs     defaults,noatime  0       2
-   LABEL=DISK3      /mnt/disk3     xfs     defaults,noatime  0       2
-   LABEL=DISK4      /mnt/disk4     xfs     defaults,noatime  0       2
-
-You can then specify the entire range of drives using the expansion notation
-``/mnt/disk{1...4}``. If you want to use a specific subfolder on each drive,
-specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO **does not** support arbitrary migration of a drive with existing MinIO
-data to a new mount position, whether intentional or as the result of OS-level
-behavior.
-
-.. end-local-jbod-desc
+.. start-storage-requirements-desc
+
+The following requirements summarize the :ref:`minio-hardware-checklist-storage` section of MinIO's hardware recommendations:
+
+Use Local Storage
+   Direct-Attached Storage (DAS) has significant performance and consistency advantages over networked storage (:abbr:`NAS (Network Attached Storage)`, :abbr:`SAN (Storage Area Network)`, :abbr:`NFS (Network File Storage)`).
+   MinIO strongly recommends flash storage (NVMe, SSD) for primary or "hot" data.
+
+Use XFS-Formatting for Drives
+   MinIO strongly recommends provisioning XFS-formatted drives for storage.
+   MinIO uses XFS as part of internal testing and validation suites, providing additional confidence in performance and behavior at all scales.
+
+   MinIO does **not** test nor recommend any other filesystem, such as EXT4, BTRFS, or ZFS.
+
+Use Consistent Type of Drive
+   MinIO does not distinguish drive types and does not benefit from mixed storage types.
+   Each :term:`pool` must use the same type of drive (NVMe, SSD, or HDD).
+
+   For example, deploy a pool consisting of only NVMe drives.
+   If you deploy some drives as SSD or HDD, MinIO treats those drives identically to the NVMe drives.
+   This can result in performance issues, as some drives have differing or worse read/write characteristics and cannot respond at the same rate as the NVMe drives.
+
+Use Consistent Size of Drive
+   MinIO limits the size used per drive to the smallest drive in the deployment.
+
+   For example, deploy a pool consisting of the same number of NVMe drives with an identical capacity of ``7.68TiB``.
+   If you deploy one drive with ``3.84TiB``, MinIO treats all drives in the pool as having that smaller capacity.
+
+Configure Sequential Drive Mounting
+   MinIO uses Go expansion notation ``{x...y}`` to denote a sequential series of drives when creating the new |deployment|, where all nodes in the |deployment| have an identical set of mounted drives.
+   Configure drive mounting paths as a sequential series to best support this notation.
+
+   For example, mount your drives using a pattern of ``/mnt/drive-n``, where ``n`` starts at ``1`` and increments by ``1`` per drive.
+
+Persist Drive Mounting and Mapping Across Reboots
+   Use ``/etc/fstab`` to ensure consistent drive-to-mount mapping across node reboots.
+   Non-Linux operating systems should use the equivalent drive mount management tool.
+
+   .. note::
+
+      Cloud environment instances which depend on mounted external storage may encounter boot failure if one or more of the remote file mounts return errors or failures.
+      For example, an AWS ECS instance with mounted persistent EBS volumes may fail to boot with the standard ``/etc/fstab`` configuration if one or more EBS volumes fail to mount.
+      You can set the ``nofail`` option to silence error reporting at boot and allow the instance to boot with one or more mount issues.
+      You should not use this option on systems which have locally attached disks, as silencing drive errors prevents both MinIO and the OS from responding to those errors in a normal fashion.
+
+.. end-storage-requirements-desc
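
To make the mounting guidance concrete, the following sketch shows how a four-drive node following the ``/mnt/drive-n`` pattern might be formatted and registered in ``/etc/fstab``. Device names, labels, and mount points are placeholders; add the ``nofail`` option only for the cloud-mounted volume case described in the note above.

.. code-block:: shell

   # Format each drive with XFS and assign a stable label (example device names)
   mkfs.xfs /dev/sdb -L MINIODRIVE1
   mkfs.xfs /dev/sdc -L MINIODRIVE2
   mkfs.xfs /dev/sdd -L MINIODRIVE3
   mkfs.xfs /dev/sde -L MINIODRIVE4

   # Reference the labels in /etc/fstab so each mount point always maps to the same drive
   # <file system>      <mount point>  <type>  <options>         <dump>  <pass>
   LABEL=MINIODRIVE1    /mnt/drive-1   xfs     defaults,noatime  0       2
   LABEL=MINIODRIVE2    /mnt/drive-2   xfs     defaults,noatime  0       2
   LABEL=MINIODRIVE3    /mnt/drive-3   xfs     defaults,noatime  0       2
   LABEL=MINIODRIVE4    /mnt/drive-4   xfs     defaults,noatime  0       2

A sequential layout like this then supports the expansion notation when starting the deployment, for example (hostnames are placeholders):

.. code-block:: shell

   minio server https://minio-{1...4}.example.net:9000/mnt/drive-{1...4}/minio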
.. start-nondisruptive-upgrade-desc


@@ -8,6 +8,35 @@ This procedure does not require taking downtime and is non-disruptive to ongoing operations
This page documents methods for upgrading using the update-then-restart method for both ``systemctl`` and user-managed MinIO deployments.
Deployments using Ansible, Terraform, or other management tools can use the procedures here as guidance for implementation within the existing automation framework.
+Prerequisites
+-------------
+
+Back Up Cluster Settings First
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations prior to starting the upgrade.
+You can use these snapshots to restore :ref:`bucket <minio-mc-admin-cluster-bucket-import>` and :ref:`IAM <minio-mc-admin-cluster-iam-import>` settings to recover from user or process errors as necessary.
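
A minimal sketch of that backup step, assuming ``myminio`` is the configured alias for the deployment:

.. code-block:: shell

   # Snapshot bucket metadata and IAM configuration before upgrading.
   # ``myminio`` is a placeholder alias for your MinIO deployment.
   mc admin cluster bucket export myminio
   mc admin cluster iam export myminio

   # Each command writes a snapshot archive that the matching
   # ``mc admin cluster ... import`` command can restore if needed.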
+Check Release Notes
+~~~~~~~~~~~~~~~~~~~
+
+MinIO publishes :minio-git:`Release Notes <minio/releases>` for your reference as part of identifying the changes applied in each release.
+Review the associated release notes between your current MinIO version and the newer release so you have a complete view of any changes.
+
+Pay particular attention to any releases that are *not* backwards compatible.
+You cannot trivially downgrade from any such release.
+
+Test Upgrades Before Applying To Production
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MinIO uses a testing and validation suite as part of all releases.
+However, no testing suite can account for the unique combinations and permutations of hardware, software, and workloads of your production environment.
+
+Always validate any MinIO upgrade in a lower environment (Dev/QA/Staging) *before* applying it to production deployments or any other environment containing critical data.
+Performing updates to production environments without first validating in lower environments is done at your own risk.
+
+For MinIO deployments that are significantly behind the latest stable release (6+ months), consider using |SUBNET| for additional support and guidance during the upgrade procedure.
Considerations
--------------
@@ -27,15 +56,6 @@ For virtualized environments which *require* rolling updates, you should modify
3. Update the virtual machine/container configuration to use the matching newer MinIO image.
4. Perform the rolling restart of each machine/container with the updated image.
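
For a containerized node, steps 3 and 4 above might look like the following sketch, assuming a single-node container named ``minio1`` that stores data on a host path; names, ports, and paths are placeholders:

.. code-block:: shell

   # Pull the newer MinIO image validated in a lower environment
   docker pull minio/minio:latest

   # Recreate the container against the same volume so data and configuration persist
   docker stop minio1
   docker rm minio1
   docker run -d --name minio1 \
     -p 9000:9000 -p 9001:9001 \
     -v /mnt/drive-1:/data \
     minio/minio:latest server /data --console-address ":9001"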
-Check Release Notes
-~~~~~~~~~~~~~~~~~~~
-
-MinIO publishes :minio-git:`Release Notes <minio/releases>` for your reference as part of identifying the changes applied in each release.
-Review the associated release notes between your current MinIO version and the newer release so you have a complete view of any changes.
-
-Pay particular attention to any releases that are *not* backwards compatible.
-You cannot trivially downgrade from any such release.
.. _minio-upgrade-systemctl:
Update ``systemctl``-Managed MinIO Deployments
----------------------------------------------
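
The update-then-restart flow for a ``systemctl``-managed node reduces to replacing the server binary and restarting the service. A minimal sketch, assuming the binary is installed at ``/usr/local/bin/minio`` and the unit is named ``minio.service`` (adjust both for your deployment):

.. code-block:: shell

   # Download the newer MinIO server binary (validate it in a lower environment first)
   curl -O https://dl.min.io/server/minio/release/linux-amd64/minio
   chmod +x ./minio

   # Replace the existing binary and restart the service on each node in turn
   sudo mv ./minio /usr/local/bin/minio
   sudo systemctl restart minio.service

   # Confirm the service restarted and reports the expected version
   systemctl status minio.service
   minio --version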