From be4816f1ea9758a04cc1af705aa369adb4cac584 Mon Sep 17 00:00:00 2001
From: ravindk89
Date: Wed, 24 Nov 2021 19:23:32 -0500
Subject: [PATCH] QuickWin: Add fstab instructions for disk mounting

---
 source/includes/common-installation.rst       | 50 +++++++++++++++-
 .../installation/deploy-minio-distributed.rst | 58 +++++++------------
 .../installation/expand-minio-distributed.rst | 50 +++++-----------
 .../lifecycle-management-overview.rst         |  6 +-
 4 files changed, 86 insertions(+), 78 deletions(-)

diff --git a/source/includes/common-installation.rst b/source/includes/common-installation.rst
index ff2bbc54..d2ccab00 100644
--- a/source/includes/common-installation.rst
+++ b/source/includes/common-installation.rst
@@ -178,4 +178,52 @@ Identity and Access Management, Metrics and Log Monitoring, or Server
 Configuration. Each MinIO server includes its own embedded MinIO Console.
 
-.. end-install-minio-console-desc
\ No newline at end of file
+.. end-install-minio-console-desc
+
+.. start-local-jbod-desc
+
+MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays with
+XFS-formatted disks for best performance. RAID or similar technologies do not
+provide additional resilience or availability benefits when used with
+distributed MinIO deployments, and typically reduce system performance.
+
+Ensure all nodes in the |deployment| use the same type of drive (NVMe, SSD, or
+HDD) with identical capacity (e.g. ``N`` TB). MinIO does not distinguish drive
+types and does not benefit from mixed storage types. Additionally, MinIO limits
+the size used per disk to the smallest drive in the deployment. For example, if
+the deployment has 15 10TB disks and 1 1TB disk, MinIO limits the per-disk
+capacity to 1TB.
+
+MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
+series of disks when creating the new |deployment|, where all nodes in the
+|deployment| have an identical set of mounted drives. MinIO also
+requires that the ordering of physical disks remain constant across restarts,
+such that a given mount point always points to the same formatted disk. MinIO
+therefore **strongly recommends** using ``/etc/fstab`` or a similar file-based
+mount configuration to ensure that drive ordering cannot change after a reboot.
+For example:
+
+.. code-block:: shell
+
+   $ mkfs.xfs /dev/sdb -L DISK1
+   $ mkfs.xfs /dev/sdc -L DISK2
+   $ mkfs.xfs /dev/sdd -L DISK3
+   $ mkfs.xfs /dev/sde -L DISK4
+
+   $ nano /etc/fstab
+
+   #
+   LABEL=DISK1 /mnt/disk1 xfs defaults,noatime 0 2
+   LABEL=DISK2 /mnt/disk2 xfs defaults,noatime 0 2
+   LABEL=DISK3 /mnt/disk3 xfs defaults,noatime 0 2
+   LABEL=DISK4 /mnt/disk4 xfs defaults,noatime 0 2
+
+You can then specify the entire range of disks using the expansion notation
+``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
+specify it as ``/mnt/disk{1...4}/minio``.
+
+MinIO **does not** support arbitrary migration of a drive with existing MinIO
+data to a new mount position, whether intentional or as the result of OS-level
+behavior.
+
+.. end-local-jbod-desc
\ No newline at end of file
diff --git a/source/installation/deploy-minio-distributed.rst b/source/installation/deploy-minio-distributed.rst
index 57c701ec..bce1fad7 100644
--- a/source/installation/deploy-minio-distributed.rst
+++ b/source/installation/deploy-minio-distributed.rst
@@ -101,39 +101,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
 Local JBOD Storage with Sequential Mounts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
-best performance. RAID or similar technologies do not provide additional
-resilience or availability benefits when used with distributed MinIO
-deployments, and typically reduce system performance.
+.. |deployment| replace:: deployment
 
-MinIO generally recommends ``xfs`` formatted drives for best performance.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
-series of disks when creating a server pool. MinIO therefore *requires*
-using sequentially-numbered drives on each node in the deployment, where the
-number sequence is *duplicated* across all nodes. For example, the following
-sequence of mounted drives would support a 4-drive per node distributed
-deployment:
-
-- ``/mnt/disk1``
-- ``/mnt/disk2``
-- ``/mnt/disk3``
-- ``/mnt/disk4``
-
-Each mount should correspond to a locally-attached drive of the same type and
-size. If using ``/etc/fstab`` or a similar file-based mount configuration,
-MinIO **strongly recommends** using drive UUID or labels to assign drives to
-mounts. This ensures that drive ordering cannot change after a reboot.
-
-You can specify the entire range of disks using the expansion notation
-``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
-specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO limits the size used per disk to the smallest drive in the
-deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
-MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVME,
-SSD, or HDD drives consistently across all nodes. Mixing drive types in the
-same distributed deployment can result in unpredictable performance.
+.. include:: /includes/common-installation.rst
+   :start-after: start-local-jbod-desc
+   :end-before: end-local-jbod-desc
 
 .. admonition:: Network File System Volumes Break Consistency Guarantees
    :class: note
@@ -153,13 +125,16 @@ Considerations
 
 Homogeneous Node Configurations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends selecting a hardware configuration for all nodes in
-the deployment. Ensure the hardware (CPU, memory, motherboard, storage adapters)
-and software (operating system, kernel settings, system services) is consistent
-across all nodes.
+MinIO strongly recommends selecting substantially similar hardware
+configurations for all nodes in the deployment. Ensure the hardware (CPU,
+memory, motherboard, storage adapters) and software (operating system, kernel
+settings, system services) is consistent across all nodes.
 
-The deployment may exhibit unpredictable performance if nodes have heterogeneous
-hardware or software configurations.
+The deployment may exhibit unpredictable performance if nodes have heterogeneous
+hardware or software configurations. Workloads that benefit from storing aged
+data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
+MinIO deployment and :ref:`transition `
+data to that tier.
 
 Erasure Coding Parity
 ~~~~~~~~~~~~~~~~~~~~~
@@ -420,6 +395,13 @@ large-scale data storage:
 
    no RAID or similar technologies. MinIO recommends XFS formatting for
    best performance.
 
+   Use the same type of disk (NVMe, SSD, or HDD) with the same capacity
+   across all nodes in the deployment. MinIO does not distinguish drive
+   types when using the underlying storage and does not benefit from mixed
+   storage types. Additionally, MinIO limits the size used per disk to the
+   smallest drive in the deployment. For example, if the deployment has 15
+   10TB disks and 1 1TB disk, MinIO limits the per-disk capacity to 1TB.
+
 Networking
 ~~~~~~~~~~
 
diff --git a/source/installation/expand-minio-distributed.rst b/source/installation/expand-minio-distributed.rst
index 8a39a6c0..a5cca9c6 100644
--- a/source/installation/expand-minio-distributed.rst
+++ b/source/installation/expand-minio-distributed.rst
@@ -95,38 +95,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
 Local JBOD Storage with Sequential Mounts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
-best performance. RAID or similar technologies do not provide additional
-resilience or availability benefits when used with distributed MinIO
-deployments, and typically reduce system performance.
+.. |deployment| replace:: server pool
 
-MinIO generally recommends ``xfs`` formatted drives for best performance.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
-series of disks when creating a server pool. MinIO therefore *requires*
-using sequentially-numbered drives on each node in the deployment, where the
-number sequence is *duplicated* across all nodes. For example, the following
-sequence of mounted drives would support a 4-drive per node server pool:
-
-- ``/mnt/disk1``
-- ``/mnt/disk2``
-- ``/mnt/disk3``
-- ``/mnt/disk4``
-
-Each mount should correspond to a locally-attached drive of the same type and
-size. If using ``/etc/fstab`` or a similar file-based mount configuration,
-MinIO **strongly recommends** using drive UUID or labels to assign drives to
-mounts. This ensures that drive ordering cannot change after a reboot.
-
-You can specify the entire range of disks using the expansion notation
-``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
-specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO limits the size used per disk to the smallest drive in the
-deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
-MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVME,
-SSD, or HDD drives consistently across all nodes. Mixing drive types in the
-same distributed deployment can result in unpredictable performance.
+.. include:: /includes/common-installation.rst
+   :start-after: start-local-jbod-desc
+   :end-before: end-local-jbod-desc
 
 .. admonition:: Network File System Volumes Break Consistency Guarantees
    :class: note
@@ -164,11 +137,16 @@ Considerations
 
 Homogeneous Node Configurations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends selecting a hardware configuration for all nodes in
-the new server pool. Ensure the hardware (CPU, memory, motherboard, storage
-adapters) and software (operating system, kernel settings, system services) is
-consistent across all nodes in the pool. The new pool may exhibit unpredictable
-performance if nodes have heterogeneous hardware or software configurations.
+MinIO strongly recommends selecting substantially similar hardware
+configurations for all nodes in the new server pool. Ensure the hardware (CPU,
+memory, motherboard, storage adapters) and software (operating system, kernel
+settings, system services) is consistent across all nodes in the pool.
+
+The new pool may exhibit unpredictable performance if nodes have heterogeneous
+hardware or software configurations. Workloads that benefit from storing aged
+data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
+MinIO deployment and :ref:`transition `
+data to that tier.
 
 The new server pool does **not** need to be substantially similar in hardware
 and software configuration to any existing server pool, though this may allow
 
diff --git a/source/lifecycle-management/lifecycle-management-overview.rst b/source/lifecycle-management/lifecycle-management-overview.rst
index dfd4a49f..67a884d8 100644
--- a/source/lifecycle-management/lifecycle-management-overview.rst
+++ b/source/lifecycle-management/lifecycle-management-overview.rst
@@ -37,9 +37,9 @@ following public cloud storage services:
 `
 
 MinIO object transition supports use cases like moving aged data from MinIO
-clusters in private or public cloud infrastructure to low-cost private or public cloud
-storage solutions. MinIO manages retrieving tiered objects on-the-fly without
-any additional application-side logic.
+clusters in private or public cloud infrastructure to low-cost private or public
+cloud storage solutions. MinIO manages retrieving tiered objects on-the-fly
+without any additional application-side logic.
 
 Use the :mc-cmd:`mc admin tier` command to create a remote target for tiering
 data to a supported Cloud Service Provider object storage. You can then use the