QuickWin: Add fstab instructions for disk mounting
committed by Harshavardhana
parent ec5ff12a29, commit be4816f1ea
@@ -179,3 +179,51 @@ Server Configuration. Each MinIO server includes its own embedded MinIO
Console.

.. end-install-minio-console-desc

.. start-local-jbod-desc

MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays with
XFS-formatted disks for best performance. RAID or similar technologies do not
provide additional resilience or availability benefits when used with
distributed MinIO deployments, and typically reduce system performance.

Ensure all nodes in the |deployment| use the same type (NVMe, SSD, or HDD) of
drive with identical capacity (e.g. ``N`` TB). MinIO does not distinguish drive
types and does not benefit from mixed storage types. Additionally, MinIO limits
the size used per disk to the smallest drive in the deployment. For example, if
the deployment has 15 10TB disks and 1 1TB disk, MinIO limits the per-disk
capacity to 1TB.

MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
series of disks when creating the new |deployment|, where all nodes in the
|deployment| have an identical set of mounted drives. MinIO also
requires that the ordering of physical disks remain constant across restarts,
such that a given mount point always points to the same formatted disk. MinIO
therefore **strongly recommends** using ``/etc/fstab`` or a similar file-based
mount configuration to ensure that drive ordering cannot change after a reboot.
For example:

.. code-block:: shell

   $ mkfs.xfs /dev/sdb -L DISK1
   $ mkfs.xfs /dev/sdc -L DISK2
   $ mkfs.xfs /dev/sdd -L DISK3
   $ mkfs.xfs /dev/sde -L DISK4

   $ nano /etc/fstab

   # <file system>  <mount point>  <type>  <options>         <dump>  <pass>
   LABEL=DISK1      /mnt/disk1     xfs     defaults,noatime  0       2
   LABEL=DISK2      /mnt/disk2     xfs     defaults,noatime  0       2
   LABEL=DISK3      /mnt/disk3     xfs     defaults,noatime  0       2
   LABEL=DISK4      /mnt/disk4     xfs     defaults,noatime  0       2

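After editing ``/etc/fstab``, you can apply and verify the mounts before
starting MinIO. A minimal sketch using standard Linux tools (note that
``{1..4}`` here is *bash* brace expansion, distinct from MinIO's ``{1...4}``
notation):

.. code-block:: shell

   # mount every fstab entry that is not already mounted
   $ mount -a

   # confirm each mount point is backed by the expected volume
   $ df -h /mnt/disk{1..4}
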
You can then specify the entire range of disks using the expansion notation
``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
specify it as ``/mnt/disk{1...4}/minio``.

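For illustration, a sketch of passing that range to ``minio server``; the
single-node form is standard MinIO syntax, while the hostnames in the
distributed form are placeholders:

.. code-block:: shell

   # single node, four drives; MinIO itself expands the {1...4} notation
   $ minio server /mnt/disk{1...4}

   # distributed form: four nodes, each with the same four mounts
   $ minio server https://minio{1...4}.example.net/mnt/disk{1...4}
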
MinIO **does not** support arbitrary migration of a drive with existing MinIO
data to a new mount position, whether intentional or as the result of OS-level
behavior.

.. end-local-jbod-desc

@@ -101,39 +101,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
Local JBOD Storage with Sequential Mounts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
best performance. RAID or similar technologies do not provide additional
resilience or availability benefits when used with distributed MinIO
deployments, and typically reduce system performance.

.. |deployment| replace:: deployment

MinIO generally recommends ``xfs`` formatted drives for best performance.

MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
series of disks when creating a server pool. MinIO therefore *requires*
using sequentially-numbered drives on each node in the deployment, where the
number sequence is *duplicated* across all nodes. For example, the following
sequence of mounted drives would support a 4-drive per node distributed
deployment:

- ``/mnt/disk1``
- ``/mnt/disk2``
- ``/mnt/disk3``
- ``/mnt/disk4``

Each mount should correspond to a locally-attached drive of the same type and
size. If using ``/etc/fstab`` or a similar file-based mount configuration,
MinIO **strongly recommends** using drive UUID or labels to assign drives to
mounts. This ensures that drive ordering cannot change after a reboot.

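As a sketch, you can read a drive's UUID with ``blkid`` and pin it to a fixed
mount point in ``/etc/fstab`` (the UUID shown is a placeholder):

.. code-block:: shell

   $ blkid /dev/sdb
   /dev/sdb: UUID="8f0e...c2a7" TYPE="xfs"

   # /etc/fstab: identify the drive by UUID rather than by device name
   UUID=8f0e...c2a7  /mnt/disk1  xfs  defaults,noatime  0  2
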
You can specify the entire range of disks using the expansion notation
``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
specify it as ``/mnt/disk{1...4}/minio``.

MinIO limits the size used per disk to the smallest drive in the
deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVMe,
SSD, or HDD drives consistently across all nodes. Mixing drive types in the
same distributed deployment can result in unpredictable performance.

.. include:: /includes/common-installation.rst
   :start-after: start-local-jbod-desc
   :end-before: end-local-jbod-desc

.. admonition:: Network File System Volumes Break Consistency Guarantees
   :class: note

@@ -153,13 +125,16 @@ Considerations
Homogeneous Node Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MinIO strongly recommends selecting a hardware configuration for all nodes in
the deployment. Ensure the hardware (CPU, memory, motherboard, storage adapters)
and software (operating system, kernel settings, system services) are consistent
across all nodes.

MinIO strongly recommends selecting substantially similar hardware
configurations for all nodes in the deployment. Ensure the hardware (CPU,
memory, motherboard, storage adapters) and software (operating system, kernel
settings, system services) are consistent across all nodes.

The deployment may exhibit unpredictable performance if nodes have heterogeneous
hardware or software configurations.

Deployment may exhibit unpredictable performance if nodes have heterogeneous
hardware or software configurations. Workloads that benefit from storing aged
data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
MinIO deployment and :ref:`transition <minio-lifecycle-management-transition>`
data to that tier.

Erasure Coding Parity
~~~~~~~~~~~~~~~~~~~~~

@@ -420,6 +395,13 @@ large-scale data storage:
no RAID or similar technologies. MinIO recommends XFS formatting for
best performance.

Use the same type of disk (NVMe, SSD, or HDD) with the same capacity
across all nodes in the deployment. MinIO does not distinguish drive
types when using the underlying storage and does not benefit from mixed
storage types. Additionally, MinIO limits the size used per disk to the
smallest drive in the deployment. For example, if the deployment has 15
10TB disks and 1 1TB disk, MinIO limits the per-disk capacity to 1TB.

Networking
~~~~~~~~~~

@@ -95,38 +95,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
Local JBOD Storage with Sequential Mounts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
best performance. RAID or similar technologies do not provide additional
resilience or availability benefits when used with distributed MinIO
deployments, and typically reduce system performance.

.. |deployment| replace:: server pool

MinIO generally recommends ``xfs`` formatted drives for best performance.

MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
series of disks when creating a server pool. MinIO therefore *requires*
using sequentially-numbered drives on each node in the deployment, where the
number sequence is *duplicated* across all nodes. For example, the following
sequence of mounted drives would support a 4-drive per node server pool:

- ``/mnt/disk1``
- ``/mnt/disk2``
- ``/mnt/disk3``
- ``/mnt/disk4``

Each mount should correspond to a locally-attached drive of the same type and
size. If using ``/etc/fstab`` or a similar file-based mount configuration,
MinIO **strongly recommends** using drive UUID or labels to assign drives to
mounts. This ensures that drive ordering cannot change after a reboot.

You can specify the entire range of disks using the expansion notation
``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
specify it as ``/mnt/disk{1...4}/minio``.

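For illustration, a sketch of the resulting ``minio server`` invocation when
adding this pool to an existing deployment; passing multiple endpoint sets is
standard pool-expansion syntax, while the hostnames are placeholders:

.. code-block:: shell

   # existing pool followed by the new pool, each using expansion notation
   $ minio server https://minio{1...4}.example.net/mnt/disk{1...4} \
                  https://minio{5...8}.example.net/mnt/disk{1...4}
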
MinIO limits the size used per disk to the smallest drive in the
deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVMe,
SSD, or HDD drives consistently across all nodes. Mixing drive types in the
same distributed deployment can result in unpredictable performance.

.. include:: /includes/common-installation.rst
   :start-after: start-local-jbod-desc
   :end-before: end-local-jbod-desc

.. admonition:: Network File System Volumes Break Consistency Guarantees
   :class: note

@@ -164,11 +137,16 @@ Considerations
Homogeneous Node Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MinIO strongly recommends selecting a hardware configuration for all nodes in
the new server pool. Ensure the hardware (CPU, memory, motherboard, storage
adapters) and software (operating system, kernel settings, system services) are
consistent across all nodes in the pool. The new pool may exhibit unpredictable
performance if nodes have heterogeneous hardware or software configurations.

MinIO strongly recommends selecting substantially similar hardware
configurations for all nodes in the new server pool. Ensure the hardware (CPU,
memory, motherboard, storage adapters) and software (operating system, kernel
settings, system services) are consistent across all nodes in the pool.

The new pool may exhibit unpredictable performance if nodes have heterogeneous
hardware or software configurations. Workloads that benefit from storing aged
data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>`
data to that tier.

The new server pool does **not** need to be substantially similar in hardware
and software configuration to any existing server pool, though this may allow

@@ -37,9 +37,9 @@ following public cloud storage services:
<minio-lifecycle-management-transition-to-azure>`
MinIO object transition supports use cases like moving aged data from MinIO
clusters in private or public cloud infrastructure to low-cost private or public cloud
storage solutions. MinIO manages retrieving tiered objects on-the-fly without
any additional application-side logic.

clusters in private or public cloud infrastructure to low-cost private or public
cloud storage solutions. MinIO manages retrieving tiered objects on-the-fly
without any additional application-side logic.

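As a hedged sketch of the tiering workflow described in the next paragraph
(the alias, tier name, endpoint, bucket, and credentials below are all
placeholders), creating a remote tier and a transition rule with ``mc`` might
look like:

.. code-block:: shell

   # register an S3-compatible remote as a tier named COLDTIER
   $ mc admin tier add s3 myminio COLDTIER \
        --endpoint https://s3.amazonaws.com \
        --access-key EXAMPLEKEY --secret-key EXAMPLESECRET \
        --bucket archive-bucket

   # transition objects older than 90 days to that tier
   $ mc ilm add myminio/mybucket --transition-days 90 --transition-tier COLDTIER
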
Use the :mc-cmd:`mc admin tier` command to create a remote target for tiering
data to a supported Cloud Service Provider object storage. You can then use the