Mirror of https://github.com/minio/docs.git, synced 2025-07-31 18:04:52 +03:00

QuickWin: Add fstab instructions for disk mounting

Committed by: Harshavardhana
Parent: ec5ff12a29
Commit: be4816f1ea
@@ -178,4 +178,52 @@ Identity and Access Management, Metrics and Log Monitoring, or
 Server Configuration. Each MinIO server includes its own embedded MinIO
 Console.
 
 .. end-install-minio-console-desc
+
+.. start-local-jbod-desc
+
+MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays with
+XFS-formatted disks for best performance. RAID or similar technologies do not
+provide additional resilience or availability benefits when used with
+distributed MinIO deployments, and typically reduce system performance.
+
+Ensure all nodes in the |deployment| use the same type (NVMe, SSD, or HDD) of
+drive with identical capacity (e.g. ``N`` TB). MinIO does not distinguish drive
+types and does not benefit from mixed storage types. Additionally, MinIO limits
+the size used per disk to the smallest drive in the deployment. For example, if
+the deployment has 15 10TB disks and 1 1TB disk, MinIO limits the per-disk
+capacity to 1TB.
+
+MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
+series of disks when creating the new |deployment|, where all nodes in the
+|deployment| have an identical set of mounted drives. MinIO also
+requires that the ordering of physical disks remain constant across restarts,
+such that a given mount point always points to the same formatted disk. MinIO
+therefore **strongly recommends** using ``/etc/fstab`` or a similar file-based
+mount configuration to ensure that drive ordering cannot change after a reboot.
+For example:
+
+.. code-block:: shell
+
+   $ mkfs.xfs /dev/sdb -L DISK1
+   $ mkfs.xfs /dev/sdc -L DISK2
+   $ mkfs.xfs /dev/sdd -L DISK3
+   $ mkfs.xfs /dev/sde -L DISK4
+
+   $ nano /etc/fstab
+
+   # <file system>  <mount point>  <type>  <options>         <dump>  <pass>
+   LABEL=DISK1      /mnt/disk1     xfs     defaults,noatime  0       2
+   LABEL=DISK2      /mnt/disk2     xfs     defaults,noatime  0       2
+   LABEL=DISK3      /mnt/disk3     xfs     defaults,noatime  0       2
+   LABEL=DISK4      /mnt/disk4     xfs     defaults,noatime  0       2
+
+You can then specify the entire range of disks using the expansion notation
+``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
+specify it as ``/mnt/disk{1...4}/minio``.
+
+MinIO **does not** support arbitrary migration of a drive with existing MinIO
+data to a new mount position, whether intentional or as the result of OS-level
+behavior.
+
+.. end-local-jbod-desc
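The ``{x...y}`` expansion notation described above denotes a sequential numeric range. As an illustration only, the expansion can be mimicked with ordinary shell tooling; the ``expand_disks`` helper below is hypothetical (MinIO performs this expansion internally):

```shell
# Hypothetical helper mimicking MinIO's internal {x...y} expansion for a
# single numeric range. Illustration only -- MinIO expands the notation
# itself when you pass e.g. /mnt/disk{1...4} to the server command.
expand_disks() {
    # seq -f formats each number into a path; paste joins them with spaces
    seq -f '/mnt/disk%g' "$1" "$2" | paste -sd' ' -
}

expand_disks 1 4
# prints: /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4
```

A server start command using the notation directly might look like ``minio server https://minio{1...4}.example.net/mnt/disk{1...4}`` (hostnames assumed for illustration).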
@@ -101,39 +101,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
 Local JBOD Storage with Sequential Mounts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
-best performance. RAID or similar technologies do not provide additional
-resilience or availability benefits when used with distributed MinIO
-deployments, and typically reduce system performance.
-
-MinIO generally recommends ``xfs`` formatted drives for best performance.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
-series of disks when creating a server pool. MinIO therefore *requires*
-using sequentially-numbered drives on each node in the deployment, where the
-number sequence is *duplicated* across all nodes. For example, the following
-sequence of mounted drives would support a 4-drive per node distributed
-deployment:
-
-- ``/mnt/disk1``
-- ``/mnt/disk2``
-- ``/mnt/disk3``
-- ``/mnt/disk4``
-
-Each mount should correspond to a locally-attached drive of the same type and
-size. If using ``/etc/fstab`` or a similar file-based mount configuration,
-MinIO **strongly recommends** using drive UUID or labels to assign drives to
-mounts. This ensures that drive ordering cannot change after a reboot.
-
-You can specify the entire range of disks using the expansion notation
-``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
-specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO limits the size used per disk to the smallest drive in the
-deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
-MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVME,
-SSD, or HDD drives consistently across all nodes. Mixing drive types in the
-same distributed deployment can result in unpredictable performance.
+.. |deployment| replace:: deployment
+
+.. include:: /includes/common-installation.rst
+   :start-after: start-local-jbod-desc
+   :end-before: end-local-jbod-desc
 
 .. admonition:: Network File System Volumes Break Consistency Guarantees
    :class: note
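As the removed text notes, drive labels or filesystem UUIDs keep mounts stable across reboots. A hedged sketch of a UUID-based ``/etc/fstab`` entry, with placeholder UUIDs standing in for the real values that ``blkid`` would report for each formatted drive:

```
# <file system>                            <mount point>  <type>  <options>         <dump>  <pass>
UUID=<uuid-from-blkid-for-disk1>           /mnt/disk1     xfs     defaults,noatime  0       2
UUID=<uuid-from-blkid-for-disk2>           /mnt/disk2     xfs     defaults,noatime  0       2
```

UUID references behave like the ``LABEL=`` form, but avoid collisions if two drives ever carry the same label.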
@@ -153,13 +125,16 @@ Considerations
 Homogeneous Node Configurations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends selecting a hardware configuration for all nodes in
-the deployment. Ensure the hardware (CPU, memory, motherboard, storage adapters)
-and software (operating system, kernel settings, system services) is consistent
-across all nodes.
+MinIO strongly recommends selecting substantially similar hardware
+configurations for all nodes in the deployment. Ensure the hardware (CPU,
+memory, motherboard, storage adapters) and software (operating system, kernel
+settings, system services) is consistent across all nodes.
 
-The deployment may exhibit unpredictable performance if nodes have heterogeneous
-hardware or software configurations.
+Deployment may exhibit unpredictable performance if nodes have heterogeneous
+hardware or software configurations. Workloads that benefit from storing aged
+data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
+MinIO deployment and :ref:`transition <minio-lifecycle-management-transition>`
+data to that tier.
 
 Erasure Coding Parity
 ~~~~~~~~~~~~~~~~~~~~~
@@ -420,6 +395,13 @@ large-scale data storage:
   no RAID or similar technologies. MinIO recommends XFS formatting for
   best performance.
 
+  Use the same type of disk (NVMe, SSD, or HDD) with the same capacity
+  across all nodes in the deployment. MinIO does not distinguish drive
+  types when using the underlying storage and does not benefit from mixed
+  storage types. Additionally, MinIO limits the size used per disk to the
+  smallest drive in the deployment. For example, if the deployment has 15
+  10TB disks and 1 1TB disk, MinIO limits the per-disk capacity to 1TB.
+
 Networking
 ~~~~~~~~~~
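The smallest-drive rule above is simple arithmetic. A quick sketch of the raw numbers (before erasure-coding overhead, which further reduces usable space):

```shell
# 15 x 10TB drives plus 1 x 1TB drive: MinIO caps every drive at the
# smallest capacity (1TB), so the deployment yields 16 x 1TB of raw
# usable space instead of the 151TB physically installed.
drives=16
smallest_tb=1
echo "$(( drives * smallest_tb ))TB usable raw capacity"
# prints: 16TB usable raw capacity
```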
@@ -95,38 +95,11 @@ Configuring DNS to support MinIO is out of scope for this procedure.
 Local JBOD Storage with Sequential Mounts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends local :abbr:`JBOD (Just a Bunch of Disks)` arrays for
-best performance. RAID or similar technologies do not provide additional
-resilience or availability benefits when used with distributed MinIO
-deployments, and typically reduce system performance.
-
-MinIO generally recommends ``xfs`` formatted drives for best performance.
-
-MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
-series of disks when creating a server pool. MinIO therefore *requires*
-using sequentially-numbered drives on each node in the deployment, where the
-number sequence is *duplicated* across all nodes. For example, the following
-sequence of mounted drives would support a 4-drive per node server pool:
-
-- ``/mnt/disk1``
-- ``/mnt/disk2``
-- ``/mnt/disk3``
-- ``/mnt/disk4``
-
-Each mount should correspond to a locally-attached drive of the same type and
-size. If using ``/etc/fstab`` or a similar file-based mount configuration,
-MinIO **strongly recommends** using drive UUID or labels to assign drives to
-mounts. This ensures that drive ordering cannot change after a reboot.
-
-You can specify the entire range of disks using the expansion notation
-``/mnt/disk{1...4}``. If you want to use a specific subfolder on each disk,
-specify it as ``/mnt/disk{1...4}/minio``.
-
-MinIO limits the size used per disk to the smallest drive in the
-deployment. For example, if the deployment has 15 10TB disks and 1 1TB disk,
-MinIO limits the per-disk capacity to 1TB. Similarly, use the same model NVME,
-SSD, or HDD drives consistently across all nodes. Mixing drive types in the
-same distributed deployment can result in unpredictable performance.
+.. |deployment| replace:: server pool
+
+.. include:: /includes/common-installation.rst
+   :start-after: start-local-jbod-desc
+   :end-before: end-local-jbod-desc
 
 .. admonition:: Network File System Volumes Break Consistency Guarantees
    :class: note
@@ -164,11 +137,16 @@ Considerations
 Homogeneous Node Configurations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MinIO strongly recommends selecting a hardware configuration for all nodes in
-the new server pool. Ensure the hardware (CPU, memory, motherboard, storage
-adapters) and software (operating system, kernel settings, system services) is
-consistent across all nodes in the pool. The new pool may exhibit unpredictable
-performance if nodes have heterogeneous hardware or software configurations.
+MinIO strongly recommends selecting substantially similar hardware
+configurations for all nodes in the new server pool. Ensure the hardware (CPU,
+memory, motherboard, storage adapters) and software (operating system, kernel
+settings, system services) is consistent across all nodes in the pool.
+
+The new pool may exhibit unpredictable performance if nodes have heterogeneous
+hardware or software configurations. Workloads that benefit from storing aged
+data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
+MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>`
+data to that tier.
 
 The new server pool does **not** need to be substantially similar in hardware
 and software configuration to any existing server pool, though this may allow
@@ -37,9 +37,9 @@ following public cloud storage services:
 <minio-lifecycle-management-transition-to-azure>`
 
-MinIO object transition supports use cases like moving aged data from MinIO
-clusters in private or public cloud infrastructure to low-cost private or public cloud
-storage solutions. MinIO manages retrieving tiered objects on-the-fly without
-any additional application-side logic.
+MinIO object transition supports use cases like moving aged data from MinIO
+clusters in private or public cloud infrastructure to low-cost private or public
+cloud storage solutions. MinIO manages retrieving tiered objects on-the-fly
+without any additional application-side logic.
 
 Use the :mc-cmd:`mc admin tier` command to create a remote target for tiering
 data to a supported Cloud Service Provider object storage. You can then use the