
Addressing hardware checklist critical feedback (#1080)

We're getting a lot of repeated requests during inbound engagements, specifically around sizing VMs.

Our baremetal and K8s checklists are somewhat generic on this point and imply
a baseline that may be uncommon in a virtualized context.

This attempts to present a slightly wider band, emphasizing that our high
watermark is intended for high performance while the low watermark is a bare
minimum.


---------

Co-authored-by: Andrea Longo <feorlen@users.noreply.github.com>
Co-authored-by: Daryl White <53910321+djwfyi@users.noreply.github.com>
Ravind Kumar
2023-12-08 16:01:42 -05:00
committed by GitHub
parent 6e35feaadf
commit e30a2ac1bc
3 changed files with 144 additions and 75 deletions


@@ -0,0 +1,82 @@
.. start-linux-hardware-checklist

.. list-table::
   :header-rows: 1
   :widths: 5 45 25 25
   :width: 100%

   * -
     - Description
     - Minimum
     - Recommended

   * - :octicon:`circle`
     - Dedicated Baremetal or Virtual Hosts ("hosts").
     - 4 dedicated hosts
     - 8+ dedicated hosts

   * - :octicon:`circle`
     - :ref:`Dedicated locally-attached drives for each host <minio-hardware-checklist-storage>`.
     - 4 drives per MinIO Server
     - 8+ drives per MinIO Server

   * - :octicon:`circle`
     - :ref:`High speed network infrastructure <minio-hardware-checklist-network>`.
     - 25GbE
     - 100GbE

   * - :octicon:`circle`
     - Server-grade CPUs with support for modern SIMD instructions (AVX-512), such as Intel® Xeon® Scalable or better.
     - 8 CPU/socket or vCPU per host
     - 16+ CPU/socket or vCPU per host

   * - :octicon:`circle`
     - :ref:`Available memory to meet or exceed per-server usage <minio-hardware-checklist-memory>` by a reasonable buffer.
     - 32GB of available memory per host
     - 128GB+ of available memory per host

.. end-linux-hardware-checklist
.. start-k8s-hardware-checklist

.. list-table::
   :header-rows: 1
   :widths: 5 55 20 20
   :width: 100%

   * -
     - Description
     - Minimum
     - Recommended

   * - :octicon:`circle`
     - Kubernetes worker nodes to exclusively service the MinIO Tenant.
     - 4 workers per Tenant
     - 8+ workers per Tenant

   * - :octicon:`circle`
     - :ref:`Dedicated Persistent Volumes for the MinIO Tenant <minio-hardware-checklist-storage>`.
     - 4 PV per MinIO Server pod
     - 8+ PV per MinIO Server pod

   * - :octicon:`circle`
     - :ref:`High speed network infrastructure <minio-hardware-checklist-network>`.
     - 25GbE
     - 100GbE

   * - :octicon:`circle`
     - Server-grade CPUs with support for modern SIMD instructions (AVX-512), such as Intel® Xeon® Scalable or better.
     - 4 vCPU per MinIO Pod
     - 8+ vCPU per MinIO Pod

   * - :octicon:`circle`
     - :ref:`Available memory to meet or exceed per-server usage <minio-hardware-checklist-memory>` by a reasonable buffer.
     - 32GB of available memory per worker node
     - 128GB+ of available memory per worker node

.. end-k8s-hardware-checklist
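As a quick, informal way to verify the CPU line items in the checklists above on a candidate host, you can look for AVX-512 flags and count cores from the shell. This is a sketch for Linux hosts only; flag names and ``lscpu`` output vary by CPU generation and distribution:

.. code-block:: shell

   # List any AVX-512 feature flags the CPU exposes; no output means no AVX-512 support.
   grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u

   # Show socket, core, and logical CPU counts to compare against the checklist minimums.
   lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket|CPU\(s\)):'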


@@ -37,51 +37,17 @@ The provided guidance is intended as a baseline and cannot replace |subnet| Perf

 See our `Reference Hardware <https://min.io/product/reference-hardware#hardware?ref-docs>`__ page for a curated selection of servers and storage components from our hardware partners.

-.. list-table::
-   :widths: auto
-   :width: 100%
+.. cond:: k8s

-   * - :octicon:`circle`
-     - Sufficient CPU cores to achieve performance goals for hashing (for example, for healing) and encryption
+   .. include:: /includes/common/common-checklist.rst
+      :start-after: start-k8s-hardware-checklist
+      :end-before: end-k8s-hardware-checklist

-       MinIO recommends Single Socket Intel® Xeon® Scalable Gold CPUs (minimum 16 cores per socket).
+.. cond:: not k8s

-   * - :octicon:`circle`
-     - Sufficient RAM to achieve performance goals based on the number of drives and anticipated concurrent requests (see the :ref:`formula and reference table <minio-hardware-checklist-memory>`).
-
-       MinIO recommends a minimum of 128GB of memory per node for best performance.
-
-   * - :octicon:`circle`
-     - .. cond:: k8s
-
-          MinIO requires a *minimum* of 4 worker nodes per MinIO Tenant.
-          MinIO strongly recommends allocating worker nodes dedicated to servicing the MinIO Tenant.
-          Colocating multiple high-performance services on the same nodes can result in resource contention and reduced overall performance.
-
-       .. cond:: linux or container or macos or windows
-
-          MinIO recommends a *minimum* of 4 host servers per distributed deployment.
-          MinIO strongly recommends hardware dedicated to servicing the MinIO Tenant.
-          Colocating multiple high-performance services on the same servers can result in resource contention and reduced overall performance.
-
-   * - :octicon:`circle`
-     - .. cond:: k8s
-
-          MinIO recommends a minimum of 4 Persistent Volumes per MinIO Server pod.
-          For better performance and storage efficiency, use 8 or more PV per server.
-
-       .. cond:: linux or container or macos or windows
-
-          MinIO recommends a minimum of 4 locally attached drives per MinIO Server.
-          For better performance and storage efficiency, use 8 or more drives per server.
-
-       Use the same type of drive (NVMe, SSD, or HDD) with the same capacity across all nodes in the deployment.
-
-   * - :octicon:`circle`
-     - | 25GbE Network as a baseline
-       | 100GbE Network for high performance
+   .. include:: /includes/common/common-checklist.rst
+      :start-after: start-linux-hardware-checklist
+      :end-before: end-linux-hardware-checklist

 .. important::
@@ -106,6 +72,8 @@ The provided guidance is intended as a baseline and cannot replace |subnet| Perf

 The minimum recommendations above reflect MinIO's experience with assisting enterprise customers in deploying on a variety of IT infrastructures while maintaining the desired SLA/SLO.
 While MinIO may run on less than the minimum recommended topology, any potential cost savings come at the risk of decreased reliability, performance, or overall functionality.

+.. _minio-hardware-checklist-network:
+
 Networking
 ~~~~~~~~~~
@@ -119,16 +87,16 @@ This table assumes all network infrastructure components, such as routers, switc

    * - NIC Bandwidth (Gbps)
      - Estimated Aggregated Storage Throughput (GBps)

-   * - 10GbE
+   * - 10Gbps
      - 1.25GBps

-   * - 25GbE
+   * - 25Gbps
      - 3.125GBps

-   * - 50GbE
+   * - 50Gbps
      - 6.25GBps

-   * - 100GbE
+   * - 100Gbps
      - 12.5GBps

 Networking has the greatest impact on MinIO performance, where low per-host bandwidth artificially constrains the potential performance of the storage.
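The estimated throughput column is simply the NIC bandwidth converted from bits to bytes, assuming the link runs at full line rate, for example:

.. math::

   100\,\text{Gbps} \div 8\,\tfrac{\text{bits}}{\text{byte}} = 12.5\,\text{GBps}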
@@ -215,46 +183,62 @@ The following table provides general guidelines for allocating memory for use by

    * - More than 1 Pebibyte (Pi)
      - 128GiB

+.. _minio-hardware-checklist-storage:
+
 Storage
 ~~~~~~~

-MinIO recommends selecting the type of drive based on your performance objectives.
-The following table highlights the general use case for each drive type based on cost and performance:
+.. cond:: k8s

+   NVMe/SSD - Hot Tier
+   HDD - Warm
+
+   MinIO recommends provisioning a storage class for each MinIO Tenant that meets the performance objectives for that tenant.

-.. list-table::
-   :header-rows: 1
-   :widths: auto
-   :width: 100%
+   Where possible, configure the Storage Class, CSI, or other provisioner underlying the PV to format volumes as XFS to ensure best performance.

-   * - Type
-     - Cost
-     - Performance
-     - Tier
+   Ensure a consistent underlying storage type (NVMe, SSD, HDD) for all PVs provisioned in a Tenant.

-   * - NVMe
-     - High
-     - High
-     - Hot
+   Ensure the same presented capacity of each PV across all nodes in each Tenant :ref:`server pool <minio-intro-server-pool>`.
+   MinIO limits the maximum usable size per PV to the smallest PV in the pool.
+   For example, if a pool has 15 10TB PVs and 1 1TB PV, MinIO limits the per-PV capacity to 1TB.

-   * - SSD
-     - Balanced
-     - Balanced
-     - Hot/Warm
+.. cond:: not k8s

-   * - HDD
-     - Low
-     - Low
-     - Cold/Archival
+   MinIO recommends selecting the type of drive based on your performance objectives.
+   The following table highlights the general use case for each drive type based on cost and performance:

-Use the same type of drive (NVME, SSD, HDD) with the same capacity across all nodes in a MinIO deployment.
-MinIO does not distinguish drive types when using the underlying storage and does not benefit from mixed storage types.
+   .. list-table::
+      :header-rows: 1
+      :widths: auto
+      :width: 100%

-Use the same capacity of drive across all nodes in the MinIO :ref:`server pool <minio-intro-server-pool>`.
-MinIO limits the maximum usable size per drive to the smallest size in the deployment.
-For example, if a deployment has 15 10TB drives and 1 1TB drive, MinIO limits the per-drive capacity to 1TB.
+      * - Type
+        - Cost
+        - Performance
+        - Tier

+      * - NVMe
+        - High
+        - High
+        - Hot

+      * - SSD
+        - Balanced
+        - Balanced
+        - Hot/Warm

+      * - HDD
+        - Low
+        - Low
+        - Cold/Archival

+   Format drives as XFS and present them to MinIO as a :abbr:`JBOD (Just a Bunch of Disks)` array with no RAID or other pooling configurations.
+
+   Ensure a consistent drive type (NVMe, SSD, HDD) for the underlying storage.
+   MinIO does not distinguish between storage types.
+   Mixing storage types provides no benefit to MinIO.
+
+   Use the same capacity of drive across all nodes in each MinIO :ref:`server pool <minio-intro-server-pool>`.
+   MinIO limits the maximum usable size per drive to the smallest size in the deployment.
+   For example, if a deployment has 15 10TB drives and 1 1TB drive, MinIO limits the per-drive capacity to 1TB.
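To make the XFS and JBOD guidance concrete, a minimal per-drive setup on a Linux host might look like the following. Device names, labels, and mount points here are placeholders rather than a prescribed layout:

.. code-block:: shell

   # Format each drive individually with XFS (no RAID, LVM, or other pooling).
   mkfs.xfs -f -L MINIODRIVE1 /dev/sdb

   # Mount by label so device reordering across reboots does not matter.
   mkdir -p /mnt/drive1
   echo 'LABEL=MINIODRIVE1 /mnt/drive1 xfs defaults,noatime 0 2' >> /etc/fstab
   mount /mnt/drive1

   # Repeat for each drive, then point the MinIO server at the mount points,
   # for example: minio server /mnt/drive{1...8}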
Recommended Hardware Tests
--------------------------


@@ -169,6 +169,8 @@ See :ref:`deploy-operator-kubernetes` for complete documentation on deploying th
For more complete information on Azure Virtual Machine types and Storage resources, see :azure-docs:`Sizes for virtual machines in Azure <virtual-machines/sizes>` and :azure-docs:`Azure managed disk types <virtual-machines/disks-types>`
.. _deploy-minio-tenant-pv:
Persistent Volumes
~~~~~~~~~~~~~~~~~~
@@ -177,6 +179,7 @@ Persistent Volumes
MinIO can use any Kubernetes :kube-docs:`Persistent Volume (PV) <concepts/storage/persistent-volumes>` that supports the :kube-docs:`ReadWriteOnce <concepts/storage/persistent-volumes/#access-modes>` access mode.
MinIO's consistency guarantees require the exclusive storage access that ``ReadWriteOnce`` provides.
Additionally, MinIO recommends setting a reclaim policy of ``Retain`` for the PVC :kube-docs:`StorageClass <concepts/storage/storage-classes>`.
Where possible, configure the Storage Class, CSI, or other provisioner underlying the PV to format volumes as XFS to ensure best performance.
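As one way to apply the ``Retain`` recommendation, the reclaim policy can be inspected and patched on existing PVs bound to the Tenant. This is a sketch; the PV name below is a placeholder:

.. code-block:: shell

   # Inspect the reclaim policy of each PV alongside the claim it is bound to.
   kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name

   # Patch an existing PV to Retain so deleting the PVC does not delete the underlying data.
   kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'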
For Kubernetes clusters where nodes have Direct Attached Storage, MinIO strongly recommends using the `DirectPV CSI driver <https://min.io/directpv?ref=docs>`__.
DirectPV provides a distributed persistent volume manager that can discover, format, mount, schedule, and monitor drives across Kubernetes nodes.
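For reference, a typical DirectPV setup flow looks roughly like the following. Exact commands and flags can vary between DirectPV releases, so treat this as a sketch rather than the canonical procedure:

.. code-block:: shell

   # Install the DirectPV plugin via krew, then install the CSI driver into the cluster.
   kubectl krew install directpv
   kubectl directpv install

   # Discover eligible drives on the worker nodes and initialize the ones you select.
   kubectl directpv discover --output-file drives.yaml
   kubectl directpv init drives.yaml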