Final pass on platformization (#555)

This commit is contained in:
Ravind Kumar
2022-09-16 16:40:20 -04:00
committed by GitHub
parent 5efcffbff1
commit d815aa9ce8
144 changed files with 1510 additions and 1102 deletions

@@ -366,208 +366,3 @@ MinIO service:
- :ref:`Create users and policies to control access to the deployment
  <minio-authentication-and-identity-management>`.

.. _deploy-minio-distributed-recommendations:

Deployment Recommendations
--------------------------

Minimum Nodes per Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For all production deployments, MinIO recommends a *minimum* of 4 nodes per
:ref:`server pool <minio-intro-server-pool>` with 4 drives per server.

With the default :ref:`erasure code parity <minio-erasure-coding>` setting of
``EC:4``, this topology can continue serving read and write operations
despite the loss of up to 4 drives *or* one node.
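
The following is a minimal sketch of that baseline topology, assuming
hypothetical hostnames ``minio1.example.net`` through ``minio4.example.net``
and drives mounted at ``/mnt/drive1`` through ``/mnt/drive4`` on each node:

.. code-block:: shell

   # Run on each of the four nodes. The {x...y} expansion notation directs
   # MinIO to enumerate all four hosts and all four drives per host.
   # Hostnames and mount paths here are examples only.
   minio server https://minio{1...4}.example.net/mnt/drive{1...4} \
         --console-address ":9001"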

The minimum recommendation reflects MinIO's experience assisting enterprise
customers in deploying on a variety of IT infrastructures while maintaining the
desired SLA/SLO. While MinIO may run on less than the minimum recommended
topology, any potential cost savings come at the risk of decreased reliability.

Server Hardware
~~~~~~~~~~~~~~~

MinIO is hardware agnostic and runs on a variety of hardware architectures
ranging from ARM-based embedded systems to high-end x64 and POWER9 servers.

The following recommendations match MinIO's
`Reference Hardware <https://min.io/product/reference-hardware>`__ for
large-scale data storage:

.. list-table::
   :stub-columns: 1
   :widths: 20 80
   :width: 100%

   * - Processor
     - Dual Intel Xeon Scalable Gold CPUs with 8 cores per socket.

   * - Memory
     - 128GB of memory per pod.

   * - Network
     - Minimum of 25GbE NIC and supporting network infrastructure between
       nodes.

       MinIO can make maximum use of drive throughput, which can fully saturate
       network links between MinIO nodes or clients. Large clusters may require
       100GbE network infrastructure to fully utilize MinIO's per-node
       performance potential.

   * - Drives
     - SATA/SAS NVMe/SSD with a minimum of 8 drives per server.

       Drives should be :abbr:`JBOD (Just a Bunch of Disks)` arrays with
       no RAID or similar technologies. MinIO recommends XFS formatting for
       best performance (see the formatting sketch following this table).

       Use the same type of drive (NVMe, SSD, or HDD) with the same capacity
       across all nodes in the deployment. MinIO does not distinguish drive
       types when using the underlying storage and does not benefit from mixed
       storage types. Additionally, MinIO limits the size used per drive to the
       smallest drive in the deployment. For example, if the deployment has 15
       10TB drives and 1 1TB drive, MinIO limits the per-drive capacity to 1TB.
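
A hedged sketch of preparing one such drive as XFS, assuming a hypothetical
block device ``/dev/nvme0n1`` and mount path ``/mnt/drive1``:

.. code-block:: shell

   # Format the drive as XFS with a label (device name is an example).
   mkfs.xfs /dev/nvme0n1 -L DRIVE1

   # Mount by label so the path survives device renumbering across reboots.
   mkdir -p /mnt/drive1
   echo "LABEL=DRIVE1 /mnt/drive1 xfs defaults,noatime 0 2" >> /etc/fstab
   mount /mnt/drive1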

Networking
~~~~~~~~~~

MinIO recommends high speed networking to support the maximum possible
throughput of the attached storage (aggregated drives, storage controllers,
and PCIe buses). The following table provides general guidelines for the
maximum storage throughput supported by a given NIC:

.. list-table::
   :header-rows: 1
   :width: 100%
   :widths: 40 60

   * - NIC Bandwidth (Gbps)
     - Estimated Aggregated Storage Throughput (GBps)

   * - 10GbE
     - 1GBps

   * - 25GbE
     - 2.5GBps

   * - 50GbE
     - 5GBps

   * - 100GbE
     - 10GBps
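
As a rough check on these figures (an editorial derivation, not an additional
vendor specification): dividing the NIC line rate in Gbps by 8 bits per byte
gives the raw byte rate, and each table value corresponds to roughly 80% of
that figure to allow for protocol overhead. For example,
:math:`25 Gbps / 8 = 3.125 GBps` raw, of which approximately
:math:`2.5 GBps` is usable.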

CPU Allocation
~~~~~~~~~~~~~~

MinIO can perform well with consumer-grade processors. MinIO can take advantage
of CPUs which support AVX-512 SIMD instructions for increased performance of
certain operations.
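
To check whether a Linux host's CPUs advertise AVX-512 support, a quick
diagnostic sketch (the exact feature flags reported vary by CPU model):

.. code-block:: shell

   # List any AVX-512 feature flags the kernel reports for this CPU.
   grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u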

MinIO benefits from allocating CPU based on the expected per-host network
throughput. The following table provides general guidelines for allocating CPU
for use by MinIO based on the total network bandwidth supported by the host:

.. list-table::
   :header-rows: 1
   :width: 100%
   :widths: 40 60

   * - Host NIC Bandwidth
     - Recommended Pod vCPU

   * - 10GbE or less
     - 8 vCPU per pod.

   * - 25GbE
     - 16 vCPU per pod.

   * - 50GbE
     - 32 vCPU per pod.

   * - 100GbE
     - 64 vCPU per pod.

.. _minio-k8s-production-considerations-memory:

Memory Allocation
~~~~~~~~~~~~~~~~~

MinIO benefits from allocating memory based on the total storage of each host.
The following table provides general guidelines for allocating memory for use
by MinIO server processes based on the total amount of local storage on the
host:

.. list-table::
   :header-rows: 1
   :width: 100%
   :widths: 40 60

   * - Total Host Storage
     - Recommended Host Memory

   * - Up to 1 Tebibyte (TiB)
     - 8GiB

   * - Up to 10 Tebibytes (TiB)
     - 16GiB

   * - Up to 100 Tebibytes (TiB)
     - 32GiB

   * - Up to 1 Pebibyte (PiB)
     - 64GiB

   * - More than 1 Pebibyte (PiB)
     - 128GiB
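
To size host memory against this table, you can total the capacity of the
MinIO drive mounts. A small sketch, assuming hypothetical mount paths
``/mnt/drive1`` through ``/mnt/drive8``:

.. code-block:: shell

   # Report per-drive capacity plus a grand total for all MinIO mounts.
   # Mount paths are examples; adjust to your deployment's drive layout.
   df -h --total /mnt/drive{1..8}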

.. _minio-requests-per-node:

Requests Per Node
~~~~~~~~~~~~~~~~~

You can calculate the maximum number of concurrent requests per host with this
formula:

:math:`totalRam / ramPerRequest`

To calculate the amount of RAM used for each request, use this formula:

:math:`((2MiB + 128KiB) * driveCount) + (2 * 10MiB) + (2 * 1MiB)`

10MiB is the default erasure block size v1. 1MiB is the default erasure block
size v2.
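
As a worked example (editorial arithmetic, not an additional vendor figure),
consider a host with 8 drives and 32GiB of free RAM:
:math:`((2MiB + 128KiB) * 8) + (2 * 10MiB) + (2 * 1MiB) = 17MiB + 20MiB + 2MiB = 39MiB`
per request, and :math:`32768MiB / 39MiB \approx 840` concurrent requests,
which matches the 8-drive, 32GiB entry in the table below.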

The following table lists the maximum concurrent requests on a node based on
the number of host drives and the *free* system RAM:

.. list-table::
   :header-rows: 1
   :width: 100%

   * - Number of Drives
     - 32 GiB of RAM
     - 64 GiB of RAM
     - 128 GiB of RAM
     - 256 GiB of RAM
     - 512 GiB of RAM

   * - 4 Drives
     - 1,074
     - 2,149
     - 4,297
     - 8,595
     - 17,190

   * - 8 Drives
     - 840
     - 1,680
     - 3,361
     - 6,722
     - 13,443

   * - 16 Drives
     - 585
     - 1,170
     - 2,341
     - 4,681
     - 9,362