
Hardware Checklist


Use the following checklist when planning the hardware configuration for a production, distributed MinIO deployment.

Considerations

When selecting hardware for your MinIO implementation, take into account the following factors:

  • Expected amount of data in tebibytes to store at launch
  • Expected growth in size of data for at least the next two years
  • Number of objects by average object size
  • Average retention time of data in years
  • Number of sites to be deployed
  • Number of expected buckets

Production Hardware Requirements

The following checklist provides a minimum hardware specification for production MinIO deployments. MinIO takes full advantage of modern hardware improvements, such as AVX-512 SIMD acceleration, 100GbE networking, and NVMe SSDs, when available. While MinIO can run on commodity or "budget" hardware, we strongly recommend using this table as guidance for best results in production environments.

Note

See our Reference Hardware page for a curated selection of servers and storage components from our hardware partners.

MinIO does not provide hosted services or hardware sales.

  • Sufficient CPU cores to achieve performance goals for hashing (for example, for healing) and encryption.

    MinIO recommends dual Intel® Xeon® Scalable Gold CPUs (minimum of 8 cores per socket) or any CPU with AVX-512 instructions. A quick Linux check for AVX-512 support follows this list.

  • Sufficient RAM to achieve performance goals based on the number of drives and anticipated concurrent requests (see the formula and reference table <minio-hardware-checklist-memory>).

    MinIO recommends a minimum of 128GB of memory per node for best performance.

  • Minimum of four nodes dedicated to object storage.

    For containers or Kubernetes in virtualized environments, MinIO requires four distinct physical nodes. Colocating multiple high-performance services on the same nodes can result in resource contention and reduced overall performance.

  • SATA/SAS drives for balanced capacity-to-performance, or NVMe SSDs for high performance.

    MinIO recommends a minimum of 8 drives per server. Use the same type of drive (NVMe, SSD, or HDD) with the same capacity across all nodes in the deployment.

  • 25GbE network as a baseline; 100GbE network for high performance.
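To confirm whether a candidate server's CPU advertises AVX-512 instructions on Linux, a check similar to the following can help. It prints the AVX-512 feature flags reported in /proc/cpuinfo; no output means the CPU does not expose AVX-512.

    grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u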

Important

The following areas have the greatest impact on MinIO performance, listed in order of importance:

  1. Network Infrastructure: Insufficient or limited throughput constrains performance.
  2. Storage Controller: Old firmware, limited throughput, or failing hardware constrains performance and affects reliability.
  3. Storage (Drive): Old firmware or slow, aging, or failing hardware constrains performance and affects reliability.

Prioritize securing the necessary components for each of these areas before focusing on other hardware resources, such as compute-related constraints.

Minimum Nodes per Deployment

Kubernetes

MinIO requires a minimum of 4 worker nodes per MinIO Tenant, with 4 drives per node. Each drive must consist of a Persistent Volume associated with a storage resource.

Linux, Container, macOS, or Windows

MinIO recommends a minimum of 4 host servers per deployment with 4 locally attached drives per server.

The "4x4" topology provides a baseline of performance with tolerance for the loss of up to 4 drives or one node while maintaining read and write operations. You can increase the erasure code parity <minio-erasure-coding> of the deployment to improve resiliency at the cost of available storage.

The minimum recommendation reflects MinIO's experience with assisting enterprise customers in deploying on a variety of IT infrastructures while maintaining the desired SLA/SLO. While MinIO may run on less than the minimum recommended topology, any potential cost savings come at the risk of decreased reliability.

Networking

MinIO recommends high speed networking to support the maximum possible throughput of the attached storage (aggregated drives, storage controllers, and PCIe buses). The following table provides a general guideline for the maximum storage throughput supported by a given physical or virtual network interface. This table assumes all network infrastructure components, such as routers, switches, and physical cabling, also support the NIC bandwidth.

NIC Bandwidth (Gbps) Estimated Aggregated Storage Throughput (GBps)
10GbE 1.25GBps
25GbE 3.125GBps
50GbE 6.25GBps
100GbE 12.5GBps

Networking has the greatest impact on MinIO performance, where low per-host bandwidth artificially constrains the potential performance of the storage. The following examples of network throughput constraints assume spinning disks with ~100MB/s of sustained I/O:

  • A 1GbE network link can support up to 125MB/s, or one spinning disk
  • A 10GbE network can support approximately 1.25GB/s, potentially supporting 10 to 12 spinning disks
  • A 25GbE network can support approximately 3.125GB/s, potentially supporting ~30 spinning disks

The recommended minimum MinIO cluster of 4 nodes with 4 drives each (16 total disks) requires a 25GbE network to support the total potential aggregate throughput.
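As a rough sanity check, compare the estimated aggregate throughput of the drives against the NIC ratings in the table above. The following shell sketch assumes the baseline 16-drive topology and ~100MB/s of sustained I/O per spinning disk; adjust the values for your own hardware.

    # Rough estimate only; assumes 16 drives at ~100 MB/s sustained throughput each.
    drives=16
    per_drive_mb_s=100
    aggregate_mb_s=$(( drives * per_drive_mb_s ))    # 1600 MB/s, or ~1.6 GB/s
    echo "Estimated aggregate drive throughput: ${aggregate_mb_s} MB/s"
    # 10GbE supports ~1250 MB/s, so this topology needs at least 25GbE (~3125 MB/s).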

Memory

Memory primarily constrains the number of concurrent connections per node.

You can calculate the maximum number of concurrent requests per node with this formula:

totalRam/ramPerRequest

To calculate the amount of RAM used for each request, use this formula:

((2MiB+128KiB)*driveCount)+(2*10MiB)+(2*1MiB)

10MiB is the default erasure block size for v1, and 1MiB is the default erasure block size for v2.
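As an illustration, the following shell sketch applies the formula to a hypothetical node with 16 drives and 32GiB of free RAM, with all values expressed in bytes. The result matches the 16-drive row of the table that follows.

    # Hypothetical node: 16 drives and 32 GiB of free RAM (values in bytes).
    driveCount=16
    ramPerRequest=$(( (2*1024*1024 + 128*1024) * driveCount + 2*10*1024*1024 + 2*1024*1024 ))
    totalRam=$(( 32 * 1024 * 1024 * 1024 ))
    echo "RAM per request: ${ramPerRequest} bytes"                      # 58720256 bytes, or ~56 MiB
    echo "Maximum concurrent requests: $(( totalRam / ramPerRequest ))" # 585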

The following table lists the maximum concurrent requests on a node based on the number of host drives and the free system RAM:

Number of Drives 32 GiB of RAM 64 GiB of RAM 128 GiB of RAM 256 GiB of RAM 512 GiB of RAM
4 Drives 1,074 2,149 4,297 8,595 17,190
8 Drives 840 1,680 3,361 6,722 13,443
16 Drives 585 1,170 2,341 4,681 9,362

The following table provides general guidelines for allocating memory for use by MinIO based on the total amount of local storage on the node:

Total Host Storage Recommended Host Memory
Up to 1 Tebibyte (TiB) 8GiB
Up to 10 Tebibytes (TiB) 16GiB
Up to 100 Tebibytes (TiB) 32GiB
Up to 1 Pebibyte (PiB) 64GiB
More than 1 Pebibyte (PiB) 128GiB

Storage

MinIO recommends selecting the type of drive based on your performance objectives. The following table highlights the general use case for each drive type based on cost and performance:


Type Cost Performance Tier
NVMe High High Hot
SSD Balanced Balanced Hot/Warm
HDD Low Low Cold/Archival

Use the same type of drive (NVMe, SSD, or HDD) with the same capacity across all nodes in a MinIO deployment. MinIO does not distinguish between drive types in the underlying storage and does not benefit from mixed storage types.

Use the same capacity of drive across all nodes in the MinIO server pool <minio-intro-server-pool>. MinIO limits the maximum usable size per drive to the smallest size in the deployment. For example, if a deployment has fifteen 10TB drives and one 1TB drive, MinIO limits the per-drive capacity to 1TB.

MinIO Diagnostics

Run the built-in health diagnostic tool. If you have access to SUBNET <minio-docs-subnet>, you can upload the results there.

mc support diag ALIAS --airgap

Replace ALIAS with the mc alias defined for the deployment.

MinIO Support Diagnostic Tools

For deployments registered with MinIO SUBNET, you can run the built-in support diagnostic tools.

Run the three mc support perf tests.

These server-side tests validate network, drive, and object throughput. Run all three tests with default options.

  1. Network test

    Run a network throughput test on a cluster with alias minio1.

    mc support perf net minio1
  2. Drive test

    Run drive read/write performance measurements on all drives on all nodes for a cluster with alias minio1. The command uses the default block size of 4MiB.

    mc support perf drive minio1
  3. Object test

    Measure the performance of S3 read/write operations on an object for the alias minio1. MinIO autotunes concurrency to obtain the maximum throughput and IOPS (input/output operations per second).

    mc support perf object minio1

Operating System Diagnostic Tools

If you cannot run mc support diag or the results are unexpected, you can use the operating system's default tools.

Test each drive independently on all servers to confirm that they perform identically. Use the results of these OS-level tools to verify the capabilities of your storage hardware, and record the results for later reference. A shell loop for repeating these tests across multiple drives follows the numbered steps below.

  1. Test the drive's performance during write operations

    This test checks a drive's ability to write new, uncached data by writing a specified number of blocks, up to a given number of bytes at a time, mimicking how the drive handles uncached writes. This lets you see the actual drive performance with consistent file I/O.

    dd if=/dev/zero of=/mnt/driveN/testfile bs=128k count=80000 oflag=direct conv=fdatasync &> dd-write-drive1.txt

    Replace driveN with the path for the drive you are testing.

    dd The utility that copies data between files and devices
    if=/dev/zero Read from /dev/zero, a system-generated endless stream of 0 bytes used to create a file of a specified size
    of=/mnt/driveN/testfile Write to /mnt/driveN/testfile
    bs=128k Write up to 128 KiB (131,072 bytes) at a time
    count=80000 Write 80,000 blocks of data
    oflag=direct Use direct I/O for writes to bypass the operating system cache
    conv=fdatasync Physically write the output file data to the drive before finishing
    &> dd-write-drive1.txt Write the operation's output, including the transfer summary that dd prints to stderr, to dd-write-drive1.txt in the current working directory

    The operation reports the number of records written, the total bytes written, the elapsed time in seconds, and the resulting write speed (for example, in MB/s).

  2. Test the drive's performance during read operations

    dd if=/mnt/driveN/testfile of=/dev/null bs=128k iflag=direct &> dd-read-drive1.txt

    Replace driveN with the path for the drive you are testing.

    dd The utility that copies data between files and devices
    if=/mnt/driveN/testfile Read from /mnt/driveN/testfile; replace with the path to the file to use for testing the drive's read performance
    of=/dev/null Write to /dev/null, a virtual file that does not persist after the operation completes
    bs=128k Read up to 128 KiB (131,072 bytes) at a time
    iflag=direct Use direct I/O for reads to bypass the operating system cache
    &> dd-read-drive1.txt Write the operation's output, including the transfer summary that dd prints to stderr, to dd-read-drive1.txt in the current working directory

    Use a sufficiently sized file that mimics the primary use case for your deployment to get accurate read test results.

    The following guidelines may help during performance testing:

    • Small files: < 128KB
    • Normal files: 128KB to 1GB
    • Large files: > 1GB

    You can use the head command to create a file to use. The following command example creates a 10 Gigabyte file called testfile.

    head -c 10G </dev/urandom > testfile

    The operation reports the number of records read, the total bytes read, the elapsed time in seconds, and the resulting read speed (for example, in MB/s).
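To repeat both tests across every drive on a host, you can wrap the same dd commands in a small shell loop. The sketch below assumes four drives mounted at /mnt/drive1 through /mnt/drive4; adjust the paths and drive count to match your servers.

    # Hypothetical mount points /mnt/drive1 ... /mnt/drive4; adjust as needed.
    for n in 1 2 3 4; do
        # Uncached write test
        dd if=/dev/zero of=/mnt/drive${n}/testfile bs=128k count=80000 oflag=direct conv=fdatasync &> dd-write-drive${n}.txt
        # Uncached read test against the file written above
        dd if=/mnt/drive${n}/testfile of=/dev/null bs=128k iflag=direct &> dd-read-drive${n}.txt
    done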

Third Party Diagnostic Tools

IO Controller test

Use IOzone to test the input/output controller and all drives in combination. Document the performance numbers for each server in your deployment.

iozone -s 1g -r 4m -i 0 -i 1 -i 2 -I -t 160 -F /mnt/sdb1/tmpfile.{1..16} /mnt/sdc1/tmpfile.{1..16} /mnt/sdd1/tmpfile.{1..16} /mnt/sde1/tmpfile.{1..16} /mnt/sdf1/tmpfile.{1..16} /mnt/sdg1/tmpfile.{1..16} /mnt/sdh1/tmpfile.{1..16} /mnt/sdi1/tmpfile.{1..16} /mnt/sdj1/tmpfile.{1..16} /mnt/sdk1/tmpfile.{1..16} > iozone.txt
-s 1g Size of 1GB per file
-r 4m 4MB block size
-i # Test to run: 0=write/rewrite, 1=read/re-read, 2=random-read/write
-I Use direct I/O, bypassing the buffer cache
-t N Number of threads (numberOfDrives*16)
-F <files> List of files to test (the above command tests with 16 files per drive)