mirror of https://github.com/minio/docs.git synced 2025-07-30 07:03:26 +03:00

DOCS-1084: Fix deployment menus for Windows, macOS, containers, Add script for xfs settings to Linux (#1108)

Closes #1084 

Accomplishing a few things here:

1. General tidy-up of the installation pages so Windows/Container/macOS
only display the appropriate deployment topologies
2. Better admonitions/guidance on when to use what distribution
3. Added a section on Storage + XFS Settings to resolve an ongoing
customer request


---------

Co-authored-by: Daryl White <53910321+djwfyi@users.noreply.github.com>
Co-authored-by: Andrea Longo <feorlen@users.noreply.github.com>
This commit is contained in:
Ravind Kumar
2024-01-23 14:12:11 -05:00
committed by GitHub
parent 2d00cf06e5
commit 31bd45f20f
8 changed files with 208 additions and 62 deletions

View File

@@ -50,10 +50,14 @@ You can deploy MinIO using one of the following topologies:
Scalable for Petabyte+ workloads - both storage capacity and performance
.. cond:: macos
.. cond:: macos or windows
Use macOS-based MinIO deployments for early development and evaluation.
MinIO strongly recommends Linux (RHEL, Ubuntu) for long-term development and production environments.
.. note::
Use |platform|-based MinIO deployments for early development and evaluation.
MinIO provides no guarantee of support for :abbr:`SNMD (Single-Node Multi-Drive)` or :abbr:`MNMD (Multi-Node Multi-Drive)` topologies on |platform|.
MinIO strongly recommends :minio-docs:`Linux (RHEL, Ubuntu) <minio/linux/index.html>` or :minio-docs:`Kubernetes (Upstream, OpenShift) <minio/kubernetes/upstream/index.html>` for long-term development and production environments.
Site Replication
----------------
@@ -66,6 +70,10 @@ Site replication expands the features of bucket replication to include IAM, secu
:start-after: start-mc-admin-replicate-what-replicates
:end-before: end-mc-admin-replicate-what-replicates
.. cond:: macos or windows
MinIO does not recommend using |platform| hosts for site replication outside of early development, evaluation, or general experimentation.
For production, use :minio-docs:`Linux <minio/linux/operations/install-deploy-manage/multi-site-replication.html>` or :minio-docs:`Kubernetes <minio/kubernetes/upstream/operations/install-deploy-manage/multi-site-replication.html>`.
What Does Not Replicate?
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -109,7 +117,7 @@ hello@min.io for additional support and guidance. You can build MinIO from
for your platform and architecture combination. MinIO generally does not recommend
source-based installations in production environments.
.. cond:: linux or macos
.. cond:: linux
.. toctree::
:titlesonly:
@@ -119,3 +127,21 @@ source-based installations in production environments.
/operations/install-deploy-manage/deploy-minio-single-node-multi-drive
/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive
/operations/install-deploy-manage/multi-site-replication
.. cond:: windows
.. toctree::
:titlesonly:
:hidden:
/operations/install-deploy-manage/deploy-minio-single-node-single-drive
.. cond:: macos
.. toctree::
:titlesonly:
:hidden:
/operations/install-deploy-manage/deploy-minio-single-node-single-drive
/operations/install-deploy-manage/deploy-minio-single-node-multi-drive
/operations/install-deploy-manage/multi-site-replication

View File

@@ -41,8 +41,33 @@ Multi-Node Multi-Drive (MNMD or "Distributed")
Multiple MinIO servers with at least four drives across all servers.
The distributed |MNMD| topology supports production-grade object storage with drive and node-level availability and resiliency.
This documentation provides instructions for |SNSD| and |SNMD| for supporting local development and evaluation of MinIO on a single host machine **only**.
For |MNMD| deployments, use the MinIO Kubernetes Operator to deploy and manage MinIO tenants in a containerized and orchestrated environment.
.. note::
This documentation provides instructions for |SNSD| and |SNMD| for supporting local development and evaluation of MinIO on a single host machine **only**.
For |MNMD| deployments, use the MinIO Kubernetes Operator to :minio-docs:`deploy and manage MinIO tenants in a containerized and orchestrated environment <minio/kubernetes/upstream/operations/installation.html>`.
Site Replication
----------------
:ref:`Site replication <minio-site-replication-overview>` links multiple MinIO deployments together and keeps the buckets, objects, and Identity and Access Management (IAM) settings in sync across all connected sites.
.. include:: /includes/common-replication.rst
:start-after: start-mc-admin-replicate-what-replicates
:end-before: end-mc-admin-replicate-what-replicates
.. important::
MinIO does not recommend using |platform| hosts for site replication outside of early development, evaluation, or general experimentation.
For production, use :minio-docs:`Kubernetes <minio/kubernetes/upstream/operations/install-deploy-manage/multi-site-replication.html>` for an orchestrated container environment.
What Does Not Replicate?
~~~~~~~~~~~~~~~~~~~~~~~~
Not everything replicates across sites.
.. include:: /includes/common-replication.rst
:start-after: start-mc-admin-replicate-what-does-not-replicate
:end-before: end-mc-admin-replicate-what-does-not-replicate
.. _minio-installation-platform-support:

View File

@@ -0,0 +1,78 @@
MinIO uses an update-then-restart methodology for upgrading a deployment to a newer release:
1. Update the MinIO binary with the newer release.
2. Restart the deployment using :mc-cmd:`mc admin service restart`.
This procedure does not require taking downtime and is non-disruptive to ongoing operations.
This page documents methods for upgrading using the update-then-restart method for both ``systemctl`` and user-managed MinIO deployments.
Deployments using Ansible, Terraform, or other management tools can use the procedures here as guidance for implementation within the existing automation framework.
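The update-then-restart flow described above can be sketched as a short shell sequence. The alias ``myminio``, the architecture detection, and the install path are illustrative assumptions, not part of the documented procedure:

```shell
# Sketch of the update-then-restart flow (alias "myminio" is hypothetical).
# Pick the release URL matching this host's architecture.
arch=$(uname -m)
case "$arch" in
    x86_64)        pkg="linux-amd64" ;;
    aarch64|arm64) pkg="linux-arm64" ;;
    *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
url="https://dl.min.io/server/minio/release/${pkg}/minio"

echo "1. Download the newer binary:   curl -O $url"
echo "2. Install it over the old one: chmod +x minio && sudo mv minio /usr/local/bin/"
echo "3. Restart all nodes at once:   mc admin service restart myminio"
```

The restart step applies to every node in the deployment at once, per the parallel restart guidance below.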
Considerations
--------------
Upgrades Are Non-Disruptive
~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO's upgrade-then-restart procedure does *not* require taking downtime or scheduling a maintenance period.
MinIO restarts are fast, such that restarting all server processes in parallel typically completes in a few seconds.
MinIO operations are atomic and strictly consistent, such that applications using MinIO or S3 SDKs can rely on the built-in :aws-docs:`transparent retry <general/latest/gr/api-retries.html>` without further client-side logic.
This ensures upgrades are non-disruptive to ongoing operations.
"Rolling" or serial "one-at-a-time" upgrade methods do not provide any advantage over the recommended "parallel" procedure, and can introduce unnecessary complexity to the upgrade procedure.
For virtualized environments which *require* rolling updates, you should modify the recommended procedure as follows:
1. Update the MinIO binary in each virtual machine or container, one at a time.
2. Restart the MinIO deployment using :mc-cmd:`mc admin service restart`.
3. Update the virtual machine/container configuration to use the matching newer MinIO image.
4. Perform the rolling restart of each machine/container with the updated image.
Check Release Notes
~~~~~~~~~~~~~~~~~~~
MinIO publishes :minio-git:`Release Notes <minio/releases>` for your reference as part of identifying the changes applied in each release.
Review the associated release notes between your current MinIO version and the newer release so you have a complete view of any changes.
Pay particular attention to any releases that are *not* backwards compatible.
You cannot trivially downgrade from any such release.
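Because MinIO release tags are UTC timestamps, a newer release always sorts lexically after an older one, which makes it easy to check which direction an upgrade moves. Both tags below are hypothetical examples:

```shell
# MinIO release tags are UTC timestamps, so newer releases sort
# lexically after older ones. Both tags below are hypothetical.
current="RELEASE.2023-12-02T10-51-33Z"
target="RELEASE.2024-01-16T16-07-38Z"

if [ "$target" \> "$current" ]; then
    echo "Review release notes between $current and $target before upgrading."
fi
```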
Update Using Homebrew
---------------------
For Homebrew installations, use Homebrew to upgrade the package:
.. code-block:: shell
:class: copyable
brew upgrade minio/stable/minio
Restart the MinIO process to complete the update.
Update Using Binary Replacement
-------------------------------
.. tab-set::
.. tab-item:: Binary - arm64
Open a Terminal, then use the following commands to download the latest stable MinIO binary, set it to executable, and install it to the system ``$PATH``:
.. code-block:: shell
:class: copyable
curl -O https://dl.min.io/server/minio/release/darwin-arm64/minio
chmod +x ./minio
sudo mv ./minio /usr/local/bin/
.. tab-item:: Binary - amd64
Open a Terminal, then use the following commands to download the latest stable MinIO binary, set it to executable, and install it to the system ``$PATH``:
.. code-block:: shell
:class: copyable
curl -O https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x ./minio
sudo mv ./minio /usr/local/bin/
Restart the MinIO process to complete the update.

View File

@@ -12,51 +12,41 @@
You cannot run the executable from the Explorer or by double-clicking the file.
Instead, you call the executable to launch the server.
2) Create the ``systemd`` Service File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2) Prepare the Data Path for MinIO
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-systemd-desc
:end-before: end-install-minio-systemd-desc
Ensure the data path is empty and contains no existing files, including any hidden or Windows system files.
3) Create the Environment Variable File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If specifying a drive not dedicated for use by MinIO, consider creating a dedicated folder for storing MinIO data such as ``D:/Minio``.
.. include:: /includes/common/common-deploy.rst
:start-after: start-common-deploy-create-environment-file-single-drive
:end-before: end-common-deploy-create-environment-file-single-drive
4) Start the MinIO Service
3) Start the MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command on the local host to start the MinIO |SNSD| deployment as a service:
Open the Command Prompt or PowerShell and issue the following command to start the MinIO |SNSD| deployment in that session:
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-start-service-desc
:end-before: end-install-minio-start-service-desc
.. code-block:: shell
:class: copyable
The ``journalctl`` output should resemble the following:
minio.exe server D:/minio --console-address ":9001"
The output should resemble the following:
.. code-block:: shell
Status: 1 Online, 0 Offline.
API: http://192.168.2.100:9000 http://127.0.0.1:9000
RootUser: myminioadmin
RootPass: minio-secret-key-change-me
Console: http://192.168.2.100:9001 http://127.0.0.1:9001
RootUser: myminioadmin
RootPass: minio-secret-key-change-me
Console: http://192.168.2.100:9001 http://127.0.0.1:9001
Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html
$ mc alias set myminio http://10.0.2.100:9000 myminioadmin minio-secret-key-change-me
$ mc alias set myminio http://10.0.2.100:9000 minioadmin minioadmin
Documentation: https://min.io/docs/minio/linux/index.html
The ``API`` block lists the network interfaces and port on which clients can access the MinIO S3 API.
The ``Console`` block lists the network interfaces and port on which clients can access the MinIO Web Console.
5) Connect to the MinIO Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4) Connect to the MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common/common-deploy.rst
:start-after: start-common-deploy-connect-to-minio-deployment

View File

@@ -124,6 +124,7 @@ For more about connecting to ``play``, see :ref:`MinIO Console play Login <minio
:titlesonly:
:hidden:
/operations/installation
/operations/concepts
/operations/monitoring
/operations/external-iam

View File

@@ -202,41 +202,62 @@ Storage
.. cond:: not k8s
MinIO recommends selecting the type of drive based on your performance objectives.
The following table highlights the general use case for each drive type based on cost and performance:
MinIO recommends using flash-based storage (NVMe or SSD) for all workload types and scales.
Workloads that require high performance should prefer NVMe over SSD.
.. list-table::
:header-rows: 1
:widths: auto
:width: 100%
MinIO deployments using HDD-based storage are best suited as cold-tier targets for :ref:`Object Transition ("Tiering") <minio-lifecycle-management-tiering>` of aged data.
HDD storage typically does not provide the necessary performance to meet the expectations of modern workloads, and any cost efficiencies at scale are offset by the performance constraints of the medium.
* - Type
- Cost
- Performance
- Tier
* - NVMe
- High
- High
- Hot
* - SSD
- Balanced
- Balanced
- Hot/Warm
* - HDD
- Low
- Low
- Cold/Archival
Format Drives as XFS
++++++++++++++++++++
Format drives as XFS and present them to MinIO as a :abbr:`JBOD (Just a Bunch of Disks)` array with no RAID or other pooling configurations.
Ensure a consistent drive type (NVMe, SSD, HDD) for the underlying storage.
MinIO does not distinguish between storage types.
Mixing storage types provides no benefit to MinIO.
MinIO **strongly recommends** disabling `retry-on-error <https://docs.kernel.org/admin-guide/xfs.html?highlight=xfs#error-handling>`__ behavior using the ``max_retries`` configuration for the following error classes:
- ``EIO`` Error when reading or writing
- ``ENOSPC`` Error no space left on device
- ``default`` All other errors
Use the same capacity of drive across all nodes in each MinIO :ref:`server pool <minio-intro-server-pool>`.
The default ``max_retries`` setting typically directs the filesystem to retry-on-error indefinitely instead of propagating the error.
MinIO can handle XFS errors appropriately, such that the retry-on-error behavior introduces at most unnecessary latency or performance degradation.
The following script iterates through all drives at the specified mount path and sets the XFS ``max_retries`` setting to ``0`` or "fail immediately on error" for the recommended error classes.
The script ignores any drives not mounted, either manually or through ``/etc/fstab``.
Modify the ``/mnt/drive`` line to match the pattern used for your MinIO drives.
.. code-block:: bash
:class: copyable
#!/bin/bash
for i in $(df -h | grep /mnt/drive | awk '{ print $1 }'); do
mountPath="$(df -h | grep $i | awk '{ print $6 }')"
deviceName="$(basename $i)"
echo "Modifying xfs max_retries and retry_timeout_seconds for drive $i mounted at $mountPath"
echo 0 > /sys/fs/xfs/$deviceName/error/metadata/EIO/max_retries
echo 0 > /sys/fs/xfs/$deviceName/error/metadata/ENOSPC/max_retries
echo 0 > /sys/fs/xfs/$deviceName/error/metadata/default/max_retries
done
exit 0
You must run this script on all MinIO nodes and configure the script to re-run on reboot, as Linux operating systems do not typically persist these changes.
You can use a ``cron`` job with the ``@reboot`` timing to run the above script whenever the node restarts and ensure all drives have retry-on-error disabled.
Use ``crontab -e`` to create the following job, modifying the script path to match that on each node:
.. code-block:: shell
:class: copyable
@reboot /opt/minio/xfs-retry-settings.sh
Use Consistent Drive Type and Capacity
++++++++++++++++++++++++++++++++++++++
Ensure a consistent drive type (NVMe, SSD, HDD) for the underlying storage in a MinIO deployment.
MinIO does not distinguish between storage types and does not support configuring "hot" or "warm" drives within a single deployment.
Mixing drive types typically results in performance degradation, as the slowest drives in the deployment become a bottleneck regardless of the capabilities of the faster drives.
Use the same capacity and type of drive across all nodes in each MinIO :ref:`server pool <minio-intro-server-pool>`.
MinIO limits the maximum usable size per drive to the smallest size in the deployment.
For example, if a deployment has 15 10TB drives and 1 1TB drive, MinIO limits the per-drive capacity to 1TB.
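The per-drive limit from the example above can be checked with quick shell arithmetic. The drive list is hypothetical (sizes in TB, before erasure-coding overhead):

```shell
# 15 drives of 10TB plus 1 drive of 1TB, per the example above (sizes in TB)
drives="10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 1"

smallest=$(printf '%s\n' $drives | sort -n | head -n 1)
count=$(echo $drives | wc -w)

# MinIO treats every drive as having the smallest drive's capacity
echo "per-drive limit: ${smallest}TB, usable capacity: $((smallest * count))TB"
```

The mixed deployment yields 16TB usable rather than the 151TB of raw capacity, which is why consistent drive sizing matters.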

View File

@@ -26,6 +26,10 @@ Deployments using an external IDP must use the same configuration across sites.
For more information on site replication architecture and deployment concepts, see :ref:`Deployment Architecture: Replicated MinIO Deployments <minio-deployment-architecture-replicated>`.
.. cond:: macos or windows or container
MinIO does not recommend using |platform| hosts for site replication outside of early development, evaluation, or general experimentation.
For production, use :minio-docs:`Linux <minio/linux/operations/install-deploy-manage/multi-site-replication.html>` or :minio-docs:`Kubernetes <minio/kubernetes/upstream/operations/install-deploy-manage/multi-site-replication.html>`.
Overview
--------

View File

@@ -29,6 +29,9 @@ excludes:
---
tag: macos
excludes:
- 'operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst'
- 'operations/install-deploy-manage/expand-minio-deployment.rst'
- 'operations/install-deploy-manage/decommission-server-pool.rst'
- 'operations/install-deploy-manage/deploy-minio-tenant.rst'
- 'operations/install-deploy-manage/deploy-operator-helm.rst'
- 'operations/install-deploy-manage/modify-minio-tenant.rst'
@@ -50,13 +53,11 @@ excludes:
---
tag: windows
excludes:
- 'operations/installation.rst'
- 'operations/install-deploy-manage/expand-minio-deployment.rst'
- 'operations/install-deploy-manage/upgrade-minio-deployment.rst'
- 'operations/install-deploy-manage/decommission-server-pool.rst'
- 'operations/install-deploy-manage/migrate-fs-gateway.rst'
- 'operations/manage-existing-deployments.rst'
- 'operations/install-deploy-manage/deploy-minio-single-node-single-drive.rst'
- 'operations/install-deploy-manage/deploy-minio-single-node-multi-drive.rst'
- 'operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.rst'
- 'operations/install-deploy-manage/deploy-operator-helm.rst'