
Docs Multiplatform Slice

Ravind Kumar
2022-05-06 16:44:42 -04:00
parent df33ddee6a
commit b99c20a16f
134 changed files with 3689 additions and 2200 deletions


@ -0,0 +1,344 @@
.. _minio-decommissioning:
==========================
Decommission a Server Pool
==========================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Starting with :minio-release:`RELEASE.2022-01-25T19-56-04Z`, MinIO supports
decommissioning and removing a :ref:`server pool <minio-intro-server-pool>`
from a deployment. Decommissioning is designed for removing an older server pool
whose hardware is no longer sufficient or performant compared to the pools in
the deployment. MinIO automatically migrates data from the decommissioned pool
to the remaining pools in the deployment based on the ratio of free space
available in each pool.
During the decommissioning process, MinIO routes read operations (e.g. ``GET``,
``LIST``, ``HEAD``) normally. MinIO routes write operations (e.g. ``PUT``,
versioned ``DELETE``) to the remaining "active" pools in the deployment.
Versioned objects maintain their ordering throughout the migration process.
The procedure on this page decommissions and removes a server pool from
a :ref:`distributed <deploy-minio-distributed>` MinIO deployment with
*at least* two server pools.
.. admonition:: Decommissioning is Permanent
:class: important
Once MinIO begins decommissioning a pool, it marks that pool as *permanently*
inactive ("draining"). Cancelling or otherwise interrupting the
decommissioning procedure does **not** restore the pool to an active
state.
Decommissioning is a major administrative operation that requires care
in planning and execution, and is not a trivial or 'daily' task.
`MinIO SUBNET <https://min.io/pricing?jmp=docs>`__ users can
`log in <https://subnet.min.io/>`__ and create a new issue related to
decommissioning. Coordination with MinIO Engineering via SUBNET can ensure
successful decommissioning, including performance testing and health
diagnostics.
Community users can seek support on the `MinIO Community Slack
<https://minio.slack.com>`__. Community Support is best-effort only and has
no SLAs around responsiveness.
.. _minio-decommissioning-prereqs:
Prerequisites
-------------
Networking and Firewalls
~~~~~~~~~~~~~~~~~~~~~~~~
Each node should have full bidirectional network access to every other node in
the deployment. For containerized or orchestrated infrastructures, this may
require specific configuration of networking and routing components such as
ingress or load balancers. Certain operating systems may also require setting
firewall rules. For example, the following command explicitly opens the default
MinIO server API port ``9000`` on servers using ``firewalld``:
.. code-block:: shell
:class: copyable
firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --reload
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
you must *also* grant access to that port to ensure connectivity from external
clients.
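For example, the following commands would additionally open the Console port ``9001`` on hosts using ``firewalld``:
.. code-block:: shell
:class: copyable
firewall-cmd --permanent --zone=public --add-port=9001/tcp
firewall-cmd --reload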
MinIO **strongly recommends** using a load balancer to manage connectivity to the
cluster. The Load Balancer should use a "Least Connections" algorithm for
routing requests to the MinIO deployment, since any MinIO node in the deployment
can receive, route, or process client requests.
The following load balancers are known to work well with MinIO:
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
Configuring firewalls or load balancers to support MinIO is out of scope for
this procedure.
Deployment Must Have Sufficient Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The decommissioning process migrates objects from the target pool to other
pools in the deployment. The total available storage on the deployment
*must* exceed the total storage of the decommissioned pool.
For example, consider a deployment with the following distribution of
used and free storage:
.. list-table::
:stub-columns: 1
:widths: 30 30 30
:width: 100%
* - Pool 1
- 100TB Used
- 200TB Total
* - Pool 2
- 100TB Used
- 200TB Total
* - Pool 3
- 100TB Used
- 200TB Total
Decommissioning Pool 1 requires distributing the 100TB of used storage
across the remaining pools. Pool 2 and Pool 3 each have 100TB of unused
storage space and can safely absorb the data stored on Pool 1.
However, if Pool 1 were full (e.g. 200TB of used space), decommissioning would
completely fill the remaining pools and potentially prevent any further write
operations.
Decommissioning Does Not Support Tiering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO does not support decommissioning pools in deployments with
:ref:`tiering <minio-lifecycle-management-tiering>` configured. The MinIO
server rejects decommissioning attempts if any bucket in the deployment
has a tiering configuration.
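As a rough pre-check, you can list the lifecycle configuration of each bucket with ``mc ilm ls`` and confirm that no transition (tiering) rules exist. The bucket name below is a placeholder:
.. code-block:: shell
mc ilm ls myminio/BUCKET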
Considerations
--------------
Decommissioning Ignores Delete Markers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO does *not* migrate objects whose only remaining version is a
:ref:`delete marker <minio-bucket-versioning-delete>`. This avoids creating
empty metadata on the remaining server pools for objects already considered
fully deleted.
Decommissioning is Resumable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO resumes decommissioning if interrupted by transient issues such as
deployment restarts or network failures.
For manually cancelled or failed decommissioning attempts, MinIO
resumes only after you manually re-initiate the decommissioning operation.
The pool remains in the decommissioning state *regardless* of the interruption.
A pool can *never* return to active status after decommissioning begins.
Decommissioning is Non-Disruptive
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Removing a decommissioned server pool requires restarting *all* MinIO
nodes in the deployment at around the same time.
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
.. _minio-decommissioning-server-pool:
Decommission a Server Pool
--------------------------
1) Review the MinIO Deployment Topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mc-cmd:`mc admin decommission` command returns a list of all
pools in the MinIO deployment:
.. code-block:: shell
:class: copyable
mc admin decommission status myminio
The command returns output similar to the following:
.. code-block:: shell
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Active │
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 60 TiB (used) / 100 TiB (total) │ Active │
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 100 TiB (total) │ Active │
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴────────┘
The example deployment above has three pools. Each pool has four servers
with four drives each.
Identify the target pool for decommissioning and review the current capacity.
The remaining pools in the deployment *must* have sufficient total
capacity to migrate all objects stored in the decommissioned pool.
In the example above, the deployment has 210TiB total storage with 110TiB used.
The first pool (``minio-{01...04}``) is the decommissioning target, as it was
provisioned when the MinIO deployment was created and is completely full. The
remaining newer pools can absorb all objects stored on the first pool without
significantly impacting total available storage.
2) Start the Decommissioning Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. admonition:: Decommissioning is Permanent
:class: warning
Once MinIO begins decommissioning a pool, it marks that pool as *permanently*
inactive ("draining"). Cancelling or otherwise interrupting the
decommissioning procedure does **not** restore the pool to an active
state.
Review and validate that you are decommissioning the correct pool
*before* running the following command.
Use the :mc-cmd:`mc admin decommission start` command to begin decommissioning
the target pool. Specify the :ref:`alias <alias>` of the deployment and the
full description of the pool to decommission, including all hosts, disks, and file paths.
.. code-block:: shell
:class: copyable
mc admin decommission start myminio/ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio
The example command begins decommissioning the matching server pool on the
``myminio`` deployment.
During the decommissioning process, MinIO continues routing read operations
(``GET``, ``LIST``, ``HEAD``) to the pool for those objects not
yet migrated. MinIO routes all new write operations (``PUT``) to the
remaining pools in the deployment.
Load balancers, reverse proxies, or other network control components which
manage connections to the deployment do not need to modify their configurations
at this time.
3) Monitor the Decommissioning Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the :mc-cmd:`mc admin decommission status` command to monitor the
decommissioning process.
.. code-block:: shell
:class: copyable
mc admin decommission status myminio
The command returns output similar to the following:
.. code-block:: shell
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Draining │
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 60 TiB (used) / 100 TiB (total) │ Active │
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 100 TiB (total) │ Active │
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴──────────┘
You can retrieve more detailed information by specifying the description of
the server pool to the command:
.. code-block:: shell
:class: copyable
mc admin decommission status myminio https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio
The command returns output similar to the following:
.. code-block:: shell
Decommissioning rate at 100MiB/sec [1TiB/10TiB]
Started: 30 minutes ago
:mc-cmd:`mc admin decommission status` marks the :guilabel:`Status` as
:guilabel:`Complete` once decommissioning is completed. You can move on to
the next step once decommissioning is completed.
If :guilabel:`Status` reads as failed, you can re-run the
:mc-cmd:`mc admin decommission start` command to resume the process.
For persistent failures, use :mc-cmd:`mc admin console` or review
the ``systemd`` logs (e.g. ``journalctl -u minio``) to identify more specific
errors.
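For example, assuming the MinIO service runs under the ``systemd`` unit described later in this procedure, the following command shows the most recent log entries:
.. code-block:: shell
journalctl -u minio.service --no-pager -n 100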
4) Remove the Decommissioned Pool from the Deployment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once decommissioning completes, you can safely remove the pool from the
deployment configuration. Modify the startup command for each remaining MinIO
server in the deployment and remove the decommissioned pool.
The ``.deb`` or ``.rpm`` packages install a
`systemd <https://www.freedesktop.org/wiki/Software/systemd/>`__ service file to
``/etc/systemd/system/minio.service``. For binary installations, this
procedure assumes the file was created manually as per the
:ref:`deploy-minio-distributed` procedure.
The ``minio.service`` file uses an environment file located at
``/etc/default/minio`` for sourcing configuration settings, including the
startup parameters. Specifically, the ``MINIO_VOLUMES`` variable defines the
hosts and volumes MinIO uses at startup:
.. code-block:: shell
:class: copyable
cat /etc/default/minio | grep "MINIO_VOLUMES"
The command returns output similar to the following:
.. code-block:: shell
MINIO_VOLUMES="https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio"
Edit the environment file and remove the decommissioned pool from the
``MINIO_VOLUMES`` value.
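Continuing the example above, removing the first (decommissioned) pool leaves a value similar to the following:
.. code-block:: shell
MINIO_VOLUMES="https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio"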
5) Update Network Control Plane
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Update any load balancers, reverse proxies, or other network control planes
to remove the decommissioned server pool from the connection configuration for
the MinIO deployment.
Specific instructions for configuring network control plane components are
out of scope for this procedure.
6) Restart the MinIO Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following commands on each node in the deployment **simultaneously**
to restart the MinIO service:
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-restart-service-desc
:end-before: end-install-minio-restart-service-desc
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
Once the deployment is online, use :mc-cmd:`mc admin info` to confirm the
uptime of all remaining servers in the deployment.
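For example, the following command returns summary information, including uptime, for each MinIO server in the ``myminio`` deployment:
.. code-block:: shell
:class: copyable
mc admin info myminio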


@ -0,0 +1,14 @@
.. _minio-k8s-delete-minio-tenant:
=====================
Delete a MinIO Tenant
=====================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO


@ -0,0 +1,572 @@
.. _deploy-minio-distributed:
====================================
Deploy MinIO: Multi-Node Multi-Drive
====================================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Overview
--------
A distributed MinIO deployment consists of 4 or more drives/volumes managed by
one or more :mc:`minio server` processes, which pool their
compute and storage resources into a single aggregated object storage resource.
Each MinIO server has a complete picture of the distributed topology, such that
an application can connect to any node in the deployment and perform S3
operations.
Distributed deployments implicitly enable :ref:`erasure coding
<minio-erasure-coding>`, MinIO's data redundancy and availability feature that
allows deployments to automatically reconstruct objects on-the-fly despite the
loss of multiple drives or nodes in the cluster. Erasure coding provides
object-level healing with less overhead than adjacent technologies such as RAID
or replication.
Depending on the configured :ref:`erasure code parity <minio-ec-parity>`, a
distributed deployment with ``m`` servers and ``n`` disks per server can
continue serving read and write operations with only ``m/2`` servers or
``m*n/2`` drives online and accessible.
Distributed deployments also support the following features:
- :ref:`Server-Side Object Replication <minio-bucket-replication-serverside>`
- :ref:`Write-Once Read-Many Locking <minio-bucket-locking>`
- :ref:`Object Versioning <minio-bucket-versioning>`
.. _deploy-minio-distributed-prereqs:
Prerequisites
-------------
Networking and Firewalls
~~~~~~~~~~~~~~~~~~~~~~~~
Each node should have full bidirectional network access to every other node in
the deployment. For containerized or orchestrated infrastructures, this may
require specific configuration of networking and routing components such as
ingress or load balancers. Certain operating systems may also require setting
firewall rules. For example, the following command explicitly opens the default
MinIO server API port ``9000`` on servers using ``firewalld``:
.. code-block:: shell
:class: copyable
firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --reload
All MinIO servers in the deployment *must* use the same listen port.
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
you must *also* grant access to that port to ensure connectivity from external
clients.
MinIO **strongly recommends** using a load balancer to manage connectivity to the
cluster. The Load Balancer should use a "Least Connections" algorithm for
routing requests to the MinIO deployment, since any MinIO node in the deployment
can receive, route, or process client requests.
The following load balancers are known to work well with MinIO:
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
Configuring firewalls or load balancers to support MinIO is out of scope for
this procedure.
Sequential Hostnames
~~~~~~~~~~~~~~~~~~~~
MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
series of MinIO hosts when creating a server pool. MinIO therefore *requires*
using sequentially-numbered hostnames to represent each
:mc:`minio server` process in the deployment.
Create the necessary DNS hostname mappings *prior* to starting this procedure.
For example, the following hostnames would support a 4-node distributed
deployment:
- ``minio1.example.com``
- ``minio2.example.com``
- ``minio3.example.com``
- ``minio4.example.com``
You can specify the entire range of hostnames using the expansion notation
``minio{1...4}.example.com``.
Configuring DNS to support MinIO is out of scope for this procedure.
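Where DNS is not yet available (for example, in a local testing environment), a minimal sketch is to add hostname mappings to ``/etc/hosts`` on each host. The IP addresses below are placeholders:
.. code-block:: shell
192.0.2.11 minio1.example.com
192.0.2.12 minio2.example.com
192.0.2.13 minio3.example.com
192.0.2.14 minio4.example.com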
.. _deploy-minio-distributed-prereqs-storage:
Local JBOD Storage with Sequential Mounts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. |deployment| replace:: deployment
.. include:: /includes/common-installation.rst
:start-after: start-local-jbod-desc
:end-before: end-local-jbod-desc
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems.
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
Considerations
--------------
Homogeneous Node Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO strongly recommends selecting substantially similar hardware
configurations for all nodes in the deployment. Ensure the hardware (CPU,
memory, motherboard, storage adapters) and software (operating system, kernel
settings, system services) is consistent across all nodes.
Deployments may exhibit unpredictable performance if nodes have heterogeneous
hardware or software configurations. Workloads that benefit from storing aged
data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>`
data to that tier.
Erasure Coding Parity
~~~~~~~~~~~~~~~~~~~~~
MinIO :ref:`erasure coding <minio-erasure-coding>` is a data redundancy and
availability feature that allows MinIO deployments to automatically reconstruct
objects on-the-fly despite the loss of multiple drives or nodes in the cluster.
Erasure Coding provides object-level healing with less overhead than adjacent
technologies such as RAID or replication. Distributed deployments implicitly
enable and rely on erasure coding for core functionality.
Erasure Coding splits objects into data and parity blocks, where parity blocks
support reconstruction of missing or corrupted data blocks. The number of parity
blocks in a deployment controls the deployment's relative data redundancy.
Higher levels of parity allow for higher tolerance of drive loss at the cost of
total available storage.
MinIO defaults to ``EC:4``, or 4 parity blocks per
:ref:`erasure set <minio-ec-erasure-set>`. You can set a custom parity
level by setting the appropriate
:ref:`MinIO Storage Class environment variable
<minio-server-envvar-storage-class>`. Consider using the MinIO
`Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for
guidance in selecting the appropriate erasure code parity level for your
cluster.
Capacity-Based Planning
~~~~~~~~~~~~~~~~~~~~~~~
MinIO generally recommends planning capacity such that
:ref:`server pool expansion <expand-minio-distributed>` is only required after
2+ years of deployment uptime.
For example, consider an application suite that is estimated to produce 10TB of
data per year. The MinIO deployment should provide *at minimum*:
``10TB + 10TB + 10TB = 30TB``
MinIO recommends adding buffer storage to account for potential growth in
stored data (e.g. 40TB of total usable storage). As a rule-of-thumb, more
capacity initially is preferred over frequent just-in-time expansion to meet
capacity requirements.
Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some
storage for parity, the total **raw** storage must exceed the planned **usable**
capacity. Consider using the MinIO `Erasure Code Calculator
<https://min.io/product/erasure-code-calculator>`__ for guidance in planning
capacity around specific erasure code settings.
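As a worked example under assumed settings, a single :ref:`erasure set <minio-ec-erasure-set>` of 16 drives with the default ``EC:4`` parity uses 4 of every 16 blocks for parity, leaving roughly 75% of raw capacity as usable storage. Providing 40TB of usable storage in that configuration therefore requires approximately ``40TB / 0.75 ≈ 54TB`` of raw storage.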
Recommended Operating Systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
This tutorial assumes all hosts running MinIO use a
:ref:`recommended Linux operating system <minio-installation-platform-support>`
such as RHEL8+ or Ubuntu 18.04+.
.. cond:: macos
This tutorial assumes all hosts running MinIO use a non-EOL macOS version (10.14+).
.. cond:: windows
This tutorial assumes all hosts running MinIO use a non-EOL Windows distribution.
Support for running distributed MinIO deployments on Windows is *experimental*.
Pre-Existing Data
~~~~~~~~~~~~~~~~~
When starting a new MinIO server in a distributed environment, the storage devices must not have existing data.
Once you start the MinIO server, all interactions with the data must be done through the S3 API.
Use the :ref:`MinIO Client <minio-client>`, the :ref:`MinIO Console <minio-console>`, or one of the MinIO :ref:`Software Development Kits <minio-drivers>` to work with the buckets and objects.
.. warning::
Modifying files on the backend drives can result in data corruption or data loss.
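For example, a minimal sketch of working with the deployment through the S3 API using ``mc``, assuming an alias named ``myminio`` and a placeholder bucket and object name:
.. code-block:: shell
mc mb myminio/mybucket
mc cp ./object.txt myminio/mybucket/
mc ls myminio/mybucket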
.. _deploy-minio-distributed-baremetal:
Deploy Distributed MinIO
------------------------
The following procedure creates a new distributed MinIO deployment consisting
of a single :ref:`Server Pool <minio-intro-server-pool>`.
All commands provided below use example values. Replace these values with
those appropriate for your deployment.
Review the :ref:`deploy-minio-distributed-prereqs` before starting this
procedure.
1) Install the MinIO Binary on Each Node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
2) Create the ``systemd`` Service File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-systemd-desc
:end-before: end-install-minio-systemd-desc
3) Create the Service Environment File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create an environment file at ``/etc/default/minio``. The MinIO
service uses this file as the source of all
:ref:`environment variables <minio-server-environment-variables>` used by
MinIO *and* the ``minio.service`` file.
The following example assumes that:
- The deployment has a single server pool consisting of four MinIO server hosts
with sequential hostnames.
.. code-block:: shell
minio1.example.com minio3.example.com
minio2.example.com minio4.example.com
- All hosts have four locally-attached disks with sequential mount-points:
.. code-block:: shell
/mnt/disk1/minio /mnt/disk3/minio
/mnt/disk2/minio /mnt/disk4/minio
- The deployment has a load balancer running at ``https://minio.example.net``
that manages connections across all four MinIO hosts.
Modify the example to reflect your deployment topology:
.. code-block:: shell
:class: copyable
# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example covers four MinIO hosts
# with 4 drives each at the specified hostname and drive locations.
# The command includes the port that each MinIO server listens on
# (default 9000)
MINIO_VOLUMES="https://minio{1...4}.example.net:9000/mnt/disk{1...4}/minio"
# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
MINIO_OPTS="--console-address :9001"
# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organization's requirements for the superadmin user name.
MINIO_ROOT_USER=minioadmin
# Set the root password
#
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
# not have a load balancer, set this value to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
MINIO_SERVER_URL="https://minio.example.net:9000"
You may specify other :ref:`environment variables
<minio-server-environment-variables>` or server commandline options as required
by your deployment. All MinIO nodes in the deployment should include the same
environment variables with the same values for each variable.
4) Add TLS/SSL Certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common-installation.rst
:start-after: start-install-minio-tls-desc
:end-before: end-install-minio-tls-desc
5) Run the MinIO Server Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following commands on each node in the deployment to start the
MinIO service:
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-start-service-desc
:end-before: end-install-minio-start-service-desc
6) Open the MinIO Console
~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common-installation.rst
:start-after: start-install-minio-console-desc
:end-before: end-install-minio-console-desc
7) Next Steps
~~~~~~~~~~~~~
- Create an :ref:`alias <minio-mc-alias>` for accessing the deployment using
:mc:`mc`, as shown in the example following this list.
- :ref:`Create users and policies to control access to the deployment
<minio-authentication-and-identity-management>`.
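For example, the following command creates an alias named ``myminio`` using the load balancer hostname and the example root credentials from this procedure:
.. code-block:: shell
:class: copyable
mc alias set myminio https://minio.example.net minioadmin minio-secret-key-CHANGE-ME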
.. _deploy-minio-distributed-recommendations:
Deployment Recommendations
--------------------------
Minimum Nodes per Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For all production deployments, MinIO recommends a *minimum* of 4 nodes per
:ref:`server pool <minio-intro-server-pool>` with 4 drives per server.
With the default :ref:`erasure code parity <minio-erasure-coding>` setting of
``EC:4``, this topology can continue serving read and write operations
despite the loss of up to 4 drives *or* one node.
The minimum recommendation reflects MinIO's experience with assisting enterprise
customers in deploying on a variety of IT infrastructures while maintaining the
desired SLA/SLO. While MinIO may run on less than the minimum recommended
topology, any potential cost savings come at the risk of decreased reliability.
Server Hardware
~~~~~~~~~~~~~~~
MinIO is hardware agnostic and runs on a variety of hardware architectures
ranging from ARM-based embedded systems to high-end x64 and POWER9 servers.
The following recommendations match MinIO's
`Reference Hardware <https://min.io/product/reference-hardware>`__ for
large-scale data storage:
.. list-table::
:stub-columns: 1
:widths: 20 80
:width: 100%
* - Processor
- Dual Intel Xeon Scalable Gold CPUs with 8 cores per socket.
* - Memory
- 128GB of Memory per pod
* - Network
- Minimum of 25GbE NIC and supporting network infrastructure between nodes.
MinIO can make maximum use of drive throughput, which can fully saturate
network links between MinIO nodes or clients. Large clusters may require
100GbE network infrastructure to fully utilize MinIO's per-node
performance potential.
* - Drives
- SATA/SAS NVMe/SSD with a minimum of 8 drives per server.
Drives should be :abbr:`JBOD (Just a Bunch of Disks)` arrays with
no RAID or similar technologies. MinIO recommends XFS formatting for
best performance.
Use the same type of disk (NVMe, SSD, or HDD) with the same capacity
across all nodes in the deployment. MinIO does not distinguish drive
types when using the underlying storage and does not benefit from mixed
storage types. Additionally, MinIO limits the size used per disk to the
smallest drive in the deployment. For example, if the deployment has 15
10TB disks and 1 1TB disk, MinIO limits the per-disk capacity to 1TB.
Networking
~~~~~~~~~~
MinIO recommends high speed networking to support the maximum possible
throughput of the attached storage (aggregated drives, storage controllers,
and PCIe busses). The following table provides general guidelines for the
maximum storage throughput supported by a given NIC:
.. list-table::
:header-rows: 1
:width: 100%
:widths: 40 60
* - NIC bandwidth (Gbps)
- Estimated Aggregated Storage Throughput (GBps)
* - 10GbE
- 1GBps
* - 25GbE
- 2.5GBps
* - 50GbE
- 5GBps
* - 100GbE
- 10GBps
CPU Allocation
~~~~~~~~~~~~~~
MinIO can perform well with consumer-grade processors. MinIO can take advantage
of CPUs which support AVX-512 SIMD instructions for increased performance of
certain operations.
MinIO benefits from allocating CPU based on the expected per-host network
throughput. The following table provides general guidelines for allocating CPU
for use by MinIO based on the total network bandwidth supported by the host:
.. list-table::
:header-rows: 1
:width: 100%
:widths: 40 60
* - Host NIC Bandwidth
- Recommended Pod vCPU
* - 10GbE or less
- 8 vCPU per pod.
* - 25GbE
- 16 vCPU per pod.
* - 50GbE
- 32 vCPU per pod.
* - 100GbE
- 64 vCPU per pod.
Memory Allocation
~~~~~~~~~~~~~~~~~
MinIO benefits from allocating memory based on the total storage of each host.
The following table provides general guidelines for allocating memory for use
by MinIO server processes based on the total amount of local storage on the
host:
.. list-table::
:header-rows: 1
:width: 100%
:widths: 40 60
* - Total Host Storage
- Recommended Host Memory
* - Up to 1 Tebibyte (TiB)
- 8GiB
* - Up to 10 Tebibytes (TiB)
- 16GiB
* - Up to 100 Tebibytes (TiB)
- 32GiB
* - Up to 1 Pebibyte (PiB)
- 64GiB
* - More than 1 Pebibyte (PiB)
- 128GiB
.. _minio-requests-per-node:
Requests Per Node
~~~~~~~~~~~~~~~~~
You can calculate the maximum number of concurrent requests per host with this formula:
:math:`totalRam / ramPerRequest`
To calculate the amount of RAM used for each request, use this formula:
:math:`((2MiB + 128KiB) * driveCount) + (2 * 10MiB) + (2 * 1MiB)`
10MiB is the default erasure block size for v1.
1MiB is the default erasure block size for v2.
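For example, for a host with 8 drives, each request uses approximately :math:`((2MiB + 128KiB) * 8) + (2 * 10MiB) + (2 * 1MiB) = 39MiB` of RAM, so 32GiB of free RAM supports roughly :math:`32768 / 39 \approx 840` concurrent requests, matching the table below.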
The following table lists the maximum concurrent requests on a node based on the number of host drives and the *free* system RAM:
.. list-table::
:header-rows: 1
:width: 100%
* - Number of Drives
- 32 GiB of RAM
- 64 GiB of RAM
- 128 GiB of RAM
- 256 GiB of RAM
- 512 GiB of RAM
* - 4 Drives
- 1,074
- 2,149
- 4,297
- 8,595
- 17,190
* - 8 Drives
- 840
- 1,680
- 3,361
- 6,722
- 13,443
* - 16 Drives
- 585
- 1,170
- 2,341
- 4,681
- 9,362


@ -0,0 +1,370 @@
=====================================
Deploy MinIO: Single-Node Multi-Drive
=====================================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
The procedures on this page cover deploying MinIO in :guilabel:`Standalone Mode` with multiple local volumes or folders.
This deployment supports and enables :ref:`erasure coding <minio-erasure-coding>` and its dependent features.
For extended development or production environments, *or* to access :ref:`advanced MinIO functionality <minio-installation-comparison>` deploy MinIO in :guilabel:`Distributed Mode`.
See :ref:`deploy-minio-distributed` for more information.
Prerequisites
-------------
.. _deploy-minio-standalone-multidrive:
Local JBOD Storage with Sequential Mounts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. |deployment| replace:: deployment
.. include:: /includes/common-installation.rst
:start-after: start-local-jbod-single-node-desc
:end-before: end-local-jbod-single-node-desc
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems.
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
Deploy Standalone Multi-Drive MinIO
-----------------------------------
The following procedure deploys MinIO in :guilabel:`Standalone Mode` consisting
of a single MinIO server and multiple drives or storage volumes. Standalone
deployments are best suited for evaluation and initial development environments.
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems (``xfs``, ``ext4``, etc.).
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
1) Download the MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
2) Download and Run MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-run-minio-binary-desc
:end-before: end-run-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-run-minio-binary-desc
:end-before: end-run-minio-binary-desc
3) Add TLS Certificates
~~~~~~~~~~~~~~~~~~~~~~~
MinIO supports enabling :ref:`Transport Layer Security (TLS) <minio-TLS>` 1.2+
automatically upon detecting an x.509 private key (``private.key``) and public
certificate (``public.crt``) in the MinIO ``certs`` directory:
.. cond:: linux
.. code-block:: shell
${HOME}/.minio/certs
.. cond:: macos
.. code-block:: shell
${HOME}/.minio/certs
.. cond:: windows
.. code-block:: shell
%USERPROFILE%\.minio\certs
You can override the certificate directory using the
:mc-cmd:`minio server --certs-dir` commandline argument.
4) Run the MinIO Server with Non-Default Credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command to start the :mc:`minio server` with non-default
credentials. The table following this command breaks down each portion of the
command:
.. code-block:: shell
:class: copyable
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
#export MINIO_SERVER_URL=https://minio.example.net
minio server /mnt/disk-{1...4} --console-address ":9090"
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ROOT_USER`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_ROOT_PASSWORD`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SERVER_URL`
- The URL hostname the MinIO Console uses for connecting to the MinIO
server. This variable is *required* if specifying TLS certificates
which **do not** contain the IP address of the MinIO Server host
as a :rfc:`Subject Alternative Name <5280#section-4.2.1.6>`.
Specify a hostname covered by one of the TLS certificate SAN entries.
You may specify other :ref:`environment variables
<minio-server-environment-variables>` as required by your deployment.
5) Open the MinIO Console
~~~~~~~~~~~~~~~~~~~~~~~~~
Open your browser to the DNS name or IP address corresponding to the
container and the :ref:`MinIO Console <minio-console>` port. For example,
``https://127.0.0.1:9090``.
Log in with the :guilabel:`MINIO_ROOT_USER` and :guilabel:`MINIO_ROOT_PASSWORD`
from the previous step.
.. image:: /images/minio-console/minio-console.png
:width: 600px
:alt: MinIO Console Dashboard displaying Monitoring Data
:align: center
You can use the MinIO Console for general administration tasks like
Identity and Access Management, Metrics and Log Monitoring, or
Server Configuration. Each MinIO server includes its own embedded MinIO
Console.
Applications should use ``https://HOST-ADDRESS:9000`` to perform S3
operations against the MinIO server.
.. _deploy-minio-standalone-multidrive-container:
Deploy Standalone Multi-Drive MinIO in a Container
--------------------------------------------------
The following procedure deploys a single MinIO container with multiple drives.
The procedure uses `Podman <https://podman.io/>`__ for running the MinIO
container in rootfull mode. Configuring for rootless mode is out of scope for
this procedure.
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems (``xfs``, ``ext4``, etc.).
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
1) Create a Configuration File to store Environment Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO reads configuration values from environment variables. MinIO supports
reading these environment variables from ``/run/secrets/config.env``. Save
the ``config.env`` file as a :podman-docs:`Podman secret <secret.html>` and
specify it as part of running the container.
Create a file ``config.env`` using your preferred text editor and enter the
following environment variables:
.. code-block:: shell
:class: copyable
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
#export MINIO_SERVER_URL=https://minio.example.net
Create the Podman secret using the ``config.env`` file:
.. code-block:: shell
:class: copyable
sudo podman secret create config.env config.env
The following table details each environment variable set in ``config.env``:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ROOT_USER`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_ROOT_PASSWORD`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SERVER_URL`
- The URL hostname the MinIO Console uses for connecting to the MinIO
server. This variable is *required* if specifying TLS certificates
which **do not** contain the IP address of the MinIO Server host
as a :rfc:`Subject Alternative Name <5280#section-4.2.1.6>`.
Specify a hostname covered by one of the TLS certificate SAN entries.
You may specify other :ref:`environment variables
<minio-server-environment-variables>` as required by your deployment.
2) Add TLS Certificates
~~~~~~~~~~~~~~~~~~~~~~~
MinIO supports enabling :ref:`Transport Layer Security (TLS) <minio-TLS>` 1.2+
automatically upon detecting an x.509 private key (``private.key``) and public
certificate (``public.crt``) in the MinIO ``certs`` directory.
Create a Podman secret pointing to the x.509
``private.key`` and ``public.crt`` to use for the container.
.. code-block:: shell
:class: copyable
sudo podman secret create private.key /path/to/private.key
sudo podman secret create public.crt /path/to/public.crt
You can optionally skip this step to deploy without TLS enabled. MinIO
strongly recommends *against* non-TLS deployments outside of early development.
3) Run the MinIO Container
~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command to start the MinIO server in a container:
.. code-block:: shell
:class: copyable
sudo podman run -p 9000:9000 -p 9090:9090 \
-v /mnt/disk-1:/mnt/disk-1 \
-v /mnt/disk-2:/mnt/disk-2 \
-v /mnt/disk-3:/mnt/disk-3 \
-v /mnt/disk-4:/mnt/disk-4 \
--secret private.key \
--secret public.crt \
--secret config.env \
minio/minio server /mnt/disk-{1...4} \
--console-address ":9090" \
--certs-dir "/run/secrets/"
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - ``-p 9000:9000, -p 9090:9090``
- Exposes the container internal ports ``9000`` and ``9090`` through
the node ports ``9000`` and ``9090`` respectively.
Port ``9000`` is the default MinIO server listen port.
Port ``9090`` is the :ref:`MinIO Console <minio-console>` listen port
specified by the ``--console-address`` argument.
* - ``-v /mnt/disk-n:/mnt/disk-n``
- Mounts a local volume to the container at the specified path.
The ``/mnt/disk-{1...4}`` uses MinIO expansion notation to denote a sequential series of drives between 1 and 4 inclusive.
* - ``--secret ...``
- Mounts a secret to the container. The specified secrets correspond to
the following:
- The x.509 private and public key the MinIO server process uses for
enabling TLS.
- The ``config.env`` file from which MinIO looks for configuration
environment variables.
* - ``/mnt/disk-{1...4}``
- The paths to the container volumes in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
* - ``--console-address ":9090"``
- The static port on which the embedded MinIO Console listens for incoming
connections.
Omit to allow MinIO to select a dynamic port for the MinIO Console.
With dynamic port selection, browsers opening the root node hostname
``https://minio1.example.com:9000`` are automatically redirected to the
Console.
* - ``--certs-dir /run/secrets/``
- Directs the MinIO server to use the ``/run/secrets/`` folder for
retrieving x.509 certificates to use for enabling TLS.
4) Open the MinIO Console
~~~~~~~~~~~~~~~~~~~~~~~~~
Open your browser to the DNS name or IP address corresponding to the
container and the :ref:`MinIO Console <minio-console>` port. For example,
``https://127.0.0.1:9090``.
Log in with the :guilabel:`MINIO_ROOT_USER` and :guilabel:`MINIO_ROOT_PASSWORD`
from the previous step.
.. image:: /images/minio-console/minio-console.png
:width: 600px
:alt: MinIO Console Dashboard displaying Monitoring Data
:align: center
You can use the MinIO Console for general administration tasks like
Identity and Access Management, Metrics and Log Monitoring, or
Server Configuration. Each MinIO server includes its own embedded MinIO
Console.
Applications should use ``https://HOST-ADDRESS:9000`` to perform S3
operations against the MinIO server.


@ -0,0 +1,418 @@
======================================
Deploy MinIO: Single-Node Single-Drive
======================================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
The procedures on this page cover deploying MinIO in a Single-Node Single-Drive (SNSD) configuration for early development and evaluation.
This mode was previously called :guilabel:`Standalone Mode` or 'filesystem' mode.
Starting with :minio-release:`RELEASE.2022-06-02T02-11-04Z`, MinIO implements a zero-parity erasure coded backend for single-node single-drive deployments.
This feature allows access to :ref:`erasure coding dependent features <minio-erasure-coding>` without the requirement of multiple drives.
MinIO only starts in |SNSD| mode if the storage volume or path is empty *or* only contains files generated by a previous |SNSD| deployment.
See :ref:`Pre-Existing Data <minio-snsd-pre-existing-data>` for more complete documentation on MinIO startup behavior in |SNSD| mode.
For extended development or production environments, deploy MinIO in :guilabel:`Distributed Mode`. See :ref:`deploy-minio-distributed` for more information.
.. _minio-snsd-pre-existing-data:
Pre-Existing Data
-----------------
MinIO startup behavior depends on the contents of the specified storage volume or path.
The server checks for both MinIO-internal backend data and the structure of existing folders and files.
The following table lists the possible storage volume states and MinIO behavior:
.. list-table::
:header-rows: 1
:widths: 40 60
* - Storage Volume State
- Behavior
* - Empty with **no** files, folders, or MinIO backend data
- MinIO starts in |SNSD| mode and creates the zero-parity backend
* - Existing |SNSD| zero-parity objects and MinIO backend data
- MinIO resumes in |SNSD| mode
* - Existing filesystem folders, files, and MinIO backend data
- MinIO resumes in the legacy filesystem ("Standalone") mode with no erasure-coding features
* - Existing filesystem folders, files, but **no** MinIO backend data
- MinIO returns an error and does not start
.. _deploy-minio-standalone:
Deploy Single-Node Single-Drive MinIO
-------------------------------------
The following procedure deploys MinIO consisting of a single MinIO server and a single drive or storage volume.
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems (``xfs``, ``ext4``, etc.).
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
1) Download the MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
2) Download and Run MinIO Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-run-minio-binary-desc
:end-before: end-run-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-run-minio-binary-desc
:end-before: end-run-minio-binary-desc
3) Add TLS Certificates
~~~~~~~~~~~~~~~~~~~~~~~
MinIO supports enabling :ref:`Transport Layer Security (TLS) <minio-TLS>` 1.2+
automatically upon detecting an x.509 private key (``private.key``) and public
certificate (``public.crt``) in the MinIO ``certs`` directory:
.. cond:: linux
.. code-block:: shell
${HOME}/.minio/certs
.. cond:: macos
.. code-block:: shell
${HOME}/.minio/certs
.. cond:: windows
.. code-block:: shell
%USERPROFILE%\.minio\certs
You can override the certificate directory using the
:mc-cmd:`minio server --certs-dir` commandline argument.
4) Run the MinIO Server with Non-Default Credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command to start the :mc:`minio server` with non-default
credentials. The table following this command breaks down each portion of the
command:
.. cond:: linux
.. code-block:: shell
:class: copyable
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
#export MINIO_SERVER_URL=https://minio.example.net
minio server /data --console-address ":9090"
.. cond:: macos
.. code-block:: shell
:class: copyable
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
#export MINIO_SERVER_URL=https://minio.example.net
minio server /data --console-address ":9090"
.. cond:: windows
.. code-block:: powershell
:class: copyable
PS C:\minio> $Env:MINIO_ROOT_USER = 'minio-admin'
PS C:\minio> $Env:MINIO_ROOT_PASSWORD = 'minio-secret-key-CHANGE-ME'
PS C:\minio> # $Env:MINIO_SERVER_URL = 'https://minio.example.net'
PS C:\minio> # Start the server; the data path below is a placeholder
PS C:\minio> .\minio.exe server C:\data --console-address ":9090"
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ROOT_USER`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_ROOT_PASSWORD`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SERVER_URL`
- The URL hostname the MinIO Console uses for connecting to the MinIO
server. This variable is *required* if specifying TLS certificates
which **do not** contain the IP address of the MinIO Server host
as a :rfc:`Subject Alternative Name <5280#section-4.2.1.6>`.
Specify a hostname covered by one of the TLS certificate SAN entries.
* - ``/data``
- The path to the storage volume or folder on the host machine.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
MinIO writes objects to the specified directory as is and without
:ref:`minio-erasure-coding`. Any other application accessing that
directory can read and modify stored objects.
* - ``--console-address ":9090"``
- The static port on which the embedded MinIO Console listens for incoming
connections.
Omit to allow MinIO to select a dynamic port for the MinIO Console.
With dynamic port selection, browsers opening the root node hostname
``https://minio1.example.com:9000`` are automatically redirected to the
Console.
You may specify other :ref:`environment variables
<minio-server-environment-variables>` as required by your deployment.
5) Open the MinIO Console
~~~~~~~~~~~~~~~~~~~~~~~~~
Open your browser to the DNS name or IP address corresponding to the
container and the :ref:`MinIO Console <minio-console>` port. For example,
``https://127.0.0.1:9090``.
Log in with the :guilabel:`MINIO_ROOT_USER` and :guilabel:`MINIO_ROOT_PASSWORD`
from the previous step.
.. image:: /images/minio-console/minio-console.png
:width: 600px
:alt: MinIO Console Dashboard displaying Monitoring Data
:align: center
You can use the MinIO Console for general administration tasks like
Identity and Access Management, Metrics and Log Monitoring, or
Server Configuration. Each MinIO server includes its own embedded MinIO
Console.
Applications should use ``https://HOST-ADDRESS:9000`` to perform S3
operations against the MinIO server.
.. _deploy-minio-standalone-container:
Deploy Containerized Single-Node Single-Drive MinIO
---------------------------------------------------
The following procedure deploys a single MinIO container with a single drive.
The procedure uses `Podman <https://podman.io/>`__ for running the MinIO
container in rootfull mode. Configuring for rootless mode is out of scope for
this procedure.
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems (``xfs``, ``ext4``, etc.).
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
1) Create a Configuration File to store Environment Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO reads configuration values from environment variables. MinIO supports
reading these environment variables from ``/run/secrets/config.env``. Save
the ``config.env`` file as a :podman-docs:`Podman secret <secret.html>` and
specify it as part of running the container.
Create a file ``config.env`` using your preferred text editor and enter the
following environment variables:
.. code-block:: shell
:class: copyable
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
#export MINIO_SERVER_URL=https://minio.example.net
Create the Podman secret using the ``config.env`` file:
.. code-block:: shell
:class: copyable
sudo podman secret create config.env config.env
The following table details each environment variable set in ``config.env``:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ROOT_USER`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_ROOT_PASSWORD`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SERVER_URL`
- The URL hostname the MinIO Console uses for connecting to the MinIO
server. This variable is *required* if specifying TLS certificates
which **do not** contain the IP address of the MinIO Server host
as a :rfc:`Subject Alternative Name <5280#section-4.2.1.6>`.
Specify a hostname covered by one of the TLS certificate SAN entries.
You may specify other :ref:`environment variables
<minio-server-environment-variables>` as required by your deployment.
2) Add TLS Certificates
~~~~~~~~~~~~~~~~~~~~~~~
MinIO supports enabling :ref:`Transport Layer Security (TLS) <minio-TLS>` 1.2+
automatically upon detecting an x.509 private key (``private.key``) and public
certificate (``public.crt``) in the MinIO ``certs`` directory.
Create a Podman secret pointing to the x.509
``private.key`` and ``public.crt`` to use for the container.
.. code-block:: shell
:class: copyable
sudo podman secret create private.key /path/to/private.key
sudo podman secret create public.crt /path/to/public.crt
You can optionally skip this step to deploy without TLS enabled. MinIO
strongly recommends *against* non-TLS deployments outside of early development.
3) Run the MinIO Container
~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command to start the MinIO server in a container:
.. code-block:: shell
:class: copyable
sudo podman run -p 9000:9000 -p 9090:9090 \
-v /data:/data \
--secret private.key \
--secret public.crt \
--secret config.env \
minio/minio server /data \
--console-address ":9090" \
--certs-dir "/run/secrets/"
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - ``-p 9000:9000, -p 9090:9090``
- Exposes the container internal ports ``9000`` and ``9090`` through
the node ports ``9000`` and ``9090`` respectively.
Port ``9000`` is the default MinIO server listen port.
Port ``9090`` is the :ref:`MinIO Console <minio-console>` listen port
specified by the ``--console-address`` argument.
* - ``-v /data:/data``
- Mounts a local volume to the container at the specified path.
* - ``--secret ...``
- Mounts a secret to the container. The specified secrets correspond to
the following:
- The x.509 private and public key the MinIO server process uses for
enabling TLS.
- The ``config.env`` file from which MinIO looks for configuration
environment variables.
* - ``/data``
- The path to the container volume in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
* - ``--console-address ":9090"``
- The static port on which the embedded MinIO Console listens for incoming
connections.
Omit to allow MinIO to select a dynamic port for the MinIO Console.
With dynamic port selection, browsers opening the root node hostname
``https://minio1.example.com:9000`` are automatically redirected to the
Console.
* - ``--certs-dir /run/secrets/``
- Directs the MinIO server to use the ``/run/secrets/`` folder for
retrieving x.509 certificates to use for enabling TLS.
4) Open the MinIO Console
~~~~~~~~~~~~~~~~~~~~~~~~~
Open your browser to the DNS name or IP address corresponding to the
container and the :ref:`MinIO Console <minio-console>` port. For example,
``https://127.0.0.1:9090``.
Log in with the :guilabel:`MINIO_ROOT_USER` and :guilabel:`MINIO_ROOT_PASSWORD`
from the previous step.
.. image:: /images/minio-console/minio-console.png
:width: 600px
:alt: MinIO Console Dashboard displaying Monitoring Data
:align: center
You can use the MinIO Console for general administration tasks like
Identity and Access Management, Metrics and Log Monitoring, or
Server Configuration. Each MinIO server includes its own embedded MinIO
Console.
Applications should use ``https://HOST-ADDRESS:9000`` to perform S3
operations against the MinIO server.


@ -0,0 +1,18 @@
.. _minio-k8s-deploy-minio-tenant:
.. The following label handles links from content to distributed MinIO in K8s context
.. _deploy-minio-distributed:
=====================
Deploy a MinIO Tenant
=====================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO

View File

@ -0,0 +1,412 @@
.. _expand-minio-distributed:
=====================================
Expand a Distributed MinIO Deployment
=====================================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
A distributed MinIO deployment consists of 4 or more drives/volumes managed by
one or more :mc:`minio server` processes, which pool the
compute and storage resources into a single aggregated object storage resource.
Each MinIO server has a complete picture of the distributed topology, such that
an application can connect to any node in the deployment and perform S3
operations.
MinIO supports expanding an existing distributed deployment by adding a new
:ref:`Server Pool <minio-intro-server-pool>`. Each Pool expands the total
available storage capacity of the cluster while maintaining the overall
:ref:`availability <minio-erasure-coding>` of the cluster. Each Pool is its
own failure domain, where the loss of one or more disks or nodes in that pool
does not affect the availability of other pools in the deployment.
The procedure on this page expands an existing
:ref:`distributed <deploy-minio-distributed>` MinIO deployment with an
additional server pool.
.. _expand-minio-distributed-prereqs:
Prerequisites
-------------
Networking and Firewalls
~~~~~~~~~~~~~~~~~~~~~~~~
Each node should have full bidirectional network access to every other node in
the deployment. For containerized or orchestrated infrastructures, this may
require specific configuration of networking and routing components such as
ingress or load balancers. Certain operating systems may also require setting
firewall rules. For example, the following command explicitly opens the default
MinIO server API port ``9000`` on servers using ``firewalld``:
.. code-block:: shell
:class: copyable
firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --reload
All MinIO servers in the deployment *must* use the same listen port.
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
you must *also* grant access to that port to ensure connectivity from external
clients.
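For example, if the Console uses port ``9001``, the following commands open that port on hosts using ``firewalld``, mirroring the API port rule above (adjust the port and zone for your environment):

.. code-block:: shell
   :class: copyable

   firewall-cmd --permanent --zone=public --add-port=9001/tcp
   firewall-cmd --reload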
MinIO **strongly recommends** using a load balancer to manage connectivity to the
cluster. The Load Balancer should use a "Least Connections" algorithm for
routing requests to the MinIO deployment, since any MinIO node in the deployment
can receive, route, or process client requests.
The following load balancers are known to work well with MinIO:
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
Configuring firewalls or load balancers to support MinIO is out of scope for
this procedure.
Sequential Hostnames
~~~~~~~~~~~~~~~~~~~~
MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
series of MinIO hosts when creating a server pool. MinIO therefore *requires*
using sequentially-numbered hostnames to represent each
:mc:`minio server` process in the pool.
Create the necessary DNS hostname mappings *prior* to starting this procedure.
For example, the following hostnames would support a 4-node distributed
server pool:
- ``minio5.example.com``
- ``minio6.example.com``
- ``minio7.example.com``
- ``minio8.example.com``
You can specify the entire range of hostnames using the expansion notation
``minio{5...8}.example.com``.
Configuring DNS to support MinIO is out of scope for this procedure.
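For initial testing where DNS is not yet available, hosts-file entries on each node can stand in temporarily. The following is a sketch only, using hypothetical addresses:

.. code-block:: shell

   # /etc/hosts entries on each host (hypothetical addresses)
   192.168.10.15   minio5.example.com
   192.168.10.16   minio6.example.com
   192.168.10.17   minio7.example.com
   192.168.10.18   minio8.example.com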
Local JBOD Storage with Sequential Mounts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. |deployment| replace:: server pool
.. include:: /includes/common-installation.rst
:start-after: start-local-jbod-desc
:end-before: end-local-jbod-desc
.. admonition:: Network File System Volumes Break Consistency Guarantees
:class: note
MinIO's strict **read-after-write** and **list-after-write** consistency
model requires local disk filesystems (``xfs``, ``ext4``, etc.).
MinIO cannot provide consistency guarantees if the underlying storage
volumes are NFS or a similar network-attached storage volume.
For deployments that *require* using network-attached storage, use
NFSv4 for best results.
Minimum Drives for Erasure Code Parity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO requires that each pool satisfy the deployment :ref:`erasure code
<minio-erasure-coding>` settings. Specifically, the new pool topology must
support a minimum of ``2 x EC:N`` drives per
:ref:`erasure set <minio-ec-erasure-set>`, where ``EC:N`` is the
:ref:`Standard <minio-ec-storage-class>` parity storage class of the
deployment. This requirement ensures the new server pool can satisfy the
expected :abbr:`SLA (Service Level Agreement)` of the deployment.
You can use the
`MinIO Erasure Code Calculator
<https://min.io/product/erasure-code-calculator?ref=docs>`__ to check the
:guilabel:`Erasure Code Stripe Size (K+M)` of your new pool. If the highest
listed value is at least ``2 x EC:N``, the pool supports the deployment's
erasure parity settings.
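For example, if the deployment uses a Standard parity of ``EC:4``, the new pool must support at least ``2 x 4 = 8`` drives per erasure set.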
Considerations
--------------
Writing Files
~~~~~~~~~~~~~
MinIO does not rebalance objects across the new server pools.
Instead, MinIO distributes new write operations across pools in proportion to free
space, so the pool with the most free storage is the most likely to receive any
given write.
The probability of a write operation going to a particular pool is
:math:`FreeSpaceOnPoolA / FreeSpaceOnAllPools`
Consider a deployment with three pools and a total of 10 TiB of free space distributed as:
- Pool A has 3 TiB of free space
- Pool B has 2 TiB of free space
- Pool C has 5 TiB of free space
MinIO calculates the probability of a write operation to each of the pools as:
- Pool A: 30% chance (:math:`3TiB / 10TiB`)
- Pool B: 20% chance (:math:`2TiB / 10TiB`)
- Pool C: 50% chance (:math:`5TiB / 10TiB`)
In addition to the free space calculation, if a write operation (with parity) would bring a disk's
usage above 99% or a known free inode count below 1000, MinIO does not write to the pool.
Likewise, MinIO does not write to pools undergoing decommissioning.
Homogeneous Node Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO strongly recommends selecting substantially similar hardware
configurations for all nodes in the new server pool. Ensure the hardware (CPU,
memory, motherboard, storage adapters) and software (operating system, kernel
settings, system services) is consistent across all nodes in the pool.
The new pool may exhibit unpredictable performance if nodes have heterogeneous
hardware or software configurations. Workloads that benefit from storing aged
data on lower-cost hardware should instead deploy a dedicated "warm" or "cold"
MinIO deployment and :ref:`transition <minio-lifecycle-management-tiering>`
data to that tier.
The new server pool does **not** need to be substantially similar in hardware
and software configuration to any existing server pool, though using similar
configurations across pools may simplify cluster management and make performance more predictable.
See :ref:`deploy-minio-distributed-recommendations` for more guidance on
selecting hardware for MinIO deployments.
Expansion is Non-Disruptive
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adding a new server pool requires restarting *all* MinIO nodes in the
deployment at around the same time.
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
Capacity-Based Planning
~~~~~~~~~~~~~~~~~~~~~~~
MinIO generally recommends planning capacity such that
:ref:`server pool expansion <expand-minio-distributed>` is only required after
2+ years of deployment uptime.
For example, consider an application suite that is estimated to produce 10TB of
data per year. The current deployment is running low on free storage and
therefore requires expansion to meet the ongoing storage demands of the
application. The new server pool should provide *at minimum*
``10TB + 10TB + 10TB = 30TB`` of usable storage.
MinIO recommends adding buffer storage to account for potential growth in stored
data (e.g. 40TB of total usable storage). The total planned *usable* storage in
the deployment would therefore be ~80TB. As a rule-of-thumb, more capacity
initially is preferred over frequent just-in-time expansion to meet capacity
requirements.
Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some
storage for parity, the total **raw** storage must exceed the planned **usable**
capacity. Consider using the MinIO `Erasure Code Calculator
<https://min.io/product/erasure-code-calculator>`__ for guidance in planning
capacity around specific erasure code settings.
Recommended Operating Systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This tutorial assumes all hosts running MinIO use a
:ref:`recommended Linux operating system <minio-installation-platform-support>`
such as RHEL8+ or Ubuntu 18.04+.
For other operating systems such as Windows or OSX, visit
`https://min.io/download <https://min.io/download?ref=docs>`__ and select the
tab associated to your operating system. Follow the displayed instructions to
install the MinIO server binary on each node. Defer to the OS best practices for
starting MinIO as a service (e.g. not attached to the terminal/shell session).
Support for running MinIO in distributed mode on Windows hosts is
**experimental**. Contact MinIO at hello@min.io if your infrastructure requires
deployment onto Windows hosts.
.. _expand-minio-distributed-baremetal:
Expand a Distributed MinIO Deployment
-------------------------------------
The following procedure adds a :ref:`Server Pool <minio-intro-server-pool>`
to an existing MinIO deployment. Each Pool expands the total available
storage capacity of the cluster while maintaining the overall
:ref:`availability <minio-erasure-coding>` of the cluster.
All commands provided below use example values. Replace these values with
those appropriate for your deployment.
Review the :ref:`expand-minio-distributed-prereqs` before starting this
procedure.
1) Install the MinIO Binary on Each Node in the New Server Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cond:: linux
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
.. cond:: macos
.. include:: /includes/macos/common-installation.rst
:start-after: start-install-minio-binary-desc
:end-before: end-install-minio-binary-desc
2) Add TLS/SSL Certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common-installation.rst
:start-after: start-install-minio-tls-desc
:end-before: end-install-minio-tls-desc
3) Create the ``systemd`` Service File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-systemd-desc
:end-before: end-install-minio-systemd-desc
4) Create the Service Environment File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create an environment file at ``/etc/default/minio``. The MinIO
service uses this file as the source of all
:ref:`environment variables <minio-server-environment-variables>` used by
MinIO *and* the ``minio.service`` file.
The following example assumes that:
- The deployment has a single server pool consisting of four MinIO server hosts
with sequential hostnames.
.. code-block:: shell
minio1.example.com minio3.example.com
minio2.example.com minio4.example.com
Each host has 4 locally attached drives with
sequential mount points:
.. code-block:: shell
/mnt/disk1/minio /mnt/disk3/minio
/mnt/disk2/minio /mnt/disk4/minio
- The new server pool consists of eight new MinIO hosts with sequential
hostnames:
.. code-block:: shell
minio5.example.com minio9.example.com
minio6.example.com minio10.example.com
minio7.example.com minio11.example.com
minio8.example.com minio12.example.com
- Each of the new hosts has eight locally-attached disks with sequential mount points:
.. code-block:: shell
/mnt/disk1/minio /mnt/disk5/minio
/mnt/disk2/minio /mnt/disk6/minio
/mnt/disk3/minio /mnt/disk7/minio
/mnt/disk4/minio /mnt/disk8/minio
- The deployment has a load balancer running at ``https://minio.example.net``
that manages connections across all MinIO hosts. The load balancer should
not be routing requests to the new hosts at this step, but should have
the necessary configuration updates planned.
Modify the example to reflect your deployment topology:
.. code-block:: shell
:class: copyable
# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example starts the MinIO server with two server pools.
#
# The space delimiter indicates a separate server pool
#
# The second set of hostnames and volumes is the newly added pool.
# The pool has sufficient stripe size to meet the existing erasure code
# parity of the deployment (2 x EC:4)
#
# The command includes the port on which the MinIO servers listen for each
# server pool.
MINIO_VOLUMES="https://minio{1...4}.example.net:9000/mnt/disk{1...4}/minio https://minio{5...12}.example.net:9000/mnt/disk{1...8}/minio"
# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
MINIO_OPTS="--console-address :9001"
# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organization's requirements for the superadmin user name.
MINIO_ROOT_USER=minioadmin
# Set the root password
#
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
not have a load balancer, set this value to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
MINIO_SERVER_URL="https://minio.example.net:9000"
You may specify other :ref:`environment variables
<minio-server-environment-variables>` or server command-line options as required
by your deployment. All MinIO nodes in the deployment should include the same
environment variables with the matching values.
5) Restart the MinIO Deployment with Expanded Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following commands on each node **simultaneously** in the deployment
to restart the MinIO service:
.. include:: /includes/linux/common-installation.rst
:start-after: start-install-minio-restart-service-desc
:end-before: end-install-minio-restart-service-desc
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
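One possible way to coordinate the restart is to issue the command to all nodes in parallel from an administrative host. The following is a sketch only, assuming passwordless SSH access and the twelve sequential hostnames used in this example:

.. code-block:: shell

   for n in $(seq 1 12); do
      ssh minio${n}.example.net 'sudo systemctl restart minio.service' &
   done
   wait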
6) Next Steps
~~~~~~~~~~~~~
- Update any load balancers, reverse proxies, or other network control planes
to route client requests to the new hosts in the MinIO distributed deployment.
While MinIO automatically manages routing internally, having the control
planes handle initial connection management may reduce network hops and
improve efficiency.
- Review the :ref:`MinIO Console <minio-console>` to confirm the updated
cluster topology and monitor performance.
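You can also confirm the expanded topology from the command line. The following sketch assumes an :ref:`alias <alias>` named ``myminio`` that points at the deployment's load balancer; the output lists each server and its drives, and the new hosts should appear as online:

.. code-block:: shell
   :class: copyable

   mc admin info myminio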

View File

@ -0,0 +1,14 @@
.. _minio-k8s-expand-minio-tenant:
=====================
Expand a MinIO Tenant
=====================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO

View File

@ -0,0 +1,22 @@
.. _minio-k8s-modify-minio-tenant:
=====================
Modify a MinIO Tenant
=====================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO
.. Following link is intended for K8s only
.. _minio-decommissioning:
Decommission a Tenant Server Pool
---------------------------------
STUB: ToDo

View File

@ -0,0 +1,524 @@
.. _minio-site-replication-overview:
=========================
Site Replication Overview
=========================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Site replication configures multiple independent MinIO deployments as a cluster of replicas called peer sites.
Site replication assumes the use of either the included MinIO identity provider (IDP) *or* an external IDP.
All configured deployments must use the same IDP.
Deployments using an external IDP must use the same configuration across sites.
Overview
--------
.. _minio-site-replication-what-replicates:
What Replicates Across All Sites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common-replication.rst
:start-after: start-mc-admin-replicate-what-replicates
:end-before: end-mc-admin-replicate-what-replicates
What Does Not Replicate Across Sites
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/common-replication.rst
:start-after: start-mc-admin-replicate-what-does-not-replicate
:end-before: end-mc-admin-replicate-what-does-not-replicate
Initial Site Replication Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After enabling site replication, identity and access management (IAM) settings sync in the following order:
.. tab-set::
.. tab-item:: MinIO IDP
#. Policies
#. User accounts (for local users)
#. Groups
#. Service accounts
Service accounts for ``root`` do not sync.
#. Policy mapping for synced user accounts
#. Policy mapping for `Security Token Service (STS) users <https://docs.min.io/docs/minio-sts-quickstart-guide.html>`__
.. tab-item:: OIDC
#. Policies
#. Service accounts associated to OIDC accounts with a valid :ref:`MinIO Policy <minio-policy>`. ``root`` service accounts do not sync.
#. Policy mapping for synced user accounts
#. Policy mapping for `Security Token Service (STS) users <https://docs.min.io/docs/minio-sts-quickstart-guide.html>`__
.. tab-item:: LDAP
#. Policies
#. Groups
#. Service accounts associated to LDAP accounts with a valid :ref:`MinIO Policy <minio-policy>`. ``root`` service accounts do not sync.
#. Policy mapping for synced user accounts
#. Policy mapping for `Security Token Service (STS) users <https://docs.min.io/docs/minio-sts-quickstart-guide.html>`__
After the initial synchronization of data across peer sites, MinIO continually replicates and synchronizes :ref:`replicable data <minio-site-replication-what-replicates>` among all sites as they occur on any site.
Site Healing
~~~~~~~~~~~~
Any MinIO deployment in the site replication configuration can resynchronize damaged :ref:`replica-eligible data <minio-site-replication-what-replicates>` from the peer with the most updated ("latest") version of that data.
Prerequisites
-------------
One Site with Data at Setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Only *one* site can have data at the time of setup.
The other sites must be empty of buckets and objects.
After configuring site replication, any data on the first deployment replicates to the other sites.
All Sites Must Use the Same IDP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All sites must use the same :ref:`Identity Provider <minio-authentication-and-identity-management>`.
Site replication supports the included MinIO IDP, OIDC, or LDAP.
Access to the Same Encryption Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For :ref:`SSE-S3 <minio-encryption-sse-s3>` or :ref:`SSE-KMS <minio-encryption-sse-kms>` encryption via Key Management Service (KMS), all sites must have access to a central KMS deployment.
You can achieve this with a central KES server or multiple KES servers (for example, one per site) connected to a central supported :ref:`key vault server <minio-sse>`.
Tutorials
---------
Configure Site Replication
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. tab-set::
.. tab-item:: Console
#. :ref:`Deploy <deploy-minio-distributed>` two or more separate MinIO sites, using the same Identity Provider for each site
Only one site can have any buckets or objects on it.
The other site(s) must be empty.
#. In a browser, access the Console for the site with data (if any)
For example, ``https://<addressforsite>:9000``
Replace ``<addressforsite>`` with the IP address or URL for the MinIO deployment.
#. Select **Settings**, then **Site Replication**
.. image:: /images/minio-console/console-settings-site-replication.png
:width: 400px
:alt: MinIO Console menu with the Settings heading expanded to show Site Replication
:align: center
#. Select :guilabel:`Add Sites +`
.. image:: /images/minio-console/console-settings-site-replication-add.png
:width: 600px
:alt: MinIO Console's Add Sites for Replication screen
:align: center
#. Complete the requested information for the site:
:Access Key: `(required)` The user name for ``root`` to use for signing in to the site.
:Secret Key: `(required)` The password for ``root`` to use for signing in to the site.
:Site Name: A name or other identifying text to associate to the site.
:Endpoint: `(required)` The URL or IP address and port to use to access the site.
To add additional sites beyond two, select the ``+`` button to the side of one of the Site entries.
To remove a site previously added, select the ``-`` button to the side of the site.
Site replication adds a service account (:mc-cmd:`~mc admin user svcacct`) under the ``root`` user to perform replication activities.
#. Select **Save**
#. Select :guilabel:`Replication Status` button to verify replication has completed across peer sites.
Any :ref:`replicable data <minio-site-replication-what-replicates>` that exists should show as successfully synced.
For more on reviewing site replication, see the :ref:`Site Replication Status tutorial <minio-site-replication-status-tutorial>`.
.. tab-item:: Command Line
The following steps create a new site replication configuration for three :ref:`distributed deployments <deploy-minio-distributed>`.
One of the sites contains :ref:`replicable data <minio-site-replication-what-replicates>`.
The three sites use aliases, ``minio1``, ``minio2``, and ``minio3``, and only ``minio1`` contains any data.
#. :ref:`Deploy <deploy-minio-distributed>` three or more separate MinIO sites, using the same :ref:`IDP <minio-authentication-and-identity-management>`
Start with empty sites *or* have no more than one site with any :ref:`replicable data <minio-site-replication-what-replicates>`.
#. Configure an alias for each site
For example, for three MinIO sites, you might create aliases ``minio1``, ``minio2``, and ``minio3``.
Use :mc-cmd:`mc alias set`
.. code-block:: shell
mc alias set minio1 https://minio1.example.com:9000 adminuser adminpassword
mc alias set minio2 https://minio2.example.com:9000 adminuser adminpassword
mc alias set minio3 https://minio3.example.com:9000 adminuser adminpassword
or define environment variables
.. code-block:: shell
export MC_HOST_minio1=https://adminuser:adminpassword@minio1.example.com
export MC_HOST_minio2=https://adminuser:adminpassword@minio2.example.com
export MC_HOST_minio3=https://adminuser:adminpassword@minio3.example.com
#. Add site replication configuration
.. code-block:: shell
mc admin replicate add minio1 minio2 minio3
If all sites are empty, the order of the aliases does not matter.
If one of the sites contains any :ref:`replicable data <minio-site-replication-what-replicates>`, you must list it first.
No more than one site can contain any replicable data.
#. Query the site replication configuration to verify
.. code-block:: shell
mc admin replicate info minio1
You can use the alias for any peer site in the site replication configuration.
#. Query the site replication status to confirm any initial data has replicated to all peer sites.
.. code-block:: shell
mc admin replicate status minio1
You can use the alias for any of the peer sites in the site replication configuration.
The output should say that all :ref:`replicable data <minio-site-replication-what-replicates>` is in sync.
The output could resemble the following:
.. code-block:: shell
Bucket replication status:
● 1/1 Buckets in sync
Policy replication status:
● 5/5 Policies in sync
User replication status:
No Users present
Group replication status:
No Groups present
For more on reviewing site replication, see the :ref:`Site Replication Status tutorial <minio-site-replication-status-tutorial>`.
Expand Site Replication
~~~~~~~~~~~~~~~~~~~~~~~
You can add more sites to an existing site replication configuration.
The new site must meet the following requirements:
- Site is fully deployed and accessible by hostname or IP
- Shares the same IDP configuration as all other sites in the configuration
- Uses the same root user credentials as other configured sites
- Contains no bucket or object data
.. tab-set::
.. tab-item:: Console
#. Deploy a new, empty MinIO site
#. In a browser, access the Console for one of the existing replicated sites
For example, ``https://<addressforsite>:9000``
#. Select **Settings**, then **Site Replication**
.. image:: /images/minio-console/console-site-replication-list-of-sites.png
:width: 600px
:alt: MinIO Console Site Replication with three sites listed
:align: center
#. Select :guilabel:`Add Sites +`
.. image:: /images/minio-console/console-settings-site-replication-add.png
:width: 600px
:alt: MinIO Console's Add Sites for Replication screen
:align: center
#. Make the following entries:
:Access Key: `(required)` The user name to use for signing in to each site. Should be the same across all sites.
:Secret Key: `(required)` The password for the user name to use for signing in to each site. Should be the same across all sites.
:Site Name: An alias to use for the site name.
:Endpoint: `(required)` The URL or IP address and port to use to access the site.
To add additional sites beyond two, select the ``+`` button to the side of the last Site entry.
#. Select :guilabel:`Save`
.. tab-item:: Command Line
#. Deploy one or more new MinIO sites, using the same IDP as the existing replicated sites
The new sites must be empty of buckets and objects.
#. Configure an alias for each site
To check the existing aliases, use :mc-cmd:`mc alias list`.
For example, for three MinIO sites, you might create aliases ``minio1``, ``minio2``, and ``minio3``.
Use :mc-cmd:`mc alias set`
.. code-block:: shell
mc alias set minio1 https://minio1.example.com:9000 adminuser adminpassword
mc alias set minio2 https://minio2.example.com:9000 adminuser adminpassword
mc alias set minio3 https://minio3.example.com:9000 adminuser adminpassword
or define environment variables
.. code-block:: shell
export MC_HOST_minio1=https://adminuser:adminpassword@minio1.example.com
export MC_HOST_minio2=https://adminuser:adminpassword@minio2.example.com
export MC_HOST_minio3=https://adminuser:adminpassword@minio3.example.com
#. Add site replication configuration
List all existing replicated sites first, then list the new site(s) to add.
In this example, ``minio1``, ``minio2``, and ``minio3`` are already configured for replication.
The command adds ``minio4`` and ``minio5`` as new sites in the replication configuration.
``minio4`` and ``minio5`` must be empty.
.. code-block:: shell
mc admin replicate add minio1 minio2 minio3 minio4 minio5
#. Query the site replication configuration to verify
.. code-block:: shell
mc admin replicate info minio1
Modify a Site's Endpoint
~~~~~~~~~~~~~~~~~~~~~~~~
If a peer site changes its hostname, you can modify the replication configuration to reflect the new hostname.
.. tab-set::
.. tab-item:: Console
#. In a browser, access the Console for one of the replicated sites
For example, ``https://<addressforsite>:9000``
#. Select **Settings**, then **Site Replication**
#. Select the pencil **Edit** icon to the side of the site to update
.. image:: /images/minio-console/console-site-replication-edit-button.png
:width: 600px
:alt: MinIO Console's List of Replicated Sites screen with the edit buttons highlighted
:align: center
#. Make the following entries:
:New Endpoint: `(required)` The new endpoint address and port to use.
.. image:: /images/minio-console/console-settings-site-replication-edit-endpoint.png
:width: 600px
:alt: Example of the MinIO Console's Edit Replication Endpoint screen
:align: center
#. Select **Update**
.. tab-item:: Command Line
#. Obtain the site's Deployment ID with :mc-cmd:`mc admin replicate info`
.. code-block:: shell
mc admin replicate info <ALIAS>
#. Update the site's endpoint with :mc-cmd:`mc admin replicate edit`
.. code-block:: shell
mc admin replicate edit ALIAS --deployment-id [DEPLOYMENT-ID] --endpoint [NEW-ENDPOINT]
Replace [DEPLOYMENT-ID] with the deployment ID of the site to update.
Replace [NEW-ENDPOINT] with the new endpoint for the site.
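For example, a sketch with hypothetical values (replace the deployment ID and endpoint with your own):

.. code-block:: shell

   mc admin replicate edit minio1 --deployment-id "6faeded5-52a6-4df3-9d17-e4d6d7c73e36" --endpoint "https://minio2-new.example.com:9000"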
Remove a Site from Replication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can remove a site from replication at any time.
You can re-add the site at a later date, but you must first completely wipe bucket and object data from the site.
.. tab-set::
.. tab-item:: Console
#. In a browser, access the Console for one of the replicated sites
For example, ``https://<addressforsite>:9000``
#. Select **Settings**, then **Site Replication**
#. Select the trash can Delete icon to the side of the site to update
.. image:: /images/minio-console/console-site-replication-delete-button.png
:width: 600px
:alt: MinIO Console's List of Replicated Sites screen with the delete buttons highlighted
:align: center
#. Confirm the site deletion at the prompt by selecting **Delete**
.. image:: /images/minio-console/console-settings-site-replication-confirm-delete.png
:width: 600px
:alt: Example of the MinIO Console's Edit Replication Endpoint screen
:align: center
.. tab-item:: Command Line
Use :mc-cmd:`mc admin replicate remove`
.. code-block:: shell
mc admin replicate remove <ALIAS> --all --force
The ``--all`` flag removes the site as a peer from all participating sites.
The ``--force`` flag is required to remove the site from the site replication configuration.
.. _minio-site-replication-status-tutorial:
Review Replication Status
~~~~~~~~~~~~~~~~~~~~~~~~~
MinIO provides information on replication across the sites for users, groups, policies, or buckets.
The summary information includes the number of **Synced** and **Failed** items for each category.
.. tab-set::
.. tab-item:: Console
#. In a browser, access the Console for one of the replicated sites
For example, ``https://<addressforsite>:9000``
#. Select **Settings**, then **Site Replication**
#. Select :guilabel:`Replication Status`
.. image:: /images/minio-console/console-settings-site-replication-status-summary.png
:width: 600px
:alt: MinIO Console's Replication status from all Sites screen
:align: center
#. `(Optional)` View the replication status for a specific item
Select the type of item to view in the :guilabel:`View Replication Status for a:` dropdown
Specify the name of the specific Bucket, Group, Policy, or User to view
.. image:: /images/minio-console/console-settings-site-replication-status-item.png
:width: 600px
:alt: Example of replication status for a particular bucket item
:align: center
#. `(Optional)` Update the information by selecting :guilabel:`Refresh`
.. tab-item:: Command Line
Use :mc-cmd:`mc admin replicate status`
.. code-block:: shell
mc admin replicate status <ALIAS> --<flag> <value>
For example:
- ``mc admin replicate status minio3 --bucket images``
Displays the replication status for the ``images`` bucket on the ``minio3`` site.
The output resembles the following:
.. code-block::
● Bucket config replication summary for: images
Bucket | MINIO2 | MINIO3 | MINIO4
Tags | | |
Policy | | |
Quota | | |
Retention | | |
Encryption | | |
Replication | ✔ | ✔ | ✔
- ``mc admin replicate status minio3 --all``
Displays the replication status summary for all replication sites of which ``minio3`` is part.
The output resembles the following:
.. code-block::
Bucket replication status:
● 1/1 Buckets in sync
Policy replication status:
● 5/5 Policies in sync
User replication status:
● 1/1 Users in sync
Group replication status:
● 0/2 Groups in sync
Group | MINIO2 | MINIO3 | MINIO4
ittechs | ✗ in-sync | | ✗ in-sync
managers | ✗ in-sync | | ✗ in-sync

View File

@ -0,0 +1,187 @@
.. _minio-upgrade:
==========================
Upgrade a MinIO Deployment
==========================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
.. admonition:: Test Upgrades Before Applying To Production
:class: important
MinIO **strongly discourages** performing blind updates to production
clusters. You should *always* test upgrades in a lower environment
(dev/QA/staging) *before* applying upgrades to production deployments.
Exercise particular caution if upgrading to a :minio-git:`release
<minio/releases>` that has backwards breaking changes. MinIO includes
warnings in release notes for any version known to not support
downgrades.
For MinIO deployments that are significantly behind the latest stable release
(6+ months), consider using
`MinIO SUBNET <https://min.io/pricing?ref=docs>`__ for additional support
and guidance during the upgrade procedure.
Upgrade Checklist
-----------------
Review all items on the following checklist before performing an upgrade on
your MinIO deployments:
.. list-table::
:stub-columns: 1
:widths: 25 75
:width: 100%
* - Test in Lower Environments
- Test all upgrades in a lower environment such as a dedicated
testing, development, or QA deployment.
**Never** perform blind upgrades on production deployments.
* - Upgrade Only When Necessary
- MinIO follows a rapid development model where there may be multiple
releases in a week. There is no requirement to follow these updates
if your deployment is otherwise stable and functional.
Upgrade only if there is a specific feature, bug fix, or other
requirement necessary for your workload. Review the
:minio-git:`Release Notes <minio/releases>` for each Server release
between your current MinIO version and the target version.
* - Upgrades require Simultaneous Restart
- Ensure your preferred method of node management supports operating on
all nodes simultaneously.
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
.. _minio-upgrade-systemctl:
Update ``systemctl``-Managed MinIO Deployments
----------------------------------------------
Deployments managed using ``systemctl``, such as those created
using the MinIO :ref:`DEB/RPM packages <deploy-minio-distributed-baremetal>`,
require manual update and simultaneous restart of all nodes in the
MinIO deployment.
1. **Update the MinIO Binary on Each Node**
.. include:: /includes/linux/common-installation.rst
:start-after: start-upgrade-minio-binary-desc
:end-before: end-upgrade-minio-binary-desc
2. **Restart the Deployment**
Run ``systemctl restart minio`` simultaneously across all nodes in the
deployment. Utilize your preferred method for coordinated execution of
terminal/shell commands.
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
.. _minio-upgrade-mc-admin-update:
Update MinIO Deployments using ``mc admin update``
--------------------------------------------------
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc
The :mc-cmd:`mc admin update` command updates all MinIO server binaries in
the target MinIO deployment before restarting all nodes simultaneously.
:mc-cmd:`mc admin update` is intended for baremetal (non-orchestrated)
deployments using manual management of server binaries.
- For deployments managed using ``systemctl``, see
:ref:`minio-upgrade-systemctl`.
- For Kubernetes or other containerized environments, defer to the native
mechanisms for updating container images across a deployment.
:mc-cmd:`mc admin update` requires write access to the directory in which
the MinIO binary is saved (e.g. ``/usr/local/bin``).
The following command updates a MinIO deployment with the specified
:ref:`alias <alias>` to the latest stable release:
.. code-block:: shell
:class: copyable
mc admin update ALIAS
You should upgrade your :mc:`mc` binary to match or closely follow the
MinIO server release. You can use the :mc:`mc update` command to update the
binary to the latest stable release:
.. code-block:: shell
:class: copyable
mc update
You can specify a URL resolving to a specific MinIO server binary version.
Airgapped or internet-isolated deployments may utilize this feature for updating
from an internally-accessible server:
.. code-block:: shell
:class: copyable
mc admin update ALIAS https://minio-mirror.example.com/minio
Update MinIO Manually
---------------------
The following steps manually download the MinIO binary and restart the
deployment. These steps are intended for fully manual baremetal deployments
without ``systemctl`` or similar process management. These steps may also
apply to airgapped or similarly internet-isolated deployments which
cannot use :mc-cmd:`mc admin update` to retrieve the binary over the network.
1. **Add the MinIO Binary to each node in the deployment**
Follow your organization's preferred procedure for adding a new binary
to the node. The following command downloads the latest stable MinIO
binary:
.. code-block:: shell
:class: copyable
wget https://dl.min.io/server/minio/release/linux-amd64/minio
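You can optionally verify the download before installing it. This is a sketch assuming your download source publishes a ``minio.sha256sum`` checksum file alongside the binary at the same path:

.. code-block:: shell

   wget https://dl.min.io/server/minio/release/linux-amd64/minio.sha256sum
   sha256sum -c minio.sha256sum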
2. **Overwrite the existing MinIO binary with the newer version**
The following command sets the binary to executable and copies it to
``/usr/local/bin``. Replace this path with the location of the existing
MinIO binary:
.. code-block:: shell
:class: copyable
chmod +x minio
sudo mv minio /usr/local/bin/
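Optionally confirm the installed version on each node before restarting:

.. code-block:: shell
   :class: copyable

   minio --version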
3. **Restart the deployment**
Once all nodes have the updated binary, restart all nodes simultaneously
using the :mc-cmd:`mc admin service` command:
.. code-block:: shell
:class: copyable
mc admin service restart ALIAS
Replace ``ALIAS`` with the :ref:`alias <alias>` for the target deployment.
.. include:: /includes/common-installation.rst
:start-after: start-nondisruptive-upgrade-desc
:end-before: end-nondisruptive-upgrade-desc

View File

@ -0,0 +1,14 @@
.. _minio-k8s-upgrade-minio-operator:
======================
Upgrade MinIO Operator
======================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO

View File

@ -0,0 +1,14 @@
.. _minio-k8s-upgrade-minio-tenant:
======================
Upgrade a MinIO Tenant
======================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub: TODO