Attempting to reduce docs to single platform (#1258)
## We are going to make the following changes to the Object Store docs as part of a larger QC/Content pass:

### Left Navigation

We want to modify the left navigation flow to be a natural progression from a basic setup to more advanced. For example:

- Core Concepts
- Deployment Architecture
- Availability and Resiliency
- Erasure Coding and Object Healing
- Object Scanner
- Site Replication and Failover
- Thresholds and Limits
- Installation
- Deployment Checklist
- Deploy MinIO on Kubernetes
- Deploy MinIO on Red Hat Linux
- Deploy MinIO on Ubuntu Linux
- Deploy MinIO for Development (MacOS, Windows, Container)
- Security and Encryption (Conceptual Overview)
- Network Encryption (TLS) (Conceptual overview)
- Enable Network Encryption using Single Domain
- Enable Network Encryption using Multiple Domains
- Enable Network Encryption using certmanager (Kubernetes only)
- Data Encryption (SSE) (Conceptual overview)
- Enable SSE using AIStor Key Management Server
- Enable SSE using KES (Summary page + linkouts)
- External Identity Management (Conceptual Overview)
- Enable External Identity management using OpenID
- Enable External Identity management using AD/LDAP
- Backup and Recovery
- Create a Multi-Site Replication Configuration
- Recovery after Hardware Failure
- Recover after drive failure
- Recover after node failure
- Recover after site failure
- Monitoring and Alerts
- Metrics and Alerting (v3 reference)
- Monitoring and Alerting using Prometheus
- Monitoring and Alerting using InfluxDB
- Monitoring and Alerting using Grafana
- Metrics V2 Reference
- Publish Server and Audit Logs to External Services
- MinIO Healthcheck API

The Administration, Developer, and Reference sections will remain as-is for now.

http://192.241.195.202:9000/staging/singleplat/mindocs/index.html

# Goals

Maintaining multiple platforms is getting to be too much, and based on analytics the actual number of users taking advantage of it is minimal. Furthermore, the majority of traffic is to installation pages. Therefore we're going to try to collapse back into a single MinIO Object Storage product, and use simple navigation and on-page selectors to handle Baremetal vs Kubernetes.

This may also help to eventually stage us to migrate to Hugo + Markdown.

---------

Co-authored-by: Daryl White <53910321+djwfyi@users.noreply.github.com>
Co-authored-by: Rushan <rushenn@minio.io>
Co-authored-by: rushenn <rushenn123@gmail.com>
@@ -1,555 +0,0 @@
|
||||
.. _minio-decommissioning:
|
||||
|
||||
=========================
|
||||
Decommission Server Pools
|
||||
=========================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
MinIO supports decommissioning and removing :ref:`server pools <minio-intro-server-pool>` from a deployment with two or more pools.
|
||||
To decommission, there must be at least one remaining pool with sufficient available space to receive the objects from the decommissioned pools.
|
||||
|
||||
Starting with ``RELEASE.2023-01-18T04-36-38Z``, MinIO supports queueing :ref:`multiple pools <minio-decommission-multiple-pools>` in a single decommission command.
|
||||
Each listed pool immediately enters a read-only status, but draining occurs one pool at a time.
|
||||
|
||||
Decommissioning is designed for removing an older server pool whose hardware is no longer sufficient or performant compared to the pools in the deployment.
|
||||
MinIO automatically migrates data from the decommissioned pools to the remaining pools in the deployment based on the ratio of free space available in each pool.
|
||||
|
||||
During the decommissioning process, MinIO routes read operations (e.g. ``GET``, ``LIST``, ``HEAD``) normally.
|
||||
MinIO routes write operations (e.g. ``PUT``, versioned ``DELETE``) to the remaining "active" pools in the deployment.
|
||||
Versioned objects maintain their ordering throughout the migration process.
|
||||
|
||||
The procedures on this page decommission and remove one or more server pools from a :ref:`distributed <deploy-minio-distributed>` MinIO deployment with *at least* two server pools.
|
||||
|
||||
.. admonition:: Decommissioning is Permanent
|
||||
:class: important
|
||||
|
||||
Once MinIO begins decommissioning a pool, it marks that pool as *permanently* inactive ("draining").
|
||||
Cancelling or otherwise interrupting the decommissioning procedure does **not** restore the pool to an active state.
|
||||
Use extra caution when decommissioning multiple pools.
|
||||
|
||||
Decommissioning is a major administrative operation that requires care in planning and execution, and is not a trivial or 'daily' task.
|
||||
|
||||
`MinIO SUBNET <https://min.io/pricing?jmp=docs>`__ users can `log in <https://subnet.min.io/>`__ and create a new issue related to decommissioning.
|
||||
Coordination with MinIO Engineering via SUBNET can ensure successful decommissioning, including performance testing and health diagnostics.
|
||||
|
||||
Community users can seek support on the `MinIO Community Slack <https://slack.min.io>`__.
|
||||
Community Support is best-effort only and has no SLAs around responsiveness.
|
||||
|
||||
|
||||
.. _minio-decommissioning-prereqs:
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Back Up Cluster Settings First
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations respectively prior to starting decommissioning.
|
||||
You can use these snapshots to restore bucket/IAM settings to recover from user or process errors as necessary.
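For example, assuming an alias named ``myminio`` for the deployment, the following commands write snapshot archives of the current bucket metadata and IAM configuration to the local working directory (output file names may vary by :mc:`mc` version):

.. code-block:: shell
   :class: copyable

   # Export bucket metadata (lifecycle, versioning, tagging, and similar settings)
   mc admin cluster bucket export myminio

   # Export IAM configuration (users, groups, policies, and mappings)
   mc admin cluster iam export myminio

Store the resulting archives in a safe location outside of the deployment before proceeding.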
|
||||
|
||||
Networking and Firewalls
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Each node should have full bidirectional network access to every other node in
|
||||
the deployment. For containerized or orchestrated infrastructures, this may
|
||||
require specific configuration of networking and routing components such as
|
||||
ingress or load balancers. Certain operating systems may also require setting
|
||||
firewall rules. For example, the following command explicitly opens the default
|
||||
MinIO server API port ``9000`` on servers using ``firewalld``:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
firewall-cmd --permanent --zone=public --add-port=9000/tcp
|
||||
firewall-cmd --reload
|
||||
|
||||
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
|
||||
you must *also* grant access to that port to ensure connectivity from external
|
||||
clients.
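For example, if the Console listens on port ``9001``, the following commands open that port on hosts using ``firewalld`` (a sketch; adjust the zone and port to match your configuration):

.. code-block:: shell
   :class: copyable

   firewall-cmd --permanent --zone=public --add-port=9001/tcp
   firewall-cmd --reload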
|
||||
|
||||
MinIO **strongly recommends** using a load balancer to manage connectivity to the
cluster. The load balancer should use a "Least Connections" algorithm for
routing requests to the MinIO deployment, since any MinIO node in the deployment
can receive, route, or process client requests.
|
||||
|
||||
The following load balancers are known to work well with MinIO:
|
||||
|
||||
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
|
||||
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
|
||||
|
||||
Configuring firewalls or load balancers to support MinIO is out of scope for
|
||||
this procedure.
|
||||
|
||||
Deployment Must Have Sufficient Storage
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The decommissioning process migrates objects from the target pool to other pools in the deployment.
|
||||
The total available storage on the deployment *must* exceed the total storage of the decommissioned pool.
|
||||
|
||||
Use the `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ to determine the usable storage capacity.
|
||||
Then reduce that by the size of the objects already on the deployment.
|
||||
|
||||
For example, consider a deployment with the following distribution of used and free storage:
|
||||
|
||||
.. list-table::
|
||||
:stub-columns: 1
|
||||
:widths: 30 30 30
|
||||
:width: 100%
|
||||
|
||||
* - Pool 1
|
||||
- 100TB Used
|
||||
- 200TB Total
|
||||
|
||||
* - Pool 2
|
||||
- 100TB Used
|
||||
- 200TB Total
|
||||
|
||||
* - Pool 3
|
||||
- 100TB Used
|
||||
- 200TB Total
|
||||
|
||||
Decommissioning Pool 1 requires distributing the 100TB of used storage across the remaining pools.
|
||||
Pool 2 and Pool 3 each have 100TB of unused storage space and can safely absorb the data stored on Pool 1.
|
||||
|
||||
However, if Pool 1 were full (e.g. 200TB of used space), decommissioning would
|
||||
completely fill the remaining pools and potentially prevent any further write
|
||||
operations.
|
||||
|
||||
Considerations
|
||||
--------------
|
||||
|
||||
Replacing a Server Pool
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
For hardware upgrade cycles where you replace old pool hardware with a new pool, you should :ref:`add the new pool through expansion <expand-minio-distributed>` before starting the decommissioning of the old pool.
|
||||
Adding the new pool first allows the decommission process to transfer objects in a balanced way across all available pools, both existing and new.
|
||||
|
||||
Complete any planned :ref:`hardware expansion <expand-minio-distributed>` prior to decommissioning older hardware pools.
|
||||
|
||||
Decommissioning requires that a cluster's topology remain stable throughout the pool draining process.
|
||||
Do **not** attempt to perform expansion and decommission changes in a single step.
|
||||
|
||||
Decommissioning is Resumable
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO resumes decommissioning if interrupted by transient issues such as
|
||||
deployment restarts or network failures.
|
||||
|
||||
For manually cancelled or failed decommissioning attempts, MinIO
|
||||
resumes only after you manually re-initiate the decommissioning operation.
|
||||
|
||||
The pool remains in the decommissioning state *regardless* of the interruption.
|
||||
A pool can *never* return to active status after decommissioning begins.
|
||||
|
||||
Decommissioning is Non-Disruptive
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Removing a decommissioned server pool requires restarting *all* MinIO
|
||||
nodes in the deployment at around the same time.
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-nondisruptive-upgrade-desc
|
||||
:end-before: end-nondisruptive-upgrade-desc
|
||||
|
||||
Decommissioning Ignores Expired Objects and Trailing ``DeleteMarker``
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Starting with :minio-release:`RELEASE.2023-05-27T05-56-19Z`, decommissioning ignores objects where the only remaining version is a ``DeleteMarker``.
|
||||
This avoids creating empty metadata on the remaining server pool(s) for objects that are effectively fully deleted.
|
||||
|
||||
Starting with :minio-release:`RELEASE.2023-06-23T20-26-00Z`, decommissioning also ignores object versions which have expired based on the configured :ref:`lifecycle rules <minio-lifecycle-management-expiration>` for the parent bucket.
|
||||
Starting with :minio-release:`RELEASE.2023-06-29T05-12-28Z`, you can monitor ignored delete markers and expired objects during the decommission process with :mc-cmd:`mc admin trace --call decommission <mc admin trace --call>`.
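For example, assuming the alias ``myminio``, the following command streams only the decommission-related trace calls while the pool drains:

.. code-block:: shell
   :class: copyable

   mc admin trace --call decommission myminio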
|
||||
|
||||
Once the decommissioning process completes, you can safely shut down that pool.
|
||||
Since the only remaining data was scheduled for deletion *or* was only a ``DeleteMarker``, you can safely clear or destroy those drives as per your internal procedures.
|
||||
|
||||
Behavior
|
||||
--------
|
||||
|
||||
Final Listing Check
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
At the end of the decommission process, MinIO performs a final listing of items on the pool.
If the listing returns empty, MinIO marks the decommission as successfully completed.
If the listing returns any objects, MinIO returns an error indicating that the decommission process failed.
|
||||
|
||||
If the decommission fails, customers should open a |SUBNET| issue for further assistance before retrying the decommission.
|
||||
Community users without a SUBNET subscription can retry the decommission process or seek additional support through the `MinIO Community Slack <https://slack.min.io/>`__.
|
||||
MinIO provides Community Support at best-effort only and provides no :abbr:`SLA (Service Level Agreement)` around responsiveness.
|
||||
|
||||
Decommissioning a Server with Tiering Enabled
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. versionchanged:: RELEASE.2023-03-20T20-16-18Z
|
||||
|
||||
For deployments with tiering enabled and active, decommissioning moves the object references to a new active pool.
|
||||
Applications can continue issuing ``GET`` requests against those objects, and MinIO transparently retrieves them from the remote tier.
|
||||
|
||||
In older MinIO versions, tiering configurations prevent decommissioning.
|
||||
|
||||
.. _minio-decommissioning-server-pool:
|
||||
|
||||
Decommission a Server Pool
|
||||
--------------------------
|
||||
|
||||
1) Review the MinIO Deployment Topology
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The :mc:`mc admin decommission` command returns a list of all
|
||||
pools in the MinIO deployment:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬────────┐
|
||||
│ ID │ Pools │ Capacity │ Status │
|
||||
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Active │
|
||||
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 60 TiB (used) / 100 TiB (total) │ Active │
|
||||
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 100 TiB (total) │ Active │
|
||||
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴────────┘
|
||||
|
||||
The example deployment above has three pools. Each pool has four servers
|
||||
with four drives each.
|
||||
|
||||
Identify the target pool for decommissioning and review the current capacity.
|
||||
The remaining pools in the deployment *must* have sufficient total
capacity to migrate all objects stored in the decommissioned pool.
|
||||
|
||||
In the example above, the deployment has 210TiB total storage with 110TiB used.
|
||||
The first pool (``minio-{01...04}``) is the decommissioning target, as it was
|
||||
provisioned when the MinIO deployment was created and is completely full. The
|
||||
remaining newer pools can absorb all objects stored on the first pool without
|
||||
significantly impacting total available storage.
|
||||
|
||||
2) Start the Decommissioning Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. admonition:: Decommissioning is Permanent
|
||||
:class: warning
|
||||
|
||||
Once MinIO begins decommissioning a pool, it marks that pool as *permanently*
|
||||
inactive ("draining"). Cancelling or otherwise interrupting the
|
||||
decommissioning procedure does **not** restore the pool to an active
|
||||
state.
|
||||
|
||||
Review and validate that you are decommissioning the correct pool
|
||||
*before* running the following command.
|
||||
|
||||
Use the :mc-cmd:`mc admin decommission start` command to begin decommissioning
|
||||
the target pool. Specify the :ref:`alias <alias>` of the deployment and the
|
||||
full description of the pool to decommission, including all hosts, disks, and file paths.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission start myminio/ https://minio-{01...04}.example.net:9000/mnt/disk{1...4}/minio
|
||||
|
||||
The example command begins decommissioning the matching server pool on the
|
||||
``myminio`` deployment.
|
||||
|
||||
During the decommissioning process, MinIO continues routing read operations
|
||||
(``GET``, ``LIST``, ``HEAD``) to the pool for those objects not
|
||||
yet migrated. MinIO routes all new write operations (``PUT``) to the
|
||||
remaining pools in the deployment.
|
||||
|
||||
Load balancers, reverse proxies, or other network control components which
manage connections to the deployment do not need to modify their configurations
at this time.
|
||||
|
||||
3) Monitor the Decommissioning Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the :mc-cmd:`mc admin decommission status` command to monitor the
|
||||
decommissioning process.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬──────────┐
|
||||
│ ID │ Pools │ Capacity │ Status │
|
||||
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Draining │
|
||||
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 60 TiB (used) / 100 TiB (total) │ Active │
|
||||
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 100 TiB (total) │ Active │
|
||||
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴──────────┘
|
||||
|
||||
You can retrieve more detailed information by specifying the description of
|
||||
the server pool to the command:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
Decommissioning rate at 100MiB/sec [1TiB/10TiB]
|
||||
Started: 30 minutes ago
|
||||
|
||||
:mc-cmd:`mc admin decommission status` marks the :guilabel:`Status` as
|
||||
:guilabel:`Complete` once decommissioning is completed. You can move on to
|
||||
the next step once decommissioning is completed.
|
||||
|
||||
If :guilabel:`Status` reads as failed, you can re-run the
|
||||
:mc-cmd:`mc admin decommission start` command to resume the process.
|
||||
For persistent failures, use :mc:`mc admin logs` or review
|
||||
the ``systemd`` logs (e.g. ``journalctl -u minio``) to identify more specific
|
||||
errors.
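For example, the following is one way to review recent MinIO service logs on a node managed by ``systemd`` (a sketch; adjust the time window as needed):

.. code-block:: shell
   :class: copyable

   journalctl -u minio --since "1 hour ago" --no-pager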
|
||||
|
||||
4) Remove the Decommissioned Pool from the Deployment Configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
As each pool completes decommissioning, you can safely remove it from the
|
||||
deployment configuration. Modify the startup command for each remaining MinIO
|
||||
server in the deployment and remove the decommissioned pool.
|
||||
|
||||
The ``.deb`` or ``.rpm`` packages install a
|
||||
`systemd <https://www.freedesktop.org/wiki/Software/systemd/>`__ service file to
|
||||
``/lib/systemd/system/minio.service``. For binary installations, this
|
||||
procedure assumes the file was created manually as per the
|
||||
:ref:`deploy-minio-distributed` procedure.
|
||||
|
||||
The ``minio.service`` file uses an environment file located at
``/etc/default/minio`` for sourcing configuration settings, including the
startup command. Specifically, the ``MINIO_VOLUMES`` variable sets the hosts
and drives the server uses at startup:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
cat /etc/default/minio | grep "MINIO_VOLUMES"
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
MINIO_VOLUMES="https://minio-{1...4}.example.net:9000/mnt/disk{1...4}/minio https://minio-{5...8}.example.net:9000/mnt/disk{1...4}/minio https://minio-{9...12}.example.net:9000/mnt/disk{1...4}/minio"
|
||||
|
||||
Edit the environment file and remove the decommissioned pool from the
|
||||
``MINIO_VOLUMES`` value.
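For example, after decommissioning the first pool in the value shown above, the edited variable would resemble the following sketch (keep only the pools that remain in your deployment):

.. code-block:: shell

   MINIO_VOLUMES="https://minio-{5...8}.example.net:9000/mnt/disk{1...4}/minio https://minio-{9...12}.example.net:9000/mnt/disk{1...4}/minio"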
|
||||
|
||||
5) Update Network Control Plane
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Update any load balancers, reverse proxies, or other network control planes
|
||||
to remove the decommissioned server pool from the connection configuration for
|
||||
the MinIO deployment.
|
||||
|
||||
Specific instructions for configuring network control plane components is
|
||||
out of scope for this procedure.
|
||||
|
||||
6) Restart the MinIO Deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Issue the following commands on each node in the deployment **simultaneously**
to restart the MinIO service:
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-restart-service-desc
|
||||
:end-before: end-install-minio-restart-service-desc
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-nondisruptive-upgrade-desc
|
||||
:end-before: end-nondisruptive-upgrade-desc
|
||||
|
||||
Once the deployment is online, use :mc:`mc admin info` to confirm the
|
||||
uptime of all remaining servers in the deployment.
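For example, assuming the alias ``myminio``:

.. code-block:: shell
   :class: copyable

   mc admin info myminio

The output should list only the remaining servers, each with recent uptime values.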
|
||||
|
||||
.. _minio-decommission-multiple-pools:
|
||||
|
||||
Decommission Multiple Server Pools
|
||||
----------------------------------
|
||||
|
||||
.. versionchanged:: RELEASE.2023-01-18T04-36-38Z
|
||||
|
||||
You can start the decommission process for multiple server pools when issuing a decommission command.
|
||||
|
||||
After entering the command:
|
||||
|
||||
- MinIO immediately stops write access to all pools to be decommissioned.
|
||||
- Decommissioning happens one pool at a time.
|
||||
- Each pool completes the decommission draining process before MinIO begins draining the next pool.
|
||||
|
||||
To decommission multiple server pools from one command, add the full description of each server pool to decommission as a comma-separated list.
|
||||
|
||||
All other considerations about decommissioning apply when performing the process on multiple pools.
|
||||
|
||||
- Decommissioning is permanent.
|
||||
- Once you mark the pools as decommissioned, you **cannot** restore them.
|
||||
- Confirm you select the intended pools.
|
||||
|
||||
1) Review the MinIO Deployment Topology
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The :mc:`mc admin decommission` command returns a list of all pools in the MinIO deployment:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬────────┐
|
||||
│ ID │ Pools │ Capacity │ Status │
|
||||
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Active │
|
||||
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 95 TiB (used) / 100 TiB (total) │ Active │
|
||||
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 500 TiB (total) │ Active │
|
||||
│ 4th │ https://minio-{13...16}.example.com:9000/mnt/disk{1...4}/minio │ 0 TiB (used) / 500 TiB (total) │ Active │
|
||||
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴────────┘
|
||||
|
||||
The example deployment above has four pools.
|
||||
Each pool has four servers with four drives each.
|
||||
|
||||
Identify the target pool for decommissioning and review the current capacity.
|
||||
The remaining pools in the deployment *must* have sufficient total capacity to migrate all objects stored in the decommissioned pool.
|
||||
|
||||
In the example above, the deployment has 1110TiB total storage with 145TiB used.
|
||||
|
||||
- The first pool (``minio-{01...04}``) is the first decommissioning target, as it was provisioned when the MinIO deployment was created and is completely full.
|
||||
- The second pool (``minio-{05...08}``) is the second decommissioning target, as it was also provisioned when the MinIO deployment was created and is nearly full.
|
||||
- The fourth pool (``minio-{13...16}``) is a newly added pool with new hardware from a completed server expansion.
|
||||
|
||||
The third and fourth pools can absorb all objects stored on the first and second pools without significantly impacting total available storage.
|
||||
|
||||
.. important::
|
||||
|
||||
Complete any server expansion to add new storage resources *before* beginning a decommission process.
|
||||
|
||||
2) Start the Decommissioning Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. admonition:: Decommissioning is Permanent
|
||||
:class: warning
|
||||
|
||||
Once MinIO begins decommissioning the pools, it marks those pools as *permanently* inactive ("draining").
|
||||
Cancelling or otherwise interrupting the decommissioning procedure does **not** restore the pools to an active state.
|
||||
|
||||
Review and validate that you are decommissioning the correct pools *before* running the following command.
|
||||
|
||||
Use the :mc-cmd:`mc admin decommission start` command to begin decommissioning the target pool.
|
||||
Specify the :ref:`alias <alias>` of the deployment and a comma-separated list of the full description of each pool to decommission, including all hosts, disks, and file paths.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission start myminio/ https://minio-{01...04}.example.net:9000/mnt/disk{1...4}/minio,https://minio-{05...08}.example.net:9000/mnt/disk{1...4}/minio
|
||||
|
||||
The example command begins decommissioning the two listed matching server pools on the ``myminio`` deployment.
|
||||
|
||||
During the decommissioning process, MinIO continues routing read operations (``GET``, ``LIST``, ``HEAD``) to the pools for those objects not yet migrated.
|
||||
MinIO routes all new write operations (``PUT``) to the remaining pools in the deployment not scheduled for decommissioning.
|
||||
|
||||
Draining of decommissioned pools happens one pool at a time, completing the decommission of each pool in sequence.
|
||||
Draining does *not* happen concurrently for all decommissioning pools.
|
||||
|
||||
Load balancers, reverse proxies, or other network control components which manage connections to the deployment do not need to modify their configurations at this time.
|
||||
|
||||
3) Monitor the Decommissioning Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the :mc-cmd:`mc admin decommission status` command to monitor the decommissioning process.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
┌─────┬────────────────────────────────────────────────────────────────┬──────────────────────────────────┬──────────┐
|
||||
│ ID │ Pools │ Capacity │ Status │
|
||||
│ 1st │ https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio │ 10 TiB (used) / 10 TiB (total) │ Draining │
|
||||
│ 2nd │ https://minio-{05...08}.example.com:9000/mnt/disk{1...4}/minio │ 95 TiB (used) / 100 TiB (total) │ Pending │
|
||||
│ 3rd │ https://minio-{09...12}.example.com:9000/mnt/disk{1...4}/minio │ 40 TiB (used) / 500 TiB (total) │ Active │
|
||||
│ 4th │ https://minio-{13...16}.example.com:9000/mnt/disk{1...4}/minio │ 0 TiB (used) / 500 TiB (total) │ Active │
|
||||
└─────┴────────────────────────────────────────────────────────────────┴──────────────────────────────────┴──────────┘
|
||||
|
||||
You can retrieve more detailed information by specifying the description of the server pool to the command:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin decommission status myminio https://minio-{01...04}.example.com:9000/mnt/disk{1...4}/minio
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
Decommissioning rate at 100MiB/sec [1TiB/10TiB]
|
||||
Started: 30 minutes ago
|
||||
|
||||
:mc-cmd:`mc admin decommission status` marks the :guilabel:`Status` as :guilabel:`Complete` once decommissioning is completed.
|
||||
You can move on to the next step once MinIO completes decommissioning for all pools.
|
||||
|
||||
If :guilabel:`Status` reads as failed, you can re-run the :mc-cmd:`mc admin decommission start` command to resume the process.
|
||||
For persistent failures, use :mc:`mc admin logs` or review the ``systemd`` logs (e.g. ``journalctl -u minio``) to identify more specific errors.
|
||||
|
||||
4) Remove the Decommissioned Pools from the Deployment Configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once decommissioning completes, you can safely remove the pools from the deployment configuration.
|
||||
Modify the startup command for each remaining MinIO server in the deployment and remove the decommissioned pool.
|
||||
|
||||
The ``.deb`` or ``.rpm`` packages install a `systemd <https://www.freedesktop.org/wiki/Software/systemd/>`__ service file to ``/lib/systemd/system/minio.service``.
|
||||
For binary installations, this procedure assumes the file was created manually as per the :ref:`deploy-minio-distributed` procedure.
|
||||
|
||||
The ``minio.service`` file uses an environment file located at ``/etc/default/minio`` for sourcing configuration settings, including the startup command.
Specifically, the ``MINIO_VOLUMES`` variable sets the hosts and drives the server uses at startup:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
cat /etc/default/minio | grep "MINIO_VOLUMES"
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
MINIO_VOLUMES="https://minio-{1...4}.example.net:9000/mnt/disk{1...4}/minio https://minio-{5...8}.example.net:9000/mnt/disk{1...4}/minio https://minio-{9...12}.example.net:9000/mnt/disk{1...4}/minio https://minio-{13...16}.example.net:9000/mnt/disk{1...4}/minio"
|
||||
|
||||
Edit the environment file and remove the decommissioned pools from the ``MINIO_VOLUMES`` value.
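For example, after decommissioning the first and second pools in the value shown above, the edited variable would resemble the following sketch (keep only the pools that remain in your deployment):

.. code-block:: shell

   MINIO_VOLUMES="https://minio-{9...12}.example.net:9000/mnt/disk{1...4}/minio https://minio-{13...16}.example.net:9000/mnt/disk{1...4}/minio"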
|
||||
|
||||
5) Update Network Control Plane
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Update any load balancers, reverse proxies, or other network control planes to remove the decommissioned server pools from the connection configuration for the MinIO deployment.
|
||||
|
||||
Specific instructions for configuring network control plane components is out of scope for this procedure.
|
||||
|
||||
6) Restart the MinIO Deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Issue the following commands on each node in the deployment **simultaneously** to restart the MinIO service:
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-restart-service-desc
|
||||
:end-before: end-install-minio-restart-service-desc
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-nondisruptive-upgrade-desc
|
||||
:end-before: end-nondisruptive-upgrade-desc
|
||||
|
||||
Once the deployment is online, use :mc:`mc admin info` to confirm the uptime of all remaining servers in the deployment.
|
@@ -1,72 +0,0 @@
|
||||
.. _minio-k8s-delete-minio-tenant:
|
||||
|
||||
=====================
|
||||
Delete a MinIO Tenant
|
||||
=====================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
MinIO Kubernetes Operator
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The procedures on this page *require* a valid installation of the MinIO Kubernetes Operator and assume the local host has a matching installation of the MinIO Kubernetes Operator.
|
||||
This procedure assumes the latest stable Operator, version |operator-version-stable|.
|
||||
|
||||
See :ref:`deploy-operator-kubernetes` for complete documentation on deploying the MinIO Operator.
|
||||
|
||||
|
||||
Tenant Persistent Volume Claims
|
||||
-------------------------------
|
||||
|
||||
The delete behavior of each Persistent Volume Claim (``PVC``) generated by the Tenant depends on the :kube-docs:`Reclaim Policy <concepts/storage/persistent-volumes/#reclaim-policy>` of its bound Persistent Volume (``PV``):
|
||||
|
||||
- For ``recycle`` or ``delete`` policies, the command deletes the ``PVC``.
|
||||
|
||||
- For ``retain``, the command retains the ``PVC``.
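As a sketch, you can review the claims in the Tenant namespace and the reclaim policy of their bound volumes before deleting anything (assuming the Tenant runs in the namespace ``TENANT-NAMESPACE``):

.. code-block:: shell
   :class: copyable

   kubectl get pvc --namespace TENANT-NAMESPACE
   kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name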
|
||||
|
||||
.. warning::
|
||||
|
||||
Deletion of the underlying ``PV``, whether automatic or manual, results in the loss of any objects stored on the MinIO Tenant.
|
||||
|
||||
Perform all due diligence in ensuring the safety of stored data *prior* to deleting the Tenant.
|
||||
|
||||
Procedure
|
||||
---------
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: Kustomization
|
||||
|
||||
You can delete a Kustomization-installed Tenant by deleting the namespace:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
kubectl delete namespace TENANT-NAMESPACE
|
||||
|
||||
Replace ``TENANT-NAMESPACE`` with the name of the namespace to remove.
|
||||
|
||||
.. important::
|
||||
|
||||
Ensure you have specified the correct namespace for removal before running the command.
|
||||
Namespace removal occurs at the Kubernetes layer, such that the MinIO Operator cannot interfere with nor undo the operation.
|
||||
|
||||
.. tab-item:: Helm
|
||||
|
||||
You can delete a Helm-installed Tenant by using the ``helm uninstall`` command:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
helm uninstall --namespace TENANT-NAMESPACE TENANT-NAME minio-operator/tenant
|
||||
|
||||
The command above assumes use of the MinIO Operator Chart repository.
|
||||
If you installed the Chart manually or by using a different repository name, specify that chart or name in the command.
|
||||
|
||||
Replace ``TENANT-NAME`` and ``TENANT-NAMESPACE`` with the name and namespace of the Tenant respectively.
|
||||
You can use ``helm list -n TENANT-NAMESPACE`` to validate the Tenant name.
|
@@ -1,330 +0,0 @@
|
||||
.. _deploy-minio-distributed:
|
||||
.. _minio-mnmd:
|
||||
|
||||
====================================
|
||||
Deploy MinIO: Multi-Node Multi-Drive
|
||||
====================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
The procedures on this page cover deploying MinIO in a Multi-Node Multi-Drive (MNMD) or "Distributed" configuration.
|
||||
|MNMD| deployments provide enterprise-grade performance, availability, and scalability and are the recommended topology for all production workloads.
|
||||
|
||||
|MNMD| deployments support :ref:`erasure coding <minio-ec-parity>` configurations which tolerate the loss of up to half the nodes or drives in the deployment while continuing to serve read operations.
|
||||
Use the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator?ref=docs>`__ when planning and designing your MinIO deployment to explore the effect of erasure code settings on your intended topology.
|
||||
|
||||
.. _deploy-minio-distributed-prereqs:
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Networking and Firewalls
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Each node should have full bidirectional network access to every other node in
|
||||
the deployment. For containerized or orchestrated infrastructures, this may
|
||||
require specific configuration of networking and routing components such as
|
||||
ingress or load balancers. Certain operating systems may also require setting
|
||||
firewall rules. For example, the following command explicitly opens the default
|
||||
MinIO server API port ``9000`` on servers using ``firewalld``:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
firewall-cmd --permanent --zone=public --add-port=9000/tcp
|
||||
firewall-cmd --reload
|
||||
|
||||
All MinIO servers in the deployment *must* use the same listen port.
|
||||
|
||||
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
|
||||
you must *also* grant access to that port to ensure connectivity from external
|
||||
clients.
|
||||
|
||||
MinIO **strongly recommends** using a load balancer to manage connectivity to the
cluster. The load balancer should use a "Least Connections" algorithm for
routing requests to the MinIO deployment, since any MinIO node in the deployment
can receive, route, or process client requests.
|
||||
|
||||
The following load balancers are known to work well with MinIO:
|
||||
|
||||
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
|
||||
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
|
||||
|
||||
Configuring firewalls or load balancers to support MinIO is out of scope for
|
||||
this procedure.
|
||||
The :ref:`integrations-nginx-proxy` reference provides a baseline configuration for using NGINX as a reverse proxy with basic load balancing configured.
|
||||
|
||||
Sequential Hostnames
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential series of MinIO hosts when creating a server pool.
|
||||
MinIO supports using either a sequential series of hostnames *or* IP addresses to represent each :mc:`minio server` process in the deployment.
|
||||
|
||||
This procedure assumes use of sequential hostnames due to the lower overhead of management, especially in larger distributed clusters.
|
||||
|
||||
Create the necessary DNS hostname mappings *prior* to starting this procedure.
|
||||
For example, the following hostnames would support a 4-node distributed deployment:
|
||||
|
||||
- ``minio-01.example.com``
|
||||
- ``minio-02.example.com``
|
||||
- ``minio-03.example.com``
|
||||
- ``minio-04.example.com``
|
||||
|
||||
You can specify the entire range of hostnames using the expansion notation ``minio-0{1...4}.example.com``.
|
||||
|
||||
.. dropdown:: Non-Sequential Hostnames or IP Addresses
|
||||
|
||||
MinIO does not support non-sequential hostnames or IP addresses for distributed deployments.
|
||||
You can instead use ``/etc/hosts`` on each node to set a simple DNS scheme that supports expansion notation.
|
||||
For example:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
# /etc/hosts
|
||||
|
||||
192.0.2.10    minio-01.example.net
198.51.100.3  minio-02.example.net
192.0.2.43    minio-03.example.net
198.51.100.12 minio-04.example.net
|
||||
|
||||
The above hosts configuration supports expansion notation of ``minio-0{1...4}.example.net``, mapping the sequential hostnames to the desired IP addresses.
|
||||
|
||||
.. _deploy-minio-distributed-prereqs-storage:
|
||||
|
||||
Storage Requirements
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. |deployment| replace:: deployment
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-storage-requirements-desc
|
||||
:end-before: end-storage-requirements-desc
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
Memory Requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. versionchanged:: RELEASE.2024-01-28T22-35-53Z
|
||||
|
||||
MinIO pre-allocates 2GiB of system memory at startup.
|
||||
|
||||
MinIO recommends a *minimum* of 32GiB of memory per host.
|
||||
See :ref:`minio-hardware-checklist-memory` for more guidance on memory allocation in MinIO.
|
||||
|
||||
Time Synchronization
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Multi-node systems must maintain synchronized time and date to maintain stable internode operations and interactions.
|
||||
Make sure all nodes sync to the same time server regularly.
|
||||
Operating systems vary for methods used to synchronize time and date, such as with ``ntp``, ``timedatectl``, or ``timesyncd``.
|
||||
|
||||
Check the documentation for your operating system for how to set up and maintain accurate and identical system clock times across nodes.
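For example, on hosts that use ``systemd-timesyncd``, the following command reports whether the system clock is synchronized (a sketch; your environment may use a different time service):

.. code-block:: shell
   :class: copyable

   timedatectl status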
|
||||
|
||||
Considerations
|
||||
--------------
|
||||
|
||||
Erasure Coding Parity
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO :ref:`erasure coding <minio-erasure-coding>` is a data redundancy and availability feature that allows MinIO deployments to automatically reconstruct objects on-the-fly despite the loss of multiple drives or nodes in the cluster.
|
||||
|
||||
MinIO defaults to ``EC:4``, or 4 parity blocks per :ref:`erasure set <minio-ec-erasure-set>`.
|
||||
You can set a custom parity level by setting the appropriate :ref:`MinIO Storage Class environment variable <minio-server-envvar-storage-class>`.
|
||||
Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in selecting the appropriate erasure code parity level for your cluster.
|
||||
|
||||
.. important::
|
||||
|
||||
While you can change erasure parity settings at any time, objects written with a given parity do **not** update to the new parity settings.
|
||||
MinIO only applies the changed parity to newly written objects.
|
||||
Existing objects retain the parity value in place at the time of their creation.
|
||||
|
||||
Capacity-Based Planning
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO recommends planning storage capacity sufficient to store **at least** 2 years of data before reaching 70% usage.
|
||||
Performing :ref:`server pool expansion <expand-minio-distributed>` more frequently or on a "just-in-time" basis generally indicates an architecture or planning issue.
|
||||
|
||||
For example, consider an application suite expected to produce at least 100 TiB of data per year and a 3 year target before expansion.
|
||||
By ensuring the deployment has ~500TiB of usable storage up front, the cluster can safely meet the 70% threshold with additional buffer for growth in data storage output per year.
|
||||
|
||||
Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity.
|
||||
Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.
|
||||
|
||||
Recommended Operating Systems
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
This tutorial assumes all hosts running MinIO use a
|
||||
:ref:`recommended Linux operating system <minio-installation-platform-support>`
|
||||
such as RHEL8+ or Ubuntu 18.04+.
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
This tutorial assumes all hosts running MinIO use a non-EOL macOS version (10.14+).
|
||||
|
||||
.. cond:: Windows
|
||||
|
||||
This tutorial assumes all hosts running MinIO use a non-EOL Windows distribution.
|
||||
|
||||
Support for running distributed MinIO deployments on Windows is *experimental*.
|
||||
|
||||
Pre-Existing Data
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
When starting a new MinIO server in a distributed environment, the storage devices must not have existing data.
|
||||
|
||||
Once you start the MinIO server, all interactions with the data must be done through the S3 API.
|
||||
Use the :ref:`MinIO Client <minio-client>`, the :ref:`MinIO Console <minio-console>`, or one of the MinIO :ref:`Software Development Kits <minio-drivers>` to work with the buckets and objects.
|
||||
|
||||
.. warning::
|
||||
|
||||
Modifying files on the backend drives can result in data corruption or data loss.
|
||||
|
||||
.. _deploy-minio-distributed-baremetal:
|
||||
|
||||
Deploy Distributed MinIO
|
||||
------------------------
|
||||
|
||||
The following procedure creates a new distributed MinIO deployment consisting
|
||||
of a single :ref:`Server Pool <minio-intro-server-pool>`.
|
||||
|
||||
All commands provided below use example values. Replace these values with
|
||||
those appropriate for your deployment.
|
||||
|
||||
Review the :ref:`deploy-minio-distributed-prereqs` before starting this
|
||||
procedure.
|
||||
|
||||
1) Install the MinIO Binary on Each Node
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-binary-desc
|
||||
:end-before: end-install-minio-binary-desc
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
.. include:: /includes/macos/common-installation.rst
|
||||
:start-after: start-install-minio-binary-desc
|
||||
:end-before: end-install-minio-binary-desc
|
||||
|
||||
2) Create the ``systemd`` Service File
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-systemd-desc
|
||||
:end-before: end-install-minio-systemd-desc
|
||||
|
||||
3) Create the Service Environment File
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create an environment file at ``/etc/default/minio``. The MinIO
|
||||
service uses this file as the source of all
|
||||
:ref:`environment variables <minio-server-environment-variables>` used by
|
||||
MinIO *and* the ``minio.service`` file.
|
||||
|
||||
The following example assumes that:
|
||||
|
||||
- The deployment has a single server pool consisting of four MinIO server hosts
|
||||
with sequential hostnames.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
minio1.example.com minio3.example.com
|
||||
minio2.example.com minio4.example.com
|
||||
|
||||
- All hosts have four locally-attached drives with sequential mount-points:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
/mnt/disk1/minio /mnt/disk3/minio
|
||||
/mnt/disk2/minio /mnt/disk4/minio
|
||||
|
||||
- The deployment has a load balancer running at ``https://minio.example.net``
|
||||
that manages connections across all four MinIO hosts.
|
||||
|
||||
Modify the example to reflect your deployment topology:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
# Set the hosts and volumes MinIO uses at startup
|
||||
# The command uses MinIO expansion notation {x...y} to denote a
|
||||
# sequential series.
|
||||
#
|
||||
# The following example covers four MinIO hosts
|
||||
# with 4 drives each at the specified hostname and drive locations.
|
||||
# The command includes the port that each MinIO server listens on
|
||||
# (default 9000)
|
||||
|
||||
MINIO_VOLUMES="https://minio{1...4}.example.net:9000/mnt/disk{1...4}/minio"
|
||||
|
||||
# Set all MinIO server options
|
||||
#
|
||||
# The following explicitly sets the MinIO Console listen address to
|
||||
# port 9001 on all network interfaces. The default behavior is dynamic
|
||||
# port selection.
|
||||
|
||||
MINIO_OPTS="--console-address :9001"
|
||||
|
||||
# Set the root username. This user has unrestricted permissions to
|
||||
# perform S3 and administrative API operations on any resource in the
|
||||
# deployment.
|
||||
#
|
||||
# Defer to your organization's requirements for the superadmin user name.
|
||||
|
||||
MINIO_ROOT_USER=minioadmin
|
||||
|
||||
# Set the root password
|
||||
#
|
||||
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
|
||||
|
||||
MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
|
||||
|
||||
You may specify other :ref:`environment variables
<minio-server-environment-variables>` or server command-line options as required
by your deployment. All MinIO nodes in the deployment should include the same
environment variables with the same values for each variable.
|
||||
|
||||
4) Add TLS/SSL Certificates
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-install-minio-tls-desc
|
||||
:end-before: end-install-minio-tls-desc
|
||||
|
||||
5) Run the MinIO Server Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Issue the following commands on each node in the deployment to start the
|
||||
MinIO service:
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-start-service-desc
|
||||
:end-before: end-install-minio-start-service-desc
|
||||
|
||||
6) Open the MinIO Console
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-install-minio-console-desc
|
||||
:end-before: end-install-minio-console-desc
|
||||
|
||||
7) Next Steps
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
- Create an :ref:`alias <minio-mc-alias>` for accessing the deployment using
|
||||
:mc:`mc`.
|
||||
|
||||
- :ref:`Create users and policies to control access to the deployment
|
||||
<minio-authentication-and-identity-management>`.
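As a sketch of the first step, the following creates an alias against the load balancer endpoint assumed in this procedure (``https://minio.example.net``) using the root credentials from the environment file, then verifies connectivity:

.. code-block:: shell
   :class: copyable

   mc alias set myminio https://minio.example.net minioadmin minio-secret-key-CHANGE-ME
   mc admin info myminio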
|
@@ -1,67 +0,0 @@
|
||||
.. _minio-snmd:
|
||||
|
||||
=====================================
|
||||
Deploy MinIO: Single-Node Multi-Drive
|
||||
=====================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 2
|
||||
|
||||
The procedures on this page cover deploying MinIO in a Single-Node Multi-Drive (SNMD) configuration.
|
||||
|SNMD| deployments provide drive-level reliability and failover/recovery with performance and scaling limitations imposed by the single node.
|
||||
|
||||
.. cond:: linux or macos or windows
|
||||
|
||||
For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology for enterprise-grade performance, availability, and scalability.
|
||||
|
||||
.. cond:: container
|
||||
|
||||
For production environments, MinIO strongly recommends using the MinIO Kubernetes Operator to deploy Multi-Node Multi-Drive (MNMD) or "Distributed" Tenants.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Storage Requirements
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. |deployment| replace:: deployment
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-storage-requirements-desc
|
||||
:end-before: end-storage-requirements-desc
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
Memory Requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. versionchanged:: RELEASE.2024-01-28T22-35-53Z
|
||||
|
||||
MinIO pre-allocates 2GiB of system memory at startup.
|
||||
|
||||
MinIO recommends a *minimum* of 32GiB of memory per host.
|
||||
See :ref:`minio-hardware-checklist-memory` for more guidance on memory allocation in MinIO.
|
||||
|
||||
.. _deploy-minio-standalone-multidrive:
|
||||
|
||||
Deploy Single-Node Multi-Drive MinIO
|
||||
------------------------------------
|
||||
|
||||
The following procedure deploys MinIO consisting of a single MinIO server and multiple drives or storage volumes.
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. include:: /includes/linux/steps-deploy-minio-single-node-multi-drive.rst
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
.. include:: /includes/macos/steps-deploy-minio-single-node-multi-drive.rst
|
||||
|
||||
.. cond:: container
|
||||
|
||||
.. include:: /includes/container/steps-deploy-minio-single-node-multi-drive.rst
|
@@ -1,129 +0,0 @@
|
||||
.. _minio-snsd:
|
||||
|
||||
======================================
|
||||
Deploy MinIO: Single-Node Single-Drive
|
||||
======================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 2
|
||||
|
||||
The procedures on this page cover deploying MinIO in a Single-Node Single-Drive (SNSD) configuration for early development and evaluation.
|
||||
|SNSD| deployments use a zero-parity erasure coded backend that provides no added reliability or availability beyond what the underlying storage volume implements.
|
||||
These deployments are best suited for local testing and evaluation, or for small-scale data workloads that do not have availability or performance requirements.
|
||||
|
||||
.. cond:: container
|
||||
|
||||
For extended development or production environments in orchestrated environments, use the MinIO Kubernetes Operator to deploy a Tenant on multiple worker nodes.
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
For extended development or production environments, deploy MinIO in a :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology
|
||||
|
||||
.. important::
|
||||
|
||||
:minio-release:`RELEASE.2022-10-29T06-21-33Z` fully removes the `deprecated Gateway/Filesystem <https://blog.min.io/deprecation-of-the-minio-gateway/>`__ backends.
|
||||
MinIO returns an error if it starts up and detects existing Filesystem backend files.
|
||||
|
||||
To migrate from an FS-backend deployment, use :mc:`mc mirror` or :mc:`mc cp` to copy your data over to a new MinIO |SNSD| deployment.
|
||||
You should also recreate any necessary users, groups, policies, and bucket configurations on the |SNSD| deployment.
|
||||
|
||||
.. _minio-snsd-pre-existing-data:
|
||||
|
||||
Pre-Existing Data
|
||||
-----------------
|
||||
|
||||
MinIO startup behavior depends on the contents of the specified storage volume or path.
|
||||
The server checks for both MinIO-internal backend data and the structure of existing folders and files.
|
||||
The following table lists the possible storage volume states and MinIO behavior:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 40 60
|
||||
|
||||
* - Storage Volume State
|
||||
- Behavior
|
||||
|
||||
* - Empty with **no** files, folders, or MinIO backend data
|
||||
|
||||
- MinIO starts in |SNSD| mode and creates the zero-parity backend
|
||||
|
||||
* - Existing |SNSD| zero-parity objects and MinIO backend data
|
||||
- MinIO resumes in |SNSD| mode
|
||||
|
||||
* - Existing filesystem folders, files, but **no** MinIO backend data
|
||||
- MinIO returns an error and does not start
|
||||
|
||||
* - Existing filesystem folders, files, and legacy "FS-mode" backend data
|
||||
- MinIO returns an error and does not start
|
||||
|
||||
.. versionchanged:: RELEASE.2022-10-29T06-21-33Z
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Storage Requirements
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following requirements summarize the :ref:`minio-hardware-checklist-storage` section of MinIO's hardware recommendations:
|
||||
|
||||
Use Local Storage
|
||||
Direct-Attached Storage (DAS) has significant performance and consistency advantages over networked storage (:abbr:`NAS (Network Attached Storage)`, :abbr:`SAN (Storage Area Network)`, :abbr:`NFS (Network File System)`).
|
||||
MinIO strongly recommends flash storage (NVMe, SSD) for primary or "hot" data.
|
||||
|
||||
Use XFS-Formatting for Drives
|
||||
MinIO strongly recommends provisioning XFS formatted drives for storage.
|
||||
MinIO uses XFS as part of internal testing and validation suites, providing additional confidence in performance and behavior at all scales.
|
||||
|
||||
Persist Drive Mounting and Mapping Across Reboots
|
||||
Use ``/etc/fstab`` to ensure consistent drive-to-mount mapping across node reboots.
|
||||
|
||||
Non-Linux Operating Systems should use the equivalent drive mount management tool.
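The following is a minimal ``/etc/fstab`` sketch, assuming a single XFS-formatted drive with the hypothetical label ``MINIODRIVE1`` mounted at ``/mnt/disk1/minio``:

.. code-block:: shell

   # /etc/fstab entry for the MinIO data drive (hypothetical label and mount point)
   LABEL=MINIODRIVE1   /mnt/disk1/minio   xfs   defaults,noatime   0   2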
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
Memory Requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. versionchanged:: RELEASE.2024-01-28T22-35-53Z
|
||||
|
||||
MinIO pre-allocates 2GiB of system memory at startup.
|
||||
|
||||
MinIO recommends a *minimum* of 32GiB of memory per host.
|
||||
See :ref:`minio-hardware-checklist-memory` for more guidance on memory allocation in MinIO.
|
||||
|
||||
.. _deploy-minio-standalone:
|
||||
|
||||
Deploy Single-Node Single-Drive MinIO
|
||||
-------------------------------------
|
||||
|
||||
The following procedure deploys MinIO consisting of a single MinIO server and a single drive or storage volume.
|
||||
|
||||
.. admonition:: Network File System Volumes Break Consistency Guarantees
|
||||
:class: note
|
||||
|
||||
MinIO's strict **read-after-write** and **list-after-write** consistency
|
||||
model requires local drive filesystems.
|
||||
|
||||
MinIO cannot provide consistency guarantees if the underlying storage
|
||||
volumes are NFS or a similar network-attached storage volume.
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. include:: /includes/linux/steps-deploy-minio-single-node-single-drive.rst
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
.. include:: /includes/macos/steps-deploy-minio-single-node-single-drive.rst
|
||||
|
||||
.. cond:: container
|
||||
|
||||
.. include:: /includes/container/steps-deploy-minio-single-node-single-drive.rst
|
||||
|
||||
.. cond:: windows
|
||||
|
||||
.. include:: /includes/windows/steps-deploy-minio-single-node-single-drive.rst
|
|
||||
.. _deploy-tenant-helm:
|
||||
|
||||
======================================
|
||||
Deploy a MinIO Tenant with Helm Charts
|
||||
======================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
Overview
|
||||
--------
|
||||
|
||||
Helm is a tool for automating the deployment of applications to Kubernetes clusters.
|
||||
A `Helm chart <https://helm.sh/docs/topics/charts/>`__ is a set of YAML files, templates, and other files that define the deployment details.
|
||||
The following procedure uses a Helm Chart to deploy a Tenant managed by the MinIO Operator.
|
||||
|
||||
This procedure requires the Kubernetes cluster have a valid :ref:`Operator <deploy-operator-kubernetes>` deployment.
|
||||
You cannot use the MinIO Operator Tenant chart to deploy a Tenant independent of the Operator.
|
||||
|
||||
.. important::
|
||||
|
||||
The MinIO Operator Tenant Chart is *distinct* from the community-managed :minio-git:`MinIO Chart <minio/tree/master/helm/minio>`.
|
||||
|
||||
The Community Helm Chart is built, maintained, and supported by the community.
|
||||
MinIO does not guarantee support for any given bug, feature request, or update referencing that chart.
|
||||
|
||||
The :ref:`Operator Tenant Chart <minio-tenant-chart-values>` is officially maintained and supported by MinIO.
|
||||
MinIO strongly recommends the official Helm Chart for :ref:`Operator <minio-operator-chart-values>` and :ref:`Tenants <minio-tenant-chart-values>` for production environments.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
You must meet the following requirements to install a MinIO Tenant with Helm:
|
||||
|
||||
- An existing Kubernetes cluster
|
||||
- The ``kubectl`` CLI tool installed on your local host, with a version matching the cluster.
|
||||
- `Helm <https://helm.sh/docs/intro/install/>`__ version 3.8 or greater.
|
||||
- `yq <https://github.com/mikefarah/yq/#install>`__ version 4.18.1 or greater.
|
||||
- An existing :ref:`MinIO Operator installation <deploy-operator-kubernetes>`.
|
||||
|
||||
This procedure assumes your Kubernetes cluster access grants you broad administrative permissions.
|
||||
|
||||
For more about Tenant installation requirements, including supported Kubernetes versions and TLS certificates, see the :ref:`Tenant deployment prerequisites <deploy-minio-distributed-prereqs-storage>`.
|
||||
|
||||
This procedure assumes familiarity with the referenced Kubernetes concepts and utilities.
|
||||
While this documentation may provide guidance for configuring or deploying Kubernetes-related resources on a best-effort basis, it is not a replacement for the official :kube-docs:`Kubernetes Documentation <>`.
|
||||
|
||||
Namespace
|
||||
~~~~~~~~~
|
||||
|
||||
The tenant must use its own namespace and cannot share a namespace with another tenant.
|
||||
In addition, MinIO strongly recommends using a dedicated namespace for the tenant with no other applications running in the namespace.
|
||||
|
||||
.. _deploy-tenant-helm-repo:
|
||||
|
||||
Deploy a MinIO Tenant using Helm Charts
|
||||
---------------------------------------
|
||||
|
||||
The following procedure deploys a MinIO Tenant using the MinIO Operator Chart Repository.
|
||||
This method supports a simplified installation path compared to the :ref:`local chart installation <deploy-tenant-helm-local>`.
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
If you use Helm to deploy a MinIO Tenant, you must use Helm to manage or upgrade that deployment.
|
||||
Do not use ``kubectl krew``, Kustomize, or similar methods to manage or upgrade the MinIO Tenant.
|
||||
|
||||
This procedure is not exhaustive of all possible configuration options available in the :ref:`Tenant Chart <minio-tenant-chart-values>`.
|
||||
It provides a baseline from which you can modify and tailor the Tenant to your requirements.
|
||||
|
||||
.. container:: procedure
|
||||
|
||||
#. Verify your MinIO Operator Repo Configuration
|
||||
|
||||
MinIO maintains a Helm-compatible repository at https://operator.min.io.
|
||||
If the repository does not already exist in your local Helm configuration, add it before continuing:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm repo add minio-operator https://operator.min.io
|
||||
|
||||
You can validate the repo contents using ``helm search``:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm search repo minio-operator
|
||||
|
||||
The response should resemble the following:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
:substitutions:
|
||||
|
||||
NAME CHART VERSION APP VERSION DESCRIPTION
|
||||
minio-operator/minio-operator 4.3.7 v4.3.7 A Helm chart for MinIO Operator
|
||||
minio-operator/operator |operator-version-stable| v|operator-version-stable| A Helm chart for MinIO Operator
|
||||
minio-operator/tenant |operator-version-stable| v|operator-version-stable| A Helm chart for MinIO Operator
|
||||
|
||||
#. Create a local copy of the Helm ``values.yaml`` for modification
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
curl -sLo values.yaml https://raw.githubusercontent.com/minio/operator/master/helm/tenant/values.yaml
|
||||
|
||||
Open the ``values.yaml`` object in your preferred text editor.
|
||||
|
||||
#. Configure the Tenant topology
|
||||
|
||||
The following fields share the ``tenant.pools[0]`` prefix and control the number of servers, volumes per server, and storage class of all pods deployed in the Tenant:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``servers``
|
||||
- The number of MinIO pods to deploy in the Server Pool.
|
||||
|
||||
* - ``volumesPerServer``
|
||||
- The number of persistent volumes to attach to each MinIO pod (``servers``).
|
||||
The Operator generates ``volumesPerServer x servers`` Persistent Volume Claims for the Tenant.
|
||||
|
||||
* - ``storageClassName``
|
||||
- The Kubernetes storage class to associate with the generated Persistent Volume Claims.
|
||||
|
||||
If no storage class exists matching the specified value *or* if the specified storage class cannot meet the requested number of PVCs or storage capacity, the Tenant may fail to start.
|
||||
|
||||
* - ``size``
|
||||
- The amount of storage to request for each generated PVC.
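For example, the following sketch uses ``yq`` (v4) to set a hypothetical 4-node, 4-drives-per-node topology in your local ``values.yaml``; the sizes and the ``local-storage`` class name are placeholders for your cluster's values:

.. code-block:: shell
   :class: copyable

   # Example topology: 4 servers x 4 volumes of 1Ti each
   yq -i '.tenant.pools[0].servers = 4' values.yaml
   yq -i '.tenant.pools[0].volumesPerServer = 4' values.yaml
   yq -i '.tenant.pools[0].size = "1Ti"' values.yaml
   yq -i '.tenant.pools[0].storageClassName = "local-storage"' values.yaml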
|
||||
|
||||
#. Configure Tenant Affinity or Anti-Affinity
|
||||
|
||||
The Tenant Chart supports the following Kubernetes Selector, Affinity and Anti-Affinity configurations:
|
||||
|
||||
- Node Selector (``tenant.nodeSelector``)
|
||||
- Node/Pod Affinity or Anti-Affinity (``spec.pools[n].affinity``)
|
||||
|
||||
MinIO recommends configuring Tenants with Pod Anti-Affinity to ensure that the Kubernetes scheduler does not schedule multiple pods on the same worker node.
|
||||
|
||||
If you have specific worker nodes on which you want to deploy the tenant, pass those node labels or filters to the ``nodeSelector`` or ``affinity`` field to constrain the scheduler to place pods on those nodes.
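For example, the following sketch constrains Tenant pods to worker nodes carrying a hypothetical ``minio-storage=true`` label:

.. code-block:: shell
   :class: copyable

   # Assumes the target worker nodes are already labeled, for example with:
   #   kubectl label node <node-name> minio-storage=true
   yq -i '.tenant.nodeSelector = {"minio-storage": "true"}' values.yaml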
|
||||
|
||||
#. Configure Network Encryption
|
||||
|
||||
The MinIO Tenant CRD provides the following fields with which you can configure tenant TLS network encryption:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``tenant.certificate.requestAutoCert``
|
||||
- Enable or disable MinIO :ref:`automatic TLS certificate generation <minio-tls>`.
|
||||
|
||||
Defaults to ``true`` or enabled if omitted.
|
||||
|
||||
* - ``tenant.certificate.certConfig``
|
||||
- Customize the behavior of :ref:`automatic TLS <minio-tls>`, if enabled.
|
||||
|
||||
* - ``tenant.certificate.externalCertSecret``
|
||||
- Enable TLS for multiple hostnames via Server Name Indication (SNI).
|
||||
|
||||
Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` or ``cert-manager``.
|
||||
|
||||
* - ``tenant.certificate.externalCACertSecret``
|
||||
- Enable validation of client TLS certificates signed by unknown, third-party, or internal Certificate Authorities (CA).
|
||||
|
||||
Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` containing the full chain of CA certificates for a given authority.
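For example, the following sketch disables automatic certificate generation and instead references a hypothetical pre-existing ``kubernetes.io/tls`` secret named ``my-tenant-tls``:

.. code-block:: shell
   :class: copyable

   yq -i '.tenant.certificate.requestAutoCert = false' values.yaml
   yq -i '.tenant.certificate.externalCertSecret = [{"name": "my-tenant-tls", "type": "kubernetes.io/tls"}]' values.yaml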
|
||||
|
||||
#. Configure MinIO Environment Variables
|
||||
|
||||
You can set MinIO Server environment variables using the ``tenant.configuration`` field.
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``tenant.configuration``
|
||||
- Specify a Kubernetes opaque secret whose data payload ``config.env`` contains each MinIO environment variable you want to set.
|
||||
|
||||
The ``config.env`` data payload **must** be a base64-encoded string.
|
||||
You can create a local file, set your environment variables, and then use ``cat LOCALFILE | base64`` to create the payload.
|
||||
|
||||
The YAML includes an object ``kind: Secret`` with ``metadata.name: storage-configuration`` that sets the root username, password, erasure parity settings, and enables Tenant Console.
|
||||
|
||||
Modify this as needed to reflect your Tenant requirements.
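For example, the following sketch builds a ``config.env`` payload and prints its base64 encoding; the credentials shown are placeholders, and ``base64 -w0`` assumes GNU coreutils (omit ``-w0`` on macOS):

.. code-block:: shell
   :class: copyable

   cat > config.env <<EOF
   export MINIO_ROOT_USER=minio
   export MINIO_ROOT_PASSWORD=CHANGE-ME-LONG-RANDOM-VALUE
   export MINIO_BROWSER=on
   EOF

   # Paste the output into the Secret's data.config.env field
   cat config.env | base64 -w0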
|
||||
|
||||
#. Deploy the Tenant
|
||||
|
||||
Use ``helm`` to install the Tenant Chart using your ``values.yaml`` as an override:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm install \
|
||||
--namespace TENANT-NAMESPACE \
|
||||
--create-namespace \
|
||||
--values values.yaml \
|
||||
TENANT-NAME minio-operator/tenant
|
||||
|
||||
You can monitor the progress using the following command:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
watch kubectl get all -n TENANT-NAMESPACE
|
||||
|
||||
#. Expose the Tenant MinIO S3 API port
|
||||
|
||||
To test the MinIO Client :mc:`mc` from your local machine, forward the MinIO port and create an alias.
|
||||
|
||||
* Forward the Tenant's MinIO port:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl port-forward svc/TENANT-NAME-hl 9000 -n TENANT-NAMESPACE
|
||||
|
||||
* Create an alias for the Tenant service:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc alias set myminio https://localhost:9000 minio minio123 --insecure
|
||||
|
||||
You can use :mc:`mc mb` to create a bucket on the Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc mb myminio/mybucket --insecure
|
||||
|
||||
If you deployed your MinIO Tenant using TLS certificates minted by a trusted Certificate Authority (CA) you can omit the ``--insecure`` flag.
|
||||
|
||||
See :ref:`create-tenant-connect-tenant` for additional documentation on external connectivity to the Tenant.
|
||||
|
||||
.. _deploy-tenant-helm-local:
|
||||
|
||||
Deploy a Tenant using a Local Helm Chart
|
||||
----------------------------------------
|
||||
|
||||
The following procedure deploys a Tenant using a local copy of the Helm Charts.
|
||||
This method may support easier pre-configuration of the Tenant compared to the :ref:`repo-based installation <deploy-tenant-helm-repo>`.
|
||||
|
||||
#. Download the Helm charts
|
||||
|
||||
On your local host, download the Tenant Helm charts to a convenient directory:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
:substitutions:
|
||||
|
||||
curl -O https://raw.githubusercontent.com/minio/operator/master/helm-releases/tenant-|operator-version-stable|.tgz
|
||||
|
||||
The chart contains a ``values.yaml`` file you can customize to suit your needs.
|
||||
For details on the options available in the MinIO Tenant ``values.yaml``, see :ref:`minio-tenant-chart-values`.
|
||||
|
||||
Open the ``values.yaml`` object in your preferred text editor.
|
||||
|
||||
#. Configure the Tenant topology
|
||||
|
||||
The following fields share the ``tenant.pools[0]`` prefix and control the number of servers, volumes per server, and storage class of all pods deployed in the Tenant:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``servers``
|
||||
- The number of MinIO pods to deploy in the Server Pool.
|
||||
* - ``volumesPerServer``
|
||||
- The number of persistent volumes to attach to each MinIO pod (``servers``).
|
||||
The Operator generates ``volumesPerServer x servers`` Persistent Volume Claims for the Tenant.
|
||||
* - ``storageClassName``
|
||||
- The Kubernetes storage class to associate with the generated Persistent Volume Claims.
|
||||
|
||||
If no storage class exists matching the specified value *or* if the specified storage class cannot meet the requested number of PVCs or storage capacity, the Tenant may fail to start.
|
||||
|
||||
* - ``size``
|
||||
- The amount of storage to request for each generated PVC.
|
||||
|
||||
#. Configure Tenant Affinity or Anti-Affinity
|
||||
|
||||
The Tenant Chart supports the following Kubernetes Selector, Affinity and Anti-Affinity configurations:
|
||||
|
||||
- Node Selector (``tenant.nodeSelector``)
|
||||
- Node/Pod Affinity or Anti-Affinity (``spec.pools[n].affinity``)
|
||||
|
||||
MinIO recommends configuring Tenants with Pod Anti-Affinity to ensure that the Kubernetes scheduler does not schedule multiple pods on the same worker node.
|
||||
|
||||
If you have specific worker nodes on which you want to deploy the tenant, pass those node labels or filters to the ``nodeSelector`` or ``affinity`` field to constrain the scheduler to place pods on those nodes.
|
||||
|
||||
#. Configure Network Encryption
|
||||
|
||||
The MinIO Tenant CRD provides the following fields from which you can configure tenant TLS network encryption:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``tenant.certificate.requestAutoCert``
|
||||
- Enables or disables MinIO :ref:`automatic TLS certificate generation <minio-tls>`
|
||||
|
||||
* - ``tenant.certificate.certConfig``
|
||||
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
|
||||
Requires ``spec.requestAutoCert: true``
|
||||
|
||||
* - ``tenant.certificate.externalCertSecret``
|
||||
- Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` or ``cert-manager``.
|
||||
MinIO uses these certificates for performing TLS handshakes based on hostname (Server Name Indication).
|
||||
|
||||
* - ``tenant.certificate.externalCACertSecret``
|
||||
- Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` with the Certificate Authority (CA) chains which the Tenant must trust for allowing client TLS connections.
|
||||
|
||||
#. Configure MinIO Environment Variables
|
||||
|
||||
You can set MinIO Server environment variables using the ``tenant.configuration`` field.
|
||||
|
||||
The field must specify a Kubernetes opaque secret whose data payload ``config.env`` contains each MinIO environment variable you want to set.
|
||||
|
||||
The YAML includes an object ``kind: Secret`` with ``metadata.name: storage-configuration`` that sets the root username, password, erasure parity settings, and enables Tenant Console.
|
||||
|
||||
Modify this as needed to reflect your Tenant requirements.
|
||||
|
||||
#. The following Helm command creates a MinIO Tenant using the standard chart:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
:substitutions:
|
||||
|
||||
helm install \
|
||||
--namespace TENANT-NAMESPACE \
|
||||
--create-namespace \
|
||||
TENANT-NAME tenant-|operator-version-stable|.tgz
|
||||
|
||||
To deploy more than one Tenant, create a Helm chart with the details of the new Tenant and repeat the deployment steps.
|
||||
Redeploying the same chart updates the previously deployed Tenant.
|
||||
|
||||
#. Expose the Tenant MinIO port
|
||||
|
||||
To test the MinIO Client :mc:`mc` from your local machine, forward the MinIO port and create an alias.
|
||||
|
||||
* Forward the Tenant's MinIO port:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl port-forward svc/TENANT-NAME-hl 9000 -n TENANT-NAMESPACE
|
||||
|
||||
* Create an alias for the Tenant service:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc alias set myminio https://localhost:9000 minio minio123 --insecure
|
||||
|
||||
This example uses the non-TLS ``myminio-hl`` service, which requires :std:option:`--insecure <mc.--insecure>`.
|
||||
|
||||
If you have a TLS cert configured, omit ``--insecure`` and use ``svc/minio`` instead.
|
||||
|
||||
You can use :mc:`mc mb` to create a bucket on the Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc mb myminio/mybucket --insecure
|
||||
|
||||
See :ref:`create-tenant-connect-tenant` for additional documentation on external connectivity to the Tenant.
|
|
||||
.. The following label handles links from content to distributed MinIO in K8s context
|
||||
.. _deploy-minio-distributed:
|
||||
|
||||
.. Redirect all references to tenant topologies here
|
||||
|
||||
.. _minio-snsd:
|
||||
.. _minio-snmd:
|
||||
.. _minio-mnmd:
|
||||
|
||||
.. _minio-k8s-deploy-minio-tenant:
|
||||
|
||||
=====================
|
||||
Deploy a MinIO Tenant
|
||||
=====================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
This procedure documents deploying a MinIO Tenant onto a stock Kubernetes cluster using either Kustomize or MinIO's Helm Charts.
|
||||
|
||||
.. screenshot temporarily removed
|
||||
|
||||
.. image:: /images/k8s/operator-dashboard.png
|
||||
:align: center
|
||||
:width: 70%
|
||||
:class: no-scaled-link
|
||||
:alt: MinIO Operator Console
|
||||
|
||||
|
||||
Deploying Single-Node topologies requires additional configurations not covered in this documentation.
|
||||
You can alternatively use a simple Kubernetes YAML object to describe a Single-Node topology for local testing and evaluation as necessary.
|
||||
MinIO does not recommend nor support single-node deployment topologies for production environments.
|
||||
|
||||
This documentation assumes familiarity with all referenced Kubernetes concepts, utilities, and procedures.
|
||||
While this documentation *may* provide guidance for configuring or deploying Kubernetes-related resources on a best-effort basis, it is not a replacement for the official :kube-docs:`Kubernetes Documentation <>`.
|
||||
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
MinIO Kubernetes Operator
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The procedures on this page *require* a valid installation of the MinIO Kubernetes Operator and assume the local host has a matching installation of the MinIO Kubernetes Operator.
This procedure assumes the latest stable Operator, version |operator-version-stable|.
|
||||
|
||||
See :ref:`deploy-operator-kubernetes` for complete documentation on deploying the MinIO Operator.
|
||||
|
||||
.. cond:: k8s and not (openshift or eks or gke or aks)
|
||||
|
||||
Kubernetes Version |k8s-floor|
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO tests |operator-version-stable| against a Kubernetes API floor of |k8s-floor|.
|
||||
MinIO **strongly recommends** maintaining Kubernetes infrastructure using `actively maintained Kubernetes API versions <https://kubernetes.io/releases/>`__.
|
||||
|
||||
|
||||
MinIO **strongly recommends** upgrading Kubernetes clusters running with `End-Of-Life API versions <https://kubernetes.io/releases/patch-releases/#non-active-branch-history>`__.
|
||||
|
||||
|
||||
.. cond:: openshift
|
||||
|
||||
Check Security Context Constraints
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The MinIO Operator deploys pods using the following default :kube-docs:`Security Context <tasks/configure-pod-container/security-context/>` per pod:
|
||||
|
||||
.. code-block:: yaml
|
||||
:class: copyable
|
||||
|
||||
securityContext:
|
||||
runAsUser: 1000
|
||||
runAsGroup: 1000
|
||||
runAsNonRoot: true
|
||||
fsGroup: 1000
|
||||
|
||||
Certain OpenShift :openshift-docs:`Security Context Constraints </authentication/managing-security-context-constraints.html>` limit the allowed UID or GID for a pod such that MinIO cannot deploy the Tenant successfully.
|
||||
Ensure that the Project in which the Operator deploys the Tenant has sufficient SCC settings that allow the default pod security context.
|
||||
You can alternatively modify the tenant security context settings during deployment.
|
||||
|
||||
The following command returns the optimal value for the securityContext:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
oc get namespace <namespace> \
|
||||
-o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}{"\n"}'
|
||||
|
||||
The command returns output similar to the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
1056560000/10000
|
||||
|
||||
Take note of this value before the slash for use in this procedure.
|
||||
|
||||
.. cond:: gke
|
||||
|
||||
GKE Cluster with Compute Engine Nodes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This procedure assumes an existing :abbr:`GKE (Google Kubernetes Engine)` cluster with a MinIO Operator installation and *at least* four Compute Engine nodes.
|
||||
The Compute Engine nodes should have matching machine types and configurations to ensure predictable performance with MinIO.
|
||||
|
||||
MinIO provides :ref:`hardware guidelines <deploy-minio-distributed-recommendations>` for selecting the appropriate Compute Engine instance class and size.
|
||||
MinIO strongly recommends selecting instances with support for local SSDs and *at least* 25Gbps egress bandwidth as a baseline for performance.
|
||||
|
||||
For more complete information on the available Compute Engine and Persistent Storage resources, see :gcp-docs:`Machine families resources and comparison guide <general-purpose-machines>` and :gcp-docs:`Persistent disks <disks>`.
|
||||
|
||||
.. cond:: eks
|
||||
|
||||
EKS Cluster with EBS-Optimized EC2 Nodes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This procedure assumes an existing :abbr:`EKS (Elastic Kubernetes Service)` cluster with *at least* four EC2 nodes.
|
||||
The EC2 nodes should have matching machine types and configurations to ensure predictable performance with MinIO.
|
||||
|
||||
MinIO provides :ref:`hardware guidelines <deploy-minio-distributed-recommendations>` for selecting the appropriate EC2 instance class and size.
|
||||
MinIO strongly recommends selecting EBS-optimized instances with *at least* 25Gbps Network bandwidth as a baseline for performance.
|
||||
|
||||
For more complete information on the available EC2 and EBS resources, see `EC2 Instance Types <https://aws.amazon.com/ec2/instance-types/>`__ and `EBS Volume Types <https://aws.amazon.com/ebs/volume-types/>`__.
|
||||
|subnet| customers should reach out to MinIO engineering as part of architecture planning for assistance in selecting the optimal instance and volume types for the target workload and performance goals.
|
||||
|
||||
.. cond:: aks
|
||||
|
||||
AKS Cluster with Azure Virtual Machines
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This procedure assumes an existing :abbr:`AKS (Azure Kubernetes Service)` cluster with *at least* four Azure virtual machines (VM).
|
||||
The Azure VMs should have matching machine types and configurations to ensure predictable performance with MinIO.
|
||||
|
||||
MinIO provides :ref:`hardware guidelines <deploy-minio-distributed-recommendations>` for selecting the appropriate Azure VM series and size.
|
||||
MinIO strongly recommends selecting VM instances with support for Premium SSDs and *at least* 25Gbps Network bandwidth as a baseline for performance.
|
||||
|
||||
For more complete information on Azure Virtual Machine types and Storage resources, see :azure-docs:`Sizes for virtual machines in Azure <virtual-machines/sizes>` and :azure-docs:`Azure managed disk types <virtual-machines/disks-types>`.
|
||||
|
||||
.. _deploy-minio-tenant-pv:
|
||||
|
||||
Persistent Volumes
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
.. cond:: not eks
|
||||
|
||||
MinIO can use any Kubernetes :kube-docs:`Persistent Volume (PV) <concepts/storage/persistent-volumes>` that supports the :kube-docs:`ReadWriteOnce <concepts/storage/persistent-volumes/#access-modes>` access mode.
|
||||
MinIO's consistency guarantees require the exclusive storage access that ``ReadWriteOnce`` provides.
|
||||
The Persistent Volume **must** exist prior to deploying the Tenant.
|
||||
|
||||
|
||||
Additionally, MinIO recommends setting a reclaim policy of ``Retain`` for the PVC :kube-docs:`StorageClass <concepts/storage/storage-classes>`.
|
||||
Where possible, configure the Storage Class, CSI, or other provisioner underlying the PV to format volumes as XFS to ensure best performance.
|
||||
|
||||
For Kubernetes clusters where nodes have Direct Attached Storage, MinIO strongly recommends using the `DirectPV CSI driver <https://min.io/directpv?ref=docs>`__.
|
||||
DirectPV provides a distributed persistent volume manager that can discover, format, mount, schedule, and monitor drives across Kubernetes nodes.
|
||||
DirectPV addresses the limitations of manually provisioning and monitoring :kube-docs:`local persistent volumes <concepts/storage/volumes/#local>`.
|
||||
|
||||
.. cond:: eks
|
||||
|
||||
MinIO Tenants on EKS must use the :github:`EBS CSI Driver <kubernetes-sigs/aws-ebs-csi-driver>` to provision the necessary underlying persistent volumes.
|
||||
MinIO strongly recommends using SSD-backed EBS volumes for best performance.
|
||||
MinIO strongly recommends deploying EBS-based PVs with the XFS filesystem.
|
||||
Create a StorageClass for the MinIO EBS PVs and set the ``csi.storage.k8s.io/fstype`` `parameter <https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md>`__ to ``xfs`` .
|
||||
For more information on EBS resources, see `EBS Volume Types <https://aws.amazon.com/ebs/volume-types/>`__.
|
||||
For more information on StorageClass Parameters, see `StorageClass Parameters <https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md>`__.
|
||||
|
||||
.. cond:: gke
|
||||
|
||||
MinIO Tenants on GKE should use the :gke-docs:`Compute Engine Persistent Disk CSI Driver <how-to/persistent-volumes/gce-pd-csi-driver>` to provision the necessary underlying persistent volumes.
|
||||
MinIO strongly recommends SSD-backed disk types for best performance.
|
||||
For more information on GKE disk types, see :gcp-docs:`Persistent Disks <disks>`.
|
||||
|
||||
.. cond:: aks
|
||||
|
||||
MinIO Tenants on AKS should use the :azure-docs:`Azure Disks CSI driver <azure-disk-csi>` to provision the necessary underlying persistent volumes.
|
||||
MinIO strongly recommends SSD-backed disk types for best performance.
|
||||
For more information on AKS disk types, see :azure-docs:`Azure disk types <virtual-machines/disk-types>`.
|
||||
|
||||
Namespace
|
||||
~~~~~~~~~
|
||||
|
||||
The tenant must use its own namespace and cannot share a namespace with another tenant.
|
||||
In addition, MinIO strongly recommends using a dedicated namespace for the tenant with no other applications running in the namespace.
|
||||
|
||||
.. _minio-k8s-deploy-minio-tenant-security:
|
||||
|
||||
Deploy a MinIO Tenant using Kustomize
|
||||
-------------------------------------
|
||||
|
||||
The following procedure uses ``kubectl -k`` to deploy a MinIO Tenant using the ``base`` Kustomization template in the :minio-git:`MinIO Operator Github repository <operator/tree/master/examples/kustomization/base>`.
|
||||
|
||||
You can select a different base or pre-built template from the :minio-git:`repository <operator/tree/master/examples/kustomization/>` as your starting point, or build your own Kustomization resources using the :ref:`MinIO Custom Resource Documentation <minio-operator-crd>`.
|
||||
|
||||
.. important::
|
||||
|
||||
If you use Kustomize to deploy a MinIO Tenant, you must use Kustomize to manage or upgrade that deployment.
|
||||
Do not use ``kubectl krew``, a Helm Chart, or similar methods to manage or upgrade the MinIO Tenant.
|
||||
|
||||
This procedure is not exhaustive of all possible configuration options available in the :ref:`Tenant CRD <minio-operator-crd>`.
|
||||
It provides a baseline from which you can modify and tailor the Tenant to your requirements.
|
||||
|
||||
.. container:: procedure
|
||||
|
||||
#. Create a YAML object for the Tenant
|
||||
|
||||
Use the ``kubectl kustomize`` command to produce a YAML file containing all Kubernetes resources necessary to deploy the ``base`` Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl kustomize https://github.com/minio/operator/examples/kustomization/base/ > tenant-base.yaml
|
||||
|
||||
The command creates a single YAML file with multiple objects separated by the ``---`` line.
|
||||
Open the file in your preferred editor.
|
||||
|
||||
The following steps reference each object based on its ``kind`` and ``metadata.name`` fields:
|
||||
|
||||
#. Configure the Tenant topology
|
||||
|
||||
The ``kind: Tenant`` object describes the MinIO Tenant.
|
||||
|
||||
The following fields share the ``spec.pools[0]`` prefix and control the number of servers, volumes per server, and storage class of all pods deployed in the Tenant:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``servers``
|
||||
- The number of MinIO pods to deploy in the Server Pool.
|
||||
|
||||
* - ``volumesPerServer``
|
||||
- The number of persistent volumes to attach to each MinIO pod (``servers``).
|
||||
The Operator generates ``volumesPerServer x servers`` Persistent Volume Claims for the Tenant.
|
||||
|
||||
* - ``volumeClaimTemplate.spec.storageClassName``
|
||||
- The Kubernetes storage class to associate with the generated Persistent Volume Claims.
|
||||
|
||||
If no storage class exists matching the specified value *or* if the specified storage class cannot meet the requested number of PVCs or storage capacity, the Tenant may fail to start.
|
||||
|
||||
* - ``volumeClaimTemplate.spec.resources.requests.storage``
|
||||
- The amount of storage to request for each generated PVC.
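For example, the following sketch uses ``yq`` (v4, which supports editing individual documents in a multi-document file) to modify the ``kind: Tenant`` object inside ``tenant-base.yaml`` in place; the topology values and the ``local-storage`` class name are placeholders:

.. code-block:: shell
   :class: copyable

   # Example topology: 4 servers x 4 volumes of 1Ti each
   yq -i '(select(.kind == "Tenant") | .spec.pools[0].servers) = 4' tenant-base.yaml
   yq -i '(select(.kind == "Tenant") | .spec.pools[0].volumesPerServer) = 4' tenant-base.yaml
   yq -i '(select(.kind == "Tenant") | .spec.pools[0].volumeClaimTemplate.spec.resources.requests.storage) = "1Ti"' tenant-base.yaml
   yq -i '(select(.kind == "Tenant") | .spec.pools[0].volumeClaimTemplate.spec.storageClassName) = "local-storage"' tenant-base.yaml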
|
||||
|
||||
#. Configure Tenant Affinity or Anti-Affinity
|
||||
|
||||
The MinIO Operator supports the following Kubernetes Affinity and Anti-Affinity configurations:
|
||||
|
||||
- Node Affinity (``spec.pools[n].nodeAffinity``)
|
||||
- Pod Affinity (``spec.pools[n].podAffinity``)
|
||||
- Pod Anti-Affinity (``spec.pools[n].podAntiAffinity``)
|
||||
|
||||
MinIO recommends configuring Tenants with Pod Anti-Affinity to ensure that the Kubernetes scheduler does not schedule multiple pods on the same worker node.
|
||||
|
||||
If you have specific worker nodes on which you want to deploy the tenant, pass those node labels or filters to the ``nodeAffinity`` field to constrain the scheduler to place pods on those nodes.
|
||||
|
||||
#. Configure Network Encryption
|
||||
|
||||
The MinIO Tenant CRD provides the following fields from which you can configure tenant TLS network encryption:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``tenant.certificate.requestAutoCert``
|
||||
- Enable or disable MinIO :ref:`automatic TLS certificate generation <minio-tls>`
|
||||
|
||||
Defaults to ``true`` or enabled if omitted.
|
||||
|
||||
* - ``tenant.certificate.certConfig``
|
||||
- Customize the behavior of :ref:`automatic TLS <minio-tls>`, if enabled.
|
||||
|
||||
* - ``tenant.certificate.externalCertSecret``
|
||||
- Enable TLS for multiple hostnames via Server Name Indication (SNI)
|
||||
|
||||
Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` or ``cert-manager``.
|
||||
|
||||
* - ``tenant.certificate.externalCACertSecret``
|
||||
- Enable validation of client TLS certificates signed by unknown, third-party, or internal Certificate Authorities (CA).
|
||||
|
||||
Specify one or more Kubernetes secrets of type ``kubernetes.io/tls`` containing the full chain of CA certificates for a given authority.
|
||||
|
||||
#. Configure MinIO Environment Variables
|
||||
|
||||
You can set MinIO Server environment variables using the ``tenant.configuration`` field.
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 70
|
||||
|
||||
* - Field
|
||||
- Description
|
||||
|
||||
* - ``tenant.configuration``
|
||||
- Specify a Kubernetes opaque secret whose data payload ``config.env`` contains each MinIO environment variable you want to set.
|
||||
|
||||
The ``config.env`` data payload **must** be a base64-encoded string.
|
||||
You can create a local file, set your environment variables, and then use ``cat LOCALFILE | base64`` to create the payload.
|
||||
|
||||
The YAML includes an object ``kind: Secret`` with ``metadata.name: storage-configuration`` that sets the root username, password, erasure parity settings, and enables Tenant Console.
|
||||
|
||||
Modify this as needed to reflect your Tenant requirements.
|
||||
|
||||
#. Review the Namespace
|
||||
|
||||
The YAML object ``kind: Namespace`` sets the default namespace for the Tenant to ``minio-tenant``.
|
||||
|
||||
You can change this value to create a different namespace for the Tenant.
|
||||
You must change **all** ``metadata.namespace`` values in the YAML file to match the Namespace.
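For example, a rough sketch that renames the default ``minio-tenant`` namespace to a hypothetical ``my-tenant`` namespace throughout the generated file (GNU ``sed`` shown); review the result before applying it:

.. code-block:: shell
   :class: copyable

   # Rewrite every occurrence of the default namespace, then inspect the changes
   sed -i 's/minio-tenant/my-tenant/g' tenant-base.yaml
   grep -n 'my-tenant' tenant-base.yaml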
|
||||
|
||||
#. Deploy the Tenant
|
||||
|
||||
Use the ``kubectl apply -f`` command to deploy the Tenant.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl apply -f tenant-base.yaml
|
||||
|
||||
The command creates each of the resources specified in the YAML object in the configured namespace.
|
||||
|
||||
You can monitor the progress using the following command:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
watch kubectl get all -n minio-tenant
|
||||
|
||||
#. Expose the Tenant MinIO S3 API port
|
||||
|
||||
To test the MinIO Client :mc:`mc` from your local machine, forward the MinIO port and create an alias.
|
||||
|
||||
* Forward the Tenant's MinIO port:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl port-forward svc/MINIO_TENANT_NAME-hl 9000 -n MINIO_TENANT_NAMESPACE
|
||||
|
||||
* Create an alias for the Tenant service:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc alias set myminio https://localhost:9000 minio minio123 --insecure
|
||||
|
||||
You can use :mc:`mc mb` to create a bucket on the Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc mb myminio/mybucket --insecure
|
||||
|
||||
If you deployed your MinIO Tenant using TLS certificates minted by a trusted Certificate Authority (CA) you can omit the ``--insecure`` flag.
|
||||
|
||||
See :ref:`create-tenant-connect-tenant` for specific instructions.
|
||||
|
||||
.. _create-tenant-connect-tenant:
|
||||
|
||||
Connect to the Tenant
|
||||
---------------------
|
||||
|
||||
The MinIO Operator creates services for the MinIO Tenant.
|
||||
|
||||
.. cond:: openshift
|
||||
|
||||
Use the ``oc get svc -n TENANT-NAMESPACE`` command to review the deployed services:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
oc get svc -n TENANT-NAMESPACE
|
||||
|
||||
.. cond:: k8s and not openshift
|
||||
|
||||
Use the ``kubectl get svc -n NAMESPACE`` command to review the deployed services:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl get svc -n TENANT-NAMESPACE
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
minio LoadBalancer 10.97.114.60 <pending> 443:30979/TCP 2d3h
|
||||
TENANT-NAMESPACE-console LoadBalancer 10.106.103.247 <pending> 9443:32095/TCP 2d3h
|
||||
TENANT-NAMESPACE-hl ClusterIP None <none> 9000/TCP 2d3h
|
||||
|
||||
- The ``minio`` service corresponds to the MinIO Tenant service.
|
||||
Applications should use this service for performing operations against the MinIO Tenant.
|
||||
|
||||
- The ``*-console`` service corresponds to the :minio-git:`MinIO Console <console>`.
|
||||
Administrators should use this service for accessing the MinIO Console and performing administrative operations on the MinIO Tenant.
|
||||
|
||||
The remaining services support Tenant operations and are not intended for consumption by users or administrators.
|
||||
|
||||
By default each service is visible only within the Kubernetes cluster.
|
||||
Applications deployed inside the cluster can access the services using the ``CLUSTER-IP``.
|
||||
|
||||
Applications external to the Kubernetes cluster can access the services using the ``EXTERNAL-IP``.
|
||||
This value is only populated for Kubernetes clusters configured for Ingress or a similar network access service.
|
||||
Kubernetes provides multiple options for configuring external access to services.
|
||||
|
||||
.. cond:: k8s and not openshift
|
||||
|
||||
See the Kubernetes documentation on :kube-docs:`Publishing Services (ServiceTypes) <concepts/services-networking/service/#publishing-services-service-types>` and :kube-docs:`Ingress <concepts/services-networking/ingress/>` for more complete information on configuring external access to services.
|
||||
|
||||
.. cond:: openshift
|
||||
|
||||
See the OpenShift documentation on :openshift-docs:`Route or Ingress <networking/understanding-networking.html#nw-ne-comparing-ingress-route_understanding-networking>` for more complete information on configuring external access to services.
|
||||
|
||||
.. cond:: openshift
|
||||
|
||||
.. include:: /includes/openshift/steps-deploy-minio-tenant.rst
|
||||
|
||||
.. toctree::
|
||||
:titlesonly:
|
||||
:hidden:
|
||||
|
||||
/operations/install-deploy-manage/deploy-minio-tenant-helm
|
@@ -1,190 +0,0 @@
|
||||
.. _minio-k8s-deploy-operator-helm:
|
||||
|
||||
=========================
|
||||
Deploy Operator With Helm
|
||||
=========================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 2
|
||||
|
||||
|
||||
Overview
|
||||
--------
|
||||
|
||||
Helm is a tool for automating the deployment of applications to Kubernetes clusters.
|
||||
A `Helm chart <https://helm.sh/docs/topics/charts/>`__ is a set of YAML files, templates, and other files that define the deployment details.
|
||||
The following procedure uses a Helm Chart to install the :ref:`MinIO Kubernetes Operator <minio-operator-installation>` to a Kubernetes cluster.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
See the :ref:`Operator Prerequisites <minio-operator-prerequisites>` for a baseline of requirements.
|
||||
Helm installations have the following additional requirements:
|
||||
|
||||
* `Helm <https://helm.sh/docs/intro/install/>`__ (Use the Version appropriate for your Kubernetes API version)
|
||||
* `yq <https://github.com/mikefarah/yq/#install>`__
|
||||
|
||||
For more about Operator installation requirements, including supported Kubernetes versions and TLS certificates, see the :ref:`Operator deployment prerequisites <minio-operator-prerequisites>`.
|
||||
|
||||
This procedure assumes familiarity with the referenced Kubernetes concepts and utilities.
|
||||
While this documentation may provide guidance for configuring or deploying Kubernetes-related resources on a best-effort basis, it is not a replacement for the official :kube-docs:`Kubernetes Documentation <>`.
|
||||
|
||||
.. _minio-k8s-deploy-operator-helm-repo:
|
||||
|
||||
Install the MinIO Operator using Helm Charts
|
||||
--------------------------------------------
|
||||
|
||||
The following procedure installs the Operator using the MinIO Operator Chart Repository.
|
||||
This method supports a simplified installation path compared to the :ref:`local chart installation <minio-k8s-deploy-operator-helm-local>`.
|
||||
You can modify the Operator deployment after installation.
|
||||
|
||||
.. important::
|
||||
|
||||
If you use Helm charts to install the Operator, you must use Helm to manage that installation.
|
||||
Do not use ``kubectl krew``, Kustomize, or similar methods to update or manage the MinIO Operator installation.
|
||||
|
||||
#. Add the MinIO Operator Repo to Helm
|
||||
|
||||
MinIO maintains a Helm-compatible repository at https://operator.min.io.
|
||||
Add this repository to Helm:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm repo add minio-operator https://operator.min.io
|
||||
|
||||
You can validate the repo contents using ``helm search``:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm search repo minio-operator
|
||||
|
||||
The response should resemble the following:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
NAME CHART VERSION APP VERSION DESCRIPTION
|
||||
minio-operator/minio-operator 4.3.7 v4.3.7 A Helm chart for MinIO Operator
|
||||
minio-operator/operator 6.0.1 v6.0.1 A Helm chart for MinIO Operator
|
||||
minio-operator/tenant 6.0.1 v6.0.1 A Helm chart for MinIO Operator
|
||||
|
||||
The ``minio-operator/minio-operator`` is a legacy chart and should **not** be installed under normal circumstances.
|
||||
|
||||
#. Install the Operator
|
||||
|
||||
Run the ``helm install`` command to install the Operator.
|
||||
The following command specifies and creates a dedicated namespace ``minio-operator`` for installation.
|
||||
MinIO strongly recommends using a dedicated namespace for the Operator.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm install \
|
||||
--namespace minio-operator \
|
||||
--create-namespace \
|
||||
operator minio-operator/operator
|
||||
|
||||
#. Verify the Operator installation
|
||||
|
||||
Check the contents of the specified namespace (``minio-operator``) to ensure all pods and services have started successfully.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl get all -n minio-operator
|
||||
|
||||
The response should resemble the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/minio-operator-699f797b8b-th5bk 1/1 Running 0 25h
|
||||
pod/minio-operator-699f797b8b-nkrn9 1/1 Running 0 25h
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/operator ClusterIP 10.43.44.204 <none> 4221/TCP 25h
|
||||
service/sts ClusterIP 10.43.70.4 <none> 4223/TCP 25h
|
||||
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/minio-operator 2/2 2 2 25h
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
replicaset.apps/minio-operator-79f7bfc48 2 2 2 123m
|
||||
|
||||
You can now :ref:`deploy a tenant using Helm Charts <deploy-tenant-helm>`.
|
||||
|
||||
.. _minio-k8s-deploy-operator-helm-local:
|
||||
|
||||
Install the MinIO Operator using Local Helm Charts
|
||||
--------------------------------------------------
|
||||
|
||||
The following procedure installs the Operator using a local copy of the Helm Charts.
|
||||
This method may support easier pre-configuration of the Operator compared to the :ref:`repo-based installation <minio-k8s-deploy-operator-helm-repo>`.
|
||||
|
||||
#. Download the Helm charts
|
||||
|
||||
On your local host, download the Operator Helm charts to a convenient directory:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
:substitutions:
|
||||
|
||||
curl -O https://raw.githubusercontent.com/minio/operator/master/helm-releases/operator-|operator-version-stable|.tgz
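The downloaded file is a packaged chart; the following sketch extracts it so the chart files are available under ``./operator`` (the directory name assumes the chart's own packaging):

.. code-block:: shell
   :class: copyable
   :substitutions:

   tar -zxvf operator-|operator-version-stable|.tgz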
|
||||
|
||||
|
||||
#. (Optional) Modify the ``values.yaml``
|
||||
|
||||
The chart contains a ``values.yaml`` file you can customize to suit your needs.
|
||||
For details on the options available in the MinIO Operator ``values.yaml``, see :ref:`minio-operator-chart-values`.
|
||||
|
||||
For example, you can change the number of replicas with ``operator.replicaCount`` to increase or decrease pod availability in the deployment.
|
||||
See :ref:`minio-operator-chart-values` for more complete documentation on the Operator Helm Chart and Values.
|
||||
|
||||
For more about customizations, see `Helm Charts <https://helm.sh/docs/topics/charts/>`__.
|
||||
|
||||
#. Install the Helm Chart
|
||||
|
||||
Use the ``helm install`` command to install the chart.
|
||||
The following command assumes the Operator chart is saved to ``./operator`` relative to the working directory.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
helm install \
|
||||
--namespace minio-operator \
|
||||
--create-namespace \
|
||||
minio-operator ./operator
|
||||
|
||||
#. To verify the installation, run the following command:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
kubectl get all --namespace minio-operator
|
||||
|
||||
If you initialized the Operator with a custom namespace, replace
|
||||
``minio-operator`` with that namespace.
|
||||
|
||||
The output resembles the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/minio-operator-7976b4df5b-rsskl 1/1 Running 0 81m
|
||||
pod/minio-operator-7976b4df5b-x622g 1/1 Running 0 81m
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/operator ClusterIP 10.110.113.146 <none> 4222/TCP,4233/TCP 81m
|
||||
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/minio-operator 2/2 2 2 81m
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
replicaset.apps/minio-operator-7976b4df5b 1 1 1 81m
|
||||
|
||||
You can now :ref:`deploy a tenant using Helm Charts <deploy-tenant-helm>`.
|
@@ -1,374 +0,0 @@
|
||||
.. _expand-minio-distributed:
|
||||
|
||||
=====================================
|
||||
Expand a Distributed MinIO Deployment
|
||||
=====================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
MinIO supports expanding an existing distributed deployment by adding a new :ref:`Server Pool <minio-intro-server-pool>`.
|
||||
Each Pool expands the total available storage capacity of the cluster.
|
||||
|
||||
Expansion does not provide Business Continuity/Disaster Recovery (BC/DR)-grade protections.
|
||||
While each pool is an independent set of servers with distinct :ref:`erasure sets <minio-ec-erasure-set>` for availability, the complete loss of one pool results in MinIO stopping I/O for all pools in the deployment.
|
||||
Similarly, an erasure set which loses quorum in one pool represents data loss of objects stored in that set, regardless of the number of other erasure sets or pools.
|
||||
|
||||
The new server pool does **not** need to use the same type or size of hardware and software configuration as any existing server pool, though doing so may allow for simplified cluster management and more predictable performance across pools.
|
||||
All drives in the new pool **should** be of the same type and size within the new pool.
|
||||
Review MinIO's :ref:`hardware recommendations <minio-hardware-checklist>` for more complete guidance on selecting an appropriate configuration.
|
||||
|
||||
To provide BC-DR grade failover and recovery support for your single or multi-pool MinIO deployments, use :ref:`site replication <minio-site-replication-overview>`.
|
||||
|
||||
The procedure on this page expands an existing :ref:`distributed <deploy-minio-distributed>` MinIO deployment with an additional server pool.
|
||||
|
||||
.. _expand-minio-distributed-prereqs:
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Networking and Firewalls
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Each node should have full bidirectional network access to every other node in
|
||||
the deployment. For containerized or orchestrated infrastructures, this may
|
||||
require specific configuration of networking and routing components such as
|
||||
ingress or load balancers. Certain operating systems may also require setting
|
||||
firewall rules. For example, the following command explicitly opens the default
|
||||
MinIO server API port ``9000`` on servers using ``firewalld``:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
firewall-cmd --permanent --zone=public --add-port=9000/tcp
|
||||
firewall-cmd --reload
|
||||
|
||||
All MinIO servers in the deployment *must* use the same listen port.
|
||||
|
||||
If you set a static :ref:`MinIO Console <minio-console>` port (e.g. ``:9001``)
|
||||
you must *also* grant access to that port to ensure connectivity from external
|
||||
clients.
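For example, assuming the Console listens on port ``9001``:

.. code-block:: shell
   :class: copyable

   firewall-cmd --permanent --zone=public --add-port=9001/tcp
   firewall-cmd --reload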
|
||||
|
||||
MinIO **strongly recommends** using a load balancer to manage connectivity to the
|
||||
cluster. The Load Balancer should use a "Least Connections" algorithm for
|
||||
routing requests to the MinIO deployment, since any MinIO node in the deployment
|
||||
can receive, route, or process client requests.
|
||||
|
||||
The following load balancers are known to work well with MinIO:
|
||||
|
||||
- `NGINX <https://www.nginx.com/products/nginx/load-balancing/>`__
|
||||
- `HAProxy <https://cbonte.github.io/haproxy-dconv/2.3/intro.html#3.3.5>`__
|
||||
|
||||
Configuring firewalls or load balancers to support MinIO is out of scope for
|
||||
this procedure.
|
||||
The :ref:`integrations-nginx-proxy` reference provides a baseline configuration for using NGINX as a reverse proxy with basic load balancing configured.
|
||||
|
||||
Sequential Hostnames
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO *requires* using expansion notation ``{x...y}`` to denote a sequential
|
||||
series of MinIO hosts when creating a server pool. MinIO therefore *requires*
|
||||
using sequentially-numbered hostnames to represent each
|
||||
:mc:`minio server` process in the pool.
|
||||
|
||||
Create the necessary DNS hostname mappings *prior* to starting this procedure.
|
||||
For example, the following hostnames would support a 4-node distributed
|
||||
server pool:
|
||||
|
||||
- ``minio5.example.com``
|
||||
- ``minio6.example.com``
|
||||
- ``minio7.example.com``
|
||||
- ``minio8.example.com``
|
||||
|
||||
You can specify the entire range of hostnames using the expansion notation
|
||||
``minio{5...8}.example.com``.
|
||||
|
||||
Configuring DNS to support MinIO is out of scope for this procedure.
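For local testing without DNS, a sketch of equivalent ``/etc/hosts`` entries on each node; the IP addresses are placeholders, and production deployments should use real DNS records:

.. code-block:: shell
   :class: copyable

   cat <<EOF >> /etc/hosts
   192.168.10.15 minio5.example.com
   192.168.10.16 minio6.example.com
   192.168.10.17 minio7.example.com
   192.168.10.18 minio8.example.com
   EOF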
|
||||
|
||||
Storage Requirements
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. |deployment| replace:: server pool
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-storage-requirements-desc
|
||||
:end-before: end-storage-requirements-desc
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
Minimum Drives for Erasure Code Parity
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO requires each pool satisfy the deployment :ref:`erasure code
|
||||
<minio-erasure-coding>` settings. Specifically the new pool topology must
|
||||
support a minimum of ``2 x EC:N`` drives per
|
||||
:ref:`erasure set <minio-ec-erasure-set>`, where ``EC:N`` is the
|
||||
:ref:`Standard <minio-ec-storage-class>` parity storage class of the
|
||||
deployment. This requirement ensures the new server pool can satisfy the
|
||||
expected :abbr:`SLA (Service Level Agreement)` of the deployment.
|
||||
|
||||
You can use the
|
||||
`MinIO Erasure Code Calculator
|
||||
<https://min.io/product/erasure-code-calculator?ref=docs>`__ to check the
|
||||
:guilabel:`Erasure Code Stripe Size (K+M)` of your new pool. If the highest
|
||||
listed value is at least ``2 x EC:N``, the pool supports the deployment's
|
||||
erasure parity settings.
|
||||
|
||||
Time Synchronization
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Multi-node systems require synchronized time and date to maintain stable internode operations and interactions.
|
||||
Make sure all nodes sync to the same time server regularly.
|
||||
Operating systems vary for methods used to synchronize time and date, such as with ``ntp``, ``timedatectl``, or ``timesyncd``.
|
||||
|
||||
Check the documentation for your operating system for how to set up and maintain accurate and identical system clock times across nodes.
|
||||
|
||||
Back Up Cluster Settings First
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations respectively prior to starting the expansion.
|
||||
You can use these snapshots to restore :ref:`bucket <minio-mc-admin-cluster-bucket-import>` and :ref:`IAM <minio-mc-admin-cluster-iam-import>` settings to recover from user or process errors as necessary.
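For example, assuming an existing alias ``myminio`` for the deployment:

.. code-block:: shell
   :class: copyable

   # Export bucket metadata and IAM configuration snapshots for safekeeping
   mc admin cluster bucket export myminio
   mc admin cluster iam export myminio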
|
||||
|
||||
Considerations
|
||||
--------------
|
||||
|
||||
.. _minio-writing-files:
|
||||
|
||||
Writing Files
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
MinIO does not automatically rebalance objects across the new server pools.
Instead, MinIO weights new write operations toward the pools with the most free space, where each pool's share of writes is its free space divided by the total free space across all available pools.
|
||||
|
||||
The formula to determine the probability of a write operation on a particular pool is
|
||||
|
||||
:math:`FreeSpaceOnPoolA / FreeSpaceOnAllPools`
|
||||
|
||||
Consider a situation where a group of three pools has a total of 10 TiB of free space distributed as:
|
||||
|
||||
- Pool A has 3 TiB of free space
|
||||
- Pool B has 2 TiB of free space
|
||||
- Pool C has 5 TiB of free space
|
||||
|
||||
MinIO calculates the probability of a write operation to each of the pools as:
|
||||
|
||||
- Pool A: 30% chance (:math:`3TiB / 10TiB`)
|
||||
- Pool B: 20% chance (:math:`2TiB / 10TiB`)
|
||||
- Pool C: 50% chance (:math:`5TiB / 10TiB`)
|
||||
|
||||
In addition to the free space calculation, if a write operation (with parity) would bring a drive's usage above 99% or a known free inode count below 1000, MinIO does not write to the pool.
|
||||
|
||||
If desired, you can manually initiate a rebalance procedure with :mc:`mc admin rebalance`.
|
||||
For more about how rebalancing works, see :ref:`managing objects across a deployment <minio-rebalance>`.
|
||||
|
||||
Likewise, MinIO does not write to pools in a decommissioning process.
|
||||
|
||||
Expansion is Non-Disruptive
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Adding a new server pool requires restarting *all* MinIO server processes in the deployment at around the same time.
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-nondisruptive-upgrade-desc
|
||||
:end-before: end-nondisruptive-upgrade-desc
|
||||
|
||||
Capacity-Based Planning
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO recommends planning storage capacity sufficient to store **at least** 2 years of data before reaching 70% usage.
|
||||
Performing :ref:`server pool expansion <expand-minio-distributed>` more frequently or on a "just-in-time" basis generally indicates an architecture or planning issue.
|
||||
|
||||
For example, consider an application suite expected to produce at least 100 TiB of data per year with a 3-year target before expansion.
The deployment has ~500 TiB of usable storage in the initial server pool, such that the cluster safely meets the 70% threshold with some buffer for data growth.
The new server pool should **ideally** provide at least 500 TiB of additional storage to allow for a similar lifespan before further expansion.
|
||||
|
||||
Since MinIO :ref:`erasure coding <minio-erasure-coding>` requires some storage for parity, the total **raw** storage must exceed the planned **usable** capacity.
|
||||
Consider using the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator>`__ for guidance in planning capacity around specific erasure code settings.
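As an illustrative calculation only (assuming 16-drive erasure sets with ``EC:4`` parity): usable capacity is roughly :math:`Raw \times \frac{16 - 4}{16} = Raw \times 0.75`, so providing 500TiB of usable storage requires approximately :math:`500 / 0.75 \approx 667` TiB of raw storage.
Actual requirements depend on your stripe size and parity settings.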
|
||||
|
||||
Recommended Operating Systems
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This tutorial assumes all hosts running MinIO use a :ref:`recommended Linux operating system <minio-installation-platform-support>`.
|
||||
|
||||
All hosts in the deployment should run with matching :ref:`software configurations <minio-software-checklists>`.
|
||||
|
||||
.. _expand-minio-distributed-baremetal:
|
||||
|
||||
Expand a Distributed MinIO Deployment
|
||||
-------------------------------------
|
||||
|
||||
The following procedure adds a :ref:`Server Pool <minio-intro-server-pool>`
|
||||
to an existing MinIO deployment. Each Pool expands the total available
|
||||
storage capacity of the cluster while maintaining the overall
|
||||
:ref:`availability <minio-erasure-coding>` of the cluster.
|
||||
|
||||
All commands provided below use example values. Replace these values with
|
||||
those appropriate for your deployment.
|
||||
|
||||
Review the :ref:`expand-minio-distributed-prereqs` before starting this
|
||||
procedure.
|
||||
|
||||
Complete any planned hardware expansion prior to :ref:`decommissioning older hardware pools <minio-decommissioning>`.
|
||||
|
||||
1) Install the MinIO Binary on Each Node in the New Server Pool
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-binary-desc
|
||||
:end-before: end-install-minio-binary-desc
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
.. include:: /includes/macos/common-installation.rst
|
||||
:start-after: start-install-minio-binary-desc
|
||||
:end-before: end-install-minio-binary-desc
|
||||
|
||||
2) Add TLS/SSL Certificates
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-install-minio-tls-desc
|
||||
:end-before: end-install-minio-tls-desc
|
||||
|
||||
3) Create the ``systemd`` Service File
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-systemd-desc
|
||||
:end-before: end-install-minio-systemd-desc
|
||||
|
||||
4) Create the Service Environment File
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create an environment file at ``/etc/default/minio``. The MinIO
|
||||
service uses this file as the source of all
|
||||
:ref:`environment variables <minio-server-environment-variables>` used by
|
||||
MinIO *and* the ``minio.service`` file.
|
||||
|
||||
The following example assumes that:
|
||||
|
||||
- The deployment has a single server pool consisting of four MinIO server hosts
|
||||
with sequential hostnames.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
minio1.example.com minio3.example.com
|
||||
minio2.example.com minio4.example.com
|
||||
|
||||
Each host has 4 locally attached drives with
|
||||
sequential mount points:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
/mnt/disk1/minio /mnt/disk3/minio
|
||||
/mnt/disk2/minio /mnt/disk4/minio
|
||||
|
||||
- The new server pool consists of eight new MinIO hosts with sequential
|
||||
hostnames:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
minio5.example.com minio9.example.com
|
||||
minio6.example.com minio10.example.com
|
||||
minio7.example.com minio11.example.com
|
||||
minio8.example.com minio12.example.com
|
||||
|
||||
- Each new host has eight locally attached drives with sequential mount points:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
/mnt/disk1/minio /mnt/disk5/minio
|
||||
/mnt/disk2/minio /mnt/disk6/minio
|
||||
/mnt/disk3/minio /mnt/disk7/minio
|
||||
/mnt/disk4/minio /mnt/disk8/minio
|
||||
|
||||
- The deployment has a load balancer running at ``https://minio.example.net``
|
||||
that manages connections across all MinIO hosts. The load balancer should
|
||||
not be routing requests to the new hosts at this step, but should have
|
||||
the necessary configuration updates planned.
|
||||
|
||||
Modify the example to reflect your deployment topology:
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
# Set the hosts and volumes MinIO uses at startup
|
||||
# The command uses MinIO expansion notation {x...y} to denote a
|
||||
# sequential series.
|
||||
#
|
||||
# The following example starts the MinIO server with two server pools.
|
||||
#
|
||||
# The space delimiter indicates a separate server pool
|
||||
#
|
||||
# The second set of hostnames and volumes is the newly added pool.
|
||||
# The pool has sufficient stripe size to meet the existing erasure code
|
||||
# parity of the deployment (2 x EC:4)
|
||||
#
|
||||
# The command includes the port on which the MinIO servers listen for each
|
||||
# server pool.
|
||||
|
||||
MINIO_VOLUMES="https://minio{1...4}.example.com:9000/mnt/disk{1...4}/minio https://minio{5...12}.example.com:9000/mnt/disk{1...8}/minio"
|
||||
|
||||
# Set all MinIO server options
|
||||
#
|
||||
# The following explicitly sets the MinIO Console listen address to
|
||||
# port 9001 on all network interfaces. The default behavior is dynamic
|
||||
# port selection.
|
||||
|
||||
MINIO_OPTS="--console-address :9001"
|
||||
|
||||
# Set the root username. This user has unrestricted permissions to
|
||||
# perform S3 and administrative API operations on any resource in the
|
||||
# deployment.
|
||||
#
|
||||
# Defer to your organization's requirements for the superadmin user name.
|
||||
|
||||
MINIO_ROOT_USER=minioadmin
|
||||
|
||||
# Set the root password
|
||||
#
|
||||
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
|
||||
|
||||
MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
|
||||
|
||||
You may specify other :ref:`environment variables <minio-server-environment-variables>` or server command-line options as required by your deployment.
All MinIO nodes in the deployment should include the same environment variables with matching values.
|
||||
|
||||
5) Restart the MinIO Deployment with Expanded Configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Issue the following commands on each node **simultaneously** in the deployment
|
||||
to restart the MinIO service:
|
||||
|
||||
.. include:: /includes/linux/common-installation.rst
|
||||
:start-after: start-install-minio-restart-service-desc
|
||||
:end-before: end-install-minio-restart-service-desc
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-nondisruptive-upgrade-desc
|
||||
:end-before: end-nondisruptive-upgrade-desc
|
||||
|
||||
6) Next Steps
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
- Update any load balancers, reverse proxies, or other network control planes
|
||||
to route client requests to the new hosts in the MinIO distributed deployment.
|
||||
While MinIO automatically manages routing internally, having the control
|
||||
planes handle initial connection management may reduce network hops and
|
||||
improve efficiency.
|
||||
|
||||
- Review the :ref:`MinIO Console <minio-console>` to confirm the updated
|
||||
cluster topology and monitor performance.
|
|
||||
.. _minio-k8s-expand-minio-tenant:
|
||||
|
||||
=====================
|
||||
Expand a MinIO Tenant
|
||||
=====================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
|
||||
This procedure documents expanding the available storage capacity of an existing MinIO tenant by deploying an additional pool of MinIO pods in the Kubernetes infrastructure.
|
||||
|
||||
.. important::
|
||||
|
||||
The MinIO Operator Console is deprecated and removed in Operator 6.0.0.
|
||||
|
||||
See :ref:`minio-k8s-modify-minio-tenant` for instructions on migrating Tenants installed via the Operator Console to Kustomization.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
MinIO Kubernetes Operator
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The procedure on this page *requires* a valid installation of the MinIO Kubernetes Operator and assumes the local host has a matching installation of the MinIO Kubernetes Operator.
|
||||
This procedure assumes the latest stable Operator, version |operator-version-stable|.
|
||||
|
||||
See :ref:`deploy-operator-kubernetes` for complete documentation on deploying the MinIO Operator.
|
||||
|
||||
|
||||
Available Worker Nodes
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO deploys additional :mc:`minio server <minio.server>` pods as part of the new Tenant pool.
|
||||
The Kubernetes cluster *must* have sufficient available worker nodes on which to schedule the new pods.
|
||||
|
||||
The MinIO Operator provides configurations for controlling pod affinity and anti-affinity to direct scheduling to specific workers.
|
||||
|
||||
Persistent Volumes
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-admonitions.rst
|
||||
:start-after: start-exclusive-drive-access
|
||||
:end-before: end-exclusive-drive-access
|
||||
|
||||
.. cond:: not eks
|
||||
|
||||
MinIO can use any Kubernetes :kube-docs:`Persistent Volume (PV) <concepts/storage/persistent-volumes>` that supports the :kube-docs:`ReadWriteOnce <concepts/storage/persistent-volumes/#access-modes>` access mode.
|
||||
MinIO's consistency guarantees require the exclusive storage access that ``ReadWriteOnce`` provides.
|
||||
|
||||
For Kubernetes clusters where nodes have Direct Attached Storage, MinIO strongly recommends using the `DirectPV CSI driver <https://min.io/directpv?ref=docs>`__.
|
||||
DirectPV provides a distributed persistent volume manager that can discover, format, mount, schedule, and monitor drives across Kubernetes nodes.
|
||||
DirectPV addresses the limitations of manually provisioning and monitoring :kube-docs:`local persistent volumes <concepts/storage/volumes/#local>`.
|
||||
|
||||
.. cond:: eks
|
||||
|
||||
MinIO Tenants on EKS must use the :github:`EBS CSI Driver <kubernetes-sigs/aws-ebs-csi-driver>` to provision the necessary underlying persistent volumes.
|
||||
MinIO strongly recommends using SSD-backed EBS volumes for best performance.
|
||||
For more information on EBS resources, see `EBS Volume Types <https://aws.amazon.com/ebs/volume-types/>`__.
|
||||
|
||||
Procedure
|
||||
---------
|
||||
|
||||
The MinIO Operator supports expanding a MinIO Tenant by adding additional pools.
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: Kustomization
|
||||
|
||||
#. Review the Kustomization object which describes the Tenant object (``tenant.yaml``).
|
||||
|
||||
The ``spec.pools`` array describes the current pool topology.
|
||||
|
||||
#. Add a new entry to the ``spec.pools`` array.
|
||||
|
||||
The new pool must reflect your intended combination of Worker nodes, volumes per server, storage class, and affinity/scheduler settings.
|
||||
See :ref:`minio-operator-crd` for more complete documentation on Pool-related configuration settings.
|
||||
|
||||
#. Apply the updated Tenant configuration
|
||||
|
||||
Use the ``kubectl apply`` command to update the Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
kubectl apply -k ~/kustomization/TENANT-NAME
|
||||
|
||||
Modify the path to the Kustomization directory to match your local configuration.
|
||||
|
||||
.. tab-item:: Helm
|
||||
|
||||
#. Review the Helm ``values.yaml`` file.
|
||||
|
||||
The ``tenant.pools`` array describes the current pool topology.
|
||||
|
||||
#. Add a new entry to the ``tenant.pools`` array.
|
||||
|
||||
The new pool must reflect your intended combination of Worker nodes, volumes per server, storage class, and affinity/scheduler settings.
|
||||
See :ref:`minio-tenant-chart-values` for more complete documentation on Pool-related configuration settings.
|
||||
|
||||
#. Apply the updated Tenant configuration
|
||||
|
||||
Use the ``helm upgrade`` command to update the Tenant:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
helm upgrade TENANT-NAME minio-operator/tenant -f values.yaml -n TENANT-NAMESPACE
|
||||
|
||||
The command above assumes use of the MinIO Operator Chart repository.
|
||||
If you installed the Chart manually or by using a different repository name, specify that chart or name in the command.
|
||||
|
||||
Replace ``TENANT-NAME`` and ``TENANT-NAMESPACE`` with the name and namespace of the Tenant respectively.
|
||||
You can use ``helm list -n TENANT-NAMESPACE`` to validate the Tenant name.
|
||||
|
||||
You can use ``kubectl get events -n TENANT-NAMESPACE --watch`` to monitor the progress of the expansion.
|
||||
The MinIO Operator updates services to route connections appropriately across the new nodes.
|
||||
If you use customized services, routes, ingress, or similar Kubernetes network components, you may need to update those components for the new pod hostname ranges.
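For example, a minimal sketch of commands for watching the rollout (replace ``TENANT-NAMESPACE`` with the Tenant's namespace):

.. code-block:: shell
   :class: copyable

   # Watch Operator events as it provisions the new pool
   kubectl get events -n TENANT-NAMESPACE --watch

   # Confirm the new pool's pods reach the Running state
   kubectl get pods -n TENANT-NAMESPACE -w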
|
||||
|
||||
.. Following link is intended for K8s only
|
||||
.. _minio-decommissioning:
|
||||
|
||||
Decommission a Tenant Server Pool
|
||||
----------------------------------
|
||||
|
||||
Decommissioning a server pool involves three steps:
|
||||
|
||||
1) Run the :mc-cmd:`mc admin decommission start` command against the Tenant
|
||||
|
||||
2) Wait until decommissioning completes
|
||||
|
||||
3) Modify the Tenant YAML to remove the decommissioned pool
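For example, a minimal sketch of the first two steps, assuming a hypothetical Tenant alias ``myminio`` and a hypothetical pool specification (substitute the pool's actual hosts and drive paths from your Tenant YAML):

.. code-block:: shell
   :class: copyable

   # Start draining the pool; the argument is a hypothetical pool specification
   # and must match the pool specification in the Tenant
   mc admin decommission start myminio/ https://my-tenant-pool-0-{0...3}.my-tenant-hl.my-namespace.svc.cluster.local/export{0...3}

   # Monitor progress until the pool reports as complete
   mc admin decommission status myminio/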
|
||||
|
||||
When removing the Tenant pool, ensure the ``spec.pools.[n].name`` fields have values for all remaining pools.
|
||||
|
||||
.. include:: /includes/common-installation.rst
|
||||
:start-after: start-pool-order-must-not-change
|
||||
:end-before: end-pool-order-must-not-change
|
||||
|
||||
.. important::
|
||||
|
||||
You cannot reuse the same pool name or hostname sequence for a decommissioned pool.
|
|
||||
.. _minio-gateway-migration:
|
||||
|
||||
=======================================
|
||||
Migrate from Gateway or Filesystem Mode
|
||||
=======================================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
Background
|
||||
----------
|
||||
|
||||
The MinIO Gateway and the related filesystem mode entered a feature freeze in July 2020.
|
||||
In February 2022, MinIO announced the `deprecation of the MinIO Gateway <https://blog.min.io/deprecation-of-the-minio-gateway/?ref=docs>`__.
|
||||
Along with the deprecation announcement, MinIO also announced that the feature would be removed in six months' time.
|
||||
|
||||
As of :minio-release:`RELEASE.2022-10-29T06-21-33Z`, the MinIO Gateway and the related filesystem mode code have been removed.
|
||||
Deployments still using the ``standalone`` or ``filesystem`` MinIO modes that upgrade to MinIO Server :minio-release:`RELEASE.2022-10-29T06-21-33Z` or later receive an error when attempting to start MinIO.
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. note::
|
||||
|
||||
For deployments running in a container, see the `Container - Migrate from Gateway or Filesystem Mode <https://min.io/docs/minio/container/operations/install-deploy-manage/migrate-fs-gateway.html>`__ tutorial instead.
|
||||
|
||||
Overview
|
||||
--------
|
||||
|
||||
To upgrade to the :minio-release:`RELEASE.2022-10-29T06-21-33Z` or later release, those who were using the ``standalone`` or ``filesystem`` deployment modes must create a new :ref:`Single-Node Single-Drive <minio-snsd>` deployment and migrate settings and content to the new deployment.
|
||||
|
||||
This document outlines the steps required to successfully launch and migrate to a new deployment.
|
||||
|
||||
.. important::
|
||||
|
||||
Standalone/file system mode continues to work on any release up to and including MinIO Server `RELEASE.2022-10-24T18-35-07Z <https://github.com/minio/minio/releases/tag/RELEASE.2022-10-24T18-35-07Z>`__.
|
||||
To continue using a standalone deployment, install that MinIO Server release with MinIO Client `RELEASE.2022-10-29T10-09-23Z <https://github.com/minio/mc/releases/tag/RELEASE.2022-10-29T10-09-23Z>`__ or any `earlier release <https://github.com/minio/minio/releases>`__ with its corresponding MinIO Client. Note that the MinIO Client version should be newer than, and as close as possible to, the MinIO Server version.
|
||||
|
||||
Filesystem mode deployments must be on at least `RELEASE.2022-06-25T15-50-16Z <https://github.com/minio/minio/releases/tag/RELEASE.2022-06-25T15-50-16Z>`__ to use the MinIO Client import and export commands.
|
||||
Filesystem mode deployments up to and including `RELEASE.2022-06-20T23-13-45Z <https://github.com/minio/minio/releases/tag/RELEASE.2022-06-20T23-13-45Z>`__ can be migrated by manually recreating users, policies, buckets, and other resources on the new deployment.
|
||||
|
||||
|
||||
Procedure
|
||||
---------
|
||||
|
||||
.. note::
|
||||
|
||||
You can set MinIO configuration settings in environment variables and using :mc-cmd:`mc admin config set <mc admin config set>`.
|
||||
Depending on your current deployment setup, you may need to retrieve the values for both.
|
||||
|
||||
You can examine any runtime settings using ``env | grep MINIO_`` or, for deployments using MinIO's systemd service, by checking the contents of ``/etc/default/minio``.
|
||||
|
||||
#. For filesystem mode deployments:
|
||||
|
||||
If needed, upgrade the existing deployment.
|
||||
|
||||
The oldest acceptable versions are:
|
||||
|
||||
- MinIO `RELEASE.2022-06-25T15-50-16Z <https://github.com/minio/minio/releases/tag/RELEASE.2022-06-25T15-50-16Z>`__
|
||||
- MinIO Client `RELEASE.2022-06-26T18-51-48Z <https://github.com/minio/mc/releases/tag/RELEASE.2022-06-26T18-51-48Z>`__
|
||||
|
||||
The newest acceptable versions are:
|
||||
|
||||
- MinIO `RELEASE.2022-10-24T18-35-07Z <https://github.com/minio/minio/releases/tag/RELEASE.2022-10-24T18-35-07Z>`__
|
||||
- MinIO Client `RELEASE.2022-10-29T10-09-23Z <https://github.com/minio/mc/releases/tag/RELEASE.2022-10-29T10-09-23Z>`__
|
||||
|
||||
#. Create a new Single-Node Single-Drive MinIO deployment.
|
||||
|
||||
Refer to the :ref:`documentation for step-by-step instructions <deploy-minio-standalone>` for launching a new |SNSD| deployment.
|
||||
|
||||
The location of the deployment can be any empty folder on the storage medium of your choice.
|
||||
A new folder on the same drive can work for the new deployment as long as the existing deployment is not on the root of a drive.
|
||||
If the existing standalone system points to the root of the drive, you must use a separate drive for the new deployment.
|
||||
|
||||
If both old and new deployments are on the same host:
|
||||
|
||||
- Install the new deployment to a different path from the existing deployment.
|
||||
- Set the new deployment's Console and API ports to different ports than the existing deployment.
|
||||
|
||||
The following commandline options set the ports at startup:
|
||||
|
||||
- :mc-cmd:`~minio server --address` to set the API port.
|
||||
- :mc-cmd:`~minio server --console-address` to set the Console port.
|
||||
|
||||
- For deployments managed by ``systemd``:
|
||||
|
||||
- Duplicate the existing ``/etc/default/minio`` environment file with a unique name.
|
||||
- In the new deployment's service file, update ``EnvironmentFile`` to reference the new environment file.
|
||||
|
||||
The steps below use the :mc:`mc` command line tool from both deployments.
|
||||
*Existing MinIO Client* is :mc:`mc` from the old deployment.
|
||||
*New MinIO Client* is :mc:`mc` from the new deployment.
|
||||
|
||||
#. Add an alias for the deployment created in the previous step using :mc:`mc alias set` and the new MinIO Client.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc alias set NEWALIAS PATH ACCESSKEY SECRETKEY
|
||||
|
||||
- Use the new MinIO Client.
|
||||
- Replace ``NEWALIAS`` with the alias to create for the deployment.
|
||||
- Replace ``PATH`` with the IP address or hostname and port for the new deployment.
|
||||
- Replace ``ACCESSKEY`` and ``SECRETKEY`` with the credentials you used when creating the new deployment.
|
||||
|
||||
#. Migrate settings according to the type of deployment:
|
||||
|
||||
- The MinIO Gateway is a stateless proxy service that provides S3 API compatibility for an array of backend storage systems.
|
||||
|
||||
- Filesystem mode deployments provide an S3 access layer for a single MinIO server process and single storage volume.
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: Gateway
|
||||
|
||||
Migrate configuration settings:
|
||||
|
||||
If your deployment uses :ref:`environment variables <minio-server-environment-variables>` for configuration settings, copy the environment variables from the existing deployment's ``/etc/default/minio`` file to the same file in the new deployment.
|
||||
You may omit any ``MINIO_CACHE_*`` and ``MINIO_GATEWAY_SSE`` environment variables, as these are no longer used.
|
||||
|
||||
If you use :mc-cmd:`mc admin config set <mc admin config set>` for configuration settings, duplicate the existing settings for the new deployment using the new MinIO Client.
|
||||
|
||||
.. tab-item:: Filesystem mode
|
||||
|
||||
.. note::
|
||||
|
||||
The following Filesystem mode steps presume the existing MinIO Client supports the needed export commands.
|
||||
If it does not, recreate users, policies, lifecycle rules, and buckets manually on the new deployment using the new MinIO Client.
|
||||
|
||||
a. Export the existing deployment's **configurations**.
|
||||
|
||||
Use the :mc-cmd:`mc admin config export <mc admin config export>` command with the existing MinIO Client to retrieve the configurations defined for the existing standalone MinIO deployment.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin config export ALIAS > config.txt
|
||||
|
||||
- Use the existing MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias used for the existing standalone deployment you are retrieving values from.
|
||||
|
||||
b. Import **configurations** from the existing standalone deployment to the new deployment with the new MinIO Client.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin config import ALIAS < config.txt
|
||||
|
||||
- Use the new MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for the new deployment.
|
||||
|
||||
If :mc-cmd:`~mc admin config import` reports an error for a configuration key, comment it out with ``#`` at the beginning of the relevant line and try again.
|
||||
When you are finished migrating the deployment, verify the current syntax for the target MinIO Server version and set any needed keys manually using :mc-cmd:`mc admin config set`.
|
||||
|
||||
c. Restart the server for the new deployment with the new MinIO Client.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin service restart ALIAS
|
||||
|
||||
- Use the new MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for the new deployment.
|
||||
|
||||
d. Export **bucket metadata** from the existing standalone deployment with the existing MinIO Client.
|
||||
|
||||
The following command exports bucket metadata from the existing deployment to a ``.zip`` file.
|
||||
|
||||
The data includes:
|
||||
|
||||
- bucket targets
|
||||
- lifecycle rules
|
||||
- notifications
|
||||
- quotas
|
||||
- locks
|
||||
- versioning
|
||||
|
||||
The export includes the bucket metadata only.
|
||||
This command does not export objects from the existing deployment.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin cluster bucket export ALIAS
|
||||
|
||||
- Use the existing MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for your existing deployment.
|
||||
|
||||
This command creates a ``cluster-metadata.zip`` file with metadata for each bucket.
|
||||
|
||||
e. Import **bucket metadata** to the new deployment with the new MinIO Client.
|
||||
|
||||
The following command reads the contents of the exported bucket ``.zip`` file and creates buckets on the new deployment with the same configurations.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin cluster bucket import ALIAS cluster-metadata.zip
|
||||
|
||||
- Use the new MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for the new deployment.
|
||||
|
||||
The command creates buckets on the new deployment with the same configurations as provided by the metadata in the .zip file from the existing deployment.
|
||||
|
||||
f. Export **IAM settings** from the existing standalone deployment to the new deployment with the existing MinIO Client.
|
||||
|
||||
If you are using an external identity and access management provider, recreate those settings in the new deployment along with all associated policies.
|
||||
|
||||
Use the following command to export IAM settings from the existing deployment.
|
||||
This command exports:
|
||||
|
||||
- Groups and group mappings
|
||||
- STS users and STS user mappings
|
||||
- Policies
|
||||
- Users and user mappings
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin cluster iam export ALIAS
|
||||
|
||||
- Use the existing MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for your existing deployment.
|
||||
|
||||
This command creates a ``ALIAS-iam-info.zip`` file with IAM data.
|
||||
|
||||
g. Import the **IAM settings** to the new deployment with the new MinIO Client.
|
||||
|
||||
Use the exported file to create the IAM settings on the new deployment.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc admin cluster iam import ALIAS alias-iam-info.zip
|
||||
|
||||
- Use the new MinIO Client.
|
||||
- Replace ``ALIAS`` with the alias for the new deployment.
|
||||
- Replace the name of the zip file with the name for the existing deployment's file.
|
||||
|
||||
#. Migrate bucket contents with :mc:`mc mirror`.
|
||||
|
||||
Use :mc:`mc mirror` with the :mc-cmd:`~mc mirror --preserve` and :mc-cmd:`~mc mirror --watch` flags on the standalone deployment to move objects to the new |SNSD| deployment with the existing MinIO Client.
|
||||
|
||||
.. code-block:: shell
|
||||
:class: copyable
|
||||
|
||||
mc mirror --preserve --watch SOURCE/BUCKET TARGET/BUCKET
|
||||
|
||||
- Use the existing MinIO Client.
|
||||
- Replace ``SOURCE/BUCKET`` with the alias and a bucket for the existing standalone deployment.
|
||||
- Replace ``TARGET/BUCKET`` with the alias and corresponding bucket for the new deployment.
|
||||
|
||||
#. Stop writes to the standalone deployment from any S3 or POSIX client.
|
||||
|
||||
#. Wait for ``mc mirror`` to finish mirroring any remaining operations for all buckets.
|
||||
|
||||
#. Stop the server for both deployments.
|
||||
|
||||
#. Restart the new MinIO deployment with the ports used for the previous standalone deployment.
|
||||
For more about starting the MinIO service, refer to step four in the deploy |SNSD| :ref:`documentation <deploy-minio-standalone>`.
|
||||
|
||||
Ensure you apply all environment variables and runtime configuration settings and validate the behavior of the new deployment.
|
|
||||
:orphan:
|
||||
|
||||
.. _minio-operator-console:
|
||||
|
||||
======================
|
||||
MinIO Operator Console
|
||||
======================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 2
|
||||
|
||||
.. warning::
|
||||
|
||||
MinIO Operator 6.0.0 deprecates and removes the Operator Console.
|
||||
|
||||
You can use either Kustomization or Helm to manage and deploy MinIO Tenants.
|
||||
|
||||
This page provides a historical view of the Operator Console and will receive no further updates or corrections.
|
||||
|
||||
The Operator Console provides a rich user interface for deploying and
|
||||
managing MinIO Tenants on Kubernetes infrastructure. Installing the
|
||||
MinIO :ref:`Kubernetes Operator <deploy-operator-kubernetes>` automatically
|
||||
installs and configures the Operator Console.
|
||||
|
||||
.. screenshot temporarily removed
|
||||
.. image:: /images/k8s/operator-dashboard.png
|
||||
:align: center
|
||||
:width: 70%
|
||||
:class: no-scaled-link
|
||||
:alt: MinIO Operator Console
|
||||
|
||||
This page summarizes the functions available with the MinIO Operator Console.
|
||||
|
||||
.. _minio-operator-console-connect:
|
||||
|
||||
Connect to the Operator Console
|
||||
-------------------------------
|
||||
|
||||
.. include:: /includes/common/common-k8s-connect-operator-console.rst
|
||||
|
||||
Tenant Management
|
||||
-----------------
|
||||
|
||||
The MinIO Operator Console supports deploying, managing, and monitoring MinIO Tenants on the Kubernetes cluster.
|
||||
|
||||
.. screenshot temporarily removed
|
||||
.. image:: /images/k8s/operator-dashboard.png
|
||||
:align: center
|
||||
:width: 70%
|
||||
:class: no-scaled-link
|
||||
:alt: MinIO Operator Console
|
||||
|
||||
You can :ref:`deploy a MinIO Tenant <minio-k8s-deploy-minio-tenant>` through the Operator Console.
|
||||
|
||||
The Operator Console automatically detects MinIO Tenants deployed on the cluster when provisioned through:
|
||||
|
||||
- Operator Console
|
||||
- Helm
|
||||
- Kustomize
|
||||
|
||||
Select a listed tenant to open an in-browser view of that tenant's MinIO Console.
|
||||
You can use this view to directly manage, modify, expand, upgrade, and delete the tenant through the Operator UI.
|
||||
|
||||
.. versionadded:: Operator 5.0.0
|
||||
|
||||
You can download a Log Report for a tenant from the Pods summary screen.
|
||||
|
||||
The report downloads as ``<tenant-name>-report.zip``.
|
||||
The ZIP archive contains status, events, and log information for each pool on the deployment.
|
||||
The archive also includes a summary yaml file describing the deployment.
|
||||
|
||||
|
||||
|
||||
Tenant Registration
|
||||
-------------------
|
||||
|
||||
|subnet| users relying on the commercial license should register their MinIO tenants with their SUBNET account, which they can do through the Operator Console.
|
||||
|
||||
.. screenshot temporarily removed
|
||||
.. image:: /images/k8s/operator-console-register.png
|
||||
:align: center
|
||||
:width: 70%
|
||||
:class: no-scaled-link
|
||||
:alt: MinIO Operator Console Register Screen
|
||||
|
||||
#. Select the :guilabel:`Register` tab
|
||||
#. Enter the :guilabel:`API Key`
|
||||
|
||||
You can obtain the key from |SUBNET| through the Console by selecting :guilabel:`Get from SUBNET`.
|
||||
|
||||
TLS Certificate Renewal
|
||||
-----------------------
|
||||
|
||||
Operator 4.5.4 or later
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Operator versions 4.5.4 and later automatically renew a tenant's certificates once a certificate reaches 80% of its validity period.
|
||||
|
||||
For example, a tenant certificate was issued on January 1, 2023, and set to expire on December 31, 2023.
|
||||
80% of the 1 year life of the certificate comes on day 292, or October 19, 2023.
|
||||
On that date, Operator automatically renews the tenant's certificate.
|
||||
|
||||
Operator 4.3.3 to 4.5.3
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Operator versions 4.3.3 through 4.5.3 automatically renew tenant certificates after they reach 48 hours before expiration.
|
||||
|
||||
For a certificate that expires on December 31, 2023, Operator renews the certificate on December 29 or December 30, within 48 hours of the expiration.
|
||||
|
||||
Operator 4.3.2 or earlier
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Operator versions 4.3.2 and earlier do not automatically renew certificates.
|
||||
You must renew the tenant certificates on these releases separately.
|
||||
|
||||
Review Your MinIO License
|
||||
-------------------------
|
||||
|
||||
To review which license you are using and the features available through different license options, select the :guilabel:`License` tab.
|
||||
|
||||
MinIO supports two licenses: `AGPLv3 Open Source <https://opensource.org/licenses/AGPL-3.0>`__ or a `MinIO Commercial License <https://min.io/pricing?ref=docs>`__.
|
||||
Subscribers to |SUBNET| use MinIO under a commercial license.
|
||||
|
||||
You can also :guilabel:`Subscribe` from the License screen.
|
|
||||
.. _minio-k8s-modify-minio-tenant:
|
||||
.. _minio-k8s-modify-minio-tenant-security:
|
||||
|
||||
=====================
|
||||
Modify a MinIO Tenant
|
||||
=====================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
You can modify tenants after deployment to change mutable configuration settings.
|
||||
See :ref:`minio-operator-crd` for a complete description of available settings in the MinIO Custom Resource Definition.
|
||||
|
||||
The method for modifying the Tenant depends on how you deployed the tenant:
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: Kustomize
|
||||
:sync: kustomize
|
||||
|
||||
For Kustomize-deployed Tenants, you can modify the base Kustomization resources and apply them using ``kubectl apply -k`` against the directory containing the ``kustomization.yaml`` object.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
kubectl apply -k ~/kustomization/TENANT-NAME/
|
||||
|
||||
Modify the path to the Kustomization directory to match your local configuration.
|
||||
|
||||
.. tab-item:: Helm
|
||||
:sync: helm
|
||||
|
||||
For Helm-deployed Tenants, you can modify the base ``values.yaml`` and upgrade the Tenant using the chart:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
helm upgrade TENANT-NAME minio-operator/tenant -f values.yaml -n TENANT-NAMESPACE
|
||||
|
||||
The command above assumes use of the MinIO Operator Chart repository.
|
||||
If you installed the Chart manually or by using a different repository name, specify that chart or name in the command.
|
||||
|
||||
Replace ``TENANT-NAME`` and ``TENANT-NAMESPACE`` with the name and namespace of the Tenant, respectively.
|
||||
You can use ``helm list -n TENANT-NAMESPACE`` to validate the Tenant name.
|
||||
|
||||
See :ref:`minio-tenant-chart-values` for more complete documentation on the available Chart fields.
|
|
||||
.. _minio-site-replication-overview:
|
||||
|
||||
=========================
|
||||
Site Replication Overview
|
||||
=========================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
Site replication configures multiple independent MinIO deployments as a cluster of replicas called peer sites.
|
||||
|
||||
.. figure:: /images/architecture/architecture-load-balancer-multi-site.svg
|
||||
:figwidth: 100%
|
||||
:alt: Diagram of a site replication deployment with two sites
|
||||
|
||||
A site replication deployment with two peer sites.
|
||||
A load balancer manages routing operations to either of the two sites.
|
||||
Data written to one site automatically replicates to the other peer site.
|
||||
|
||||
Site replication assumes the use of either the included MinIO identity provider (IDP) *or* an external IDP.
|
||||
All configured deployments must use the same IDP.
|
||||
Deployments using an external IDP must use the same configuration across sites.
|
||||
|
||||
For more information on site replication architecture and deployment concepts, see :ref:`Deployment Architecture: Replicated MinIO Deployments <minio-deployment-architecture-replicated>`.
|
||||
|
||||
.. cond:: macos or windows or container
|
||||
|
||||
MinIO does not recommend using |platform| hosts for site replication outside of early development, evaluation, or general experimentation.
|
||||
For production, use :minio-docs:`Linux <minio/linux/operations/install-deploy-manage/multi-site-replication.html>` or :minio-docs:`Kubernetes <minio/kubernetes/upstream/operations/install-deploy-manage/multi-site-replication.html>`
|
||||
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. _minio-site-replication-what-replicates:
|
||||
|
||||
What Replicates Across All Sites
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-what-replicates
|
||||
:end-before: end-mc-admin-replicate-what-replicates
|
||||
|
||||
What Does Not Replicate Across Sites
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-what-does-not-replicate
|
||||
:end-before: end-mc-admin-replicate-what-does-not-replicate
|
||||
|
||||
|
||||
Initial Site Replication Process
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
After enabling site replication, identity and access management (IAM) settings sync in the following order:
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: MinIO IDP
|
||||
|
||||
#. Policies
|
||||
#. User accounts (for local users)
|
||||
#. Groups
|
||||
#. Access Keys
|
||||
|
||||
Access Keys for ``root`` do not sync.
|
||||
|
||||
#. Policy mapping for synced user accounts
|
||||
#. Policy mapping for :ref:`Security Token Service (STS) users <minio-security-token-service>`
|
||||
|
||||
.. tab-item:: OIDC
|
||||
|
||||
#. Policies
|
||||
#. Access Keys associated to OIDC accounts with a valid :ref:`MinIO Policy <minio-policy>`. ``root`` access keys do not sync.
|
||||
#. Policy mapping for synced user accounts
|
||||
#. Policy mapping for :ref:`Security Token Service (STS) users <minio-security-token-service>`
|
||||
|
||||
.. tab-item:: LDAP
|
||||
|
||||
#. Policies
|
||||
#. Groups
|
||||
#. Access Keys associated to LDAP accounts with a valid :ref:`MinIO Policy <minio-policy>`. ``root`` access keys do not sync.
|
||||
#. Policy mapping for synced user accounts
|
||||
#. Policy mapping for :ref:`Security Token Service (STS) users <minio-security-token-service>`
|
||||
|
||||
After the initial synchronization of data across peer sites, MinIO continually replicates and synchronizes :ref:`replicable data <minio-site-replication-what-replicates>` among all sites as changes occur on any site.
|
||||
|
||||
Site Healing
|
||||
~~~~~~~~~~~~
|
||||
|
||||
Any MinIO deployment in the site replication configuration can resynchronize damaged :ref:`replica-eligible data <minio-site-replication-what-replicates>` from the peer with the most updated ("latest") version of that data.
|
||||
|
||||
.. versionchanged:: RELEASE.2023-07-18T17-49-40Z
|
||||
|
||||
Site replication operations retry up to three (3) times.
|
||||
|
||||
MinIO dequeues replication operations that fail to replicate after three attempts.
|
||||
The :ref:`scanner <minio-concepts-scanner>` picks up those affected objects at a later time and requeues them for replication.
|
||||
|
||||
.. versionchanged:: RELEASE.2022-08-11T04-37-28Z
|
||||
|
||||
Failed or pending replications requeue automatically when performing any ``GET`` or ``HEAD`` API method.
|
||||
For example, using :mc:`mc stat`, :mc:`mc cat`, or :mc:`mc ls` commands after a site comes back online prompts healing to requeue.
|
||||
|
||||
.. versionchanged:: RELEASE.2022-12-02T23-48-47Z
|
||||
|
||||
If one site loses data for any reason, resynchronize the data from another healthy site with :mc-cmd:`mc admin replicate resync`.
|
||||
This launches an active process that resynchronizes the data without waiting for the passive :ref:`MinIO scanner <minio-concepts-scanner>` to recognize the missing data.
|
||||
|
||||
.. include:: /includes/common/scanner.rst
|
||||
:start-after: start-scanner-speed-config
|
||||
:end-before: end-scanner-speed-config
|
||||
|
||||
Synchronous vs Asynchronous Replication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-replication-sync-vs-async
|
||||
:end-before: end-replication-sync-vs-async
|
||||
|
||||
MinIO strongly recommends using the default asynchronous site replication.
|
||||
Synchronous site replication performance depends strongly on latency between sites, where higher latency can result in lower PUT performance and replication lag.
|
||||
To configure synchronous site replication, use :mc-cmd:`mc admin replicate update` with the :mc-cmd:`~mc admin replicate update --mode` option.
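For example, a minimal sketch that switches an existing configuration to synchronous mode, using the ``minio1`` alias from the tutorial below and assuming the ``--mode`` flag accepts ``sync`` and ``async`` values:

.. code-block:: shell

   # Switch the site replication configuration to synchronous mode
   mc admin replicate update minio1 --mode sync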
|
||||
|
||||
Proxy to Other Sites
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO peer sites can proxy ``GET/HEAD`` requests for an object to other peers to check if it exists.
|
||||
This allows a site that is healing or lagging behind other peers to still return an object persisted to other sites.
|
||||
|
||||
For example:
|
||||
|
||||
1. A client issues ``GET("data/invoices/january.xls")`` to ``Site1``
|
||||
2. ``Site1`` cannot locate the object
|
||||
3. ``Site1`` proxies the request to ``Site2``
|
||||
4. ``Site2`` returns the latest version of the requested object
|
||||
5. ``Site1`` returns the proxied object to the client
|
||||
|
||||
For ``GET/HEAD`` requests that do *not* include a unique version ID, the proxy request returns the *latest* version of that object on the peer site.
|
||||
This may result in retrieval of a non-current version of an object, such as if the responding peer site is also experiencing replication lag.
|
||||
|
||||
MinIO does not proxy ``LIST``, ``DELETE``, and ``PUT`` operations.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Back Up Cluster Settings First
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the :mc:`mc admin cluster bucket export` and :mc:`mc admin cluster iam export` commands to take a snapshot of the bucket metadata and IAM configurations respectively prior to configuring Site Replication.
|
||||
You can use these snapshots to restore bucket/IAM settings in the event of misconfiguration during site replication configuration.
|
||||
|
||||
One Site with Data at Setup
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Only *one* site can have data at the time of setup.
|
||||
The other sites must be empty of buckets and objects.
|
||||
|
||||
After configuring site replication, any data on the first deployment replicates to the other sites.
|
||||
|
||||
All Sites Must Use the Same IDP
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
All sites must use the same :ref:`Identity Provider <minio-authentication-and-identity-management>`.
|
||||
Site replication supports the included MinIO IDP, OIDC, or LDAP.
|
||||
|
||||
All Sites Must use the Same MinIO Server Version
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
All sites must have a matching and consistent MinIO Server version.
|
||||
Configuring replication between sites with mismatched MinIO Server versions may result in unexpected or undesired replication behavior.
|
||||
|
||||
You should also ensure the :mc:`mc` version used to configure replication closely matches the server version.
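For example, a quick sketch that compares the client and server binary versions before configuring replication (run the second command on a MinIO server host):

.. code-block:: shell

   # Version of the MinIO Client used to configure replication
   mc --version

   # Version of the MinIO Server binary on each host
   minio --version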
|
||||
|
||||
Access to the Same Encryption Service
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
For :ref:`SSE-S3 <minio-encryption-sse-s3>` or :ref:`SSE-KMS <minio-encryption-sse-kms>` encryption via Key Management Service (KMS), all sites must have access to a central KMS deployment.
|
||||
|
||||
You can achieve this with a central KES server or multiple KES servers (for example, one per site) connected via a central supported :ref:`key vault server <minio-sse>`.
|
||||
|
||||
Replication Requires Versioning
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Site replication *requires* :ref:`minio-bucket-versioning` and enables it for all created buckets automatically.
|
||||
You cannot disable versioning in site replication deployments.
|
||||
|
||||
MinIO cannot replicate objects in any bucket prefixes that you exclude from versioning.
|
||||
|
||||
Load Balancers Installed on Each Site
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-load-balancing
|
||||
:end-before: end-mc-admin-replicate-load-balancing
|
||||
|
||||
Switch to Site Replication from Bucket Replication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:ref:`Bucket replication <minio-bucket-replication>` and multi-site replication are mutually exclusive.
|
||||
You cannot use both replication methods on the same deployments.
|
||||
|
||||
If you previously set up bucket replication and wish to now use site replication, you must first delete all of the bucket replication rules on the deployment that has data when initializing site replication.
|
||||
Use :mc:`mc replicate rm` on the command line to remove bucket replication rules.
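For example, a minimal sketch that clears the replication rules on a hypothetical bucket ``mybucket`` for the hypothetical alias ``myminio``, assuming the ``--all`` and ``--force`` flags:

.. code-block:: shell
   :class: copyable

   # Remove all bucket replication rules on the bucket
   mc replicate rm --all --force myminio/mybucket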
|
||||
|
||||
Only one site can have data when setting up site replication.
|
||||
All other sites must be empty.
|
||||
|
||||
Tutorials
|
||||
---------
|
||||
|
||||
.. _minio-configure-site-replication:
|
||||
|
||||
Configure Site Replication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following steps create a new site replication configuration for three :ref:`distributed deployments <deploy-minio-distributed>`.
|
||||
One of the sites contains :ref:`replicable data <minio-site-replication-what-replicates>`.
|
||||
|
||||
The three sites use the aliases ``minio1``, ``minio2``, and ``minio3``, and only ``minio1`` contains any data.
|
||||
|
||||
#. :ref:`Deploy <deploy-minio-distributed>` three or more separate MinIO sites, using the same :ref:`IDP <minio-authentication-and-identity-management>`
|
||||
|
||||
Start with empty sites *or* have no more than one site with any :ref:`replicable data <minio-site-replication-what-replicates>`.
|
||||
|
||||
#. Configure an alias for each site
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-load-balancing
|
||||
:end-before: end-mc-admin-replicate-load-balancing
|
||||
|
||||
For example, for three MinIO sites, you might create aliases ``minio1``, ``minio2``, and ``minio3``.
|
||||
|
||||
Use :mc:`mc alias set` to define the hostname or IP of the load balancer managing connections to the site.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc alias set minio1 https://minio1.example.com:9000 adminuser adminpassword
|
||||
mc alias set minio2 https://minio2.example.com:9000 adminuser adminpassword
|
||||
mc alias set minio3 https://minio3.example.com:9000 adminuser adminpassword
|
||||
|
||||
or define environment variables
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
export MC_HOST_minio1=https://adminuser:adminpassword@minio1.example.com
|
||||
export MC_HOST_minio2=https://adminuser:adminpassword@minio2.example.com
|
||||
export MC_HOST_minio3=https://adminuser:adminpassword@minio3.example.com
|
||||
|
||||
#. Add site replication configuration
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate add minio1 minio2 minio3
|
||||
|
||||
If all sites are empty, the order of the aliases does not matter.
|
||||
If one of the sites contains any :ref:`replicable data <minio-site-replication-what-replicates>`, you must list it first.
|
||||
|
||||
No more than one site can contain any replicable data.
|
||||
|
||||
#. Query the site replication configuration to verify
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate info minio1
|
||||
|
||||
You can use the alias for any peer site in the site replication configuration.
|
||||
|
||||
#. Query the site replication status to confirm any initial data has replicated to all peer sites.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate status minio1
|
||||
|
||||
You can use the alias for any of the peer sites in the site replication configuration.
|
||||
The output should say that all :ref:`replicable data <minio-site-replication-what-replicates>` is in sync.
|
||||
|
||||
The output could resemble the following:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
Bucket replication status:
|
||||
● 1/1 Buckets in sync
|
||||
|
||||
Policy replication status:
|
||||
● 5/5 Policies in sync
|
||||
|
||||
User replication status:
|
||||
No Users present
|
||||
|
||||
Group replication status:
|
||||
No Groups present
|
||||
|
||||
For more on reviewing site replication, see the :ref:`Site Replication Status tutorial <minio-site-replication-status-tutorial>`.
|
||||
|
||||
|
||||
.. _minio-expand-site-replication:
|
||||
|
||||
Expand Site Replication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can add more sites to an existing site replication configuration.
|
||||
|
||||
The new site must meet the following requirements:
|
||||
|
||||
- Site is fully deployed and accessible by hostname or IP
|
||||
- Shares the same IDP configuration as all other sites in the replication configuration
|
||||
- Uses the same root user credentials as other configured sites
|
||||
- Contains no bucket or object data
|
||||
|
||||
#. Deploy the new MinIO peer site(s) following the stated requirements
|
||||
|
||||
#. Configure an alias for the new site
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-load-balancing
|
||||
:end-before: end-mc-admin-replicate-load-balancing
|
||||
|
||||
To check the existing aliases, use :mc:`mc alias list`.
|
||||
|
||||
Use :mc:`mc alias set` to define the hostname or IP of the load balancer managing connections to the new site(s).
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc alias set minio4 https://minio4.example.com:9000 adminuser adminpassword
|
||||
|
||||
or define environment variables
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
export MC_HOST_minio4=https://adminuser:adminpassword@minio4.example.com
|
||||
|
||||
#. Add site replication configuration
|
||||
|
||||
Use the :mc-cmd:`mc admin replicate add` command to expand the site replication configuration with the new peer site.
|
||||
Specify the alias of *all* existing peer sites, then the alias of the new site to add.
|
||||
|
||||
For example, the following command adds the new peer site ``minio4`` to an existing site replication configuration that includes the existing sites ``minio1``, ``minio2``, and ``minio3``.
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate add minio1 minio2 minio3 minio4
|
||||
|
||||
.. note::
|
||||
|
||||
If any of the sites are unreachable or permanently lost, you **must** first remove the unreachable site(s) with :mc-cmd:`mc admin replicate rm` before expanding with the new site.
|
||||
|
||||
#. Query the site replication configuration to verify
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate info minio1
|
||||
|
||||
Modify a Site's Endpoint
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If a peer site changes its hostname, you can modify the replication configuration to reflect the new hostname.
|
||||
|
||||
#. Obtain the site's Deployment ID with :mc-cmd:`mc admin replicate info`
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate info <ALIAS>
|
||||
|
||||
|
||||
#. Update the site's endpoint with :mc-cmd:`mc admin replicate update`
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate update ALIAS --deployment-id [DEPLOYMENT-ID] --endpoint [NEW-ENDPOINT]
|
||||
|
||||
Replace [DEPLOYMENT-ID] with the deployment ID of the site to update.
|
||||
|
||||
Replace [NEW-ENDPOINT] with the new endpoint for the site.
|
||||
|
||||
.. include:: /includes/common-replication.rst
|
||||
:start-after: start-mc-admin-replicate-load-balancing
|
||||
:end-before: end-mc-admin-replicate-load-balancing
|
||||
|
||||
Remove a Site from Replication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can remove a site from replication at any time.
|
||||
You can re-add the site at a later date, but you must first completely wipe bucket and object data from the site.
|
||||
|
||||
Use :mc-cmd:`mc admin replicate rm`:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate rm ALIAS PEER_TO_REMOVE --force
|
||||
|
||||
- Replace ``ALIAS`` with the :ref:`alias <alias>` of any peer site in the replication configuration.
|
||||
|
||||
- Replace ``PEER_TO_REMOVE`` with the alias of the peer site to remove.
|
||||
|
||||
All healthy peers in the site replication configuration update to remove the specified peer automatically.
|
||||
|
||||
MinIO requires the ``--force`` flag to remove the peer from the site replication configuration.
|
||||
|
||||
.. _minio-site-replication-status-tutorial:
|
||||
|
||||
Review Replication Status
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
MinIO provides information on replication across the sites for users, groups, policies, or buckets.
|
||||
|
||||
The summary information includes the number of **Synced** and **Failed** items for each category.
|
||||
|
||||
Use :mc-cmd:`mc admin replicate status`:
|
||||
|
||||
.. code-block:: shell
|
||||
|
||||
mc admin replicate status <ALIAS> --<flag> <value>
|
||||
|
||||
For example:
|
||||
|
||||
- ``mc admin replicate status minio3 --bucket images``
|
||||
|
||||
Displays the replication status for the ``images`` bucket on the ``minio3`` site.
|
||||
|
||||
The output resembles the following:
|
||||
|
||||
.. code-block::
|
||||
|
||||
● Bucket config replication summary for: images
|
||||
|
||||
Bucket | MINIO2 | MINIO3 | MINIO4
|
||||
Tags | | |
|
||||
Policy | | |
|
||||
Quota | | |
|
||||
Retention | | |
|
||||
Encryption | | |
|
||||
Replication | ✔ | ✔ | ✔
|
||||
|
||||
- ``mc admin replicate status minio3 --all``
|
||||
|
||||
Displays the replication status summary for all replication sites of which ``minio3`` is part.
|
||||
|
||||
The output resembles the following:
|
||||
|
||||
.. code-block::
|
||||
|
||||
Bucket replication status:
|
||||
● 1/1 Buckets in sync
|
||||
|
||||
Policy replication status:
|
||||
● 5/5 Policies in sync
|
||||
|
||||
User replication status:
|
||||
● 1/1 Users in sync
|
||||
|
||||
Group replication status:
|
||||
● 0/2 Groups in sync
|
||||
|
||||
Group | MINIO2 | MINIO3 | MINIO4
|
||||
ittechs | ✗ in-sync | | ✗ in-sync
|
||||
managers | ✗ in-sync | | ✗ in-sync
|
||||
|
|
||||
.. _minio-upgrade:
|
||||
|
||||
==========================
|
||||
Upgrade a MinIO Deployment
|
||||
==========================
|
||||
|
||||
.. default-domain:: minio
|
||||
|
||||
.. contents:: Table of Contents
|
||||
:local:
|
||||
:depth: 2
|
||||
|
||||
.. important::
|
||||
|
||||
For deployments older than :minio-release:`RELEASE.2024-03-30T09-41-56Z` running with :ref:`AD/LDAP <minio-ldap-config-settings>` enabled, you **must** read through the release notes for :minio-release:`RELEASE.2024-04-18T19-09-19Z` before starting this procedure.
|
||||
You must take the extra steps documented in the linked release as part of the upgrade.
|
||||
|
||||
.. cond:: linux
|
||||
|
||||
.. include:: /includes/linux/steps-upgrade-minio-deployment.rst
|
||||
|
||||
.. cond:: container
|
||||
|
||||
.. include:: /includes/container/steps-upgrade-minio-deployment.rst
|
||||
|
||||
.. cond:: windows
|
||||
|
||||
.. include:: /includes/windows/steps-upgrade-minio-deployment.rst
|
||||
|
||||
.. cond:: macos
|
||||
|
||||
.. include:: /includes/macos/steps-upgrade-minio-deployment.rst
|
:orphan:

==============================
Upgrade Legacy MinIO Operators
==============================

.. default-domain:: minio

.. contents:: Table of Contents
   :local:
   :depth: 1

MinIO supports the following upgrade paths for older versions of the MinIO Operator:

.. list-table::
   :header-rows: 1
   :widths: 40 40
   :width: 100%

   * - Current Version
     - Supported Upgrade Target

   * - 5.0.15 or later
     - |operator-version-stable|

   * - 5.0.0 to 5.0.14
     - 5.0.15

   * - 4.2.3 to 4.5.7
     - 4.5.8

   * - 4.0.0 through 4.2.2
     - 4.2.3

   * - 3.X.X
     - 4.2.2

To upgrade Operator to |operator-version-stable| from version 4.5.7 or earlier, you must first upgrade to version 4.5.8 and then to 5.0.15.
Depending on your current version, you may need to do one or more intermediate upgrades to reach v4.5.8.

After upgrading to 5.0.15, see :ref:`minio-k8s-upgrade-minio-operator` to upgrade to the latest version.

.. _minio-k8s-upgrade-minio-operator-to-5.0.15:

Upgrade MinIO Operator 4.5.8 and Later to 5.0.15
------------------------------------------------

.. admonition:: Prerequisites
   :class: note

   This procedure requires the following:

   - You have an existing MinIO Operator deployment running 4.5.8 or later
   - Your Kubernetes cluster runs 1.21.0 or later
   - Your local host has ``kubectl`` installed and configured with access to the Kubernetes cluster

This procedure upgrades the MinIO Operator from any 4.5.8 or later release to 5.0.15.

Tenant Custom Resource Definition Changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following changes apply for Operator v5.0.0 or later:

- The ``.spec.s3`` field is replaced by the ``.spec.features`` field.
- The ``.spec.credsSecret`` field is replaced by the ``.spec.configuration`` field.

  The ``.spec.credsSecret`` should hold all the environment variables for the MinIO deployment that contain sensitive information and should not show in ``.spec.env``.
  This change impacts the Tenant :abbr:`CRD (CustomResourceDefinition)` and only impacts users editing a tenant YAML directly, such as through Helm or Kustomize.

- Both the **Log Search API** (``.spec.log``) and **Prometheus** (``.spec.prometheus``) deployments have been removed.
  However, existing deployments are left running as standalone deployments / statefulsets with no connection to the Tenant CR.
  Deleting the Tenant :abbr:`CRD (Custom Resource Definition)` does **not** cascade to the log or Prometheus deployments.

  .. important::

     MinIO recommends that you create a yaml file to manage these deployments going forward.

Log Search and Prometheus
~~~~~~~~~~~~~~~~~~~~~~~~~

The latest releases of Operator remove Log Search and Prometheus from included Operator tools.
The following steps back up the existing yaml files, perform some clean up, and provide steps to continue using either or both of these functions.

#. Back up Prometheus and Log Search yaml files.

   .. code-block:: shell
      :class: copyable

      export TENANT_NAME=myminio
      export NAMESPACE=mynamespace
      kubectl -n $NAMESPACE get secret $TENANT_NAME-log-secret -o yaml > $TENANT_NAME-log-secret.yaml
      kubectl -n $NAMESPACE get cm $TENANT_NAME-prometheus-config-map -o yaml > $TENANT_NAME-prometheus-config-map.yaml
      kubectl -n $NAMESPACE get sts $TENANT_NAME-prometheus -o yaml > $TENANT_NAME-prometheus.yaml
      kubectl -n $NAMESPACE get sts $TENANT_NAME-log -o yaml > $TENANT_NAME-log.yaml
      kubectl -n $NAMESPACE get deployment $TENANT_NAME-log-search-api -o yaml > $TENANT_NAME-log-search-api.yaml
      kubectl -n $NAMESPACE get svc $TENANT_NAME-log-hl-svc -o yaml > $TENANT_NAME-log-hl-svc.yaml
      kubectl -n $NAMESPACE get svc $TENANT_NAME-log-search-api -o yaml > $TENANT_NAME-log-search-api-svc.yaml
      kubectl -n $NAMESPACE get svc $TENANT_NAME-prometheus-hl-svc -o yaml > $TENANT_NAME-prometheus-hl-svc.yaml

   - Replace ``myminio`` with the name of the tenant on the operator deployment you are upgrading.
   - Replace ``mynamespace`` with the namespace for the tenant on the operator deployment you are upgrading.

   Repeat for each tenant.

#. Remove ``.metadata.ownerReferences`` for all backed up files for all tenants.
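
   You can remove the field with any text editor.
   As a sketch, if your local host has the ``yq`` (v4) utility installed, a loop resembling the following removes the field from each backed up file in place (filenames assumed from the backup step above):

   .. code-block:: shell
      :class: copyable

      for f in $TENANT_NAME-*.yaml; do
        yq -i 'del(.metadata.ownerReferences)' "$f"
      done
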
#. *(Optional)* To continue using the Log Search API and Prometheus, add the following variables to the tenant's yaml specification file under ``.spec.env``.

   Use the following command to edit a tenant:

   .. code-block:: shell
      :class: copyable

      kubectl edit tenants <TENANT-NAME> -n <TENANT-NAMESPACE>

   - Replace ``<TENANT-NAME>`` with the name of the tenant to modify.
   - Replace ``<TENANT-NAMESPACE>`` with the namespace of the tenant you are modifying.

   Add the following values under ``.spec.env`` in the file:

   .. code-block:: yaml
      :class: copyable

      - name: MINIO_LOG_QUERY_AUTH_TOKEN
        valueFrom:
          secretKeyRef:
            key: MINIO_LOG_QUERY_AUTH_TOKEN
            name: <TENANT_NAME>-log-secret
      - name: MINIO_LOG_QUERY_URL
        value: http://<TENANT_NAME>-log-search-api:8080
      - name: MINIO_PROMETHEUS_JOB_ID
        value: minio-job
      - name: MINIO_PROMETHEUS_URL
        value: http://<TENANT_NAME>-prometheus-hl-svc:9001

   - Replace ``<TENANT_NAME>`` in the ``name`` or ``value`` lines with the name of your tenant.

Procedure
~~~~~~~~~

.. tab-set::

   .. tab-item:: Upgrade using Kustomize

      The following procedure upgrades the MinIO Operator using Kustomize.

      For Operator versions 5.0.1 to 5.0.14 installed with the MinIO Kubernetes Plugin, follow the Kustomize instructions below to upgrade to 5.0.15 or later.
      If you installed the Operator using :ref:`Helm <minio-k8s-deploy-operator-helm>`, use the :guilabel:`Upgrade using Helm` instructions instead.

      #. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

         Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.
         Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.
         See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

      #. Verify the existing Operator installation.

         Use ``kubectl get all -n minio-operator`` to verify the health and status of all Operator pods and services.

         If you installed the Operator to a custom namespace, specify that namespace as ``-n <NAMESPACE>``.

         You can verify the currently installed Operator version by retrieving the object specification for an operator pod in the namespace.
         The following example uses the ``jq`` tool to filter the necessary information from ``kubectl``:

         .. code-block:: shell
            :class: copyable

            kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

         The output resembles the following:

         .. code-block:: json
            :emphasize-lines: 8-10
            :substitutions:

            {
               "env": [
                  {
                     "name": "CLUSTER_DOMAIN",
                     "value": "cluster.local"
                  }
               ],
               "image": "minio/operator:v|operator-version-stable|",
               "imagePullPolicy": "IfNotPresent",
               "name": "minio-operator"
            }

         If your local host does not have the ``jq`` utility installed, you can run the first part of the command and locate the ``spec.containers`` section of the output.
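
         As a sketch, you can also print just the Operator image tag by using a JSONPath output expression with ``kubectl`` alone:

         .. code-block:: shell
            :class: copyable

            kubectl get pod -l 'name=minio-operator' -n minio-operator \
               -o jsonpath='{.items[0].spec.containers[0].image}'
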
      #. Upgrade Operator with Kustomize

         The following command upgrades Operator to version 5.0.15:

         .. code-block:: shell
            :class: copyable

            kubectl apply -k github.com/minio/operator/?ref=v5.0.15

         In the sample output below, ``configured`` at the end of the line indicates where a new change was applied from the updated CRD:

         .. code-block:: shell

            namespace/minio-operator configured
            customresourcedefinition.apiextensions.k8s.io/miniojobs.job.min.io configured
            customresourcedefinition.apiextensions.k8s.io/policybindings.sts.min.io configured
            customresourcedefinition.apiextensions.k8s.io/tenants.minio.min.io configured
            serviceaccount/console-sa unchanged
            serviceaccount/minio-operator unchanged
            clusterrole.rbac.authorization.k8s.io/console-sa-role unchanged
            clusterrole.rbac.authorization.k8s.io/minio-operator-role unchanged
            clusterrolebinding.rbac.authorization.k8s.io/console-sa-binding unchanged
            clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding unchanged
            configmap/console-env unchanged
            secret/console-sa-secret configured
            service/console unchanged
            service/operator unchanged
            service/sts unchanged
            deployment.apps/console configured
            deployment.apps/minio-operator configured

      #. Validate the Operator upgrade

         You can check the new Operator version with the same ``kubectl`` command used previously:

         .. code-block:: shell
            :class: copyable

            kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

      #. *(Optional)* Connect to the Operator Console

         .. include:: /includes/common/common-k8s-connect-operator-console-no-plugin.rst

      #. Retrieve the Operator Console JWT for login

         To continue upgrading to |operator-version-stable|, see :ref:`minio-k8s-upgrade-minio-operator`.

         .. include:: /includes/common/common-k8s-operator-console-jwt.rst

   .. tab-item:: Upgrade using Helm

      The following procedure upgrades an existing MinIO Operator Installation using Helm.

      If you installed the Operator using Kustomize, use the :guilabel:`Upgrade using Kustomize` instructions instead.

      #. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

         Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.
         Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.
         See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

      #. Verify the existing Operator installation.

         Use ``kubectl get all -n minio-operator`` to verify the health and status of all Operator pods and services.

         If you installed the Operator to a custom namespace, specify that namespace as ``-n <NAMESPACE>``.

         Use the ``helm list`` command to view the installed charts in the namespace:

         .. code-block:: shell
            :class: copyable

            helm list -n minio-operator

         The result should resemble the following:

         .. code-block:: shell
            :class: copyable

            NAME     NAMESPACE      REVISION  UPDATED                                  STATUS    CHART           APP VERSION
            operator minio-operator 1         2023-11-01 15:49:54.539724775 -0400 EDT  deployed  operator-5.0.x  v5.0.x

      #. Update the Operator Repository

         Use ``helm repo update minio-operator`` to update the MinIO Operator repo.
         If you set a different alias for the MinIO Operator repository, specify that in the command instead of ``minio-operator``.
         You can use ``helm repo list`` to review your installed repositories.
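
         A minimal sketch of the update and review commands described above:

         .. code-block:: shell
            :class: copyable

            helm repo update minio-operator
            helm repo list
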
         Use ``helm search`` to check the latest available chart version after updating the Operator Repo:

         .. code-block:: shell
            :class: copyable

            helm search repo minio-operator

         The response should resemble the following:

         .. code-block:: shell
            :class: copyable
            :substitutions:

            NAME                            CHART VERSION               APP VERSION                  DESCRIPTION
            minio-operator/minio-operator   4.3.7                       v4.3.7                       A Helm chart for MinIO Operator
            minio-operator/operator         |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator
            minio-operator/tenant           |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator

         The ``minio-operator/minio-operator`` is a legacy chart and should **not** be installed under normal circumstances.

      #. Run ``helm upgrade``

         Helm uses the latest chart to upgrade the MinIO Operator:

         .. code-block:: shell
            :class: copyable

            helm upgrade -n minio-operator \
              operator minio-operator/operator

         If you installed the MinIO Operator to a different namespace, specify that in the ``-n`` argument.

         If you used a different installation name from ``operator``, replace the value above with the installation name.

         The command results should return success with a bump in the ``REVISION`` value.

      #. Validate the Operator upgrade

         .. include:: /includes/common/common-k8s-connect-operator-console-no-plugin.rst

      #. Retrieve the Operator Console JWT for login

         .. include:: /includes/common/common-k8s-operator-console-jwt.rst

.. _minio-k8s-upgrade-minio-operator-to-4.5.8:

Upgrade MinIO Operator 4.2.3 through 4.5.7 to 4.5.8
---------------------------------------------------

Prerequisites
~~~~~~~~~~~~~

This procedure requires the following:

- You have an existing MinIO Operator deployment running 4.2.3 through 4.5.7
- Your Kubernetes cluster runs 1.19.0 or later
- Your local host has ``kubectl`` installed and configured with access to the Kubernetes cluster

Procedure
~~~~~~~~~

This procedure upgrades MinIO Operator release 4.2.3 through 4.5.7 to release 4.5.8.
You can then upgrade from release 4.5.8 to 5.0.15.

1. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

   Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.

   Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.

   See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

#. Verify the existing Operator installation.

   Use ``kubectl get all -n minio-operator`` to verify the health and status of all Operator pods and services.

   If you installed the Operator to a custom namespace, specify that namespace as ``-n <NAMESPACE>``.

   You can verify the currently installed Operator version by retrieving the object specification for an operator pod in the namespace.
   The following example uses the ``jq`` tool to filter the necessary information from ``kubectl``:

   .. code-block:: shell
      :class: copyable

      kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

   The output resembles the following:

   .. code-block:: json
      :emphasize-lines: 8-10

      {
         "env": [
            {
               "name": "CLUSTER_DOMAIN",
               "value": "cluster.local"
            }
         ],
         "image": "minio/operator:v4.5.1",
         "imagePullPolicy": "IfNotPresent",
         "name": "minio-operator"
      }

#. Download the Latest Stable Version of the MinIO Kubernetes Plugin

   .. include:: /includes/k8s/install-minio-kubectl-plugin.rst

#. Run the initialization command to upgrade the Operator

   Use the ``kubectl minio init`` command to upgrade the existing MinIO Operator installation:

   .. code-block:: shell
      :class: copyable

      kubectl minio init

#. Validate the Operator upgrade

   You can check the Operator version by reviewing the object specification for an Operator Pod using a previous step.
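
   For example, the same command used earlier in this procedure reports the new version in the ``image`` field:

   .. code-block:: shell
      :class: copyable

      kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'
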
   .. include:: /includes/common/common-k8s-connect-operator-console.rst

.. _minio-k8s-upgrade-minio-operator-4.2.2-procedure:

Upgrade MinIO Operator 4.0.0 through 4.2.2 to 4.2.3
---------------------------------------------------

Prerequisites
~~~~~~~~~~~~~

This procedure assumes that:

- You have an existing MinIO Operator deployment running any release from 4.0.0 through 4.2.2
- Your Kubernetes cluster runs 1.19.0 or later
- Your local host has ``kubectl`` installed and configured with access to the Kubernetes cluster

Procedure
~~~~~~~~~

This procedure covers the necessary steps to upgrade a MinIO Operator deployment running any release from 4.0.0 through 4.2.2 to 4.2.3.
You can then perform :ref:`minio-k8s-upgrade-minio-operator-procedure` to complete the upgrade to |operator-version-stable|.

There is no direct upgrade path for 4.0.0 - 4.2.2 installations to |operator-version-stable|.

1. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

   Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.
   Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.

   See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

#. Check the Security Context for each Tenant Pool

   Use the following command to validate the specification for each managed MinIO Tenant:

   .. code-block:: shell
      :class: copyable

      kubectl get tenants <TENANT-NAME> -n <TENANT-NAMESPACE> -o yaml

   If the ``spec.pools.securityContext`` field does not exist for a Tenant, the tenant pods likely run as root.

   As part of the 4.2.3 and later series, pods run with a limited permission set enforced as part of the Operator upgrade.
   However, Tenants running pods as root may fail to start due to the security context mismatch.
   You can set an explicit Security Context that allows pods to run as root for those Tenants:

   .. code-block:: yaml
      :class: copyable

      securityContext:
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        fsGroup: 0

   You can use the following command to edit the tenant and apply the changes:

   .. code-block:: shell

      kubectl edit tenants <TENANT-NAME> -n <TENANT-NAMESPACE>
      # Modify the securityContext as needed

   See :kube-docs:`Pod Security Standards <concepts/security/pod-security-standards/>` for more information on Kubernetes Security Contexts.

#. Upgrade to Operator 4.2.3

   Download the MinIO Kubernetes Plugin 4.2.3 and use it to upgrade the Operator.
   Open https://github.com/minio/operator/releases/tag/v4.2.3 in a browser and download the binary that corresponds to your local host OS.

   For example, Linux hosts running an Intel or AMD processor can run the following commands:

   .. code-block:: shell
      :class: copyable

      wget https://github.com/minio/operator/releases/download/v4.2.3/kubectl-minio_4.2.3_linux_amd64 -O kubectl-minio_4.2.3
      chmod +x kubectl-minio_4.2.3
      ./kubectl-minio_4.2.3 init

#. Validate all Tenants and Operator pods

   Check the Operator and MinIO Tenant namespaces to ensure all pods and services started successfully.

   For example:

   .. code-block:: shell
      :class: copyable

      kubectl get all -n minio-operator
      kubectl get pods -l "v1.min.io/tenant" --all-namespaces

#. Upgrade to |operator-version-stable|

   Follow the :ref:`minio-k8s-upgrade-minio-operator-procedure` procedure to upgrade to the latest stable Operator version.

Upgrade MinIO Operator 3.0.0 through 3.0.29 to 4.2.2
----------------------------------------------------

Prerequisites
~~~~~~~~~~~~~

This procedure assumes that:

- You have an existing MinIO Operator deployment running 3.X.X
- Your Kubernetes cluster runs 1.19.0 or later
- Your local host has ``kubectl`` installed and configured with access to the Kubernetes cluster

Procedure
~~~~~~~~~

This procedure covers the necessary steps to upgrade a MinIO Operator deployment running any release from 3.0.0 through 3.0.29 to 4.2.2.
You can then perform :ref:`minio-k8s-upgrade-minio-operator-4.2.2-procedure`, followed by :ref:`minio-k8s-upgrade-minio-operator-procedure`.

There is no direct upgrade path from a 3.X.X series installation to |operator-version-stable|.

1. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

   Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.

   Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.

   See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

#. Validate the Tenant ``tenant.spec.zones`` values

   Use the following command to validate the specification for each managed MinIO Tenant:

   .. code-block:: shell
      :class: copyable

      kubectl get tenants <TENANT-NAME> -n <TENANT-NAMESPACE> -o yaml

   - Ensure each ``tenant.spec.zones`` element has a ``name`` field set to the name for that zone.
     Each zone must have a unique name for that Tenant, such as ``zone-0`` and ``zone-1`` for the first and second zones respectively.

   - Ensure each ``tenant.spec.zones`` has an explicit ``securityContext`` describing the permission set with which pods run in the cluster.

   The following example tenant YAML fragment sets the specified fields:

   .. code-block:: yaml

      image: "minio/minio:$(LATEST-VERSION)"
      ...
      zones:
      - servers: 4
        name: "zone-0"
        volumesPerServer: 4
        volumeClaimTemplate:
          metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Ti
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          runAsNonRoot: false
          fsGroup: 0
      - servers: 4
        name: "zone-1"
        volumesPerServer: 4
        volumeClaimTemplate:
          metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Ti
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          runAsNonRoot: false
          fsGroup: 0

   You can use the following command to edit the tenant and apply the changes:

   .. code-block:: shell

      kubectl edit tenants <TENANT-NAME> -n <TENANT-NAMESPACE>

#. Upgrade to Operator 4.2.2

   Download the MinIO Kubernetes Plugin 4.2.2 and use it to upgrade the Operator.
   Open https://github.com/minio/operator/releases/tag/v4.2.2 in a browser and download the binary that corresponds to your local host OS.
   For example, Linux hosts running an Intel or AMD processor can run the following commands:

   .. code-block:: shell
      :class: copyable

      wget https://github.com/minio/operator/releases/download/v4.2.2/kubectl-minio_4.2.2_linux_amd64 -O kubectl-minio_4.2.2
      chmod +x kubectl-minio_4.2.2
      ./kubectl-minio_4.2.2 init

#. Validate all Tenants and Operator pods

   Check the Operator and MinIO Tenant namespaces to ensure all pods and services started successfully.

   For example:

   .. code-block:: shell
      :class: copyable

      kubectl get all -n minio-operator
      kubectl get pods -l "v1.min.io/tenant" --all-namespaces

#. Upgrade to 4.2.3

   Follow the :ref:`minio-k8s-upgrade-minio-operator-4.2.2-procedure` procedure to upgrade to Operator 4.2.3.
   You can then upgrade to |operator-version-stable|.

@@ -1,207 +0,0 @@
.. _minio-k8s-upgrade-minio-operator:

======================
Upgrade MinIO Operator
======================

.. default-domain:: minio

.. contents:: Table of Contents
   :local:
   :depth: 1

You can upgrade the MinIO Operator at any time without impacting your managed MinIO Tenants.

As part of the upgrade process, the Operator may update and restart Tenants to support changes to the MinIO Custom Resource Definition (CRD).
These changes require no action on the part of any operator or administrator, and do not impact Tenant operations.

This page describes how to upgrade from Operator 5.0.15 to |operator-version-stable|.
See :ref:`minio-k8s-upgrade-minio-operator-to-5.0.15` for instructions on upgrading to Operator 5.0.15 before starting this procedure.
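
If you are not sure which Operator release you are currently running, the image tag on the Operator pod reports it.
The following sketch assumes the default ``minio-operator`` namespace and uses a JSONPath expression in place of the ``jq`` filter shown later on this page:

.. code-block:: shell
   :class: copyable

   kubectl get pod -l 'name=minio-operator' -n minio-operator \
      -o jsonpath='{.items[0].spec.containers[0].image}'
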
.. admonition:: Operator 6.0.0 Deprecates the Operator Console

   Starting with Operator 6.0.0, the MinIO Operator Console is deprecated and removed.

   You can continue to manage and deploy MinIO Tenants using standard Kubernetes approaches such as Kustomize or Helm.

.. _minio-k8s-upgrade-minio-operator-procedure:

Upgrade MinIO Operator 5.0.15 to |operator-version-stable|
----------------------------------------------------------

.. important::

   Operator 6.0.0 deprecates the MinIO Operator Console and removes the related resources from the MinIO Operator CRD.
   This includes removal of Operator Console resources such as services and pods.

   Use either Kustomization or Helm for managing Tenants moving forward.

.. tab-set::

   .. tab-item:: Upgrade using Kustomize

      The following procedure upgrades the MinIO Operator using Kustomize.
      For deployments using Operator 5.0.0 through 5.0.14, follow the :ref:`minio-k8s-upgrade-minio-operator-to-5.0.15` procedure before performing this upgrade.

      If you installed the Operator using :ref:`Helm <minio-k8s-deploy-operator-helm>`, use the :guilabel:`Upgrade using Helm` instructions instead.

      .. container:: procedure

         #. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

            Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.
            Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.
            See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

         #. Verify the existing Operator installation.

            Use ``kubectl get all -n minio-operator`` to verify the health and status of all Operator pods and services.

            If you installed the Operator to a custom namespace, specify that namespace as ``-n <NAMESPACE>``.

            You can verify the currently installed Operator version by retrieving the object specification for an operator pod in the namespace.
            The following example uses the ``jq`` tool to filter the necessary information from ``kubectl``:

            .. code-block:: shell
               :class: copyable

               kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

            The output resembles the following:

            .. code-block:: json
               :emphasize-lines: 8-10
               :substitutions:

               {
                  "env": [
                     {
                        "name": "CLUSTER_DOMAIN",
                        "value": "cluster.local"
                     }
                  ],
                  "image": "minio/operator:v5.0.15",
                  "imagePullPolicy": "IfNotPresent",
                  "name": "minio-operator"
               }

            If your local host does not have the ``jq`` utility installed, you can run the first part of the command and locate the ``spec.containers`` section of the output.
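
            As a sketch, you can also print just the Operator image tag by using a JSONPath output expression with ``kubectl`` alone:

            .. code-block:: shell
               :class: copyable

               kubectl get pod -l 'name=minio-operator' -n minio-operator \
                  -o jsonpath='{.items[0].spec.containers[0].image}'
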
         #. Upgrade Operator with Kustomize

            The following command upgrades Operator to version |operator-version-stable|:

            .. code-block:: shell
               :class: copyable

               kubectl apply -k github.com/minio/operator

            In the sample output below, ``configured`` indicates where a new change was applied from the updated CRD:

            .. code-block:: shell

               namespace/minio-operator unchanged
               customresourcedefinition.apiextensions.k8s.io/miniojobs.job.min.io configured
               customresourcedefinition.apiextensions.k8s.io/policybindings.sts.min.io configured
               customresourcedefinition.apiextensions.k8s.io/tenants.minio.min.io configured
               serviceaccount/minio-operator unchanged
               clusterrole.rbac.authorization.k8s.io/minio-operator-role configured
               clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding unchanged
               service/operator unchanged
               service/sts unchanged
               deployment.apps/minio-operator configured

         #. Validate the Operator upgrade

            You can check the new Operator version with the same ``kubectl`` command used previously:

            .. code-block:: shell
               :class: copyable

               kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

   .. tab-item:: Upgrade using Helm

      The following procedure upgrades an existing MinIO Operator Installation using Helm.

      If you installed the Operator using Kustomize, use the :guilabel:`Upgrade using Kustomize` instructions instead.

      .. container:: procedure

         #. *(Optional)* Update each MinIO Tenant to the latest stable MinIO Version.

            Upgrading MinIO regularly ensures your Tenants have the latest features and performance improvements.
            Test upgrades in a lower environment such as a Dev or QA Tenant, before applying to your production Tenants.
            See :ref:`minio-k8s-upgrade-minio-tenant` for a procedure on upgrading MinIO Tenants.

         #. Verify the existing Operator installation.

            Use ``kubectl get all -n minio-operator`` to verify the health and status of all Operator pods and services.

            If you installed the Operator to a custom namespace, specify that namespace as ``-n <NAMESPACE>``.

            Use the ``helm list`` command to view the installed charts in the namespace:

            .. code-block:: shell
               :class: copyable

               helm list -n minio-operator

            The result should resemble the following:

            .. code-block:: shell
               :class: copyable

               NAME     NAMESPACE      REVISION  UPDATED                                  STATUS    CHART           APP VERSION
               operator minio-operator 1         2023-11-01 15:49:54.539724775 -0400 EDT  deployed  operator-5.0.x  v5.0.x

         #. Update the Operator Repository

            Use ``helm repo update minio-operator`` to update the MinIO Operator repo.
            If you set a different alias for the MinIO Operator repository, specify that in the command instead of ``minio-operator``.
            You can use ``helm repo list`` to review your installed repositories.

            Use ``helm search`` to check the latest available chart version after updating the Operator Repo:

            .. code-block:: shell
               :class: copyable

               helm search repo minio-operator

            The response should resemble the following:

            .. code-block:: shell
               :class: copyable
               :substitutions:

               NAME                            CHART VERSION               APP VERSION                  DESCRIPTION
               minio-operator/minio-operator   4.3.7                       v4.3.7                       A Helm chart for MinIO Operator
               minio-operator/operator         |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator
               minio-operator/tenant           |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator

            The ``minio-operator/minio-operator`` is a legacy chart and should **not** be installed under normal circumstances.

         #. Run ``helm upgrade``

            Helm uses the latest chart to upgrade the MinIO Operator:

            .. code-block:: shell
               :class: copyable

               helm upgrade -n minio-operator \
                 operator minio-operator/operator

            If you installed the MinIO Operator to a different namespace, specify that in the ``-n`` argument.

            If you used a different installation name from ``operator``, replace the value above with the installation name.

            The command results should return success with a bump in the ``REVISION`` value.
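
            To confirm the new revision, you can run ``helm list`` again:

            .. code-block:: shell
               :class: copyable

               helm list -n minio-operator
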
         #. Validate the Operator upgrade

            You can check the new Operator version with the same ``kubectl`` command used previously:

            .. code-block:: shell
               :class: copyable

               kubectl get pod -l 'name=minio-operator' -n minio-operator -o json | jq '.items[0].spec.containers'

@@ -1,203 +0,0 @@
.. _minio-k8s-upgrade-minio-tenant:

======================
Upgrade a MinIO Tenant
======================

.. default-domain:: minio

.. contents:: Table of Contents
   :local:
   :depth: 1

The following procedures upgrade a single MinIO Tenant, using either Kustomize or Helm.
MinIO recommends you test upgrades in a lower environment such as a Dev or QA Tenant, before upgrading production Tenants.

.. important::

   For Tenants using a MinIO Image older than :minio-release:`RELEASE.2024-03-30T09-41-56Z` running with :ref:`AD/LDAP <minio-ldap-config-settings>` enabled, you **must** read through the release notes for :minio-release:`RELEASE.2024-04-18T19-09-19Z` before starting this procedure.
   You must take the extra steps documented in the linked release as part of the upgrade procedure.

.. _minio-upgrade-tenant-plugin:
.. _minio-upgrade-tenant-kustomize:

Upgrade a Tenant using Kustomize
--------------------------------

The following procedure upgrades a MinIO Tenant using Kustomize and the ``kubectl`` CLI.
If you deployed the Tenant using :ref:`Helm <deploy-tenant-helm>`, use the :ref:`minio-upgrade-tenant-helm` procedure instead.

To upgrade a Tenant with Kustomize:

If the tenant was deployed with Operator Console, there are additional steps to create a base configuration file before upgrading.

If the tenant was deployed with Kustomize, the base configuration is your existing ``kustomization`` files from the original tenant deployment.

Choose a tab below depending on how the tenant was deployed:

.. tab-set::

   .. tab-item:: Operator Console-Deployed Tenant
      :selected:

      1. Create the base configuration file:

         a. In a convenient directory, save the current Tenant configuration to a file using ``kubectl get``:

            .. code-block:: shell
               :class: copyable

               kubectl get tenant/my-tenant -n my-tenant-ns -o yaml > my-tenant-base.yaml

            Replace ``my-tenant`` and ``my-tenant-ns`` with the name and namespace of the Tenant to upgrade.

            Edit the file to remove the following lines:

            - ``creationTimestamp:``
            - ``resourceVersion:``
            - ``uid:``
            - ``selfLink:`` (if present)

            For example, remove the highlighted lines:

            .. code-block:: shell
               :emphasize-lines: 2, 6, 7

               metadata:
                 creationTimestamp: "2024-05-29T21:22:20Z"
                 generation: 1
                 name: my-tenant
                 namespace: my-tenant-ns
                 resourceVersion: "4699"
                 uid: d5b8e468-3bed-4aa3-8ddb-dfe1ee0362da
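
            As a sketch, if your local host has the ``yq`` (v4) utility installed, you can remove these fields in one pass instead of editing by hand:

            .. code-block:: shell
               :class: copyable

               yq -i 'del(.metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid, .metadata.selfLink)' my-tenant-base.yaml
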
         b. In the same directory, create a ``kustomization.yaml`` file with contents resembling the following:

            .. code-block:: yaml
               :class: copyable

               apiVersion: kustomize.config.k8s.io/v1beta1
               kind: Kustomization

               resources:
               - my-tenant-base.yaml

               patches:
               - path: upgrade-minio-tenant.yaml

            If you used a different filename for the ``kubectl get`` output in the previous step, replace ``my-tenant-base.yaml`` with the name of that file.

   .. tab-item:: Existing Kustomize-deployed Tenant

      1. You can upgrade the tenant using the ``kustomization`` files from the original deployment as the base configuration.
         If you no longer have these files, follow the instructions in the :guilabel:`Operator Console-Deployed Tenant` tab.

2. Create an ``upgrade-minio-tenant.yaml`` file with contents resembling the following:

   .. code-block:: yaml
      :class: copyable
      :substitutions:

      apiVersion: minio.min.io/v2
      kind: Tenant

      metadata:
        name: my-tenant
        namespace: my-tenant-ns

      spec:
        image: minio/minio:|minio-tag|

   This file instructs Kustomize to upgrade the tenant using the specified image.
   The name of this file, ``upgrade-minio-tenant.yaml``, must match the ``patches.path`` filename specified in the ``kustomization.yaml`` file created in the previous step.

   Replace ``my-tenant`` and ``my-tenant-ns`` with the name and namespace of the Tenant to upgrade.
   Specify the MinIO version to upgrade to in ``image:``.

   Alternatively, you can update the base configuration directly, according to your local procedures.
   Refer to the :kube-docs:`Kustomize Documentation <tasks/manage-kubernetes-objects/kustomization>` for more information.

3. From the same directory as the above files, apply the updated configuration to the Tenant with ``kubectl apply``:

   .. code-block:: shell
      :class: copyable

      kubectl apply -k ./

   The output resembles the following:

   .. code-block:: shell

      tenant.minio.min.io/my-tenant configured

.. _minio-upgrade-tenant-helm:

Upgrade the Tenant using the MinIO Helm Chart
---------------------------------------------

This procedure upgrades an existing MinIO Tenant using Helm Charts.

If you deployed the Tenant using Kustomize, use the :ref:`minio-upgrade-tenant-kustomize` procedure instead.

1. Verify the existing MinIO Tenant installation.

   Use ``kubectl get all -n TENANT_NAMESPACE`` to verify the health and status of all Tenant pods and services.

   Use the ``helm list`` command to view the installed charts in the namespace:

   .. code-block:: shell
      :class: copyable

      helm list -n TENANT_NAMESPACE

   The result should resemble the following:

   .. code-block:: shell

      NAME        NAMESPACE         REVISION  UPDATED                                  STATUS    CHART         APP VERSION
      CHART_NAME  TENANT_NAMESPACE  1         2023-11-01 15:49:58.810412732 -0400 EDT  deployed  tenant-5.0.x  v5.0.x

#. Update the Operator Repository

   Use ``helm repo update minio-operator`` to update the MinIO Operator repo.
   If you set a different alias for the MinIO Operator repository, specify that in the command instead of ``minio-operator``.
   You can use ``helm repo list`` to review your installed repositories.

   Use ``helm search`` to check the latest available chart version after updating the Operator Repo:

   .. code-block:: shell
      :class: copyable

      helm search repo minio-operator

   The response should resemble the following:

   .. code-block:: shell
      :class: copyable
      :substitutions:

      NAME                            CHART VERSION               APP VERSION                  DESCRIPTION
      minio-operator/minio-operator   4.3.7                       v4.3.7                       A Helm chart for MinIO Operator
      minio-operator/operator         |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator
      minio-operator/tenant           |operator-version-stable|   v|operator-version-stable|   A Helm chart for MinIO Operator

   The ``minio-operator/minio-operator`` is a legacy chart and should **not** be installed under normal circumstances.

#. Run ``helm upgrade``

   Helm uses the latest chart to upgrade the Tenant:

   .. code-block:: shell
      :class: copyable

      helm upgrade -n minio-tenant \
        CHART_NAME minio-operator/tenant

   The command results should return success with a bump in the ``REVISION`` value.
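
   As a quick check, ``helm list`` should now report an incremented ``REVISION`` for the Tenant chart (replace ``TENANT_NAMESPACE`` with the Tenant's namespace):

   .. code-block:: shell
      :class: copyable

      helm list -n TENANT_NAMESPACE
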
#. Validate the Tenant Upgrade

   Check that all services and pods are online and functioning normally.
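
   For example, commands resembling the following list the Tenant services and pods (replace ``TENANT_NAMESPACE`` with the Tenant's namespace):

   .. code-block:: shell
      :class: copyable

      kubectl get all -n TENANT_NAMESPACE
      kubectl get pods -l "v1.min.io/tenant" -n TENANT_NAMESPACE
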