Welcome to the upcoming version of the MinIO Documentation!

The content on this page is under active development and
may change at any time.

If you can't find what you're looking for, check our
legacy documentation.

Thank you for your patience.
diff --git a/source/bare-metal/minio-baremetal-overview.rst b/source/bare-metal/minio-baremetal-overview.rst
deleted file mode 100644
index 191fb295..00000000
--- a/source/bare-metal/minio-baremetal-overview.rst
+++ /dev/null
@@ -1,363 +0,0 @@
-.. _minio-baremetal:
-
-====================
-MinIO for Bare Metal
-====================
-
-.. default-domain:: minio
-
-.. contents:: Table of Contents
- :local:
- :depth: 2
-
-MinIO is a high performance distributed object storage server, designed for
-large-scale private cloud infrastructure. MinIO fully supports deployment onto
-bare-metal hardware with or without containerization for process management.
-
-Standalone Installation
------------------------
-
-Standalone MinIO deployments consist of a single ``minio`` server process with
-one or more disks. Standalone deployments are best suited for local development
-environments.
-
-1) Install the ``minio`` Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Install the :program:`minio` server onto the host machine. Select the tab that
-corresponds to the host machine operating system or environment:
-
-.. include:: /includes/minio-server-installation.rst
-
-2) Add TLS/SSL Certificates (Optional)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Enable TLS/SSL connectivity to the MinIO server by specifying a private key
-(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
-
-- For Linux/MacOS: ``${HOME}/.minio/certs``
-
-- For Windows: ``%%USERPROFILE%%\.minio\certs``
-
-The MinIO server automatically enables TLS/SSL connectivity if it detects
-the required certificates in the ``certs`` directory.
-
-.. note::
-
- The MinIO documentation makes a best-effort to provide generally applicable
- and accurate information on TLS/SSL connectivity in the context of MinIO
- products and services, and is not intended as a complete guide to the larger
- topic of TLS/SSL certificate creation and management.
-
-3) Run the ``minio`` Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Issue the following command to start the :program:`minio` server. The following
-example assumes the host machine has *at least* four disks, which is the minimum
-required number of disks to enable :ref:`erasure coding `:
-
-.. code-block:: shell
- :class: copyable
-
- export MINIO_ACCESS_KEY=minio-admin
- export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
- minio server /mnt/disk{1...4}/data
-
-The example command breaks down as follows:
-
-.. list-table::
- :widths: 40 60
- :width: 100%
-
- * - :envvar:`MINIO_ACCESS_KEY`
- - The access key for the :ref:`root ` user.
-
- Replace this value with a unique, random, and long string.
-
- * - :envvar:`MINIO_SECRET_KEY`
- - The corresponding secret key to use for the
- :ref:`root ` user.
-
- Replace this value with a unique, random, and long string.
-
- * - ``/mnt/disk{1...4}/data``
- - The path to each disk on the host machine.
-
- ``/data`` is an optional folder in which the ``minio`` server stores
- all information related to the deployment.
-
- See :mc-cmd:`minio server DIRECTORIES` for more information on
- configuring the backing storage for the :mc:`minio server` process.
-
-The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
-series. Specifically, ``/mnt/disk{1...4}/data`` expands to:
-
-- ``/mnt/disk1/data``
-- ``/mnt/disk2/data``
-- ``/mnt/disk3/data``
-- ``/mnt/disk4/data``
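The expansion rule above can be sketched in Python. This is an illustrative helper, not MinIO's actual parser; note that ``{x...y}`` is MinIO's own notation and differs from shell brace expansion, which uses two dots (``{1..4}``):

```python
import re

def expand_ellipsis(template):
    # Expand one {x...y} group, recursing so that templates with several
    # groups (e.g. hostname and disk) expand into the full cross product.
    m = re.search(r"\{(\d+)\.\.\.(\d+)\}", template)
    if not m:
        return [template]
    expanded = []
    for i in range(int(m.group(1)), int(m.group(2)) + 1):
        expanded.extend(
            expand_ellipsis(template[:m.start()] + str(i) + template[m.end():])
        )
    return expanded

print(expand_ellipsis("/mnt/disk{1...4}/data"))
# ['/mnt/disk1/data', '/mnt/disk2/data', '/mnt/disk3/data', '/mnt/disk4/data']
```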
-
-4) Connect to the Server
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the :mc-cmd:`mc alias set` command from a machine with connectivity to
-the host running the ``minio`` server. See :ref:`mc-install` for documentation
-on installing :program:`mc`.
-
-.. code-block:: shell
- :class: copyable
-
- mc alias set mylocalminio 192.0.2.10:9000 minioadmin minio-secret-key-CHANGE-ME
-
-Replace the IP address and port with one of the ``minio`` servers endpoints.
-
-See :ref:`minio-mc-commands` for a list of commands you can run on the
-MinIO server.
-
-Distributed Installation
-------------------------
-
-Distributed MinIO deployments consist of multiple ``minio`` servers with
-one or more disks each. Distributed deployments are best suited for
-staging and production environments.
-
-MinIO *requires* using sequentially-numbered hostnames to represent each
-``minio`` server in the deployment. For example, the following hostnames support
-a 4-node distributed deployment:
-
-- ``minio1.example.com``
-- ``minio2.example.com``
-- ``minio3.example.com``
-- ``minio4.example.com``
-
-Create the necessary DNS hostname mappings *prior* to starting this
-procedure.
-
-1) Install the ``minio`` Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Install the :program:`minio` server onto each host machine in the deployment.
-Select the tab that corresponds to the host machine operating system or
-environment:
-
-.. include:: /includes/minio-server-installation.rst
-
-2) Add TLS/SSL Certificates (Optional)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Enable TLS/SSL connectivity to the MinIO server by specifying a private key
-(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
-
-- For Linux/MacOS: ``${HOME}/.minio/certs``
-
-- For Windows: ``%%USERPROFILE%%\.minio\certs``
-
-The MinIO server automatically enables TLS/SSL connectivity if it detects
-the required certificates in the ``certs`` directory.
-
-.. note::
-
- The MinIO documentation makes a best-effort to provide generally applicable
- and accurate information on TLS/SSL connectivity in the context of MinIO
- products and services, and is not intended as a complete guide to the larger
- topic of TLS/SSL certificate creation and management.
-
-3) Run the ``minio`` Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Issue the following command on each host machine in the deployment. The
-following example assumes that:
-
-- The deployment has four host machines with sequential hostnames
- (i.e. ``minio1.example.com``, ``minio2.example.com``).
-
-- Each host machine has *at least* four disks mounted at ``/data``. 4 disks is
- the minimum required for :ref:`erasure coding
- `.
-
-.. code-block:: shell
- :class: copyable
-
- export MINIO_ACCESS_KEY=minio-admin
- export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
- minio server https://minio{1...4}.example.com/mnt/disk{1...4}/data
-
-The example command breaks down as follows:
-
-.. list-table::
- :widths: 40 60
- :width: 100%
-
- * - :envvar:`MINIO_ACCESS_KEY`
- - The access key for the :ref:`root ` user.
-
- Replace this value with a unique, random, and long string.
-
- * - :envvar:`MINIO_SECRET_KEY`
- - The corresponding secret key to use for the
- :ref:`root ` user.
-
- Replace this value with a unique, random, and long string.
-
- * - ``https://minio{1...4}.example.com/``
- - The DNS hostname of each server in the distributed deployment.
-
- * - ``/mnt/disk{1...4}/data``
- - The path to each disk on the host machine.
-
- ``/data`` is an optional folder in which the ``minio`` server stores
- all information related to the deployment.
-
- See :mc-cmd:`minio server DIRECTORIES` for more information on
- configuring the backing storage for the :mc:`minio server` process.
-
-The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
-series. Specifically:
-
-- The hostname ``https://minio{1...4}.example.com`` expands to:
-
- - ``https://minio1.example.com``
- - ``https://minio2.example.com``
- - ``https://minio3.example.com``
- - ``https://minio4.example.com``
-
-- ``/mnt/disk{1...4}/data`` expands to
-
- - ``/mnt/disk1/data``
- - ``/mnt/disk2/data``
- - ``/mnt/disk3/data``
- - ``/mnt/disk4/data``
-
-4) Connect to the Server
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the :mc-cmd:`mc alias set` command from a machine with connectivity to any
-hostname running the ``minio`` server. See :ref:`mc-install` for documentation
-on installing :program:`mc`.
-
-.. code-block:: shell
- :class: copyable
-
- mc alias set mylocalminio minio1.example.net minioadmin minio-secret-key-CHANGE-ME
-
-See :ref:`minio-mc-commands` for a list of commands you can run on the
-MinIO server.
-
-Docker Installation
--------------------
-
-Stable MinIO
-~~~~~~~~~~~~
-
-The following ``docker`` command creates a container running the latest stable
-version of the ``minio`` server process:
-
-.. code-block:: shell
- :class: copyable
-
- docker run -p 9000:9000 \
- -e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
- -e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
- -v /mnt/disk1:/disk1 \
- -v /mnt/disk2:/disk2 \
- -v /mnt/disk3:/disk3 \
- -v /mnt/disk4:/disk4 \
- minio/minio server /disk{1...4}
-
-The command uses the following options:
-
-- ``-e MINIO_ACCESS_KEY`` and ``-e MINIO_SECRET_KEY`` for configuring the
- :ref:`root ` user credentials.
-
-- ``-v /mnt/disk:/disk`` for configuring each disk the ``minio``
- server uses.
-
-Bleeding Edge MinIO
-~~~~~~~~~~~~~~~~~~~
-
-*Do not use bleeding-edge deployments of MinIO in production environments*
-
-The following ``docker`` command creates a container running the latest
-bleeding-edge version of the ``minio`` server process:
-
-.. code-block:: shell
- :class: copyable
-
- docker run -p 9000:9000 \
- -e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
- -e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
- -v /mnt/disk1:/disk1 \
- -v /mnt/disk2:/disk2 \
- -v /mnt/disk3:/disk3 \
- -v /mnt/disk4:/disk4 \
- minio/minio:edge server /disk{1...4}
-
-The command uses the following options:
-
-- ``MINIO_ACCESS_KEY`` and ``MINIO_SECRET_KEY`` for configuring the
- :ref:`root ` user credentials.
-
-- ``-v /mnt/disk:/disk`` for configuring each disk the ``minio``
- server uses.
-
-Deployment Recommendations
---------------------------
-
-Minimum Nodes per Deployment
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-For all production deployments, MinIO recommends a *minimum* of 4 nodes per
-cluster. MinIO deployments with *at least* 4 nodes can tolerate the loss of up
-to half the nodes *or* half the disks in the deployment while maintaining
-read and write availability.
-
-For example, assuming a 4-node deployment with 4 drives per node, the
-cluster can tolerate the loss of:
-
-- Any two nodes, *or*
-- Any 8 drives.
-
-The minimum recommendation reflects MinIO's experience with assisting enterprise
-customers in deploying on a variety of IT infrastructures while
-maintaining the desired SLA/SLO. While MinIO may run on less than the
-minimum recommended topology, any potential cost savings come at the risk of
-decreased reliability.
-
-Recommended Hardware
-~~~~~~~~~~~~~~~~~~~~
-
-For MinIO's recommended hardware, please see
-`MinIO Reference Hardware `__.
-
-Bare Metal Infrastructure
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A distributed MinIO deployment can only provide as much availability as the
-bare metal infrastructure on which it is deployed. In particular, consider the
-following potential failure points which could result in cluster downtime
-when configuring your bare metal infrastructure:
-
-- Shared networking resources (switches, routers, ISP).
-- Shared power resources.
-- Shared physical location (rack, datacenter, region).
-
-MinIO deployments using virtual machines or containerized environments should
-also consider the following:
-
-- Shared physical hardware (CPU, Memory, Storage)
-- Shared orchestration management layer (Kubernetes, Docker Swarm)
-
-FreeBSD
--------
-
-MinIO does not provide an official FreeBSD binary. FreeBSD maintains an
-`upstream release `__ you can
-install using `pkg `__:
-
-.. code-block:: shell
- :class: copyable
-
- pkg install minio
- sysrc minio_enable=yes
- sysrc minio_disks=/path/to/disks
- service minio start
\ No newline at end of file
diff --git a/source/minio-features/bucket-notifications.md b/source/concepts/bucket-notifications.md
similarity index 100%
rename from source/minio-features/bucket-notifications.md
rename to source/concepts/bucket-notifications.md
diff --git a/source/minio-features/bucket-versioning.rst b/source/concepts/bucket-versioning.rst
similarity index 100%
rename from source/minio-features/bucket-versioning.rst
rename to source/concepts/bucket-versioning.rst
diff --git a/source/minio-features/erasure-coding.rst b/source/concepts/erasure-coding.rst
similarity index 89%
rename from source/minio-features/erasure-coding.rst
rename to source/concepts/erasure-coding.rst
index 09047bdd..7cff7b77 100644
--- a/source/minio-features/erasure-coding.rst
+++ b/source/concepts/erasure-coding.rst
@@ -12,7 +12,7 @@ Erasure Coding
MinIO Erasure Coding is a data redundancy and availability feature that allows
MinIO deployments to automatically reconstruct objects on-the-fly despite the
-loss of multiple drives or nodes in the cluster.Erasure Coding provides
+loss of multiple drives or nodes in the cluster. Erasure Coding provides
object-level healing with less overhead than adjacent technologies such as
RAID or replication.
@@ -24,19 +24,15 @@ number of nodes, and number of drives per node in the Erasure Set, MinIO can
tolerate the loss of up to half (``N/2``) of drives and still retrieve stored
objects.
-For example, consider the following small-scale MinIO deployment consisting of a
-single :ref:`Server Set ` with 4 :mc:`minio server`
+For example, consider a small-scale MinIO deployment consisting of a
+single :ref:`Server Pool ` with 4 :mc:`minio server`
nodes. Each node in the deployment has 4 locally attached ``1Ti`` drives for
-a total of 16 drives:
-
-
+a total of 16 drives.
MinIO creates :ref:`Erasure Sets ` by dividing the total
number of drives in the deployment into sets consisting of between 4 and 16
drives each. In the example deployment, the largest possible Erasure Set size
-that evenly divides into the total number of drives is ``16``:
-
-
+that evenly divides into the total number of drives is ``16``.
MinIO uses a Reed-Solomon algorithm to split objects into data and parity blocks
based on the size of the Erasure Set. MinIO then uniformly distributes the
@@ -45,8 +41,6 @@ in the set contains no more than one block per object. MinIO uses
the ``EC:N`` notation to refer to the number of parity blocks (``N``) in the
Erasure Set.
-
-
The number of parity blocks in a deployment controls the deployment's relative
data redundancy. Higher levels of parity allow for higher tolerance of drive
loss at the cost of total available storage. For example, using EC:4 in our
@@ -92,9 +86,6 @@ deployment:
- For more information on selecting Erasure Code Parity, see
:ref:`minio-ec-parity`
-- For more information on Erasure Code Object Healing, see
- :ref:`minio-ec-object-healing`.
-
.. _minio-ec-erasure-set:
Erasure Sets
@@ -105,34 +96,34 @@ Erasure Coding. MinIO evenly distributes object data and parity blocks among
the drives in the Erasure Set.
MinIO calculates the number and size of *Erasure Sets* by dividing the total
-number of drives in the :ref:`Server Set ` into sets
+number of drives in the :ref:`Server Pool ` into sets
consisting of between 4 and 16 drives each. MinIO considers two factors when
selecting the Erasure Set size:
- The Greatest Common Divisor (GCD) of the total drives.
-- The number of :mc:`minio server` nodes in the Server Set.
+- The number of :mc:`minio server` nodes in the Server Pool.
For an even number of nodes, MinIO uses the GCD to calculate the Erasure Set
size and ensure the minimum number of Erasure Sets possible. For an odd number
of nodes, MinIO selects a common denominator that results in an odd number of
Erasure Sets to facilitate more uniform distribution of erasure set drives
-among nodes in the Server Set.
+among nodes in the Server Pool.
-For example, consider a Server Set consisting of 4 nodes with 8 drives each
+For example, consider a Server Pool consisting of 4 nodes with 8 drives each
for a total of 32 drives. The GCD of 16 produces 2 Erasure Sets of 16 drives
each with uniform distribution of erasure set drives across all 4 nodes.
-Now consider a Server Set consisting of 5 nodes with 8 drives each for a total
+Now consider a Server Pool consisting of 5 nodes with 8 drives each for a total
of 40 drives. Using the GCD, MinIO would create 4 erasure sets with 10 drives
each. However, this distribution would result in uneven distribution with
one node contributing more drives to the Erasure Sets than the others.
MinIO instead creates 5 erasure sets with 8 drives each to ensure uniform
distribution of Erasure Set drives per node.
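The selection heuristic described above can be sketched as follows. This is a simplified illustration of the documented behavior, not MinIO's actual implementation:

```python
def erasure_set_size(total_drives, nodes):
    # Candidate set sizes: 4 through 16 drives, evenly dividing the total.
    candidates = [d for d in range(4, 17) if total_drives % d == 0]
    if nodes % 2 == 1:
        # Odd node count: prefer a size that yields an odd number of sets,
        # giving a more uniform spread of set drives across nodes.
        odd_sets = [d for d in candidates if (total_drives // d) % 2 == 1]
        if odd_sets:
            return max(odd_sets)
    # Even node count: the largest candidate minimizes the number of sets.
    return max(candidates)
```

For the examples above, ``erasure_set_size(32, 4)`` yields 16 (two sets of 16), and ``erasure_set_size(40, 5)`` yields 8 (five sets of 8).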
-MinIO generally recommends maintaining an even number of nodes in a Server Set
+MinIO generally recommends maintaining an even number of nodes in a Server Pool
to facilitate simplified human calculation of the number and size of
-Erasure Sets in the Server Set.
+Erasure Sets in the Server Pool.
.. _minio-ec-parity:
@@ -179,7 +170,7 @@ Write Quorum
to serve write operations. MinIO requires enough available drives to
eliminate the risk of split-brain scenarios.
- MinIO Write Quorum is ``DRIVES - (EC:N-1)``.
+ MinIO Write Quorum is ``(DRIVES - (EC:N)) + 1``.
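The quorum arithmetic can be expressed directly. This sketch assumes an Erasure Set of ``set_drives`` drives with ``parity`` parity blocks (``EC:N``); the read quorum shown follows from needing any ``DRIVES - (EC:N)`` blocks to reconstruct an object:

```python
def read_quorum(set_drives, parity):
    # Any (data-block count) of the blocks suffices to reconstruct a read.
    return set_drives - parity

def write_quorum(set_drives, parity):
    # One drive more than the read quorum, eliminating split-brain ties.
    return set_drives - parity + 1
```

With 16 drives at ``EC:4``, reads require 12 drives online and writes require 13.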
Storage Classes
~~~~~~~~~~~~~~~
@@ -204,8 +195,26 @@ MinIO provides the following two storage classes:
- The :mc:`mc admin config` command to modify the ``storage_class.standard``
configuration setting.
- Starting with , MinIO defaults ``STANDARD`` storage class to
- ``EC:4``.
+ Starting with :minio-git:`RELEASE.2021-01-30T00-20-58Z
+ `, MinIO defaults
+ ``STANDARD`` storage class based on the number of volumes in the Erasure Set:
+
+ .. list-table::
+ :header-rows: 1
+ :widths: 30 70
+ :width: 100%
+
+ * - Erasure Set Size
+ - Default Parity (EC:N)
+
+ * - 5 or Fewer
+ - EC:2
+
+ * - 6 - 7
+ - EC:3
+
+ * - 8 or more
+ - EC:4
The maximum value is half of the total drives in the
:ref:`Erasure Set `.
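The defaults in the table above can be sketched as a simple lookup; an illustrative helper, not MinIO's configuration code:

```python
def default_standard_parity(set_size):
    # Default EC:N for the STANDARD storage class by Erasure Set size.
    if set_size <= 5:
        return 2
    if set_size <= 7:
        return 3
    return 4
```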
@@ -252,19 +261,12 @@ interfacing with the MinIO server.
created.
-.. _minio-ec-object-healing:
-
-Object Healing
---------------
-
-TODO
-
.. _minio-ec-bitrot-protection:
BitRot Protection
-----------------
-TODO- ReWrite w/ more detail.
+.. TODO- ReWrite w/ more detail.
Silent data corruption or bitrot is a serious problem faced by disk drives
resulting in data getting corrupted without the user’s knowledge. The reasons
diff --git a/source/minio-features/overview.rst b/source/concepts/feature-overview.rst
similarity index 61%
rename from source/minio-features/overview.rst
rename to source/concepts/feature-overview.rst
index ca2c6def..0b86cd7a 100644
--- a/source/minio-features/overview.rst
+++ b/source/concepts/feature-overview.rst
@@ -16,21 +16,28 @@ The following table lists MinIO features and their corresponding documentation:
* - Feature
- Description
- * - :doc:`Bucket Notifications `
+ * - :doc:`Bucket Notifications `
- MinIO Bucket Notifications allows you to automatically publish
notifications to one or more configured notification targets when
specific events occur in a bucket.
- * - :doc:`Bucket Versioning `
+ * - :doc:`Bucket Versioning `
- MinIO Bucket Versioning supports keeping multiple "versions" of an
object in a single bucket. Write operations which would normally
overwrite an existing object instead result in the creation of a new
versioned object.
+ * - :doc:`Erasure Coding `
+ - MinIO Erasure Coding is a data redundancy and availability feature that
+ allows MinIO deployments to automatically reconstruct objects on-the-fly
+ despite the loss of multiple drives or nodes in the cluster. Erasure
+ Coding provides object-level healing with less overhead than adjacent
+ technologies such as RAID or replication.
+
.. toctree::
:titlesonly:
:hidden:
- /minio-features/bucket-notifications
- /minio-features/bucket-versioning
- /minio-features/erasure-coding
\ No newline at end of file
+ /concepts/bucket-notifications
+ /concepts/bucket-versioning
+ /concepts/erasure-coding
\ No newline at end of file
diff --git a/source/conf.py b/source/conf.py
index 478ef1fe..aa4c4a7c 100644
--- a/source/conf.py
+++ b/source/conf.py
@@ -62,6 +62,8 @@ extlinks = {
'iam-docs' : ('https://docs.aws.amazon.com/IAM/latest/UserGuide/%s',''),
'release' : ('https://github.com/minio/mc/releases/tag/%s',''),
'legacy' : ('https://docs.min.io/docs/%s',''),
+ 'docs-k8s' : ('https://docs.min.io/minio/k8s/%s',''),
+
}
# Add any paths that contain templates here, relative to this directory.
@@ -97,7 +99,7 @@ html_theme_options = {
'show_relbars': 'false'
}
-html_short_title = "MinIO Hybrid Cloud"
+html_short_title = "MinIO Object Storage for Baremetal Infrastructure"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
diff --git a/source/index.rst b/source/index.rst
index 490d4fc1..8cc634d0 100644
--- a/source/index.rst
+++ b/source/index.rst
@@ -12,17 +12,16 @@ First-time users of MinIO *or* object storage services should start with
our :doc:`Introduction `.
Users deploying onto a Kubernetes cluster should start with our
-:doc:`Kubernetes documentation `.
+:docs-k8s:`Kubernetes documentation <>`.
.. toctree::
:titlesonly:
:hidden:
/introduction/minio-overview
- /minio-features/overview
- /bare-metal/minio-baremetal-overview
- /kubernetes/minio-kubernetes-overview
+ /concepts/feature-overview
+ /tutorials/minio-installation
/security/security-overview
- /minio-cli/minio-mc
- /minio-cli/minio-mc-admin
- /minio-server/minio-server
+ /reference/minio-cli/minio-mc
+ /reference/minio-cli/minio-mc-admin
+ /reference/minio-server/minio-server
diff --git a/source/introduction/minio-overview.rst b/source/introduction/minio-overview.rst
index 148e25bc..5932d64a 100644
--- a/source/introduction/minio-overview.rst
+++ b/source/introduction/minio-overview.rst
@@ -31,29 +31,30 @@ needs to store a variety of blobs, including rich multimedia like videos and
images. The structure of objects on the MinIO server might look similar to the
following:
-.. code-block:: shell
+.. code-block:: text
/ #root
/images/
- 2020-01-02-blog-title.png
- 2020-01-03-blog-title.png
+ 2020-01-02-MinIO-Diagram.png
+ 2020-01-03-MinIO-Advanced-Deployment.png
+ MinIO-Logo.png
/videos/
- 2020-01-03-blog-cool-video.mp4
- /blogs/
- 2020-01-02-blog.md
- 2020-01-03-blog.md
- /comments/
- 2020-01-02-blog-comments.json
- 2020-01-02-blog-comments.json
+ 2020-01-04-MinIO-Interview.mp4
+ /articles/
+ /john.doe/
+ 2020-01-02-MinIO-Object-Storage.md
+ 2020-01-02-MinIO-Object-Storage-comments.json
+ /jane.doe/
+ 2020-01-03-MinIO-Advanced-Deployment.md
+ 2020-01-03-MinIO-Advanced-Deployment-comments.json
+ 2020-01-04-MinIO-Interview.md
+
+MinIO supports multiple levels of nested directories and objects to serve
+even the most dynamic object storage workloads.
Deployment Architecture
-----------------------
-The following diagram describes the individual components in a MinIO
-deployment:
-
- ServerSet -> Cluster >
-
:ref:`Erasure Set `
A set of disks that supports MinIO :ref:`Erasure Coding
`. Erasure Coding provides high availability,
@@ -66,66 +67,68 @@ deployment:
impact despite the loss of up to half (``N/2``) of the total drives in the
deployment.
-.. _minio-intro-server-set:
+.. _minio-intro-server-pool:
-:ref:`Server Set `
+:ref:`Server Pool `
A set of MinIO :mc-cmd:`minio server` nodes which pool their drives and
resources for supporting object storage/retrieval requests. The
:mc-cmd:`~minio server HOSTNAME` argument passed to the
- :mc-cmd:`minio server` command represents a Server Set:
+ :mc-cmd:`minio server` command represents a Server Pool:
.. code-block:: shell
minio server https://minio{1...4}.example.net/mnt/disk{1...4}
- | Server Set |
+ | Server Pool |
- The above example describes a single Server Set with
+ The above example describes a single Server Pool with
4 :mc:`minio server` nodes and 4 drives each for a total of 16 drives.
MinIO requires starting each :mc:`minio server` in the set with the same
startup command to enable awareness of all set peers.
See :mc-cmd:`minio server` for complete syntax and usage.
- MinIO calculates the size and number of Erasure Sets in the Server Set based
+ MinIO calculates the size and number of Erasure Sets in the Server Pool based
on the total number of drives in the set *and* the number of :mc:`minio`
servers in the set. See :ref:`minio-ec-erasure-set` for more information.
.. _minio-intro-cluster:
:ref:`Cluster `
- The whole MinIO deployment consisting of one or more Server Sets. Each
+ The whole MinIO deployment consisting of one or more Server Pools. Each
:mc-cmd:`~minio server HOSTNAME` argument passed to the
- :mc-cmd:`minio server` command represents one Server Set:
+ :mc-cmd:`minio server` command represents one Server Pool:
.. code-block:: shell
minio server https://minio{1...4}.example.net/mnt/disk{1...4} \
https://minio{5...8}.example.net/mnt/disk{1...4}
- | Server Set |
+ | Server Pool |
- The above example describes two Server Sets, each consisting of 4
- :mc:`minio server` nodes with 4 drives each for a total of 32 drives.
+ The above example describes two Server Pools, each consisting of 4
+ :mc:`minio server` nodes with 4 drives each for a total of 32 drives. MinIO
+ always stores each unique object and all versions of that object on the
+ same Server Pool.
- Server Set expansion is a function of Horizontal Scaling, where each new set
- expands the cluster storage and compute resources. Server Set expansion
+ Server Pool expansion is a function of Horizontal Scaling, where each new set
+ expands the cluster storage and compute resources. Server Pool expansion
is not intended to support migrating existing sets to newer hardware.
- MinIO Standalone clusters consist of a single Server Set with a single
+ MinIO Standalone clusters consist of a single Server Pool with a single
:mc:`minio server` node. Standalone clusters are best suited for initial
development and evaluation. MinIO strongly recommends production
clusters consist of a *minimum* of 4 :mc:`minio server` nodes in a
- Server Set.
+ Server Pool.
Deploying MinIO
---------------
-For Kubernetes clusters, use the MinIO Kubernetes Operator.
-See :ref:`minio-kubernetes` for more information.
+Users deploying onto a Kubernetes cluster should start with our
+:docs-k8s:`Kubernetes documentation <>`.
For bare-metal environments, including private cloud services
or containerized environments, install and run the :mc:`minio server` on
-each host in the MinIO deployment. See :ref:`minio-baremetal` for more
-information.
+each host in the MinIO deployment.
+See :ref:`minio-installation` for more information.
diff --git a/source/kubernetes/minio-kubernetes-overview.rst b/source/kubernetes/minio-kubernetes-overview.rst
deleted file mode 100644
index f3756dda..00000000
--- a/source/kubernetes/minio-kubernetes-overview.rst
+++ /dev/null
@@ -1,880 +0,0 @@
-.. _minio-kubernetes:
-
-=======================
-MinIO Kubernetes Plugin
-=======================
-
-.. default-domain:: minio
-
-.. contents:: Table of Contents
- :local:
- :depth: 2
-
-Overview
---------
-
-MinIO is a high performance distributed object storage server, designed for
-large-scale private cloud infrastructure. Orchestration platforms like
-Kubernetes provide perfect cloud-native environment to deploy and scale MinIO.
-The :minio-git:`MinIO Kubernetes Operator ` brings native MinIO
-support to Kubernetes.
-
-The :mc:`kubectl minio` plugin brings native support for deploying MinIO
-tenants to Kubernetes clusters using the ``kubectl`` CLI. You can use
-:mc:`kubectl minio` to deploy a MinIO tenant with little to no interaction
-with ``YAML`` configuration files.
-
-.. image:: /images/Kubernetes-Minio.svg
- :align: center
- :width: 90%
- :class: no-scaled-link
- :alt: Kubernetes Orchestration with the MinIO Operator facilitates automated deployment of MinIO clusters.
-
-:mc:`kubectl minio` builds its interface on top of the
-MinIO Kubernetes Operator. Visit the
-:minio-git:`MinIO Operator ` Github repository to follow
-ongoing development on the Operator and Plugin.
-
-Installation
-------------
-
-**Prerequisite**
-
-Install the `krew `__ ``kubectl``
-plugin manager using the `documented installation procedure
-`__.
-
-Install Using ``krew``
-~~~~~~~~~~~~~~~~~~~~~~
-
-Run the following command to install :mc:`kubectl minio` using ``krew``:
-
-.. code-block:: shell
- :class: copyable
-
- kubectl krew update
- kubectl krew install minio
-
-Update Using ``krew``
-~~~~~~~~~~~~~~~~~~~~~
-
-Run the following command to update :mc:`kubectl minio`:
-
-.. code-block:: shell
- :class: copyable
-
- kubectl krew upgrade
-
-Deploy a MinIO Tenant
----------------------
-
-The following procedure creates a MinIO tenant using the
-:mc:`kubectl minio` plugin.
-
-1) Initialize the MinIO Operator
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-:mc:`kubectl minio` requires the MinIO Operator. Use the
-:mc-cmd:`kubectl minio init` command to initialize the MinIO Operator:
-
-.. code-block:: shell
- :class: copyable
-
- kubectl minio init
-
-The example command deploys the MinIO operator to the ``default`` namespace.
-Include the :mc-cmd-option:`~kubectl minio init namespace` option to
-specify the namespace you want to deploy the MinIO operator into.
-
-2) Configure the Persistent Volumes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Create a :kube-docs:`Persistent Volume (PV) `
-for each drive on each node.
-
-MinIO recommends using :kube-docs:`local ` PVs
-to ensure best performance and operations:
-
-a. Create a ``StorageClass`` for the MinIO ``local`` Volumes
-````````````````````````````````````````````````````````````
-
-.. container:: indent
-
- The following YAML describes a
- :kube-docs:`StorageClass ` with the
- appropriate fields for use with the ``local`` PV:
-
- .. code-block:: yaml
- :class: copyable
-
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: local-storage
- provisioner: kubernetes.io/no-provisioner
- volumeBindingMode: WaitForFirstConsumer
-
- The ``StorageClass`` **must** have ``volumeBindingMode`` set to
- ``WaitForFirstConsumer`` to ensure correct binding of each pod's
- :kube-docs:`Persistent Volume Claims (PVC)
- ` to the
- Node ``PV``.
-
-b. Create the Required Persistent Volumes
-`````````````````````````````````````````
-
-.. container:: indent
-
- The following YAML describes a ``PV`` ``local`` volume:
-
- .. code-block:: yaml
- :class: copyable
- :emphasize-lines: 4, 12, 14, 22
-
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: PV-NAME
- spec:
- capacity:
- storage: 100Gi
- volumeMode: Filesystem
- accessModes:
- - ReadWriteOnce
- persistentVolumeReclaimPolicy: Retain
- storageClassName: local-storage
- local:
- path: /mnt/disks/ssd1
- nodeAffinity:
- required:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/hostname
- operator: In
- values:
- - NODE-NAME
-
- .. list-table::
- :header-rows: 1
- :widths: 20 80
- :width: 100%
-
- * - Field
- - Description
-
- * - .. code-block:: yaml
-
- metadata:
- name:
-
- - Set to a name that supports easy visual identification of the
- ``PV`` and its associated physical host. For example, for a ``PV`` on
- host ``minio-1``, consider specifying ``minio-1-pv-1``.
-
- * - .. code-block:: yaml
-
- nodeAfinnity:
- required:
- nodeSelectorTerms:
- - key:
- values:
-
- - Set to the name of the node on which the physical disk is
- installed.
-
- * - .. code-block:: yaml
-
- spec:
- storageClassName:
-
- - Set to the ``StorageClass`` created for supporting the
- MinIO ``local`` volumes.
-
- * - .. code-block:: yaml
-
- spec:
- local:
- path:
-
- - Set to the full file path of the locally-attached disk. You
- can specify a directory on the disk to isolate MinIO-specific data.
- The specified disk or directory **must** be empty for MinIO to start.
-
- Create one ``PV`` for each volume in the MinIO tenant. For example, given a
- Kubernetes cluster with 4 Nodes with 4 locally attached drives each, create a
- total of 16 ``local`` ``PVs``.
-
-c. Validate the Created PV
-``````````````````````````
-
-.. container:: indent
-
-   Issue the ``kubectl get pv`` command to validate the created PVs:
-
- .. code-block:: shell
- :class: copyable
-
-      kubectl get pv
-
-3) Create a Namespace for the MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the ``kubectl create namespace`` command to create a namespace for
-the MinIO Tenant:
-
-.. code-block:: shell
- :class: copyable
-
- kubectl create namespace minio-tenant-1
-
-4) Create the MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the :mc-cmd:`kubectl minio tenant create` command to create the MinIO
-Tenant.
-
-The following example creates a 4-node MinIO deployment with a
-total capacity of 16Ti across 16 drives.
-
-.. code-block:: shell
- :class: copyable
-
- kubectl minio tenant create \
- --name minio-tenant-1 \
- --servers 4 \
- --volumes 16 \
- --capacity 16Ti \
- --storageClassName local-storage \
- --namespace minio-tenant-1
-
-The following table explains each argument specified to the command:
-
-.. list-table::
- :header-rows: 1
- :widths: 30 70
- :width: 100%
-
- * - Argument
- - Description
-
- * - :mc-cmd-option:`~kubectl minio tenant create name`
- - The name of the MinIO Tenant which the command creates.
-
- * - :mc-cmd-option:`~kubectl minio tenant create servers`
- - The number of :mc:`minio` servers to deploy across the Kubernetes
- cluster.
-
- * - :mc-cmd-option:`~kubectl minio tenant create volumes`
- - The number of volumes in the cluster. :mc:`kubectl minio` determines the
- number of volumes per server by dividing ``volumes`` by ``servers``.
-
- * - :mc-cmd-option:`~kubectl minio tenant create capacity`
- - The total capacity of the cluster. :mc:`kubectl minio` determines the
- capacity of each volume by dividing ``capacity`` by ``volumes``.
-
- * - :mc-cmd-option:`~kubectl minio tenant create namespace`
- - The Kubernetes namespace in which to deploy the MinIO Tenant.
-
- * - :mc-cmd-option:`~kubectl minio tenant create storageClassName`
- - The Kubernetes ``StorageClass`` to use when creating each PVC.
-
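-The sizing arithmetic described above can be checked directly (illustrative
-shell arithmetic only, not a plugin command):
-
-.. code-block:: shell
-
-   # 16 volumes across 4 servers -> 4 PVCs per server
-   echo $(( 16 / 4 ))
-   # 16Ti across 16 volumes -> a 1Ti storage request per PVC
-   echo "$(( 16 / 16 ))Ti"
-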
-If :mc-cmd:`kubectl minio tenant create` succeeds in creating the MinIO Tenant,
-the command outputs connection information to the terminal. The output includes
-the credentials for the :mc:`minio` :ref:`root ` user and
-the MinIO Console Service.
-
-.. code-block:: shell
- :emphasize-lines: 1-3, 7-9
-
- Tenant
- Access Key: 999466bb-8bd6-4d73-8115-61df1b0311f4
- Secret Key: f8e5ecc3-7657-493b-b967-aaf350daeec9
- Version: minio/minio:RELEASE.2020-09-26T03-44-56Z
- ClusterIP Service: minio-tenant-1-internal-service
-
- MinIO Console
- Access Key: e9ae0f3f-18e5-44c6-a2aa-dc2e95497734
- Secret Key: 498ae13a-2f70-4adf-a38e-730d24327426
- Version: minio/console:v0.3.14
- ClusterIP Service: minio-tenant-1-console
-
-:mc-cmd:`kubectl minio` stores all credentials using Kubernetes Secrets, where
-each secret is prefixed with the tenant
-:mc-cmd:`name `:
-
-.. code-block:: shell
-
- > kubectl get secrets --namespace minio-tenant-1
-
- NAME TYPE DATA AGE
-
- minio-tenant-1-console-secret Opaque 5 123d4h
- minio-tenant-1-console-tls Opaque 2 123d4h
- minio-tenant-1-creds-secret Opaque 2 123d4h
- minio-tenant-1-tls Opaque 2 123d4h
-
-Kubernetes administrators with the correct permissions can view the secret
-contents and extract the access and secret key:
-
-.. code-block:: shell
-
- kubectl get secrets minio-tenant-1-creds-secret -o yaml
-
-The access key and secret key are ``base64`` encoded. You must decode the
-values prior to specifying them to :mc:`mc` or other S3-compatible tools.
-
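-For example, decoding a value as it appears in the secret ``data`` fields (the
-encoded string below is an illustrative stand-in, not a real credential):
-
-.. code-block:: shell
-   :class: copyable
-
-   # Secret values shown by `kubectl get secrets -o yaml` are base64 encoded.
-   printf '%s' 'OTk5NDY2YmItOGJkNi00ZDcz' | base64 --decode
-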
-5) Configure Access to the Service
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-:mc:`kubectl minio` creates a service for the MinIO Tenant.
-Use ``kubectl get svc`` to retrieve the service name:
-
-.. code-block:: shell
- :class: copyable
-
- kubectl get svc --namespace minio-tenant-1
-
-The command returns output similar to the following:
-
-.. code-block:: shell
-
-   NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
-   minio                    ClusterIP   10.109.88.X   <none>        443/TCP             137m
-   minio-tenant-1-console   ClusterIP   10.97.87.X    <none>        9090/TCP,9443/TCP   129m
-   minio-tenant-1-hl        ClusterIP   None          <none>        9000/TCP            137m
-
-The created services are visible only within the Kubernetes cluster. External
-access to Kubernetes cluster resources requires creating an
-:kube-docs:`Ingress object ` that routes
-traffic from an externally-accessible IP address or hostname to the ``minio``
-service. Configuring Ingress also requires creating an
-:kube-docs:`Ingress Controller
-` in the cluster.
-Defer to the :kube-docs:`Kubernetes Documentation
-` for guidance on creating and configuring the
-required resources for external access to cluster resources.
-
-The following example Ingress object depends on the
-`NGINX Ingress Controller for Kubernetes
-`__.
-The example is intended as a *demonstration* for creating an Ingress object and
-may not reflect the configuration and topology of your Kubernetes cluster and
-MinIO tenant. You may need to add or remove listed fields to suit your
-Kubernetes cluster. **Do not** use this example as-is or without modification.
-
-.. code-block:: yaml
-
-    apiVersion: networking.k8s.io/v1
-    kind: Ingress
-    metadata:
-      name: minio-ingress
-      annotations:
-        kubernetes.io/tls-acme: "true"
-        kubernetes.io/ingress.class: "nginx"
-        nginx.ingress.kubernetes.io/proxy-body-size: 1024m
-    spec:
-      tls:
-      - hosts:
-        - minio.example.com
-        secretName: minio-ingress-tls
-      rules:
-      - host: minio.example.com
-        http:
-          paths:
-          - path: /
-            pathType: Prefix
-            backend:
-              service:
-                name: minio
-                port:
-                  name: http
-
-MinIO Kubernetes Plugin Syntax
-------------------------------
-
-.. mc:: kubectl minio
-
-Create the MinIO Operator
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. mc-cmd:: init
- :fullpath:
-
- Initializes the MinIO Operator. :mc:`kubectl minio` requires the operator for
- core functionality.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
- kubectl minio init [FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: image
- :option:
-
- The image to use for deploying the operator.
- Defaults to the :minio-git:`latest release of the operator
- `:
-
- ``minio/k8s-operator:latest``
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace into which to deploy the operator.
-
- Defaults to ``minio-operator``.
-
- .. mc-cmd:: cluster-domain
- :option:
-
- The domain name to use when configuring the DNS hostname of the
- operator. Defaults to ``cluster.local``.
-
- .. mc-cmd:: namespace-to-watch
- :option:
-
- The namespace which the operator watches for MinIO tenants.
-
- Defaults to ``""`` or *all namespaces*.
-
- .. mc-cmd:: image-pull-secret
- :option:
-
- Secret key for use with pulling the
- :mc-cmd-option:`~kubectl minio init image`.
-
- The MinIO-hosted ``minio/k8s-operator`` image is *not* password protected.
- This option is only required for non-MinIO image sources which are
- password protected.
-
- .. mc-cmd:: output
- :option:
-
- Performs a dry run and outputs the generated YAML to ``STDOUT``. Use
- this option to customize the YAML and apply it manually using
-      ``kubectl apply -f <FILE>``.
-
-Delete the MinIO Operator
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. mc-cmd:: delete
- :fullpath:
-
- Deletes the MinIO Operator along with all associated resources,
- including all MinIO Tenant instances in the
- :mc-cmd:`watched namespace `.
-
- .. warning::
-
- If the underlying Persistent Volumes (``PV``) were created with
- a reclaim policy of ``recycle`` or ``delete``, deleting the MinIO
- Tenant results in complete loss of all objects stored on the tenant.
-
- Ensure you have performed all due diligence in confirming the safety of
- any data on the MinIO Tenant prior to deletion.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
- kubectl minio delete [FLAGS]
-
- The command accepts the following arguments:
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace of the MinIO operator to delete.
-
- Defaults to ``minio-operator``.
-
-Create a MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~~
-
-.. include:: /includes/facts-kubectl-plugin.rst
- :start-after: start-kubectl-minio-requires-operator-desc
- :end-before: end-kubectl-minio-requires-operator-desc
-
-
-.. mc-cmd:: tenant create
- :fullpath:
-
- Creates a MinIO Tenant using the
- :minio-git:`latest release ` of :mc:`minio`:
-
- ``minio/minio:latest``
-
-   The command creates the following resources in the Kubernetes cluster:
-
- - The MinIO Tenant.
-
- - Persistent Volume Claims (``PVC``) for each
- :mc-cmd:`volume ` in the tenant.
-
- - Pods for each
- :mc-cmd:`server ` in the tenant.
-
- - Kubernetes secrets for storing access keys and secret keys. Each
- secret is prefixed with the Tenant name.
-
- - The MinIO Console Service (MCS). See the :minio-git:`console `
- Github repository for more information on MCS.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
- kubectl minio tenant create \
-        --name NAME \
- --servers SERVERS \
- --volumes VOLUMES \
- --capacity CAPACITY \
- --storageClassName STORAGECLASS \
- [OPTIONAL_FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: name
- :option:
-
- *Required*
-
- The name of the MinIO tenant which the command creates. The
- name *must* be unique in the
- :mc-cmd:`~kubectl minio tenant create namespace`.
-
- .. mc-cmd:: servers
- :option:
-
- *Required*
-
- The number of :mc:`minio` servers to deploy on the Kubernetes cluster.
-
- Ensure that the specified number of
- :mc-cmd-option:`~kubectl minio tenant create servers` does *not*
- exceed the number of nodes in the Kubernetes cluster. MinIO strongly
- recommends sizing the cluster to have one node per MinIO server.
-
- .. mc-cmd:: volumes
- :option:
-
- *Required*
-
- The number of volumes in the MinIO tenant. :mc:`kubectl minio`
- generates one Persistent Volume Claim (``PVC``) for each volume.
- :mc:`kubectl minio` divides the
- :mc-cmd-option:`~kubectl minio tenant create capacity` by the number of
- volumes to determine the amount of ``resources.requests.storage`` to
- set for each ``PVC``.
-
- :mc:`kubectl minio` determines
-      the number of ``PVC`` to associate with each :mc:`minio` server by dividing
- :mc-cmd-option:`~kubectl minio tenant create volumes` by
- :mc-cmd-option:`~kubectl minio tenant create servers`.
-
- :mc:`kubectl minio` also configures each ``PVC`` with node-aware
- selectors, such that the :mc:`minio` server process uses a ``PVC``
-      that corresponds to a ``local`` Persistent Volume (``PV``) on the
- same node running that process. This ensures that each process
- uses local disks for optimal performance.
-
- If the specified number of volumes exceeds the number of
- ``PV`` available on the cluster, :mc:`kubectl minio tenant create`
- hangs and waits until the required ``PV`` exist.
-
- .. mc-cmd:: capacity
- :option:
-
- *Required*
-
- The total capacity of the MinIO tenant. :mc:`kubectl minio` divides
- the capacity by the number of
- :mc-cmd-option:`~kubectl minio tenant create volumes` to determine the
- amount of ``resources.requests.storage`` to set for each
- Persistent Volume Claim (``PVC``).
-
- If the existing Persistent Volumes (``PV``) in the cluster cannot
- satisfy the requested storage, :mc:`kubectl minio tenant create`
- hangs and waits until the required storage exists.
-
- .. mc-cmd:: storageClassName
- :option:
-
- *Required*
-
- The name of the Kubernetes
- :kube-docs:`Storage Class ` to use
- when creating Persistent Volume Claims (``PVC``) for the
- MinIO Tenant. The specified
- :mc-cmd-option:`~kubectl minio tenant create storageClassName`
- *must* match the ``StorageClassName`` of the Persistent Volumes (``PVs``)
- to which the ``PVCs`` should bind.
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace in which to create the MinIO Tenant.
-
- Defaults to ``minio``.
-
- .. mc-cmd:: kes-config
- :option:
-
- The name of the Kubernetes Secret which contains the
- MinIO Key Encryption Service (KES) configuration.
-
- .. mc-cmd:: output
- :option:
-
- Outputs the generated ``YAML`` objects to ``STDOUT`` for further
- customization.
-
- :mc-cmd-option:`~kubectl minio tenant create output` does
-      **not** create the MinIO Tenant. Use ``kubectl apply -f <FILE>`` to
- manually create the MinIO tenant using the generated file.
-
-Expand a MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~~
-
-.. include:: /includes/facts-kubectl-plugin.rst
- :start-after: start-kubectl-minio-requires-operator-desc
- :end-before: end-kubectl-minio-requires-operator-desc
-
-.. mc-cmd:: tenant expand
- :fullpath:
-
- Adds a new zone to an existing MinIO Tenant.
-
- The command creates the new zone using the
- :minio-git:`latest release ` of :mc:`minio`:
-
- ``minio/minio:latest``
-
- Consider using :mc-cmd:`kubectl minio tenant upgrade` to upgrade the
- MinIO tenant *before* adding the new zone to ensure consistency across the
- entire tenant.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
- kubectl minio tenant expand \
-        --name NAME \
- --servers SERVERS \
- --volumes VOLUMES \
- --capacity CAPACITY \
- [OPTIONAL_FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: name
- :option:
-
- *Required*
-
- The name of the MinIO Tenant which the command expands.
-
- .. mc-cmd:: servers
- :option:
-
- *Required*
-
- The number of :mc:`minio` servers to deploy in the new MinIO Tenant zone.
-
- Ensure that the specified number of
- :mc-cmd-option:`~kubectl minio tenant expand servers` does *not* exceed
- the number of unused nodes in the Kubernetes cluster. MinIO strongly
- recommends sizing the cluster to have one node per MinIO server in the new
- zone.
-
- .. mc-cmd:: volumes
- :option:
-
- *Required*
-
- The number of volumes in the new MinIO Tenant zone.
- :mc:`kubectl minio` generates one Persistent Volume Claim (``PVC``) for
- each volume. :mc:`kubectl minio` divides the
- :mc-cmd-option:`~kubectl minio tenant expand capacity` by the number of
- volumes to determine the amount of ``resources.requests.storage`` to set
- for each ``PVC``.
-
- :mc:`kubectl minio` determines
-         the number of ``PVC`` to associate with each :mc:`minio` server by dividing
- :mc-cmd-option:`~kubectl minio tenant expand volumes` by
- :mc-cmd-option:`~kubectl minio tenant expand servers`.
-
- :mc:`kubectl minio` also configures each ``PVC`` with node-aware
- selectors, such that the :mc:`minio` server process uses a ``PVC``
-         that corresponds to a ``local`` Persistent Volume (``PV``) on the
- same node running that process. This ensures that each process
- uses local disks for optimal performance.
-
- If the specified number of volumes exceeds the number of
- ``PV`` available on the cluster, :mc:`kubectl minio tenant expand`
- hangs and waits until the required ``PV`` exist.
-
- .. mc-cmd:: capacity
- :option:
-
- *Required*
-
- The total capacity of the new MinIO Tenant zone. :mc:`kubectl minio`
- divides the capacity by the number of
- :mc-cmd-option:`~kubectl minio tenant expand volumes` to determine the
- amount of ``resources.requests.storage`` to set for each
- Persistent Volume Claim (``PVC``).
-
- If the existing Persistent Volumes (``PV``) in the cluster cannot
- satisfy the requested storage, :mc:`kubectl minio tenant expand`
- hangs and waits until the required storage exists.
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace in which to create the new MinIO Tenant zone. The namespace
- *must* match that of the MinIO Tenant being extended.
-
- Defaults to ``minio``.
-
- .. mc-cmd:: output
- :option:
-
- Outputs the generated ``YAML`` objects to ``STDOUT`` for further
- customization.
-
- :mc-cmd-option:`~kubectl minio tenant expand output` does **not** create
-      the new MinIO Tenant zone. Use ``kubectl apply -f <FILE>`` to manually
- create the MinIO tenant using the generated file.
-
-Get MinIO Tenant Zones
-~~~~~~~~~~~~~~~~~~~~~~
-
-.. include:: /includes/facts-kubectl-plugin.rst
- :start-after: start-kubectl-minio-requires-operator-desc
- :end-before: end-kubectl-minio-requires-operator-desc
-
-.. mc-cmd:: tenant info
- :fullpath:
-
- Lists all existing MinIO zones in a MinIO Tenant.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
-      kubectl minio tenant info --name NAME [OPTIONAL_FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: name
- :option:
-
- *Required*
-
- The name of the MinIO Tenant for which the command returns the
- existing zones.
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace in which to look for the MinIO Tenant.
-
- Defaults to ``minio``.
-
-Upgrade MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~
-
-.. include:: /includes/facts-kubectl-plugin.rst
- :start-after: start-kubectl-minio-requires-operator-desc
- :end-before: end-kubectl-minio-requires-operator-desc
-
-.. mc-cmd:: tenant upgrade
- :fullpath:
-
- Upgrades the :mc:`minio` server Docker image used by the MinIO Tenant.
-
- .. important::
-
- MinIO upgrades *all* :mc:`minio` server processes at once. This may
-      result in a brief period of downtime if a majority (``n/2+1``) of
- servers are offline at the same time.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
-      kubectl minio tenant upgrade --name NAME [OPTIONAL_FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: name
- :option:
-
- *Required*
-
- The name of the MinIO Tenant which the command updates.
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace in which to look for the MinIO Tenant.
-
- Defaults to ``minio``.
-
-Delete a MinIO Tenant
-~~~~~~~~~~~~~~~~~~~~~
-
-.. include:: /includes/facts-kubectl-plugin.rst
- :start-after: start-kubectl-minio-requires-operator-desc
- :end-before: end-kubectl-minio-requires-operator-desc
-
-.. mc-cmd:: tenant delete
- :fullpath:
-
- Deletes the MinIO Tenant and its associated resources.
-
-   Kubernetes only deletes the MinIO Tenant Persistent Volume Claims (``PVC``)
- if the underlying Persistent Volumes (``PV``) were created with a
- reclaim policy of ``recycle`` or ``delete``. ``PV`` with a reclaim policy of
- ``retain`` require manual deletion of their associated ``PVC``.
-
- Deletion of the underlying ``PV``, whether automatic or manual, results in
- the loss of any objects stored on the MinIO Tenant. Perform all due
- diligence in ensuring the safety of stored data *prior* to deleting the
- tenant.
-
- The command has the following syntax:
-
- .. code-block:: shell
- :class: copyable
-
-      kubectl minio tenant delete --name NAME [OPTIONAL_FLAGS]
-
- The command supports the following arguments:
-
- .. mc-cmd:: name
- :option:
-
- *Required*
-
- The name of the MinIO Tenant to delete.
-
- .. mc-cmd:: namespace
- :option:
-
- The namespace in which to look for the MinIO Tenant.
-
- Defaults to ``minio``.
-
-.. toctree::
- :hidden:
- :titlesonly:
-
- /kubernetes/minio-operator-reference
\ No newline at end of file
diff --git a/source/kubernetes/minio-operator-reference.rst b/source/kubernetes/minio-operator-reference.rst
deleted file mode 100644
index 9bdfd27d..00000000
--- a/source/kubernetes/minio-operator-reference.rst
+++ /dev/null
@@ -1,1221 +0,0 @@
-.. _minio-operator:
-
-=========================
-MinIO Kubernetes Operator
-=========================
-
-.. default-domain:: minio
-
-.. contents:: Table of Contents
- :local:
- :depth: 2
-
-Overview
---------
-
-The MinIO Kubernetes Operator ("MinIO Operator") brings native support for
-deploying and managing MinIO deployments ("MinIO Tenant") on a Kubernetes
-cluster.
-
-The MinIO Operator requires familiarity with interacting with a Kubernetes
-cluster, including but not limited to using the ``kubectl`` command line tool
-and interacting with Kubernetes ``YAML`` objects. Users who prefer a
-simpler experience should use the :ref:`minio-kubernetes` for deploying
-and managing MinIO Tenants.
-
-
-Deploying the MinIO Operator
-----------------------------
-
-The following operations deploy the MinIO operator using ``kustomize``
-templates. Users who prefer a simpler deployment experience
-that does *not* require familiarity with ``kustomize`` should use the
-:ref:`minio-kubernetes` for deploying and managing MinIO Tenants.
-
-.. tabs::
-
- .. tab:: ``kubectl``
-
- Use the following command to deploy the MinIO Operator using
- ``kubectl`` and ``kustomize`` templates:
-
- .. code-block::
- :class: copyable
- :substitutions:
-
- kubectl apply -k github.com/minio/operator/\?ref\=|minio-operator-latest-version|
-
- .. tab:: ``kustomize``
-
-
- Use :github:`kustomize ` to deploy the
- MinIO Operator using ``kustomize`` templates:
-
- .. code-block::
- :class: copyable
- :substitutions:
-
- kustomize build github.com/minio/operator/\?ref\=|minio-operator-latest-version| \
- > minio-operator-|minio-operator-latest-version|.yaml
-
-
-
-MinIO Tenant Object
--------------------
-
-The following example Kubernetes object describes a MinIO Tenant with the
-following resources:
-
-- 4 :mc:`minio` server processes.
-- 4 Volumes per server.
-- 2 MinIO Console Service (MCS) processes.
-
-.. ToDo : - 2 MinIO Key Encryption Service (KES) processes.
-
-.. code-block:: yaml
- :class: copyable
-
- apiVersion: minio.min.io/v1
- kind: Tenant
- metadata:
- creationTimestamp: null
- name: minio-tenant-1
- namespace: minio-tenant-1
- scheduler:
- name: ""
- spec:
- certConfig: {}
- console:
- consoleSecret:
- name: minio-tenant-1-console-secret
- image: minio/console:v0.3.14
- metadata:
- creationTimestamp: null
- name: minio-tenant-1
- replicas: 2
- resources: {}
- credsSecret:
- name: minio-tenant-1-creds-secret
- image: minio/minio:RELEASE.2020-09-26T03-44-56Z
- imagePullSecret: {}
- liveness:
- initialDelaySeconds: 10
- periodSeconds: 1
- timeoutSeconds: 1
- mountPath: /export
- requestAutoCert: true
- serviceName: minio-tenant-1-internal-service
- zones:
- - resources: {}
- servers: 4
- volumeClaimTemplate:
- apiVersion: v1
- kind: persistentvolumeclaims
- metadata:
- creationTimestamp: null
- spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: local-storage
- resources:
- requests:
- storage: 10Gi
- status: {}
- volumesPerServer: 4
-
-
-MinIO Operator ``YAML`` Reference
----------------------------------
-
-The MinIO Operator adds a
-:kube-api:`CustomResourceDefinition
-<#customresourcedefinition-v1-apiextensions-k8s-io>` that extends the
-Kubernetes Object API to support creating MinIO ``Tenant`` objects.
-
-.. tabs::
-
- .. tab:: All Top-Level Fields
-
- The following ``YAML`` block describes a MinIO Tenant object and its
- top-level fields.
-
- .. parsed-literal::
-
- :kubeconf:`apiVersion`: minio.min.io/v1
- :kubeconf:`kind`: Tenant
- :kubeconf:`metadata`:
- :kubeconf:`~metadata.name`: minio
- :kubeconf:`~metadata.namespace`:
- :kubeconf:`~metadata.labels`:
- app: minio
- :kubeconf:`~metadata.annotations`:
- prometheus.io/path:
- prometheus.io/port: ""
- prometheus.io/scrape: ""
- :kubeconf:`spec`:
- :kubeconf:`~spec.certConfig`: