mirror of https://github.com/minio/docs.git synced 2025-06-04 08:42:23 +03:00

GA Fixups

GA Preparations
This commit is contained in:
ravindk89 2021-02-08 20:48:12 -05:00
parent 50d0e4e729
commit d9ee220a36
72 changed files with 557 additions and 2779 deletions


@ -227,6 +227,9 @@ div.admonition {
border: none;
border-left: 4px solid #2592EF; }
dl {
margin: 10px 0 10px 0; }
dl.minio {
margin: 10px 0 10px 0; }

File diff suppressed because one or more lines are too long


@ -13,7 +13,24 @@
"link": "https://min.io/product/reference-hardware"
}
},
"Docs": "https://docs.min.io/",
"Docs": {
"MinIO Baremetal" : {
"description": "MinIO Object Storage for Baremetal Infrastructure",
"link": "https://docs.min.io/minio/baremetal"
},
"MinIO Hybrid Cloud" : {
"description" : "MinIO Object Storage for Kubernetes-Managed Private and Public Cloud Infrastructure",
"link" : "https://docs.min.io/minio/k8s"
},
"MinIO for VMware Cloud Foundation" : {
"description" : "MinIO Object Storage for VMware Cloud Foundation 4.2",
"link" : "https://docs.min.io/minio/vsphere"
},
"MinIO Legacy Documentation" : {
"description" : "MinIO Object Storage Legacy Documentation",
"link" : "https://docs.min.io"
}
},
"Solutions": {
"VMware": {
"description": "Discover how MinIO integrates with VMware across the portfolio from the Persistent Data platform to TKGI and how we support their Kubernetes ambitions.",


@ -70,6 +70,10 @@ div.admonition {
}
}
dl {
margin: 10px 0 10px 0;
}
dl.minio {
margin: 10px 0 10px 0;
}


@ -107,7 +107,8 @@
<div class="admonition important">
<span class="alert-message">
<p>Welcome to the upcoming version of the MinIO Documentation!
The content of these pages may change at any time.
The content on this page is under active development and
may change at any time.
If you can't find what you're looking for, check our
<a href="https://docs.min.io"> legacy documentation</a>.
Thank you for your patience.


@ -1,363 +0,0 @@
.. _minio-baremetal:
====================
MinIO for Bare Metal
====================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
MinIO is a high performance distributed object storage server, designed for
large-scale private cloud infrastructure. MinIO fully supports deployment onto
bare-metal hardware with or without containerization for process management.
Standalone Installation
-----------------------
Standalone MinIO deployments consist of a single ``minio`` server process with
one or more disks. Standalone deployments are best suited for local development
environments.
1) Install the ``minio`` Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the :program:`minio` server onto the host machine. Select the tab that
corresponds to the host machine operating system or environment:
.. include:: /includes/minio-server-installation.rst
2) Add TLS/SSL Certificates (Optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enable TLS/SSL connectivity to the MinIO server by specifying a private key
(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
- For Linux/MacOS: ``${HOME}/.minio/certs``
- For Windows: ``%USERPROFILE%\.minio\certs``
The MinIO server automatically enables TLS/SSL connectivity if it detects
the required certificates in the ``certs`` directory.
.. note::
The MinIO documentation makes a best-effort to provide generally applicable
and accurate information on TLS/SSL connectivity in the context of MinIO
products and services, and is not intended as a complete guide to the larger
topic of TLS/SSL certificate creation and management.
3) Run the ``minio`` Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command to start the :program:`minio` server. The following
example assumes the host machine has *at least* four disks, which is the minimum
required number of disks to enable :ref:`erasure coding <minio-erasure-coding>`:
.. code-block:: shell
:class: copyable
export MINIO_ACCESS_KEY=minio-admin
export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
minio server /mnt/disk{1...4}/data
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ACCESS_KEY`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SECRET_KEY`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - ``/mnt/disk{1...4}/data``
- The path to each disk on the host machine.
``/data`` is an optional folder in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
series. Specifically, ``/mnt/disk{1...4}/data`` expands to:
- ``/mnt/disk1/data``
- ``/mnt/disk2/data``
- ``/mnt/disk3/data``
- ``/mnt/disk4/data``
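The expansion can be sketched in POSIX shell. Note that the three-dot ``{x...y}`` form is MinIO-specific and is parsed by the ``minio`` server itself, not by the shell; the helper function below is purely illustrative:

```shell
# Emulate what the minio server does with /mnt/disk{1...4}/data.
# A POSIX shell passes the three-dot form through as a literal string,
# so the server performs this expansion internally.
expand_disks() {
    suffix=$1
    count=$2
    i=1
    while [ "$i" -le "$count" ]; do
        echo "/mnt/disk${i}/${suffix}"
        i=$((i + 1))
    done
}

expand_disks data 4
```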
4) Connect to the Server
~~~~~~~~~~~~~~~~~~~~~~~~
Use the :mc-cmd:`mc alias set` command from a machine with connectivity to
the host running the ``minio`` server. See :ref:`mc-install` for documentation
on installing :program:`mc`.
.. code-block:: shell
:class: copyable
mc alias set mylocalminio http://192.0.2.10:9000 minio-admin minio-secret-key-CHANGE-ME
Replace the URL and port with one of the ``minio`` server's endpoints.
See :ref:`minio-mc-commands` for a list of commands you can run on the
MinIO server.
Distributed Installation
------------------------
Distributed MinIO deployments consist of multiple ``minio`` servers with
one or more disks each. Distributed deployments are best suited for
staging and production environments.
MinIO *requires* using sequentially-numbered hostnames to represent each
``minio`` server in the deployment. For example, the following hostnames support
a 4-node distributed deployment:
- ``minio1.example.com``
- ``minio2.example.com``
- ``minio3.example.com``
- ``minio4.example.com``
Create the necessary DNS hostname mappings *prior* to starting this
procedure.
1) Install the ``minio`` Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the :program:`minio` server onto each host machine in the deployment.
Select the tab that corresponds to the host machine operating system or
environment:
.. include:: /includes/minio-server-installation.rst
2) Add TLS/SSL Certificates (Optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enable TLS/SSL connectivity to the MinIO server by specifying a private key
(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
- For Linux/MacOS: ``${HOME}/.minio/certs``
- For Windows: ``%USERPROFILE%\.minio\certs``
The MinIO server automatically enables TLS/SSL connectivity if it detects
the required certificates in the ``certs`` directory.
.. note::
The MinIO documentation makes a best-effort to provide generally applicable
and accurate information on TLS/SSL connectivity in the context of MinIO
products and services, and is not intended as a complete guide to the larger
topic of TLS/SSL certificate creation and management.
3) Run the ``minio`` Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Issue the following command on each host machine in the deployment. The
following example assumes that:
- The deployment has four host machines with sequential hostnames
(e.g. ``minio1.example.com``, ``minio2.example.com``).
- Each host machine has *at least* four mounted disks (e.g. ``/mnt/disk1``
through ``/mnt/disk4``). Four disks is the minimum required for
:ref:`erasure coding <minio-erasure-coding>`.
.. code-block:: shell
:class: copyable
export MINIO_ACCESS_KEY=minio-admin
export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
minio server https://minio{1...4}.example.com/mnt/disk{1...4}/data
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ACCESS_KEY`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SECRET_KEY`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - ``https://minio{1...4}.example.com/``
- The DNS hostname of each server in the distributed deployment.
* - ``/mnt/disk{1...4}/data``
- The path to each disk on the host machine.
``/data`` is an optional folder in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
series. Specifically:
- The hostname ``https://minio{1...4}.example.com`` expands to:
- ``https://minio1.example.com``
- ``https://minio2.example.com``
- ``https://minio3.example.com``
- ``https://minio4.example.com``
- ``/mnt/disk{1...4}/data`` expands to
- ``/mnt/disk1/data``
- ``/mnt/disk2/data``
- ``/mnt/disk3/data``
- ``/mnt/disk4/data``
4) Connect to the Server
~~~~~~~~~~~~~~~~~~~~~~~~
Use the :mc-cmd:`mc alias set` command from a machine with connectivity to any
hostname running the ``minio`` server. See :ref:`mc-install` for documentation
on installing :program:`mc`.
.. code-block:: shell
:class: copyable
mc alias set mylocalminio https://minio1.example.com:9000 minio-admin minio-secret-key-CHANGE-ME
See :ref:`minio-mc-commands` for a list of commands you can run on the
MinIO server.
Docker Installation
-------------------
Stable MinIO
~~~~~~~~~~~~
The following ``docker`` command creates a container running the latest stable
version of the ``minio`` server process:
.. code-block:: shell
:class: copyable
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
-e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
-v /mnt/disk1:/disk1 \
-v /mnt/disk2:/disk2 \
-v /mnt/disk3:/disk3 \
-v /mnt/disk4:/disk4 \
minio/minio server /disk{1...4}
The command uses the following options:
- ``-e MINIO_ACCESS_KEY`` and ``-e MINIO_SECRET_KEY`` for configuring the
:ref:`root <minio-users-root>` user credentials.
- ``-v /mnt/disk<int>:/disk<int>`` for configuring each disk the ``minio``
server uses.
Bleeding Edge MinIO
~~~~~~~~~~~~~~~~~~~
*Do not use bleeding-edge deployments of MinIO in production environments.*
The following ``docker`` command creates a container running the latest
bleeding-edge version of the ``minio`` server process:
.. code-block:: shell
:class: copyable
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
-e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
-v /mnt/disk1:/disk1 \
-v /mnt/disk2:/disk2 \
-v /mnt/disk3:/disk3 \
-v /mnt/disk4:/disk4 \
minio/minio:edge server /disk{1...4}
The command uses the following options:
- ``-e MINIO_ACCESS_KEY`` and ``-e MINIO_SECRET_KEY`` for configuring the
:ref:`root <minio-users-root>` user credentials.
- ``-v /mnt/disk<int>:/disk<int>`` for configuring each disk the ``minio``
server uses.
Deployment Recommendations
--------------------------
Minimum Nodes per Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For all production deployments, MinIO recommends a *minimum* of 4 nodes per
cluster. MinIO deployments with *at least* 4 nodes can tolerate the loss of up
to half the nodes *or* half the disks in the deployment while maintaining
read and write availability.
For example, assuming a 4-node deployment with 4 drives per node, the
cluster can tolerate the loss of:
- Any two nodes, *or*
- Any 8 drives.
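As a rough sketch of the arithmetic behind this claim (assuming parity is set to half the drives, i.e. EC:8 on a 16-drive deployment, which maximizes loss tolerance):

```shell
# Worked example for a 4-node x 4-drive deployment (16 drives total).
# EC:8 parity (half the drives) is an assumption for illustration.
NODES=4
DRIVES_PER_NODE=4
TOTAL_DRIVES=$((NODES * DRIVES_PER_NODE))   # 16 drives
PARITY=$((TOTAL_DRIVES / 2))                # EC:8
READ_QUORUM=$((TOTAL_DRIVES - PARITY))      # 8 drives must survive to read
MAX_LOST_DRIVES=$PARITY                     # up to 8 drives, i.e. 2 full nodes
echo "$TOTAL_DRIVES $READ_QUORUM $MAX_LOST_DRIVES"
```

Losing any two nodes removes 8 drives, which still leaves the 8 drives needed to reconstruct objects.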
The minimum recommendation reflects MinIO's experience with assisting enterprise
customers in deploying on a variety of IT infrastructures while
maintaining the desired SLA/SLO. While MinIO may run on less than the
minimum recommended topology, any potential cost savings come at the risk of
decreased reliability.
Recommended Hardware
~~~~~~~~~~~~~~~~~~~~
For MinIO's recommended hardware, please see
`MinIO Reference Hardware <https://min.io/product/reference-hardware>`__.
Bare Metal Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~
A distributed MinIO deployment can only provide as much availability as the
bare metal infrastructure on which it is deployed. In particular, consider the
following potential failure points which could result in cluster downtime
when configuring your bare metal infrastructure:
- Shared networking resources (switches, routers, ISP).
- Shared power resources.
- Shared physical location (rack, datacenter, region).
MinIO deployments using virtual machines or containerized environments should
also consider the following:
- Shared physical hardware (CPU, Memory, Storage)
- Shared orchestration management layer (Kubernetes, Docker Swarm)
FreeBSD
-------
MinIO does not provide an official FreeBSD binary. FreeBSD maintains an
`upstream release <https://www.freshports.org/www/minio>`__ you can
install using `pkg <https://github.com/freebsd/pkg>`__:
.. code-block:: shell
:class: copyable
pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/path/to/disks
service minio start


@ -12,7 +12,7 @@ Erasure Coding
MinIO Erasure Coding is a data redundancy and availability feature that allows
MinIO deployments to automatically reconstruct objects on-the-fly despite the
loss of multiple drives or nodes in the cluster.Erasure Coding provides
loss of multiple drives or nodes in the cluster. Erasure Coding provides
object-level healing with less overhead than adjacent technologies such as
RAID or replication.
@ -24,19 +24,15 @@ number of nodes, and number of drives per node in the Erasure Set, MinIO can
tolerate the loss of up to half (``N/2``) of drives and still retrieve stored
objects.
For example, consider the following small-scale MinIO deployment consisting of a
single :ref:`Server Set <minio-intro-server-set>` with 4 :mc:`minio server`
For example, consider a small-scale MinIO deployment consisting of a
single :ref:`Server Pool <minio-intro-server-pool>` with 4 :mc:`minio server`
nodes. Each node in the deployment has 4 locally attached ``1Ti`` drives for
a total of 16 drives:
<DIAGRAM>
a total of 16 drives.
MinIO creates :ref:`Erasure Sets <minio-ec-erasure-set>` by dividing the total
number of drives in the deployment into sets consisting of between 4 and 16
drives each. In the example deployment, the largest possible Erasure Set size
that evenly divides into the total number of drives is ``16``:
<DIAGRAM>
that evenly divides into the total number of drives is ``16``.
MinIO uses a Reed-Solomon algorithm to split objects into data and parity blocks
based on the size of the Erasure Set. MinIO then uniformly distributes the
@ -45,8 +41,6 @@ in the set contains no more than one block per object. MinIO uses
the ``EC:N`` notation to refer to the number of parity blocks (``N``) in the
Erasure Set.
<DIAGRAM>
The number of parity blocks in a deployment controls the deployment's relative
data redundancy. Higher levels of parity allow for higher tolerance of drive
loss at the cost of total available storage. For example, using EC:4 in our
@ -92,9 +86,6 @@ deployment:
- For more information on selecting Erasure Code Parity, see
:ref:`minio-ec-parity`
- For more information on Erasure Code Object Healing, see
:ref:`minio-ec-object-healing`.
.. _minio-ec-erasure-set:
Erasure Sets
@ -105,34 +96,34 @@ Erasure Coding. MinIO evenly distributes object data and parity blocks among
the drives in the Erasure Set.
MinIO calculates the number and size of *Erasure Sets* by dividing the total
number of drives in the :ref:`Server Set <minio-intro-server-set>` into sets
number of drives in the :ref:`Server Pool <minio-intro-server-pool>` into sets
consisting of between 4 and 16 drives each. MinIO considers two factors when
selecting the Erasure Set size:
- The Greatest Common Divisor (GCD) of the total drives.
- The number of :mc:`minio server` nodes in the Server Set.
- The number of :mc:`minio server` nodes in the Server Pool.
For an even number of nodes, MinIO uses the GCD to calculate the Erasure Set
size and ensure the minimum number of Erasure Sets possible. For an odd number
of nodes, MinIO selects a common denominator that results in an odd number of
Erasure Sets to facilitate more uniform distribution of erasure set drives
among nodes in the Server Set.
among nodes in the Server Pool.
For example, consider a Server Set consisting of 4 nodes with 8 drives each
For example, consider a Server Pool consisting of 4 nodes with 8 drives each
for a total of 32 drives. The GCD calculation yields an Erasure Set size of
16, producing 2 Erasure Sets of 16 drives each with uniform distribution of
Erasure Set drives across all 4 nodes.
Now consider a Server Set consisting of 5 nodes with 8 drives each for a total
Now consider a Server Pool consisting of 5 nodes with 8 drives each for a total
of 40 drives. Using the GCD, MinIO would create 4 erasure sets with 10 drives
each. However, this would distribute drives unevenly, with one node
contributing more drives to the Erasure Sets than the others.
MinIO instead creates 5 erasure sets with 8 drives each to ensure uniform
distribution of Erasure Set drives per node.
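The even-node case can be sketched as picking the largest divisor of the total drive count between 4 and 16. This is a simplification for illustration only; MinIO's actual selection logic also weighs uniform distribution across nodes, as the 5-node example above shows:

```shell
# Simplified sketch of Erasure Set sizing for the even-node case:
# the largest divisor of the total drive count in the range 4-16.
# (Illustration only, not MinIO's actual implementation.)
set_size() {
    drives=$1
    s=16
    while [ "$s" -ge 4 ]; do
        if [ $((drives % s)) -eq 0 ]; then
            echo "$s"
            return
        fi
        s=$((s - 1))
    done
}

set_size 32   # 32 drives -> sets of 16, i.e. 2 Erasure Sets
```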
MinIO generally recommends maintaining an even number of nodes in a Server Set
MinIO generally recommends maintaining an even number of nodes in a Server Pool
to facilitate simplified human calculation of the number and size of
Erasure Sets in the Server Set.
Erasure Sets in the Server Pool.
.. _minio-ec-parity:
@ -179,7 +170,7 @@ Write Quorum
to serve write operations. MinIO requires enough available drives to
eliminate the risk of split-brain scenarios.
MinIO Write Quorum is ``DRIVES - (EC:N-1)``.
MinIO Write Quorum is ``(DRIVES - (EC:N)) + 1``.
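For example, a 16-drive Erasure Set with EC:4 parity yields the following write quorum:

```shell
# Write quorum for a 16-drive Erasure Set with EC:4 parity,
# using the formula (DRIVES - (EC:N)) + 1.
DRIVES=16
PARITY=4
WRITE_QUORUM=$(( (DRIVES - PARITY) + 1 ))
echo "$WRITE_QUORUM"   # 13 drives must be available to serve writes
```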
Storage Classes
~~~~~~~~~~~~~~~
@ -204,8 +195,26 @@ MinIO provides the following two storage classes:
- The :mc:`mc admin config` command to modify the ``storage_class.standard``
configuration setting.
Starting with <RELEASE>, MinIO defaults ``STANDARD`` storage class to
``EC:4``.
Starting with :minio-git:`RELEASE.2021-01-30T00-20-58Z
<minio/releases/tag/RELEASE.2021-01-30T00-20-58Z>`, MinIO defaults
``STANDARD`` storage class based on the number of volumes in the Erasure Set:
.. list-table::
:header-rows: 1
:widths: 30 70
:width: 100%
* - Erasure Set Size
- Default Parity (EC:N)
* - 5 or Fewer
- EC:2
* - 6 - 7
- EC:3
* - 8 or more
- EC:4
The maximum value is half of the total drives in the
:ref:`Erasure Set <minio-ec-erasure-set>`.
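The table above can be sketched as a small lookup function (an illustration of the documented defaults, not MinIO's internal code; the minimum Erasure Set size is 4):

```shell
# Default STANDARD storage class parity by Erasure Set size,
# per the table above. Illustration only.
default_parity() {
    case $1 in
        [4-5]) echo 2 ;;   # 5 or fewer volumes
        [6-7]) echo 3 ;;   # 6 - 7 volumes
        *)     echo 4 ;;   # 8 or more volumes
    esac
}

default_parity 16
```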
@ -252,19 +261,12 @@ interfacing with the MinIO server.
created.
.. _minio-ec-object-healing:
Object Healing
--------------
TODO
.. _minio-ec-bitrot-protection:
BitRot Protection
-----------------
TODO- ReWrite w/ more detail.
.. TODO- ReWrite w/ more detail.
Silent data corruption, or bitrot, is a serious problem for disk drives,
resulting in data being corrupted without the user's knowledge. The reasons


@ -16,21 +16,28 @@ The following table lists MinIO features and their corresponding documentation:
* - Feature
- Description
* - :doc:`Bucket Notifications </minio-features/bucket-notifications>`
* - :doc:`Bucket Notifications </concepts/bucket-notifications>`
- MinIO Bucket Notifications allows you to automatically publish
notifications to one or more configured notification targets when
specific events occur in a bucket.
* - :doc:`Bucket Versioning </minio-features/bucket-versioning>`
* - :doc:`Bucket Versioning </concepts/bucket-versioning>`
- MinIO Bucket Versioning supports keeping multiple "versions" of an
object in a single bucket. Write operations which would normally
overwrite an existing object instead result in the creation of a new
versioned object.
* - :doc:`Erasure Coding </concepts/erasure-coding>`
- MinIO Erasure Coding is a data redundancy and availability feature that
allows MinIO deployments to automatically reconstruct objects on-the-fly
despite the loss of multiple drives or nodes on the cluster. Erasure
coding provides object-level healing with less overhead than adjacent
technologies such as RAID or replication.
.. toctree::
:titlesonly:
:hidden:
/minio-features/bucket-notifications
/minio-features/bucket-versioning
/minio-features/erasure-coding
/concepts/bucket-notifications
/concepts/bucket-versioning
/concepts/erasure-coding


@ -62,6 +62,8 @@ extlinks = {
'iam-docs' : ('https://docs.aws.amazon.com/IAM/latest/UserGuide/%s',''),
'release' : ('https://github.com/minio/mc/releases/tag/%s',''),
'legacy' : ('https://docs.min.io/docs/%s',''),
'docs-k8s' : ('https://docs.min.io/minio/k8s/%s',''),
}
# Add any paths that contain templates here, relative to this directory.
@ -97,7 +99,7 @@ html_theme_options = {
'show_relbars': 'false'
}
html_short_title = "MinIO Hybrid Cloud"
html_short_title = "MinIO Object Storage for Baremetal Infrastructure"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,


@ -12,17 +12,16 @@ First-time users of MinIO *or* object storage services should start with
our :doc:`Introduction </introduction/minio-overview>`.
Users deploying onto a Kubernetes cluster should start with our
:doc:`Kubernetes documentation </kubernetes/minio-kubernetes-overview>`.
:docs-k8s:`Kubernetes documentation <>`.
.. toctree::
:titlesonly:
:hidden:
/introduction/minio-overview
/minio-features/overview
/bare-metal/minio-baremetal-overview
/kubernetes/minio-kubernetes-overview
/concepts/feature-overview
/tutorials/minio-installation
/security/security-overview
/minio-cli/minio-mc
/minio-cli/minio-mc-admin
/minio-server/minio-server
/reference/minio-cli/minio-mc
/reference/minio-cli/minio-mc-admin
/reference/minio-server/minio-server


@ -31,29 +31,30 @@ needs to store a variety of blobs, including rich multimedia like videos and
images. The structure of objects on the MinIO server might look similar to the
following:
.. code-block:: shell
.. code-block:: text
/ #root
/images/
2020-01-02-blog-title.png
2020-01-03-blog-title.png
2020-01-02-MinIO-Diagram.png
2020-01-03-MinIO-Advanced-Deployment.png
MinIO-Logo.png
/videos/
2020-01-03-blog-cool-video.mp4
/blogs/
2020-01-02-blog.md
2020-01-03-blog.md
/comments/
2020-01-02-blog-comments.json
2020-01-02-blog-comments.json
2020-01-04-MinIO-Interview.mp4
/articles/
/john.doe/
2020-01-02-MinIO-Object-Storage.md
2020-01-02-MinIO-Object-Storage-comments.json
/jane.doe/
2020-01-03-MinIO-Advanced-Deployment.png
2020-01-02-MinIO-Advanced-Deployment-comments.json
2020-01-04-MinIO-Interview.md
MinIO supports multiple levels of nested directories and objects for
even the most dynamic object storage workloads.
Deployment Architecture
-----------------------
The following diagram describes the individual components in a MinIO
deployment:
<DIAGRAM ErasureSet -> ServerSet -> Cluster >
:ref:`Erasure Set <minio-ec-erasure-set>`
A set of disks that supports MinIO :ref:`Erasure Coding
<minio-erasure-coding>`. Erasure Coding provides high availability,
@ -66,66 +67,68 @@ deployment:
impact despite the loss of up to half (``N/2``) of the total drives in the
deployment.
.. _minio-intro-server-set:
.. _minio-intro-server-pool:
:ref:`Server Set <minio-intro-server-set>`
:ref:`Server Pool <minio-intro-server-pool>`
A set of MinIO :mc-cmd:`minio server` nodes which pool their drives and
resources for supporting object storage/retrieval requests. The
:mc-cmd:`~minio server HOSTNAME` argument passed to the
:mc-cmd:`minio server` command represents a Server Set:
:mc-cmd:`minio server` command represents a Server Pool:
.. code-block:: shell
minio server https://minio{1...4}.example.net/mnt/disk{1...4}
| Server Set |
| Server Pool |
The above example describes a single Server Set with
The above example describes a single Server Pool with
4 :mc:`minio server` nodes and 4 drives each for a total of 16 drives.
MinIO requires starting each :mc:`minio server` in the set with the same
startup command to enable awareness of all set peers.
See :mc-cmd:`minio server` for complete syntax and usage.
MinIO calculates the size and number of Erasure Sets in the Server Set based
MinIO calculates the size and number of Erasure Sets in the Server Pool based
on the total number of drives in the set *and* the number of :mc:`minio`
servers in the set. See :ref:`minio-ec-erasure-set` for more information.
.. _minio-intro-cluster:
:ref:`Cluster <minio-intro-cluster>`
The whole MinIO deployment consisting of one or more Server Sets. Each
The whole MinIO deployment consisting of one or more Server Pools. Each
:mc-cmd:`~minio server HOSTNAME` argument passed to the
:mc-cmd:`minio server` command represents one Server Set:
:mc-cmd:`minio server` command represents one Server Pool:
.. code-block:: shell
minio server https://minio{1...4}.example.net/mnt/disk{1...4} \
https://minio{5...8}.example.net/mnt/disk{1...4}
| Server Set |
| Server Pool |
The above example describes two Server Sets, each consisting of 4
:mc:`minio server` nodes with 4 drives each for a total of 32 drives.
The above example describes two Server Pools, each consisting of 4
:mc:`minio server` nodes with 4 drives each for a total of 32 drives. MinIO
always stores each unique object and all versions of that object on the
same Server Pool.
Server Set expansion is a function of Horizontal Scaling, where each new set
expands the cluster storage and compute resources. Server Set expansion
Server Pool expansion is a function of Horizontal Scaling, where each new set
expands the cluster storage and compute resources. Server Pool expansion
is not intended to support migrating existing sets to newer hardware.
MinIO Standalone clusters consist of a single Server Set with a single
MinIO Standalone clusters consist of a single Server Pool with a single
:mc:`minio server` node. Standalone clusters are best suited for initial
development and evaluation. MinIO strongly recommends production
clusters consist of a *minimum* of 4 :mc:`minio server` nodes in a
Server Set.
Server Pool.
Deploying MinIO
---------------
For Kubernetes clusters, use the MinIO Kubernetes Operator.
See :ref:`minio-kubernetes` for more information.
Users deploying onto a Kubernetes cluster should start with our
:docs-k8s:`Kubernetes documentation <>`.
For bare-metal environments, including private cloud services
or containerized environments, install and run the :mc:`minio server` on
each host in the MinIO deployment. See :ref:`minio-baremetal` for more
information.
each host in the MinIO deployment.
See :ref:`minio-installation` for more information.


@ -1,880 +0,0 @@
.. _minio-kubernetes:
=======================
MinIO Kubernetes Plugin
=======================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
MinIO is a high performance distributed object storage server, designed for
large-scale private cloud infrastructure. Orchestration platforms like
Kubernetes provide a perfect cloud-native environment in which to deploy and
scale MinIO.
The :minio-git:`MinIO Kubernetes Operator </minio-operator>` brings native MinIO
support to Kubernetes.
The :mc:`kubectl minio` plugin brings native support for deploying MinIO
tenants to Kubernetes clusters using the ``kubectl`` CLI. You can use
:mc:`kubectl minio` to deploy a MinIO tenant with little to no interaction
with ``YAML`` configuration files.
.. image:: /images/Kubernetes-Minio.svg
:align: center
:width: 90%
:class: no-scaled-link
:alt: Kubernetes Orchestration with the MinIO Operator facilitates automated deployment of MinIO clusters.
:mc:`kubectl minio` builds its interface on top of the
MinIO Kubernetes Operator. Visit the
:minio-git:`MinIO Operator </minio-operator>` Github repository to follow
ongoing development on the Operator and Plugin.
Installation
------------
**Prerequisite**
Install the `krew <https://github.com/kubernetes-sigs/krew>`__ ``kubectl``
plugin manager using the `documented installation procedure
<https://krew.sigs.k8s.io/docs/user-guide/setup/install/>`__.
Install Using ``krew``
~~~~~~~~~~~~~~~~~~~~~~
Run the following command to install :mc:`kubectl minio` using ``krew``:
.. code-block:: shell
:class: copyable
kubectl krew update
kubectl krew install minio
Update Using ``krew``
~~~~~~~~~~~~~~~~~~~~~
Run the following command to update :mc:`kubectl minio`:
.. code-block:: shell
:class: copyable
kubectl krew upgrade
Deploy a MinIO Tenant
---------------------
The following procedure creates a MinIO tenant using the
:mc:`kubectl minio` plugin.
1) Initialize the MinIO Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mc:`kubectl minio` requires the MinIO Operator. Use the
:mc-cmd:`kubectl minio init` command to initialize the MinIO Operator:
.. code-block:: shell
:class: copyable
kubectl minio init
The example command deploys the MinIO operator to the ``default`` namespace.
Include the :mc-cmd-option:`~kubectl minio init namespace` option to
specify the namespace you want to deploy the MinIO operator into.
2) Configure the Persistent Volumes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a :kube-docs:`Persistent Volume (PV) <concepts/storage/volumes/>`
for each drive on each node.
MinIO recommends using :kube-docs:`local <concepts/storage/volumes/#local>` PVs
to ensure best performance and operations:
a. Create a ``StorageClass`` for the MinIO ``local`` Volumes
````````````````````````````````````````````````````````````
.. container:: indent
The following YAML describes a
:kube-docs:`StorageClass <concepts/storage/storage-classes/>` with the
appropriate fields for use with the ``local`` PV:
.. code-block:: yaml
:class: copyable
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The ``StorageClass`` **must** have ``volumeBindingMode`` set to
``WaitForFirstConsumer`` to ensure correct binding of each pod's
:kube-docs:`Persistent Volume Claims (PVC)
<concepts/storage/persistent-volumes/#persistentvolumeclaims>` to the
Node ``PV``.
b. Create the Required Persistent Volumes
`````````````````````````````````````````
.. container:: indent
The following YAML describes a ``PV`` ``local`` volume:
.. code-block:: yaml
:class: copyable
:emphasize-lines: 4, 12, 14, 22
apiVersion: v1
kind: PersistentVolume
metadata:
name: PV-NAME
spec:
capacity:
storage: 100Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- NODE-NAME
.. list-table::
:header-rows: 1
:widths: 20 80
:width: 100%
* - Field
- Description
* - .. code-block:: yaml
metadata:
name:
- Set to a name that supports easy visual identification of the
``PV`` and its associated physical host. For example, for a ``PV`` on
host ``minio-1``, consider specifying ``minio-1-pv-1``.
* - .. code-block:: yaml
nodeAffinity:
required:
nodeSelectorTerms:
- key:
values:
- Set to the name of the node on which the physical disk is
installed.
* - .. code-block:: yaml
spec:
storageClassName:
- Set to the ``StorageClass`` created for supporting the
MinIO ``local`` volumes.
* - .. code-block:: yaml
spec:
local:
path:
- Set to the full file path of the locally-attached disk. You
can specify a directory on the disk to isolate MinIO-specific data.
The specified disk or directory **must** be empty for MinIO to start.
Create one ``PV`` for each volume in the MinIO tenant. For example, given a
Kubernetes cluster with 4 Nodes with 4 locally attached drives each, create a
total of 16 ``local`` ``PVs``.
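Writing 16 near-identical manifests by hand is error-prone, so one option is generating them from a loop. The sketch below assumes 4 nodes with hostnames ``minio-1`` through ``minio-4`` and drives mounted at ``/mnt/disks/ssd1`` through ``ssd4``; adjust the names and paths for your cluster:

```shell
# Generate one local PV manifest per node/drive pair (4 x 4 = 16 PVs).
# Hostnames, drive paths, and capacity here are illustrative assumptions.
gen_pvs() {
  for n in 1 2 3 4; do
    for d in 1 2 3 4; do
      cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-${n}-pv-${d}
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd${d}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minio-${n}
EOF
    done
  done
}

# Review the output, then apply with e.g.: gen_pvs | kubectl apply -f -
gen_pvs
```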
c. Validate the Created PV
``````````````````````````
.. container:: indent
Issue the ``kubectl get pv`` command to validate the created PVs:
.. code-block:: shell
:class: copyable
kubectl get pv
3) Create a Namespace for the MinIO Tenant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the ``kubectl create namespace`` command to create a namespace for
the MinIO Tenant:
.. code-block:: shell
:class: copyable
kubectl create namespace minio-tenant-1
4) Create the MinIO Tenant
~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the :mc-cmd:`kubectl minio tenant create` command to create the MinIO
Tenant.
The following example creates a 4-node MinIO deployment with a
total capacity of 16Ti across 16 drives.
.. code-block:: shell
:class: copyable
kubectl minio tenant create \
--name minio-tenant-1 \
--servers 4 \
--volumes 16 \
--capacity 16Ti \
--storageClassName local-storage \
--namespace minio-tenant-1
The following table explains each argument specified to the command:
.. list-table::
:header-rows: 1
:widths: 30 70
:width: 100%
* - Argument
- Description
* - :mc-cmd-option:`~kubectl minio tenant create name`
- The name of the MinIO Tenant which the command creates.
* - :mc-cmd-option:`~kubectl minio tenant create servers`
- The number of :mc:`minio` servers to deploy across the Kubernetes
cluster.
* - :mc-cmd-option:`~kubectl minio tenant create volumes`
- The number of volumes in the cluster. :mc:`kubectl minio` determines the
number of volumes per server by dividing ``volumes`` by ``servers``.
* - :mc-cmd-option:`~kubectl minio tenant create capacity`
- The total capacity of the cluster. :mc:`kubectl minio` determines the
capacity of each volume by dividing ``capacity`` by ``volumes``.
* - :mc-cmd-option:`~kubectl minio tenant create namespace`
- The Kubernetes namespace in which to deploy the MinIO Tenant.
* - :mc-cmd-option:`~kubectl minio tenant create storageClassName`
- The Kubernetes ``StorageClass`` to use when creating each PVC.
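To make the division concrete, here is the arithmetic the plugin applies to the example command above, sketched as plain shell. This mirrors the documented behavior; it is not plugin code.

```shell
servers=4; volumes=16; capacity_tib=16
# volumes / servers -> number of volumes (and PVCs) per minio server
echo "volumes per server: $(( volumes / servers ))"
# capacity / volumes -> resources.requests.storage for each PVC
echo "storage per PVC: $(( capacity_tib / volumes ))Ti"
```

For the 4-server, 16-volume, 16Ti example, this yields 4 volumes per server and a 1Ti storage request per PVC.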
If :mc-cmd:`kubectl minio tenant create` succeeds in creating the MinIO Tenant,
the command outputs connection information to the terminal. The output includes
the credentials for the :mc:`minio` :ref:`root <minio-users-root>` user and
the MinIO Console Service.
.. code-block:: shell
:emphasize-lines: 1-3, 7-9
Tenant
Access Key: 999466bb-8bd6-4d73-8115-61df1b0311f4
Secret Key: f8e5ecc3-7657-493b-b967-aaf350daeec9
Version: minio/minio:RELEASE.2020-09-26T03-44-56Z
ClusterIP Service: minio-tenant-1-internal-service
MinIO Console
Access Key: e9ae0f3f-18e5-44c6-a2aa-dc2e95497734
Secret Key: 498ae13a-2f70-4adf-a38e-730d24327426
Version: minio/console:v0.3.14
ClusterIP Service: minio-tenant-1-console
:mc-cmd:`kubectl minio` stores all credentials using Kubernetes Secrets, where
each secret is prefixed with the tenant
:mc-cmd:`name <kubectl minio tenant create name>`:
.. code-block:: shell
> kubectl get secrets --namespace minio-tenant-1
NAME TYPE DATA AGE
minio-tenant-1-console-secret Opaque 5 123d4h
minio-tenant-1-console-tls Opaque 2 123d4h
minio-tenant-1-creds-secret Opaque 2 123d4h
minio-tenant-1-tls Opaque 2 123d4h
Kubernetes administrators with the correct permissions can view the secret
contents and extract the access and secret key:
.. code-block:: shell
kubectl get secrets minio-tenant-1-creds-secret -o yaml
The access key and secret key are ``base64`` encoded. You must decode the
values prior to specifying them to :mc:`mc` or other S3-compatible tools.
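The decoding step itself is plain ``base64``. The following round-trip uses the sample secret key shown in the output above to demonstrate how a stored value decodes; the value comes from this page's example output, not a real credential.

```shell
# Encode a value the way Kubernetes stores it in a Secret, then decode it.
secret='f8e5ecc3-7657-493b-b967-aaf350daeec9'
encoded=$(printf '%s' "${secret}" | base64)
printf '%s' "${encoded}" | base64 --decode
```

Against a live cluster, pipe the relevant ``data`` field from the ``kubectl get secrets ... -o yaml`` output through ``base64 --decode`` in the same way.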
5) Configure Access to the Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mc:`kubectl minio` creates a service for the MinIO Tenant.
Use ``kubectl get svc`` to retrieve the service name:
.. code-block:: shell
:class: copyable
kubectl get svc --namespace minio-tenant-1
The command returns output similar to the following:
.. code-block:: shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio ClusterIP 10.109.88.X <none> 443/TCP 137m
minio-tenant-1-console ClusterIP 10.97.87.X <none> 9090/TCP,9443/TCP 129m
minio-tenant-1-hl ClusterIP None <none> 9000/TCP 137m
The created services are visible only within the Kubernetes cluster. External
access to Kubernetes cluster resources requires creating an
:kube-docs:`Ingress object <concepts/services-networking/ingress>` that routes
traffic from an externally-accessible IP address or hostname to the ``minio``
service. Configuring Ingress also requires creating an
:kube-docs:`Ingress Controller
<concepts/services-networking/ingress-controller>` in the cluster.
Defer to the :kube-docs:`Kubernetes Documentation
<concepts/services-networking>` for guidance on creating and configuring the
required resources for external access to cluster resources.
The following example Ingress object depends on the
`NGINX Ingress Controller for Kubernetes
<https://www.nginx.com/products/nginx/kubernetes-ingress-controller>`__.
The example is intended as a *demonstration* for creating an Ingress object and
may not reflect the configuration and topology of your Kubernetes cluster and
MinIO tenant. You may need to add or remove listed fields to suit your
Kubernetes cluster. **Do not** use this example as-is or without modification.
.. code-block:: yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio-ingress
annotations:
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-body-size: 1024m
spec:
tls:
- hosts:
- minio.example.com
secretName: minio-ingress-tls
rules:
- host: minio.example.com
http:
         paths:
         - path: /
           pathType: Prefix
           backend:
             service:
               name: minio
               port:
                 name: http
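If you only need temporary access for testing, ``kubectl port-forward`` avoids Ingress setup entirely. The following sketch is for local verification only and uses the ``minio`` service name, port, and namespace from the ``kubectl get svc`` output above; adjust them to your deployment.

```shell
# Forward local port 9443 to the in-cluster minio service (testing only).
kubectl port-forward service/minio 9443:443 --namespace minio-tenant-1
```

The forwarded port remains available only while the ``kubectl port-forward`` process runs.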
MinIO Kubernetes Plugin Syntax
------------------------------
.. mc:: kubectl minio
Create the MinIO Operator
~~~~~~~~~~~~~~~~~~~~~~~~~
.. mc-cmd:: init
:fullpath:
Initializes the MinIO Operator. :mc:`kubectl minio` requires the operator for
core functionality.
The command has the following syntax:
.. code-block:: shell
:class: copyable
kubectl minio init [FLAGS]
The command supports the following arguments:
.. mc-cmd:: image
:option:
The image to use for deploying the operator.
Defaults to the :minio-git:`latest release of the operator
<minio/operator/releases/latest>`:
``minio/k8s-operator:latest``
.. mc-cmd:: namespace
:option:
The namespace into which to deploy the operator.
Defaults to ``minio-operator``.
.. mc-cmd:: cluster-domain
:option:
The domain name to use when configuring the DNS hostname of the
operator. Defaults to ``cluster.local``.
.. mc-cmd:: namespace-to-watch
:option:
The namespace which the operator watches for MinIO tenants.
      Defaults to ``""``, which directs the operator to watch *all* namespaces.
.. mc-cmd:: image-pull-secret
:option:
Secret key for use with pulling the
:mc-cmd-option:`~kubectl minio init image`.
The MinIO-hosted ``minio/k8s-operator`` image is *not* password protected.
This option is only required for non-MinIO image sources which are
password protected.
.. mc-cmd:: output
:option:
Performs a dry run and outputs the generated YAML to ``STDOUT``. Use
this option to customize the YAML and apply it manually using
``kubectl apply -f <FILE>``.
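A typical dry-run workflow with this flag might look like the following sketch; the file name is arbitrary.

```shell
# Generate the operator YAML without applying it.
kubectl minio init --output > operator.yaml
# Inspect or customize operator.yaml, then apply it manually.
kubectl apply -f operator.yaml
```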
Delete the MinIO Operator
~~~~~~~~~~~~~~~~~~~~~~~~~
.. mc-cmd:: delete
:fullpath:
Deletes the MinIO Operator along with all associated resources,
including all MinIO Tenant instances in the
:mc-cmd:`watched namespace <kubectl minio init namespace-to-watch>`.
.. warning::
If the underlying Persistent Volumes (``PV``) were created with
      a reclaim policy of ``Recycle`` or ``Delete``, deleting the MinIO
Tenant results in complete loss of all objects stored on the tenant.
Ensure you have performed all due diligence in confirming the safety of
any data on the MinIO Tenant prior to deletion.
The command has the following syntax:
.. code-block:: shell
:class: copyable
kubectl minio delete [FLAGS]
The command accepts the following arguments:
.. mc-cmd:: namespace
:option:
The namespace of the MinIO operator to delete.
Defaults to ``minio-operator``.
Create a MinIO Tenant
~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/facts-kubectl-plugin.rst
:start-after: start-kubectl-minio-requires-operator-desc
:end-before: end-kubectl-minio-requires-operator-desc
.. mc-cmd:: tenant create
:fullpath:
Creates a MinIO Tenant using the
:minio-git:`latest release <minio/minio/releases/latest>` of :mc:`minio`:
``minio/minio:latest``
The command creates the following resources in the Kubernetes cluster.
- The MinIO Tenant.
- Persistent Volume Claims (``PVC``) for each
:mc-cmd:`volume <kubectl minio tenant create volumes>` in the tenant.
- Pods for each
:mc-cmd:`server <kubectl minio tenant create servers>` in the tenant.
- Kubernetes secrets for storing access keys and secret keys. Each
secret is prefixed with the Tenant name.
- The MinIO Console Service (MCS). See the :minio-git:`console <console>`
Github repository for more information on MCS.
The command has the following syntax:
.. code-block:: shell
:class: copyable
kubectl minio tenant create \
        --name NAME \
--servers SERVERS \
--volumes VOLUMES \
--capacity CAPACITY \
--storageClassName STORAGECLASS \
[OPTIONAL_FLAGS]
The command supports the following arguments:
.. mc-cmd:: name
:option:
*Required*
The name of the MinIO tenant which the command creates. The
name *must* be unique in the
:mc-cmd:`~kubectl minio tenant create namespace`.
.. mc-cmd:: servers
:option:
*Required*
The number of :mc:`minio` servers to deploy on the Kubernetes cluster.
Ensure that the specified number of
:mc-cmd-option:`~kubectl minio tenant create servers` does *not*
exceed the number of nodes in the Kubernetes cluster. MinIO strongly
recommends sizing the cluster to have one node per MinIO server.
.. mc-cmd:: volumes
:option:
*Required*
The number of volumes in the MinIO tenant. :mc:`kubectl minio`
generates one Persistent Volume Claim (``PVC``) for each volume.
:mc:`kubectl minio` divides the
:mc-cmd-option:`~kubectl minio tenant create capacity` by the number of
volumes to determine the amount of ``resources.requests.storage`` to
set for each ``PVC``.
:mc:`kubectl minio` determines
the number of ``PVC`` to associate to each :mc:`minio` server by dividing
:mc-cmd-option:`~kubectl minio tenant create volumes` by
:mc-cmd-option:`~kubectl minio tenant create servers`.
:mc:`kubectl minio` also configures each ``PVC`` with node-aware
selectors, such that the :mc:`minio` server process uses a ``PVC``
      that corresponds to a ``local`` Persistent Volume (``PV``) on the
same node running that process. This ensures that each process
uses local disks for optimal performance.
If the specified number of volumes exceeds the number of
``PV`` available on the cluster, :mc:`kubectl minio tenant create`
hangs and waits until the required ``PV`` exist.
.. mc-cmd:: capacity
:option:
*Required*
The total capacity of the MinIO tenant. :mc:`kubectl minio` divides
the capacity by the number of
:mc-cmd-option:`~kubectl minio tenant create volumes` to determine the
amount of ``resources.requests.storage`` to set for each
Persistent Volume Claim (``PVC``).
If the existing Persistent Volumes (``PV``) in the cluster cannot
satisfy the requested storage, :mc:`kubectl minio tenant create`
hangs and waits until the required storage exists.
.. mc-cmd:: storageClassName
:option:
*Required*
The name of the Kubernetes
:kube-docs:`Storage Class <concepts/storage/storage-classes/>` to use
when creating Persistent Volume Claims (``PVC``) for the
MinIO Tenant. The specified
:mc-cmd-option:`~kubectl minio tenant create storageClassName`
*must* match the ``StorageClassName`` of the Persistent Volumes (``PVs``)
to which the ``PVCs`` should bind.
.. mc-cmd:: namespace
:option:
The namespace in which to create the MinIO Tenant.
Defaults to ``minio``.
.. mc-cmd:: kes-config
:option:
The name of the Kubernetes Secret which contains the
MinIO Key Encryption Service (KES) configuration.
.. mc-cmd:: output
:option:
Outputs the generated ``YAML`` objects to ``STDOUT`` for further
customization.
:mc-cmd-option:`~kubectl minio tenant create output` does
**not** create the MinIO Tenant. Use ``kubectl apply -f <FILE>`` to
manually create the MinIO tenant using the generated file.
Expand a MinIO Tenant
~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/facts-kubectl-plugin.rst
:start-after: start-kubectl-minio-requires-operator-desc
:end-before: end-kubectl-minio-requires-operator-desc
.. mc-cmd:: tenant expand
:fullpath:
Adds a new zone to an existing MinIO Tenant.
The command creates the new zone using the
:minio-git:`latest release <minio/minio/releases/latest>` of :mc:`minio`:
``minio/minio:latest``
Consider using :mc-cmd:`kubectl minio tenant upgrade` to upgrade the
MinIO tenant *before* adding the new zone to ensure consistency across the
entire tenant.
The command has the following syntax:
.. code-block:: shell
:class: copyable
kubectl minio tenant expand \
        --name NAME \
--servers SERVERS \
--volumes VOLUMES \
--capacity CAPACITY \
[OPTIONAL_FLAGS]
The command supports the following arguments:
.. mc-cmd:: name
:option:
*Required*
The name of the MinIO Tenant which the command expands.
.. mc-cmd:: servers
:option:
*Required*
The number of :mc:`minio` servers to deploy in the new MinIO Tenant zone.
Ensure that the specified number of
:mc-cmd-option:`~kubectl minio tenant expand servers` does *not* exceed
the number of unused nodes in the Kubernetes cluster. MinIO strongly
recommends sizing the cluster to have one node per MinIO server in the new
zone.
.. mc-cmd:: volumes
:option:
*Required*
The number of volumes in the new MinIO Tenant zone.
:mc:`kubectl minio` generates one Persistent Volume Claim (``PVC``) for
each volume. :mc:`kubectl minio` divides the
:mc-cmd-option:`~kubectl minio tenant expand capacity` by the number of
volumes to determine the amount of ``resources.requests.storage`` to set
for each ``PVC``.
:mc:`kubectl minio` determines
the number of ``PVC`` to associate to each :mc:`minio` server by dividing
:mc-cmd-option:`~kubectl minio tenant expand volumes` by
:mc-cmd-option:`~kubectl minio tenant expand servers`.
:mc:`kubectl minio` also configures each ``PVC`` with node-aware
selectors, such that the :mc:`minio` server process uses a ``PVC``
      that corresponds to a ``local`` Persistent Volume (``PV``) on the
same node running that process. This ensures that each process
uses local disks for optimal performance.
If the specified number of volumes exceeds the number of
``PV`` available on the cluster, :mc:`kubectl minio tenant expand`
hangs and waits until the required ``PV`` exist.
.. mc-cmd:: capacity
:option:
*Required*
The total capacity of the new MinIO Tenant zone. :mc:`kubectl minio`
divides the capacity by the number of
:mc-cmd-option:`~kubectl minio tenant expand volumes` to determine the
amount of ``resources.requests.storage`` to set for each
Persistent Volume Claim (``PVC``).
If the existing Persistent Volumes (``PV``) in the cluster cannot
satisfy the requested storage, :mc:`kubectl minio tenant expand`
hangs and waits until the required storage exists.
.. mc-cmd:: namespace
:option:
The namespace in which to create the new MinIO Tenant zone. The namespace
*must* match that of the MinIO Tenant being extended.
Defaults to ``minio``.
.. mc-cmd:: output
:option:
Outputs the generated ``YAML`` objects to ``STDOUT`` for further
customization.
:mc-cmd-option:`~kubectl minio tenant expand output` does **not** create
the new MinIO Tenant zone. Use ``kubectl apply -f <FILE>`` to manually
create the MinIO tenant using the generated file.
Get MinIO Tenant Zones
~~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/facts-kubectl-plugin.rst
:start-after: start-kubectl-minio-requires-operator-desc
:end-before: end-kubectl-minio-requires-operator-desc
.. mc-cmd:: tenant info
:fullpath:
Lists all existing MinIO zones in a MinIO Tenant.
The command has the following syntax:
.. code-block:: shell
:class: copyable
      kubectl minio tenant info --name NAME [OPTIONAL_FLAGS]
The command supports the following arguments:
.. mc-cmd:: name
:option:
*Required*
The name of the MinIO Tenant for which the command returns the
existing zones.
.. mc-cmd:: namespace
:option:
The namespace in which to look for the MinIO Tenant.
Defaults to ``minio``.
Upgrade MinIO Tenant
~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/facts-kubectl-plugin.rst
:start-after: start-kubectl-minio-requires-operator-desc
:end-before: end-kubectl-minio-requires-operator-desc
.. mc-cmd:: tenant upgrade
:fullpath:
Upgrades the :mc:`minio` server Docker image used by the MinIO Tenant.
.. important::
MinIO upgrades *all* :mc:`minio` server processes at once. This may
      result in a brief period of downtime if a majority (``n/2+1``) of
servers are offline at the same time.
The command has the following syntax:
.. code-block:: shell
:class: copyable
      kubectl minio tenant upgrade --name NAME [OPTIONAL_FLAGS]
The command supports the following arguments:
.. mc-cmd:: name
:option:
*Required*
The name of the MinIO Tenant which the command updates.
.. mc-cmd:: namespace
:option:
The namespace in which to look for the MinIO Tenant.
Defaults to ``minio``.
Delete a MinIO Tenant
~~~~~~~~~~~~~~~~~~~~~
.. include:: /includes/facts-kubectl-plugin.rst
:start-after: start-kubectl-minio-requires-operator-desc
:end-before: end-kubectl-minio-requires-operator-desc
.. mc-cmd:: tenant delete
:fullpath:
Deletes the MinIO Tenant and its associated resources.
   Kubernetes only deletes the MinIO Tenant Persistent Volume Claims (``PVC``)
   if the underlying Persistent Volumes (``PV``) were created with a
   reclaim policy of ``Recycle`` or ``Delete``. ``PV`` with a reclaim policy of
   ``Retain`` require manual deletion of their associated ``PVC``.
Deletion of the underlying ``PV``, whether automatic or manual, results in
the loss of any objects stored on the MinIO Tenant. Perform all due
diligence in ensuring the safety of stored data *prior* to deleting the
tenant.
The command has the following syntax:
.. code-block:: shell
:class: copyable
      kubectl minio tenant delete --name NAME [OPTIONAL_FLAGS]
The command supports the following arguments:
.. mc-cmd:: name
:option:
*Required*
The name of the MinIO Tenant to delete.
.. mc-cmd:: namespace
:option:
The namespace in which to look for the MinIO Tenant.
Defaults to ``minio``.
.. toctree::
:hidden:
:titlesonly:
/kubernetes/minio-operator-reference

File diff suppressed because one or more lines are too long
@@ -36,77 +36,77 @@ The following table lists :mc-cmd:`mc admin` commands:
- Description
* - :mc:`mc admin bucket remote`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-bucket-remote.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-bucket-remote.rst
:start-after: start-mc-admin-bucket-remote-desc
:end-before: end-mc-admin-bucket-remote-desc
* - :mc:`mc admin bucket quota`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-bucket-quota.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-bucket-quota.rst
:start-after: start-mc-admin-bucket-quota-desc
:end-before: end-mc-admin-bucket-quota-desc
* - :mc:`mc admin group`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-group.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-group.rst
:start-after: start-mc-admin-group-desc
:end-before: end-mc-admin-group-desc
* - :mc:`mc admin heal`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-heal.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-heal.rst
:start-after: start-mc-admin-heal-desc
:end-before: end-mc-admin-heal-desc
* - :mc:`mc admin info`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-info.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-info.rst
:start-after: start-mc-admin-info-desc
:end-before: end-mc-admin-info-desc
* - :mc:`mc admin kms key`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-kms-key.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-kms-key.rst
:start-after: start-mc-admin-kms-key-desc
:end-before: end-mc-admin-kms-key-desc
* - :mc:`mc admin obd`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-obd.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-obd.rst
:start-after: start-mc-admin-obd-desc
:end-before: end-mc-admin-obd-desc
* - :mc:`mc admin policy`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-policy.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-policy.rst
:start-after: start-mc-admin-policy-desc
:end-before: end-mc-admin-policy-desc
* - :mc:`mc admin profile`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-profile.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-profile.rst
:start-after: start-mc-admin-profile-desc
:end-before: end-mc-admin-profile-desc
* - :mc:`mc admin prometheus`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-prometheus.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-prometheus.rst
:start-after: start-mc-admin-prometheus-desc
:end-before: end-mc-admin-prometheus-desc
* - :mc:`mc admin service`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-service.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-service.rst
:start-after: start-mc-admin-service-desc
:end-before: end-mc-admin-service-desc
* - :mc:`mc admin top`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-top.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-top.rst
:start-after: start-mc-admin-top-desc
:end-before: end-mc-admin-top-desc
* - :mc:`mc admin trace`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-trace.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-trace.rst
:start-after: start-mc-admin-trace-desc
:end-before: end-mc-admin-trace-desc
* - :mc:`mc admin update`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-update.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-update.rst
:start-after: start-mc-admin-update-desc
:end-before: end-mc-admin-update-desc
* - :mc:`mc admin user`
- .. include:: /minio-cli/minio-mc-admin/mc-admin-user.rst
- .. include:: /reference/minio-cli/minio-mc-admin/mc-admin-user.rst
:start-after: start-mc-admin-user-desc
:end-before: end-mc-admin-user-desc
@@ -164,4 +164,4 @@ Global Options
:hidden:
:glob:
/minio-cli/minio-mc-admin/*
/reference/minio-cli/minio-mc-admin/*

@@ -134,52 +134,52 @@ The following table lists :mc-cmd:`mc` commands:
- Description
* - :mc:`mc alias`
- .. include:: /minio-cli/minio-mc/mc-alias.rst
- .. include:: /reference/minio-cli/minio-mc/mc-alias.rst
:start-after: start-mc-alias-desc
:end-before: end-mc-alias-desc
* - :mc:`mc cat`
- .. include:: /minio-cli/minio-mc/mc-cat.rst
- .. include:: /reference/minio-cli/minio-mc/mc-cat.rst
:start-after: start-mc-cat-desc
:end-before: end-mc-cat-desc
* - :mc:`mc cp`
- .. include:: /minio-cli/minio-mc/mc-cp.rst
- .. include:: /reference/minio-cli/minio-mc/mc-cp.rst
:start-after: start-mc-cp-desc
:end-before: end-mc-cp-desc
* - :mc:`mc diff`
- .. include:: /minio-cli/minio-mc/mc-diff.rst
- .. include:: /reference/minio-cli/minio-mc/mc-diff.rst
:start-after: start-mc-diff-desc
:end-before: end-mc-diff-desc
* - :mc:`mc encrypt`
- .. include:: /minio-cli/minio-mc/mc-encrypt.rst
- .. include:: /reference/minio-cli/minio-mc/mc-encrypt.rst
:start-after: start-mc-encrypt-desc
:end-before: end-mc-encrypt-desc
* - :mc:`mc event`
- .. include:: /minio-cli/minio-mc/mc-event.rst
- .. include:: /reference/minio-cli/minio-mc/mc-event.rst
:start-after: start-mc-event-desc
:end-before: end-mc-event-desc
* - :mc:`mc find`
- .. include:: /minio-cli/minio-mc/mc-find.rst
- .. include:: /reference/minio-cli/minio-mc/mc-find.rst
:start-after: start-mc-find-desc
:end-before: end-mc-find-desc
* - :mc:`mc head`
- .. include:: /minio-cli/minio-mc/mc-head.rst
- .. include:: /reference/minio-cli/minio-mc/mc-head.rst
:start-after: start-mc-head-desc
:end-before: end-mc-head-desc
* - :mc:`mc ilm`
- .. include:: /minio-cli/minio-mc/mc-ilm.rst
- .. include:: /reference/minio-cli/minio-mc/mc-ilm.rst
:start-after: start-mc-ilm-desc
:end-before: end-mc-ilm-desc
* - :mc:`mc legalhold`
- .. include:: /minio-cli/minio-mc/mc-legalhold.rst
- .. include:: /reference/minio-cli/minio-mc/mc-legalhold.rst
:start-after: start-mc-legalhold-desc
:end-before: end-mc-legalhold-desc
@@ -188,82 +188,82 @@ The following table lists :mc-cmd:`mc` commands:
:release:`RELEASE.2020-09-18T00-13-21Z`. Use :mc:`mc retention`.
* - :mc:`mc ls`
- .. include:: /minio-cli/minio-mc/mc-ls.rst
- .. include:: /reference/minio-cli/minio-mc/mc-ls.rst
:start-after: start-mc-ls-desc
:end-before: end-mc-ls-desc
* - :mc:`mc mb`
- .. include:: /minio-cli/minio-mc/mc-mb.rst
- .. include:: /reference/minio-cli/minio-mc/mc-mb.rst
:start-after: start-mc-mb-desc
:end-before: end-mc-mb-desc
* - :mc:`mc mirror`
- .. include:: /minio-cli/minio-mc/mc-mirror.rst
- .. include:: /reference/minio-cli/minio-mc/mc-mirror.rst
:start-after: start-mc-mirror-desc
:end-before: end-mc-mirror-desc
* - :mc:`mc mv`
- .. include:: /minio-cli/minio-mc/mc-mv.rst
- .. include:: /reference/minio-cli/minio-mc/mc-mv.rst
:start-after: start-mc-mv-desc
:end-before: end-mc-mv-desc
* - :mc:`mc policy`
- .. include:: /minio-cli/minio-mc/mc-policy.rst
- .. include:: /reference/minio-cli/minio-mc/mc-policy.rst
:start-after: start-mc-policy-desc
:end-before: end-mc-policy-desc
* - :mc:`mc rb`
- .. include:: /minio-cli/minio-mc/mc-rb.rst
- .. include:: /reference/minio-cli/minio-mc/mc-rb.rst
:start-after: start-mc-rb-desc
:end-before: end-mc-rb-desc
* - :mc:`mc retention`
- .. include:: /minio-cli/minio-mc/mc-retention.rst
- .. include:: /reference/minio-cli/minio-mc/mc-retention.rst
:start-after: start-mc-retention-desc
:end-before: end-mc-retention-desc
* - :mc:`mc rm`
- .. include:: /minio-cli/minio-mc/mc-rm.rst
- .. include:: /reference/minio-cli/minio-mc/mc-rm.rst
:start-after: start-mc-rm-desc
:end-before: end-mc-rm-desc
* - :mc:`mc share`
- .. include:: /minio-cli/minio-mc/mc-share.rst
- .. include:: /reference/minio-cli/minio-mc/mc-share.rst
:start-after: start-mc-share-desc
:end-before: end-mc-share-desc
* - :mc:`mc sql`
- .. include:: /minio-cli/minio-mc/mc-sql.rst
- .. include:: /reference/minio-cli/minio-mc/mc-sql.rst
:start-after: start-mc-sql-desc
:end-before: end-mc-sql-desc
* - :mc:`mc stat`
- .. include:: /minio-cli/minio-mc/mc-stat.rst
- .. include:: /reference/minio-cli/minio-mc/mc-stat.rst
:start-after: start-mc-stat-desc
:end-before: end-mc-stat-desc
* - :mc:`mc tag`
- .. include:: /minio-cli/minio-mc/mc-tag.rst
- .. include:: /reference/minio-cli/minio-mc/mc-tag.rst
:start-after: start-mc-tag-desc
:end-before: end-mc-tag-desc
* - :mc:`mc tree`
- .. include:: /minio-cli/minio-mc/mc-tree.rst
- .. include:: /reference/minio-cli/minio-mc/mc-tree.rst
:start-after: start-mc-tree-desc
:end-before: end-mc-tree-desc
* - :mc:`mc update`
- .. include:: /minio-cli/minio-mc/mc-update.rst
- .. include:: /reference/minio-cli/minio-mc/mc-update.rst
:start-after: start-mc-update-desc
:end-before: end-mc-update-desc
* - :mc:`mc version`
- .. include:: /minio-cli/minio-mc/mc-version.rst
- .. include:: /reference/minio-cli/minio-mc/mc-version.rst
:start-after: start-mc-version-desc
:end-before: end-mc-version-desc
* - :mc:`mc watch`
- .. include:: /minio-cli/minio-mc/mc-watch.rst
- .. include:: /reference/minio-cli/minio-mc/mc-watch.rst
:start-after: start-mc-watch-desc
:end-before: end-mc-watch-desc
@@ -359,7 +359,7 @@ All :ref:`commands <minio-mc-commands>` support the following global options:
:hidden:
:glob:
/minio-cli/minio-mc/*
/reference/minio-cli/minio-mc/*

@@ -21,7 +21,7 @@ the bucket event notifications.
MinIO automatically sends triggered events to the configured notification
targets. MinIO supports notification targets like AMQP, Redis, ElasticSearch,
NATS and PostgreSQL. See
:doc:`MinIO Bucket Notifications </minio-features/bucket-notifications>`
:doc:`MinIO Bucket Notifications </concepts/bucket-notifications>`
for more information.
.. end-mc-event-desc
@@ -171,7 +171,7 @@ Syntax
The MinIO server outputs an ARN for each configured
notification target at server startup. See
:doc:`/minio-features/bucket-notifications` for more
:doc:`/concepts/bucket-notifications` for more
information.
.. mc-cmd:: event
@@ -232,7 +232,7 @@ Syntax
The MinIO server outputs an ARN for each configured
notification target at server startup. See
:doc:`/minio-features/bucket-notifications` for more information.
:doc:`/concepts/bucket-notifications` for more information.
.. mc-cmd:: force
:option:
@@ -302,7 +302,7 @@ Syntax
The MinIO server outputs an ARN for each configured
notification target at server startup. See
:doc:`/minio-features/bucket-notifications` for more information.
:doc:`/concepts/bucket-notifications` for more information.

@@ -24,10 +24,11 @@ The :mc:`minio server` command starts the MinIO server process:
minio server /mnt/disk{1...4}
For examples of deploying :mc:`minio server` on a bare metal environment,
see :ref:`minio-baremetal`.
see :ref:`minio-installation`.
For examples of deploying :mc:`minio server` on a Kubernetes environment,
see :ref:`minio-kubernetes`.
see :docs-k8s:`Kubernetes documentation <>`.
Configuration Settings
~~~~~~~~~~~~~~~~~~~~~~
@@ -63,7 +64,7 @@ The command accepts the following arguments:
For distributed deployments, specify the hostname of each :mc:`minio server`
in the deployment. The group of :mc:`minio server` processes represent a
single :ref:`Server Set <minio-intro-server-set>`.
single :ref:`Server Pool <minio-intro-server-pool>`.
:mc-cmd:`~minio server HOSTNAME` supports MinIO expansion notation
``{x...y}`` to denote a sequential series of hostnames. MinIO *requires*
@@ -79,11 +80,11 @@ The command accepts the following arguments:
You must run the :mc:`minio server` command with the *same* combination of
:mc-cmd:`~minio server HOSTNAME` and :mc-cmd:`~minio server DIRECTORIES` on
each host in the Server Set.
each host in the Server Pool.
Each additional ``HOSTNAME/DIRECTORIES`` pair denotes an additional Server
Set for the purpose of horizontal expansion of the MinIO deployment. For more
information on Server Sets, see :ref:`Server Set <minio-intro-server-set>`.
information on Server Pools, see :ref:`Server Pool <minio-intro-server-pool>`.
.. mc-cmd:: DIRECTORIES

@@ -1,11 +0,0 @@
=========
Providers
=========
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 1
Stub - might duplicate STS page?

@@ -1,5 +1,7 @@
.. _minio-sts:
:orphan:
======================
Security Token Service
======================
@@ -10,6 +12,13 @@ Security Token Service
:local:
:depth: 2
.. important::
This page is under active development and isn't ready for prime-time.
If you've found this page, consider checking out our legacy documentation on
:legacy:`MinIO STS Quickstart Guide <minio-sts-quickstart-guide.html>`
for more information.
Overview
--------

View File

@@ -61,10 +61,9 @@ For complete documentation on creating MinIO users and groups, see
:ref:`minio-users` and :ref:`minio-groups`.
MinIO *also* supports federating identity management to supported third-party
services through the :ref:`Secure Token Service <minio-sts>`. Supported
identity providers include Okta, Facebook, Google, and Active Directory/LDAP.
For more complete documentation on MinIO STS configuration, see
:ref:`minio-sts`.
services through the :legacy:`Secure Token Service
<minio-sts-quickstart-guide.html>`. Supported identity providers include Okta,
Facebook, Google, and Active Directory/LDAP.
Policies
--------
@@ -85,12 +84,19 @@ policy building tools. For more complete documentation on MinIO policies, see
To assign policies to users or groups, use the :mc-cmd:`mc admin policy set`
command from the :program:`mc` command line tool.
Security Token Service
----------------------
The MinIO Security Token Service (STS) is an endpoint service that
enables clients to request temporary credentials for MinIO resources.
See :legacy:`MinIO STS Quickstart Guide <minio-sts-quickstart-guide.html>`
for more information.
.. toctree::
:hidden:
:titlesonly:
/security/IAM/iam-users
/security/IAM/iam-groups
/security/IAM/iam-policies
/security/IAM/iam-providers
/security/IAM/iam-security-token-service
/security/IAM/iam-policies

@@ -19,10 +19,8 @@ objects, where MinIO uses a secret key to encrypt and store objects on disk.
Only clients with access to the correct secret key can decrypt and read the
object.
<Diagram to follow>
See :ref:`Server-Side Object Encryption (SSE) <minio-sse>` for more complete
instructions on configuring MinIO for object encryption.
See the legacy documentation on :legacy:`MinIO Security Overview
<minio-security-overview.html>` for more information.
Transport Layer Security (TLS)
------------------------------
@@ -30,8 +28,6 @@ Transport Layer Security (TLS)
MinIO supports :ref:`Transport Layer Security (TLS) <minio-TLS>` encryption of
incoming and outgoing traffic.
<Diagram to Follow>
TLS is the successor to Secure Socket Layer (SSL) encryption. SSL is fully
`deprecated <https://tools.ietf.org/html/rfc7568>`__ as of June 30th, 2018.
MinIO uses only supported (non-deprecated) TLS protocols (TLS 1.2 and later).
@ -43,7 +39,4 @@ for more complete instructions on configuring MinIO for TLS.
:titlesonly:
:hidden:
/security/encryption/server-side-encryption
/security/encryption/transport-layer-security
/security/encryption/minio-kes
/security/encryption/sse-s3-thales

View File

@ -1,84 +0,0 @@
.. _minio-kes:
============================
MinIO Key Encryption Service
============================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
The MinIO Key Encryption Service (KES) is a stateless and distributed
key-management system for high-performance applications. KES provides
a bridge between applications running in bare-metal or orchestrated
environments and centralized KMS solutions.
<DIAGRAM>
KES is designed for simplicity, scalability, and security. It requires
minimal configuration to enable full functionality and requires only
basic familiarity with cryptography or key-management concepts.
MinIO servers require KES for performing Server-Side Encryption (SSE) of objects
using Key Management Services (KMS).
KES Server Process
------------------
.. mc:: kes server
The :mc:`kes server` command starts the KES server. The :mc:`kes server` process
handles requests for creating and retrieving cryptographic keys from a supported
Key Management System (KMS).
The command has the following syntax:
.. code-block:: shell
:class: copyable
kes server --cert CERTIFICATE --key PRIVATEKEY --root ROOT_IDENTITY [OPTIONAL_FLAGS]
:mc:`kes server` supports the following arguments:
.. mc-cmd:: cert
:option:
The location of the public certificate ``.crt`` to use for
enabling :abbr:`TLS (Transport Layer Security)`.
.. mc-cmd:: config
:option:
The path to the KES configuration file. See :ref:`minio-kes-config` for
more information on the configuration file format and contents.
.. mc-cmd:: key
:option:
The location of the private key ``.key`` to use for enabling
:abbr:`TLS (Transport Layer Security)`.
.. mc-cmd:: root
:option:
ToDo: Description
.. mc-cmd:: port
:option:
The port on which the :mc:`kes server` listens.
Defaults to ``7373``.
.. _minio-kes-config:
KES Configuration File
----------------------
ToDo: Import https://github.com/minio/kes/wiki/Configuration , need to
include instructions on how to set the config file (directory, cli option etc.)

View File

@ -1,3 +1,4 @@
:orphan:
.. _minio-sse:
=============================
@ -10,6 +11,13 @@ Server-Side Object Encryption
:local:
:depth: 1
.. important::
This page is under active development and isn't ready for prime-time.
If you've found this page, consider checking out our legacy documentation on
:legacy:`MinIO Security Overview <minio-security-overview.html>` while
we work on cleaning this page up.
Overview
--------
@ -27,7 +35,7 @@ SSE-C
SSE-S3
The server uses a secret key managed by a Key Management System (KMS)
to perform encryption and decryption. SSE-S3 requires using
:ref:`MinIO KES <minio-kes>` and a supported KMS.
:minio-git:`MinIO KES <kes>` and a supported KMS.
Encryption Process Overview
---------------------------
@ -109,8 +117,8 @@ the KMS provide the following services:
the data key and return the plain data key.
Enabling SSE-S3 requires deploying one or more
:ref:`MinIO Key Encryption Servers (KES) <minio-kes>` and configuring the
:mc:`minio` server for access to KES. The KES handles processing
:minio-git:`MinIO Key Encryption Service (KES) instances <minio/kes>` and
configuring the :mc:`minio` server for access to KES. The KES handles processing
cryptographic key requests to the KMS service.
With SSE-S3, the MinIO server requests a new data key for each uploaded object
@ -129,8 +137,6 @@ requests a new data key from the KMS using the master key ID of the current
MinIO KMS configuration and re-wraps the OEK with a new KEK derived from the new
data key / EK.
<Diagram to come>
Only the root MinIO user can perform an SSE-S3 key rotation using the Admin API via
the ``mc`` client. Refer to the ``mc admin guide`` <todo>

View File

@ -1,57 +0,0 @@
==============================================
Server-Side Encryption with Thales CipherTrust
==============================================
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
Paragraph summarizing SSE-S3 and Thales CipherTrust as a KMS.
Note that Gemalto KeySecure is now Thales CipherTrust.
Prerequisites
-------------
Thales CipherTrust Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
High-Level description of CipherTrust requirements:
- What access will the user need?
- What versions do we support?
MinIO Key Encryption Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
High-level description of KES requirements:
- A host for deploying at least one KES server
- For Kubernetes, at least one node with enough resources to run the server
MinIO Server
~~~~~~~~~~~~
High-level description of MinIO server requirements:
- ?
Procedure
---------
1) Configure CipherTrust Manager for MinIO Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Substeps:
1. Foo
2. Bar
2) Configure KES...
~~~~~~~~~~~~~~~~~~~

View File

@ -30,11 +30,6 @@ You can customize the certificate directory by passing the ``--certs-dir``
option to ``minio server``. The ``certs`` directory must also include any
intermediate certificates required to establish a chain of trust to the root CA.
Creating a Certificate for a MinIO Server
-----------------------------------------
This section includes guidance for creating a private key and public
certificate for a MinIO Server instance.
For MinIO deployments on Kubernetes, see the <future TLS kubernetes doc>
tutorial for more specific instructions.
For more information, see
:minio-git:`How to secure access to MinIO server with TLS
<minio/tree/master/docs/tls>`.

View File

@ -0,0 +1,346 @@
.. _minio-installation:
============
Installation
============
.. default-domain:: minio
.. contents:: Table of Contents
:local:
:depth: 2
MinIO is a high performance distributed object storage server, designed for
large-scale private cloud infrastructure. MinIO fully supports deployment onto
bare-metal hardware with or without containerization for process management.
Distributed Installation
------------------------
Distributed MinIO deployments consist of multiple ``minio`` servers with
one or more disks each. Distributed deployments are best suited for
staging and production environments.
MinIO *requires* using sequentially-numbered hostnames to represent each
``minio`` server in the deployment. For example, the following hostnames support
a 4-node distributed deployment:
- ``minio1.example.com``
- ``minio2.example.com``
- ``minio3.example.com``
- ``minio4.example.com``
Create the necessary DNS hostname mappings *prior* to starting this
procedure.
1\) Install the ``minio`` Server
Install the :program:`minio` server onto each host machine in the deployment.
Select the tab that corresponds to the host machine operating system or
environment:
.. include:: /includes/minio-server-installation.rst
2\) Add TLS/SSL Certificates (Optional)
Enable TLS/SSL connectivity to the MinIO server by specifying a private key
(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
- For Linux/MacOS: ``${HOME}/.minio/certs``
- For Windows: ``%USERPROFILE%\.minio\certs``
The MinIO server automatically enables TLS/SSL connectivity if it detects
the required certificates in the ``certs`` directory.
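As a sketch, staging the certificates on Linux/MacOS might look like the
following. The ``/path/to/your/...`` paths are placeholders for your actual key
and certificate files; MinIO expects the files to be named ``private.key`` and
``public.crt``:

```shell
# Create the default MinIO certs directory and stage the TLS files.
# Replace the commented paths with your actual key and certificate.
certs_dir="${HOME}/.minio/certs"
mkdir -p "${certs_dir}"
# cp /path/to/your/private.key "${certs_dir}/private.key"
# cp /path/to/your/public.crt  "${certs_dir}/public.crt"
ls -l "${certs_dir}"
```

You may need to restart the ``minio`` server for newly added or rotated
certificates to take effect.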
.. note::
The MinIO documentation makes a best-effort to provide generally applicable
and accurate information on TLS/SSL connectivity in the context of MinIO
products and services, and is not intended as a complete guide to the larger
topic of TLS/SSL certificate creation and management.
3\) Run the ``minio`` Server
Issue the following command on each host machine in the deployment. The
following example assumes that:
- The deployment has four host machines with sequential hostnames (i.e.
``minio1.example.com``, ``minio2.example.com``).
- Each host machine has *at least* four disks mounted at ``/data``. 4 disks
is the minimum required for :ref:`erasure coding <minio-erasure-coding>`.
.. code-block:: shell
:class: copyable
export MINIO_ACCESS_KEY=minio-admin
export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
minio server https://minio{1...4}.example.com/mnt/disk{1...4}/data
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ACCESS_KEY`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SECRET_KEY`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - ``https://minio{1...4}.example.com/``
- The DNS hostname of each server in the distributed deployment.
* - ``/mnt/disk{1...4}/data``
- The path to each disk on the host machine.
``/data`` is an optional folder in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
series. Specifically:
- The hostname ``https://minio{1...4}.example.com`` expands to:
- ``https://minio1.example.com``
- ``https://minio2.example.com``
- ``https://minio3.example.com``
- ``https://minio4.example.com``
- ``/mnt/disk{1...4}/data`` expands to
- ``/mnt/disk1/data``
- ``/mnt/disk2/data``
- ``/mnt/disk3/data``
- ``/mnt/disk4/data``
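For illustration only, a shell loop can print the full set of 16 endpoints that
the combined notation denotes. Note that MinIO performs this expansion itself:
pass the un-expanded ``{x...y}`` form to ``minio server`` rather than expanding
it in the shell.

```shell
# Print the endpoints that https://minio{1...4}.example.com/mnt/disk{1...4}/data
# denotes (4 hosts x 4 disks = 16 endpoints).
for host in 1 2 3 4; do
  for disk in 1 2 3 4; do
    echo "https://minio${host}.example.com/mnt/disk${disk}/data"
  done
done
```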
4\) Connect to the Server
Use the :mc-cmd:`mc alias set` command from a machine with connectivity to any
hostname running the ``minio`` server. See :ref:`mc-install` for documentation
on installing :program:`mc`.
.. code-block:: shell
:class: copyable
mc alias set mylocalminio https://minio1.example.com minio-admin minio-secret-key-CHANGE-ME
See :ref:`minio-mc-commands` for a list of commands you can run on the
MinIO server.
Docker Installation
-------------------
Stable MinIO
~~~~~~~~~~~~
The following ``docker`` command creates a container running the latest stable
version of the ``minio`` server process:
.. code-block:: shell
:class: copyable
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
-e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
-v /mnt/disk1:/disk1 \
-v /mnt/disk2:/disk2 \
-v /mnt/disk3:/disk3 \
-v /mnt/disk4:/disk4 \
minio/minio server /disk{1...4}
The command uses the following options:
- ``-e MINIO_ACCESS_KEY`` and ``-e MINIO_SECRET_KEY`` for configuring the
:ref:`root <minio-users-root>` user credentials.
- ``-v /mnt/disk<int>:/disk<int>`` for configuring each disk the ``minio``
server uses.
Bleeding Edge MinIO
~~~~~~~~~~~~~~~~~~~
*Do not use bleeding-edge deployments of MinIO in production environments.*
The following ``docker`` command creates a container running the latest
bleeding-edge version of the ``minio`` server process:
.. code-block:: shell
:class: copyable
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=ROOT_ACCESS_KEY" \
-e "MINIO_SECRET_KEY=SECRET_ACCESS_KEY_CHANGE_ME" \
-v /mnt/disk1:/disk1 \
-v /mnt/disk2:/disk2 \
-v /mnt/disk3:/disk3 \
-v /mnt/disk4:/disk4 \
minio/minio:edge server /disk{1...4}
The command uses the following options:
- ``-e MINIO_ACCESS_KEY`` and ``-e MINIO_SECRET_KEY`` for configuring the
:ref:`root <minio-users-root>` user credentials.
- ``-v /mnt/disk<int>:/disk<int>`` for configuring each disk the ``minio``
server uses.
Standalone Installation
-----------------------
Standalone MinIO deployments consist of a single ``minio`` server process with
one or more disks. Standalone deployments are best suited for local development
environments.
1\) Install the ``minio`` Server
Install the :program:`minio` server onto the host machine. Select the tab that
corresponds to the host machine operating system or environment:
.. include:: /includes/minio-server-installation.rst
2\) Add TLS/SSL Certificates (Optional)
Enable TLS/SSL connectivity to the MinIO server by specifying a private key
(``.key``) and public certificate (``.crt``) to the MinIO ``certs`` directory:
- For Linux/MacOS: ``${HOME}/.minio/certs``
- For Windows: ``%USERPROFILE%\.minio\certs``
The MinIO server automatically enables TLS/SSL connectivity if it detects
the required certificates in the ``certs`` directory.
.. note::
The MinIO documentation makes a best-effort to provide generally applicable
and accurate information on TLS/SSL connectivity in the context of MinIO
products and services, and is not intended as a complete guide to the larger
topic of TLS/SSL certificate creation and management.
3\) Run the ``minio`` Server
Issue the following command to start the :program:`minio` server. The following
example assumes the host machine has *at least* four disks, which is the minimum
required number of disks to enable :ref:`erasure coding <minio-erasure-coding>`:
.. code-block:: shell
:class: copyable
export MINIO_ACCESS_KEY=minio-admin
export MINIO_SECRET_KEY=minio-secret-key-CHANGE-ME
minio server /mnt/disk{1...4}/data
The example command breaks down as follows:
.. list-table::
:widths: 40 60
:width: 100%
* - :envvar:`MINIO_ACCESS_KEY`
- The access key for the :ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - :envvar:`MINIO_SECRET_KEY`
- The corresponding secret key to use for the
:ref:`root <minio-users-root>` user.
Replace this value with a unique, random, and long string.
* - ``/mnt/disk{1...4}/data``
- The path to each disk on the host machine.
``/data`` is an optional folder in which the ``minio`` server stores
all information related to the deployment.
See :mc-cmd:`minio server DIRECTORIES` for more information on
configuring the backing storage for the :mc:`minio server` process.
The command uses MinIO expansion notation ``{x...y}`` to denote a sequential
series. Specifically, ``/mnt/disk{1...4}/data`` expands to:
- ``/mnt/disk1/data``
- ``/mnt/disk2/data``
- ``/mnt/disk3/data``
- ``/mnt/disk4/data``
4\) Connect to the Server
Use the :mc-cmd:`mc alias set` command from a machine with connectivity to
the host running the ``minio`` server. See :ref:`mc-install` for documentation
on installing :program:`mc`.
.. code-block:: shell
:class: copyable
mc alias set mylocalminio http://192.0.2.10:9000 minio-admin minio-secret-key-CHANGE-ME
Replace the IP address and port with your ``minio`` server's endpoint.
See :ref:`minio-mc-commands` for a list of commands you can run on the
MinIO server.
Deployment Recommendations
--------------------------
Minimum Nodes per Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For all production deployments, MinIO recommends a *minimum* of 4 nodes per
cluster. MinIO deployments with *at least* 4 nodes can tolerate the loss of up
to half the nodes *or* half the disks in the deployment while maintaining
read and write availability.
For example, assuming a 4-node deployment with 4 drives per node, the
cluster can tolerate the loss of:
- Any two nodes, *or*
- Any 8 drives.
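The tolerance figures above follow from simple arithmetic, sketched here for
the 4-node, 4-drives-per-node example:

```shell
# Failure-tolerance arithmetic for a 4-node deployment with 4 drives per node.
NODES=4
DRIVES_PER_NODE=4
TOTAL_DRIVES=$((NODES * DRIVES_PER_NODE))
echo "Total drives: ${TOTAL_DRIVES}"                    # Total drives: 16
echo "Tolerable node failures: $((NODES / 2))"          # Tolerable node failures: 2
echo "Tolerable drive failures: $((TOTAL_DRIVES / 2))"  # Tolerable drive failures: 8
```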
The minimum recommendation reflects MinIO's experience with assisting enterprise
customers in deploying on a variety of IT infrastructures while
maintaining the desired SLA/SLO. While MinIO may run with fewer nodes than the
recommended minimum, any potential cost savings come at the risk of decreased
reliability.
Recommended Hardware
~~~~~~~~~~~~~~~~~~~~
For MinIO's recommended hardware, please see
`MinIO Reference Hardware <https://min.io/product/reference-hardware>`__.
Bare Metal Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~
A distributed MinIO deployment can only provide as much availability as the
bare metal infrastructure on which it is deployed. When configuring that
infrastructure, consider the following potential failure points, any of which
could result in cluster downtime:
- Shared networking resources (switches, routers, ISP).
- Shared power resources.
- Shared physical location (rack, datacenter, region).
MinIO deployments using virtual machines or containerized environments should
also consider the following:
- Shared physical hardware (CPU, Memory, Storage)
- Shared orchestration management layer (Kubernetes, Docker Swarm)
FreeBSD
-------
MinIO does not provide an official FreeBSD binary. FreeBSD maintains an
`upstream release <https://www.freshports.org/www/minio>`__ that you can
install using `pkg <https://github.com/freebsd/pkg>`__:
.. code-block:: shell
:class: copyable
pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/path/to/disks
service minio start
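For reference, the ``sysrc`` commands above persist entries equivalent to the
following in ``/etc/rc.conf``:

```shell
# /etc/rc.conf entries written by the sysrc commands above (illustrative).
minio_enable="yes"
minio_disks="/path/to/disks"
```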