mirror of https://github.com/minio/docs.git synced 2025-07-28 19:42:10 +03:00

DOCS-625: Removing FS Mode guidance, cleanups to deploy docs (#626)

Closes #625
This commit is contained in:
Ravind Kumar
2022-10-31 14:21:44 -04:00
committed by GitHub
parent f016fdb219
commit 64ca9697cf
5 changed files with 35 additions and 88 deletions

@@ -11,33 +11,11 @@ Deploy MinIO: Multi-Node Multi-Drive
:local:
:depth: 1
Overview
--------
The procedures on this page cover deploying MinIO in a Multi-Node Multi-Drive (MNMD) or "Distributed" configuration.
|MNMD| deployments provide enterprise-grade performance, availability, and scalability and are the recommended topology for all production workloads.
A distributed MinIO deployment consists of 4 or more drives/volumes managed by
one or more :mc:`minio server` processes, which pool the compute and storage
resources of all nodes into a single aggregated object storage resource.
Each MinIO server has a complete picture of the distributed topology, such that
an application can connect to any node in the deployment and perform S3
operations.
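As a sketch, a four-node distributed deployment with four drives per node might be started by running the same command on every node. The hostnames and mount paths below are placeholders for illustration, not values from this document:

```shell
# Run this identical command on each of the four nodes.
# MinIO's {x...y} ellipsis notation expands both the hostnames and the
# drive paths; node{1...4}.example.net and /mnt/drive{1...4} are hypothetical.
minio server http://node{1...4}.example.net/mnt/drive{1...4} \
  --console-address ":9001"
```

Each node must be able to resolve and reach the other hostnames, and every node must use the same command-line arguments so that all processes agree on the deployment topology.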
Distributed deployments implicitly enable :ref:`erasure coding
<minio-erasure-coding>`, MinIO's data redundancy and availability feature that
allows deployments to automatically reconstruct objects on-the-fly despite the
loss of multiple drives or nodes in the cluster. Erasure coding provides
object-level healing with less overhead than adjacent technologies such as RAID
or replication.
Depending on the configured :ref:`erasure code parity <minio-ec-parity>`, a
distributed deployment with ``m`` servers and ``n`` drives per server can
continue serving read and write operations with only ``m/2`` servers or
``m*n/2`` drives online and accessible.
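As a concrete illustration of the ``m*n/2`` figure above, consider a hypothetical deployment of 4 servers with 4 drives each:

```shell
# Hypothetical worked example: m = 4 servers, n = 4 drives per server.
m=4
n=4
total=$((m * n))       # 16 drives in the deployment
half=$((total / 2))    # m*n/2 = 8 drives must remain online and accessible
echo "total=$total minimum_online=$half"
# prints: total=16 minimum_online=8
```

In other words, this topology can continue serving operations with up to 8 of its 16 drives (or 2 of its 4 servers) offline, subject to the configured erasure code parity.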
Distributed deployments also support the following features:
- :ref:`Server-Side Object Replication <minio-bucket-replication-serverside>`
- :ref:`Write-Once Read-Many Locking <minio-bucket-locking>`
- :ref:`Object Versioning <minio-bucket-versioning>`
|MNMD| deployments support :ref:`erasure coding <minio-ec-parity>` configurations which tolerate the loss of up to half the nodes or drives in the deployment while continuing to serve read operations.
Use the MinIO `Erasure Code Calculator <https://min.io/product/erasure-code-calculator?ref=docs>`__ when planning and designing your MinIO deployment to explore the effect of erasure code settings on your intended topology.
.. _deploy-minio-distributed-prereqs:

@@ -11,11 +11,11 @@ Deploy MinIO: Single-Node Multi-Drive
:depth: 2
The procedures on this page cover deploying MinIO in a Single-Node Multi-Drive (SNMD) configuration.
This topology provides increased drive-level reliability and failover protection as compared to :ref:`Single-Node Single-Drive (SNSD) deployments <minio-snsd>`.
|SNMD| deployments provide drive-level reliability and failover/recovery with performance and scaling limitations imposed by the single node.
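A minimal |SNMD| startup might look like the following, assuming four locally attached drives; the mount paths are placeholders for your actual mount points:

```shell
# Hypothetical single node with four locally attached drives.
# The {1...4} ellipsis notation expands to /mnt/drive1 ... /mnt/drive4.
minio server /mnt/drive{1...4} --console-address ":9001"
```

Because all drives sit behind one node, this topology survives individual drive failures but not the loss of the node itself.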
.. cond:: linux or macos or windows
For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology.
For production environments, MinIO strongly recommends deploying with the :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology for enterprise-grade performance, availability, and scalability.
.. cond:: container

@@ -11,7 +11,7 @@ Deploy MinIO: Single-Node Single-Drive
:depth: 2
The procedures on this page cover deploying MinIO in a Single-Node Single-Drive (SNSD) configuration for early development and evaluation.
This mode was previously called :guilabel:`Standalone Mode` or 'filesystem' mode.
|SNSD| deployments provide no added reliability or availability beyond what the underlying storage volume implements (RAID, LVM, ZFS, etc.).
Starting with :minio-release:`RELEASE.2022-06-02T02-11-04Z`, MinIO implements a zero-parity erasure coded backend for single-node single-drive deployments.
This feature allows access to :ref:`erasure coding dependent features <minio-erasure-coding>` without the requirement of multiple drives.
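An |SNSD| deployment requires only a single path argument; the path below is a placeholder, and the zero-parity erasure coded backend is applied automatically on the releases described above:

```shell
# Hypothetical single drive or folder path for evaluation use.
minio server /mnt/data --console-address ":9001"
```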
@@ -27,6 +27,13 @@ See the documentation on :ref:`SNSD behavior with pre-existing data <minio-snsd-
For extended development or production environments, deploy MinIO in a :ref:`Multi-Node Multi-Drive (Distributed) <minio-mnmd>` topology.
.. important::
:minio-release:`RELEASE.2022-10-29T06-21-33Z` fully removes the `deprecated Gateway/Filesystem <https://blog.min.io/deprecation-of-the-minio-gateway/>`__ backends.
If MinIO detects existing Filesystem-backend files at startup, it returns an error and does not start.
To migrate from an FS-backend deployment, use :mc:`mc mirror` or :mc:`mc cp` to copy your data over to a new MinIO |SNSD| deployment.
You should also recreate any necessary users, groups, policies, and bucket configurations on the |SNSD| deployment.
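The migration steps above might be sketched as follows. The aliases, endpoints, credentials, and bucket name are all hypothetical placeholders, not values from this document:

```shell
# Hypothetical aliases: 'old' points at the legacy FS-mode deployment,
# 'new' at the replacement SNSD deployment. Credentials are placeholders.
mc alias set old http://old-minio.example.net:9000 OLDUSER OLDSECRET
mc alias set new http://new-minio.example.net:9000 NEWUSER NEWSECRET

# Copy the contents of each source bucket into the new deployment.
mc mirror old/mybucket new/mybucket
```

Repeat the :mc:`mc mirror` step for each bucket, then recreate users, groups, policies, and bucket configurations on the new deployment, since :mc:`mc mirror` copies object data only.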
.. _minio-snsd-pre-existing-data:
@@ -51,12 +58,14 @@ The following table lists the possible storage volume states and MinIO behavior:
* - Existing |SNSD| zero-parity objects and MinIO backend data
- MinIO resumes in |SNSD| mode
* - Existing filesystem folders, files, and MinIO backend data
- MinIO resumes in the legacy filesystem ("Standalone") mode with no erasure-coding features
* - Existing filesystem folders, files, but **no** MinIO backend data
- MinIO returns an error and does not start
* - Existing filesystem folders, files, and legacy "FS-mode" backend data
- MinIO returns an error and does not start
.. versionchanged:: RELEASE.2022-10-29T06-21-33Z
.. _deploy-minio-standalone:
Deploy Single-Node Single-Drive MinIO