Core Concepts — Read This First
Before touching a terminal, it helps to understand what MinIO is and why we set it up this way. This section explains the key ideas in plain English.
What is MinIO?
MinIO is an open-source, high-performance object storage server. Think of it like Amazon S3, but one you host yourself. You can store files (called objects) in buckets, and access them over HTTP using the S3 API.
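To make the bucket/object model concrete, here is a hedged sketch using MinIO's `mc` client. The alias name, endpoint, and credentials are placeholders you would substitute for your own:

```shell
# Point an alias at a MinIO server (endpoint and credentials are placeholders)
mc alias set myminio http://minio.example.com:9000 ACCESS_KEY SECRET_KEY

# Create a bucket, upload a file as an object, and list the bucket
mc mb myminio/backups
mc cp ./database.dump myminio/backups/
mc ls myminio/backups
```

Any S3-compatible SDK or tool works the same way, since MinIO speaks the S3 API over HTTP.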
What is a Cold Cluster vs a Hot Cluster?
❄️ Cold Cluster
Uses HDD or SAN disks. Best for backups, archives, and compliance storage. Cheaper, but slower. Data is rarely accessed.
⚡ Hot Cluster
Uses SSD or NVMe disks. Best for apps that need fast, frequent access. More expensive, but very low latency.
Why 4 Nodes? — Erasure Coding Explained
MinIO uses erasure coding instead of simple replication. Instead of keeping full copies of every object (the way RAID-1 mirroring or 3× replication does), it splits data into data and parity chunks and spreads them across nodes. This is more space-efficient while still surviving failures.
| Setup | Min Nodes | Can Lose | Storage Efficiency |
|---|---|---|---|
| Simple Replication (3×) | 1+ | 2 copies | 33% usable |
| MinIO Erasure Coding | 4 | 1–2 nodes | ~50–75% usable |
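The efficiency figures above are simple arithmetic: with N total shards of which P are parity, usable capacity is (N − P)/N. Assuming 4 drives with 2 parity shards (MinIO picks the parity count automatically; EC:2 is a typical choice for a 4-drive set):

```shell
# Storage efficiency for erasure coding: usable % = 100 * (data shards / total shards)
TOTAL=4   # 4 nodes x 1 drive each
PARITY=2  # assumed parity shard count (MinIO chooses this automatically)
echo "Usable capacity: $(( 100 * (TOTAL - PARITY) / TOTAL ))%"
```

With 3× replication the same arithmetic gives 1/3 ≈ 33% usable, which is why erasure coding wins on efficiency.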
Never use the nodes' root disk (/) for storage. You must attach a separate disk to each node and mount it (e.g. at /mnt/san or /mnt/ssd).
Prerequisites
Make sure you have all of the following ready before starting. Going in unprepared is the #1 cause of setup failures.
- 9 servers total: 4 for cold cluster + 4 for hot cluster + 1 for HAProxy (load balancer)
- Separate disks attached to each of the 8 storage nodes (not the root disk)
- Cold nodes: HDD or SAN volume, mounted at /mnt/san on all 4 nodes
- Hot nodes: SSD or NVMe volume, mounted at /mnt/ssd on all 4 nodes
- SSH access to all 9 servers
- Network connectivity between all nodes (they must be able to reach each other)
- Linux (Ubuntu/Debian recommended) on all servers
- Internet access on all nodes (to download MinIO)
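Before proceeding, it is worth verifying that the data disks really are separate mount points and not just directories on the root disk. A small check, using the mount paths this guide assumes:

```shell
# Warn if a path is not its own mounted filesystem (i.e. it is just a
# directory on the root disk). findmnt ships with util-linux on Linux.
check_mount() {
  if findmnt -n "$1" >/dev/null 2>&1; then
    echo "OK: $1 is a mounted filesystem"
  else
    echo "MISSING: $1 is not a mount point, attach and mount a disk first"
  fi
}
check_mount /mnt/san   # run on cold nodes
check_mount /mnt/ssd   # run on hot nodes
```

Run the appropriate line on each storage node; every node should report OK before you continue.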
Connect to All Nodes
Open 9 terminal tabs and SSH into each server. You'll be running the same commands on multiple nodes simultaneously.
# Cold Cluster Nodes
ssh user@cold-node1
ssh user@cold-node2
ssh user@cold-node3
ssh user@cold-node4
# Hot Cluster Nodes
ssh user@hot-node1
ssh user@hot-node2
ssh user@hot-node3
ssh user@hot-node4
# HAProxy Load Balancer
ssh user@haproxy-node
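Alternatively, a small loop can fan one command out to every storage node over SSH. This is only a sketch: the hostnames follow this guide's naming, and the actual `ssh` call is left commented so you can adapt the user and options first:

```shell
#!/bin/sh
# Run the same command on all 8 storage nodes (hostnames assumed from this guide)
NODES="cold-node1 cold-node2 cold-node3 cold-node4 hot-node1 hot-node2 hot-node3 hot-node4"
CMD="uptime"
for host in $NODES; do
  echo "== $host =="
  # ssh "user@$host" "$CMD"   # uncomment once SSH access is confirmed
done
```

Tools like tmux synchronized panes or parallel-ssh achieve the same effect interactively.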
Architecture Notes
The diagram above shows the full request path. HAProxy acts as the front door, then routes requests to either the cold cluster or the hot cluster based on the service port.
Ports 9000 and 9001 map to the cold cluster, while 9100 and 9101 map to the hot cluster. Each side uses four MinIO nodes with separate disks and erasure coding for resilience.
Cold Cluster Setup (HDD / SAN)
Run all commands in this section on all 4 cold cluster nodes unless stated otherwise.
This guide assumes the disk is mounted at /mnt/san on all 4 cold nodes. Adjust the path if your mount point is different.
Create a Dedicated MinIO System User
MinIO should never run as root. We create a locked-down system user called minio-user that has no login shell and cannot be used interactively. Then we give it ownership of the storage disk.
# Create system user (no login shell, no home directory)
sudo useradd -r -s /sbin/nologin minio-user
# Give minio-user full ownership of the storage mount
sudo chown -R minio-user:minio-user /mnt/san
The -r flag creates a system account (lower UID range), and -s /sbin/nologin prevents anyone from logging in interactively as this user; it's a security best practice.
Download and Install the MinIO Binary
We download the MinIO server binary directly from the official MinIO CDN, move it to /usr/local/bin so it's available system-wide, and make it executable.
# Download the MinIO server binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
# Move it to a system-wide location
sudo mv minio /usr/local/bin/
# Make it executable
sudo chmod +x /usr/local/bin/minio
# Verify it works
minio --version
Create the Environment Configuration File
This file stores MinIO's credentials and URL settings. It is read by the systemd service at startup. Create it on all 4 cold nodes with the same content.
sudo vim /etc/default/minio
Paste the following into the file:
# MinIO Admin Credentials (change these in production!)
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=password@xyz
# Replace with your HAProxy IP, or cold-node1 IP if no HAProxy yet
MINIO_SERVER_URL=http://<haproxy-ip>:9000
MINIO_BROWSER_REDIRECT_URL=http://<haproxy-ip>:9001
Replace <haproxy-ip> with your actual HAProxy server IP. If you haven't set up HAProxy yet, temporarily use cold-node1's IP; you can update this later.
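For the production credentials, one option is to generate a random password with openssl (available on most distributions) and paste it into MINIO_ROOT_PASSWORD:

```shell
# 24 random bytes, base64-encoded: a ~32-character password for MINIO_ROOT_PASSWORD
openssl rand -base64 24
```

Use the same credentials on all nodes of a cluster; they must match for the nodes to form a quorum.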
Create the Systemd Service File
Systemd manages the MinIO process — starting it on boot, restarting it on failure, and running it as the right user. This exact same file goes on all 4 cold nodes.
sudo vim /etc/systemd/system/minio.service
Paste the following (replace node1–node4 with actual IPs or hostnames):
[Unit]
Description=MinIO
After=network.target
[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server \
    http://cold-node1:9000/mnt/san \
    http://cold-node2:9000/mnt/san \
    http://cold-node3:9000/mnt/san \
    http://cold-node4:9000/mnt/san \
    --address :9000 \
    --console-address :9001
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Replace cold-node1 through cold-node4 with the actual IP addresses or hostnames of your 4 cold cluster nodes. The same file, with all 4 node addresses, goes on every node.
Configure Firewall (UFW)
If UFW (Uncomplicated Firewall) is enabled, you must open ports 9000 and 9001 so MinIO can receive connections.
# Check if UFW is active
sudo ufw status
# If it shows "active", run these:
sudo ufw allow 9000/tcp # MinIO API
sudo ufw allow 9001/tcp # MinIO Console
sudo ufw reload
Enable, Start, and Verify MinIO
Now we reload systemd (so it picks up the new service file), enable MinIO to start on boot, and start it now.
# Reload systemd to pick up the new service file
sudo systemctl daemon-reload
# Enable MinIO to start automatically on boot
sudo systemctl enable minio
# Start MinIO now
sudo systemctl start minio
# Check that it's running
sudo systemctl status minio
# View the last 50 log lines if something seems wrong
journalctl -u minio -n 50 --no-pager
You should see "active (running)" in the output of systemctl status. If you see "failed" instead, check the logs with journalctl -u minio -n 100 --no-pager for the error message.
Once all 4 nodes are running, you can access the cold cluster console from any browser:
# Cold Cluster Console (any of these work)
http://cold-node1:9001
http://cold-node2:9001
http://cold-node3:9001
http://cold-node4:9001
# Cold Cluster API Endpoint
http://cold-node1:9000
If you connect directly to cold-node1 and that node goes down, your browser connection will fail, even though the data is safe. This is why we set up HAProxy next.
Hot Cluster Setup (SSD / NVMe)
The hot cluster setup is almost identical to the cold cluster. The only differences are: the disk mount path is /mnt/ssd, and the cluster uses the same ports (9000/9001) internally — HAProxy will separate them externally on ports 9100/9101.
This guide assumes the disk is mounted at /mnt/ssd on all 4 hot nodes.
Create a Dedicated MinIO System User
sudo useradd -r -s /sbin/nologin minio-user
sudo chown -R minio-user:minio-user /mnt/ssd
Download and Install the MinIO Binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
sudo mv minio /usr/local/bin/
sudo chmod +x /usr/local/bin/minio
Create the Environment Configuration File
sudo vim /etc/default/minio
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=password@xyz
# Hot cluster uses HAProxy ports 9100 / 9101
MINIO_SERVER_URL=http://<haproxy-ip>:9100
MINIO_BROWSER_REDIRECT_URL=http://<haproxy-ip>:9101
Create the Systemd Service File
sudo vim /etc/systemd/system/minio.service
[Unit]
Description=MinIO
After=network.target
[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server \
    http://hot-node1:9000/mnt/ssd \
    http://hot-node2:9000/mnt/ssd \
    http://hot-node3:9000/mnt/ssd \
    http://hot-node4:9000/mnt/ssd \
    --address :9000 \
    --console-address :9001
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
The hot cluster nodes still bind to :9000 and :9001 internally. HAProxy will expose them to the outside world on ports 9100 and 9101 to avoid conflicts with the cold cluster.
Configure Firewall (UFW)
sudo ufw allow 9000/tcp
sudo ufw allow 9001/tcp
sudo ufw reload
Enable, Start, and Verify MinIO
sudo systemctl daemon-reload
sudo systemctl enable minio
sudo systemctl start minio
sudo systemctl status minio
journalctl -u minio -n 50 --no-pager
HAProxy Load Balancer Setup
All commands in this section run on the HAProxy server only (the 9th server).
HAProxy solves the access HA problem. Instead of connecting directly to a node (which can go down), you connect to HAProxy, which always picks a healthy node behind the scenes.
Install HAProxy
# Debian/Ubuntu
sudo apt update
sudo apt install haproxy -y
# RHEL/CentOS
sudo yum install haproxy -y
Configure HAProxy
Open the HAProxy configuration file and replace its entire contents with the config below. This sets up four listeners: cold API, cold console, hot API, hot console — each on a separate port to avoid conflicts.
sudo vim /etc/haproxy/haproxy.cfg
# ── Global Settings ─────────────────────────────────────
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 4096

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5s
    timeout client  60s
    timeout server  60s

# ═══════════════════════════════════════════════════════
# ❄️ COLD CLUSTER
# ═══════════════════════════════════════════════════════

# Cold Cluster — API (port 9000)
frontend cold_minio_api_frontend
    bind *:9000
    mode http
    default_backend cold_minio_api_backend

backend cold_minio_api_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server cold-node1 COLD-NODE1-IP:9000 check
    server cold-node2 COLD-NODE2-IP:9000 check
    server cold-node3 COLD-NODE3-IP:9000 check
    server cold-node4 COLD-NODE4-IP:9000 check

# Cold Cluster — Console (port 9001)
frontend cold_minio_console_frontend
    bind *:9001
    mode http
    default_backend cold_minio_console_backend

backend cold_minio_console_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server cold-node1 COLD-NODE1-IP:9001 check
    server cold-node2 COLD-NODE2-IP:9001 check
    server cold-node3 COLD-NODE3-IP:9001 check
    server cold-node4 COLD-NODE4-IP:9001 check

# ═══════════════════════════════════════════════════════
# ⚡ HOT CLUSTER
# ═══════════════════════════════════════════════════════

# Hot Cluster — API (port 9100)
frontend hot_minio_api_frontend
    bind *:9100
    mode http
    default_backend hot_minio_api_backend

backend hot_minio_api_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server hot-node1 HOT-NODE1-IP:9000 check
    server hot-node2 HOT-NODE2-IP:9000 check
    server hot-node3 HOT-NODE3-IP:9000 check
    server hot-node4 HOT-NODE4-IP:9000 check

# Hot Cluster — Console (port 9101)
frontend hot_minio_console_frontend
    bind *:9101
    mode http
    default_backend hot_minio_console_backend

backend hot_minio_console_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server hot-node1 HOT-NODE1-IP:9001 check
    server hot-node2 HOT-NODE2-IP:9001 check
    server hot-node3 HOT-NODE3-IP:9001 check
    server hot-node4 HOT-NODE4-IP:9001 check
Replace COLD-NODE1-IP through HOT-NODE4-IP with the actual IP addresses of your nodes. Do not use hostnames unless DNS resolution is fully configured.
The option httpchk line tells HAProxy to periodically poll each MinIO node's health endpoint so it can detect failures and stop routing traffic to dead nodes.
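You can hit the same endpoint yourself to see exactly what HAProxy's health check sees (substitute a real node IP before running):

```shell
# A healthy MinIO node answers the liveness probe with HTTP 200 and an empty body
curl -i http://COLD-NODE1-IP:9000/minio/health/live
```

If this returns anything other than 200, HAProxy will mark the node down and route around it.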
Open Firewall Ports on HAProxy Server
sudo ufw allow 9000/tcp # Cold API
sudo ufw allow 9001/tcp # Cold Console
sudo ufw allow 9100/tcp # Hot API
sudo ufw allow 9101/tcp # Hot Console
sudo ufw reload
Start and Enable HAProxy
# Validate config before starting (catches typos)
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Restart HAProxy to apply new config
sudo systemctl restart haproxy
# Enable it to start on boot
sudo systemctl enable haproxy
# Confirm it's running
sudo systemctl status haproxy
Final Access Endpoints
With HAProxy running, you now have a single IP address for all cluster access. Use these URLs in your applications and browser.
❄️ Cold Cluster: API http://<haproxy-ip>:9000 | Console http://<haproxy-ip>:9001
⚡ Hot Cluster: API http://<haproxy-ip>:9100 | Console http://<haproxy-ip>:9101
Username: minioadmin | Password: password@xyz (as set in the env file; change this in production!)
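Applications talk to these endpoints with any S3 client by overriding the endpoint URL. A sketch with the AWS CLI; the bucket name is a placeholder, and you would substitute your HAProxy IP and real credentials:

```shell
# Point the AWS CLI at the cold cluster through HAProxy
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY='password@xyz'
aws --endpoint-url http://<haproxy-ip>:9000 s3 mb s3://archive
aws --endpoint-url http://<haproxy-ip>:9000 s3 cp backup.tar.gz s3://archive/
```

Swap port 9000 for 9100 to target the hot cluster instead.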
Port Reference Summary
| Port | Cluster | Purpose | Access Via |
|---|---|---|---|
| 9000 | ❄️ Cold | S3-compatible API | HAProxy IP |
| 9001 | ❄️ Cold | Web Console | HAProxy IP |
| 9100 | ⚡ Hot | S3-compatible API | HAProxy IP |
| 9101 | ⚡ Hot | Web Console | HAProxy IP |
What You've Built
- 4-node distributed cold cluster with erasure coding, backed by HDD/SAN storage
- 4-node distributed hot cluster with erasure coding, backed by SSD/NVMe storage
- Both clusters tolerate losing 1–2 nodes without data loss
- HAProxy load balancer providing a single HA endpoint for both clusters
- Automatic failover — if a node goes down, HAProxy routes around it
- Both clusters run as a secure system user (not root)
- Systemd services that auto-restart on crash and start on boot