MinIO Architecture Playbook

End-to-End MinIO Architecture: Cold Storage, Hot Storage & High Availability

A complete walkthrough for designing distributed MinIO storage with lifecycle-aware cold tiers, high-speed hot tiers, erasure coding, and resilient cluster availability.

4-Node Cluster · Erasure Coding · HAProxy LB · Systemd Service · UFW Firewall

Architecture Diagram

This is the end-to-end MinIO layout for the article: client traffic flows through HAProxy, then splits into dedicated cold and hot clusters with separate storage tiers and high availability.

💡
SECTION 01

Core Concepts — Read This First

Before touching a terminal, it helps to understand what MinIO is and why we set it up this way. This section explains the key ideas in plain English.

What is MinIO?

MinIO is an open-source, high-performance object storage server. Think of it like Amazon S3, but one you host yourself. You can store files (called objects) in buckets, and access them over HTTP using the S3 API.
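As a hedged sketch of that S3-style workflow, the commands below use the MinIO Client (mc). The alias name, bucket name, and file path are illustrative, and `<haproxy-ip>` is the same placeholder used throughout this article — substitute your real endpoint and credentials.

```bash
# Illustrative only — alias, bucket, and file names are assumptions.
mc alias set demo http://<haproxy-ip>:9000 minioadmin 'password@xyz'   # register the server
mc mb demo/backups                  # create a bucket
mc cp ./report.pdf demo/backups/    # upload a file as an object
mc ls demo/backups                  # list objects over the S3 API
```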

What is a Cold Cluster vs a Hot Cluster?

❄️ Cold Cluster

Uses HDD or SAN disks. Best for backups, archives, and compliance storage. Cheaper, but slower. Data is rarely accessed.

⚡ Hot Cluster

Uses SSD or NVMe disks. Best for apps that need fast, frequent access. More expensive, but very low latency.

🎯
Real-world analogy Think of cold storage like a warehouse — cheap, lots of space, but you don't go there every day. Hot storage is like your desk drawer — quick to reach, but limited space.

Why 4 Nodes? — Erasure Coding Explained

MinIO uses Erasure Coding instead of simple replication. Rather than keeping full extra copies of every object (the way RAID-1 mirroring or 3× replication does), it splits each object into data and parity chunks and spreads them across nodes. This gives comparable fault tolerance with far less storage overhead.

Setup                     Min Nodes   Can Lose    Storage Efficiency
Simple Replication (3×)   1           2 copies    33% usable
MinIO Erasure Coding      4           1–2 nodes   ~50–75% usable
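The efficiency column follows from simple arithmetic. The sketch below uses a simplified model (not MinIO's exact internals): with N drives and P parity shards per stripe, the usable fraction is (N − P) / N.

```bash
# Back-of-envelope erasure-coding math (simplified model).
n_drives=4
parity=2   # EC:2 — a common default for small clusters (assumption)
usable_pct=$(( (n_drives - parity) * 100 / n_drives ))
echo "EC:${parity} on ${n_drives} drives -> ${usable_pct}% usable"
```

With 4 drives and 2 parity shards this yields 50% usable, matching the low end of the table; lower parity (EC:1) would push efficiency toward 75%.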
⚠️
Critical Rule — No Root Disk! MinIO will refuse to start if you try to use the OS root disk (/) for storage. You must attach a separate disk to each node and mount it (e.g. at /mnt/san or /mnt/ssd).
📋
SECTION 02

Prerequisites

Make sure you have all of the following ready before starting. Going in unprepared is the #1 cause of setup failures.

  • 9 servers total: 4 for cold cluster + 4 for hot cluster + 1 for HAProxy (load balancer)
  • Separate disks attached to each of the 8 storage nodes (not the root disk)
  • Cold nodes: HDD or SAN volume, mounted at /mnt/san on all 4 nodes
  • Hot nodes: SSD or NVMe volume, mounted at /mnt/ssd on all 4 nodes
  • SSH access to all 9 servers
  • Network connectivity between all nodes (they must be able to reach each other)
  • Linux (Ubuntu/Debian recommended) on all servers
  • Internet access on all nodes (to download MinIO)

Connect to All Nodes

Open 9 terminal tabs and SSH into each server. You'll be running the same commands on multiple nodes simultaneously.

bash
# Cold Cluster Nodes
ssh user@cold-node1
ssh user@cold-node2
ssh user@cold-node3
ssh user@cold-node4

# Hot Cluster Nodes
ssh user@hot-node1
ssh user@hot-node2
ssh user@hot-node3
ssh user@hot-node4

# HAProxy Load Balancer
ssh user@haproxy-node
🗺️
SECTION 03

Architecture Notes

The diagram above shows the full request path. HAProxy acts as the front door, then routes requests to either the cold cluster or the hot cluster based on the service port.

🎯
Reading the diagram Ports 9000 and 9001 map to the cold cluster, while 9100 and 9101 map to the hot cluster. Each side uses four MinIO nodes with separate disks and erasure coding for resilience.

❄️
SECTION 04

Cold Cluster Setup (HDD / SAN)

Run all commands in this section on all 4 cold cluster nodes unless stated otherwise.

ℹ️
Assumption Your separate disk is already formatted and mounted at /mnt/san on all 4 cold nodes. Adjust the path if your mount point is different.
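If the disk is not mounted yet, here is a hedged prep sketch. The device name /dev/sdb is an assumption — confirm yours with lsblk before running anything destructive.

```bash
# DANGER: mkfs erases the disk. /dev/sdb is an assumed device name —
# verify with lsblk first. MinIO's docs recommend XFS for storage drives.
lsblk                                    # identify the spare disk
sudo mkfs.xfs /dev/sdb                   # format it (destructive!)
sudo mkdir -p /mnt/san
sudo mount /dev/sdb /mnt/san
# Persist the mount across reboots:
echo '/dev/sdb /mnt/san xfs defaults 0 2' | sudo tee -a /etc/fstab
```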
01

Create a Dedicated MinIO System User

MinIO should never run as root. We create a locked-down system user called minio-user that has no login shell and cannot be used interactively. Then we give it ownership of the storage disk.

bash — all 4 cold nodes
# Create system user (no login shell, no home directory)
sudo useradd -r -s /sbin/nologin minio-user

# Give minio-user full ownership of the storage mount
sudo chown -R minio-user:minio-user /mnt/san
💡
Why -r and -s /sbin/nologin? -r creates a system account (lower UID range). -s /sbin/nologin prevents anyone from logging in as this user — it's a security best practice.
02

Download and Install the MinIO Binary

We download the MinIO server binary directly from the official MinIO CDN, move it to /usr/local/bin so it's available system-wide, and make it executable.

bash — all 4 cold nodes
# Download the MinIO server binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio

# Move it to a system-wide location
sudo mv minio /usr/local/bin/

# Make it executable
sudo chmod +x /usr/local/bin/minio

# Verify it works
minio --version
03

Create the Environment Configuration File

This file stores MinIO's credentials and URL settings. It is read by the systemd service at startup. Create it on all 4 cold nodes with the same content.

bash — all 4 cold nodes
sudo vim /etc/default/minio

Paste the following into the file:

/etc/default/minio
# MinIO Admin Credentials (change these in production!)
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=password@xyz

# Replace with your HAProxy IP, or cold-node1 IP if no HAProxy yet
MINIO_SERVER_URL=http://<haproxy-ip>:9000
MINIO_BROWSER_REDIRECT_URL=http://<haproxy-ip>:9001
⚠️
Replace <haproxy-ip> with your actual HAProxy server IP. If you haven't set up HAProxy yet, temporarily use cold-node1's IP — you can update this later.
04

Create the Systemd Service File

Systemd manages the MinIO process — starting it on boot, restarting it on failure, and running it as the right user. This exact same file goes on all 4 cold nodes.

bash — all 4 cold nodes
sudo vim /etc/systemd/system/minio.service

Paste the following (replace cold-node1 through cold-node4 with actual IPs or hostnames):

/etc/systemd/system/minio.service
[Unit]
Description=MinIO
After=network.target

[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server \
  http://cold-node1:9000/mnt/san \
  http://cold-node2:9000/mnt/san \
  http://cold-node3:9000/mnt/san \
  http://cold-node4:9000/mnt/san \
  --address :9000 \
  --console-address :9001
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
⚠️
Replace cold-node1 through cold-node4 with the actual IP addresses or hostnames of your 4 cold cluster nodes. The same file with all 4 node addresses goes on every node.
💡
Why LimitNOFILE=65536? MinIO opens many file descriptors simultaneously. This raises the OS limit to prevent "too many open files" errors under load.
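You can check the limits in play with the commands below; the `systemctl show` line only returns the unit's value on a node where the service is actually installed.

```bash
# Soft file-descriptor limit of the current shell:
ulimit -n
# Value systemd will grant the MinIO unit (prints a note if not installed):
systemctl show minio -p LimitNOFILE 2>/dev/null || echo "minio unit not found on this host"
```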
05

Configure Firewall (UFW)

If UFW (Uncomplicated Firewall) is enabled, you must open ports 9000 and 9001 so MinIO can receive connections.

bash — all 4 cold nodes
# Check if UFW is active
sudo ufw status

# If it shows "active", run these:
sudo ufw allow 9000/tcp   # MinIO API
sudo ufw allow 9001/tcp   # MinIO Console
sudo ufw reload
06

Enable, Start, and Verify MinIO

Now we reload systemd (so it picks up the new service file), enable MinIO to start on boot, and start it now.

bash — all 4 cold nodes
# Reload systemd to pick up the new service file
sudo systemctl daemon-reload

# Enable MinIO to start automatically on boot
sudo systemctl enable minio

# Start MinIO now
sudo systemctl start minio

# Check that it's running
sudo systemctl status minio

# View the last 50 log lines if something seems wrong
journalctl -u minio -n 50 --no-pager
💡
What does "active (running)" mean? That's the green output from systemctl status. If you see "failed" instead, check the logs with journalctl -u minio -n 100 --no-pager for the error message.

Once all 4 nodes are running, you can access the cold cluster console from any browser:

urls
# Cold Cluster Console (any of these work)
http://cold-node1:9001
http://cold-node2:9001
http://cold-node3:9001
http://cold-node4:9001

# Cold Cluster API Endpoint
http://cold-node1:9000
⚠️
Data is HA, but access is not yet! If you access via cold-node1 and that node goes down, your browser connection will fail — even though the data is safe. This is why we set up HAProxy next.
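You can watch this happen by probing each node's liveness endpoint directly (the same endpoint HAProxy will use later). The hostnames below are the placeholders from this article — replace them with your real node IPs. An HTTP code of 000 means the node is unreachable.

```bash
# Probe each cold node's liveness endpoint.
for node in cold-node1 cold-node2 cold-node3 cold-node4; do
  code=$(curl -s --max-time 2 -o /dev/null -w '%{http_code}' \
    "http://${node}:9000/minio/health/live" || true)
  echo "${node}: HTTP ${code}"
done
```

A healthy node answers 200; stop MinIO on one node and its line flips to 000 while the data stays readable through the others.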

SECTION 05

Hot Cluster Setup (SSD / NVMe)

The hot cluster setup is almost identical to the cold cluster. The only differences are: the disk mount path is /mnt/ssd, and the cluster uses the same ports (9000/9001) internally — HAProxy will separate them externally on ports 9100/9101.

ℹ️
Assumption Your SSD/NVMe disk is already formatted and mounted at /mnt/ssd on all 4 hot nodes.
01

Create a Dedicated MinIO System User

bash — all 4 hot nodes
sudo useradd -r -s /sbin/nologin minio-user
sudo chown -R minio-user:minio-user /mnt/ssd
02

Download and Install the MinIO Binary

bash — all 4 hot nodes
wget https://dl.min.io/server/minio/release/linux-amd64/minio
sudo mv minio /usr/local/bin/
sudo chmod +x /usr/local/bin/minio
03

Create the Environment Configuration File

bash — all 4 hot nodes
sudo vim /etc/default/minio
/etc/default/minio — hot cluster
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=password@xyz

# Hot cluster uses HAProxy ports 9100 / 9101
MINIO_SERVER_URL=http://<haproxy-ip>:9100
MINIO_BROWSER_REDIRECT_URL=http://<haproxy-ip>:9101
04

Create the Systemd Service File

bash — all 4 hot nodes
sudo vim /etc/systemd/system/minio.service
/etc/systemd/system/minio.service — hot cluster
[Unit]
Description=MinIO
After=network.target

[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server \
  http://hot-node1:9000/mnt/ssd \
  http://hot-node2:9000/mnt/ssd \
  http://hot-node3:9000/mnt/ssd \
  http://hot-node4:9000/mnt/ssd \
  --address :9000 \
  --console-address :9001
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
⚠️
Notice that the hot cluster nodes still use ports :9000 and :9001 internally. HAProxy will expose them to the outside world on ports 9100 and 9101 to avoid conflicts with the cold cluster.
05

Configure Firewall (UFW)

bash — all 4 hot nodes
sudo ufw allow 9000/tcp
sudo ufw allow 9001/tcp
sudo ufw reload
06

Enable, Start, and Verify MinIO

bash — all 4 hot nodes
sudo systemctl daemon-reload
sudo systemctl enable minio
sudo systemctl start minio
sudo systemctl status minio
journalctl -u minio -n 50 --no-pager

🔀
SECTION 06

HAProxy Load Balancer Setup

All commands in this section run on the HAProxy server only (the 9th server).

HAProxy solves the access HA problem. Instead of connecting directly to a node (which can go down), you connect to HAProxy, which always picks a healthy node behind the scenes.

01

Install HAProxy

bash — Ubuntu/Debian
sudo apt update
sudo apt install haproxy -y
bash — RHEL/CentOS
sudo yum install haproxy -y
02

Configure HAProxy

Open the HAProxy configuration file and replace its entire contents with the config below. This sets up four listeners: cold API, cold console, hot API, hot console — each on a separate port to avoid conflicts.

bash
sudo vim /etc/haproxy/haproxy.cfg
/etc/haproxy/haproxy.cfg
# ── Global Settings ─────────────────────────────────────
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 4096

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5s
    timeout client  60s
    timeout server  60s


# ═══════════════════════════════════════════════════════
# ❄️  COLD CLUSTER
# ═══════════════════════════════════════════════════════

# Cold Cluster — API (port 9000)
frontend cold_minio_api_frontend
    bind *:9000
    mode http
    default_backend cold_minio_api_backend

backend cold_minio_api_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server cold-node1 COLD-NODE1-IP:9000 check
    server cold-node2 COLD-NODE2-IP:9000 check
    server cold-node3 COLD-NODE3-IP:9000 check
    server cold-node4 COLD-NODE4-IP:9000 check

# Cold Cluster — Console (port 9001)
frontend cold_minio_console_frontend
    bind *:9001
    mode http
    default_backend cold_minio_console_backend

backend cold_minio_console_backend
    mode http
    balance roundrobin
    option httpchk GET /
    server cold-node1 COLD-NODE1-IP:9001 check
    server cold-node2 COLD-NODE2-IP:9001 check
    server cold-node3 COLD-NODE3-IP:9001 check
    server cold-node4 COLD-NODE4-IP:9001 check


# ═══════════════════════════════════════════════════════
# ⚡  HOT CLUSTER
# ═══════════════════════════════════════════════════════

# Hot Cluster — API (port 9100)
frontend hot_minio_api_frontend
    bind *:9100
    mode http
    default_backend hot_minio_api_backend

backend hot_minio_api_backend
    mode http
    balance roundrobin
    option httpchk GET /minio/health/live
    server hot-node1 HOT-NODE1-IP:9000 check
    server hot-node2 HOT-NODE2-IP:9000 check
    server hot-node3 HOT-NODE3-IP:9000 check
    server hot-node4 HOT-NODE4-IP:9000 check

# Hot Cluster — Console (port 9101)
frontend hot_minio_console_frontend
    bind *:9101
    mode http
    default_backend hot_minio_console_backend

backend hot_minio_console_backend
    mode http
    balance roundrobin
    option httpchk GET /
    server hot-node1 HOT-NODE1-IP:9001 check
    server hot-node2 HOT-NODE2-IP:9001 check
    server hot-node3 HOT-NODE3-IP:9001 check
    server hot-node4 HOT-NODE4-IP:9001 check
⚠️
Replace every COLD-NODE1-IP through HOT-NODE4-IP with the actual IP addresses of your nodes. Do not use hostnames unless DNS resolution is fully configured.
💡
What does "balance roundrobin" mean? HAProxy sends each incoming request to the next available node in rotation (1→2→3→4→1…). If a node is down, it automatically skips it. The option httpchk line tells HAProxy to periodically ping each MinIO node's health endpoint to detect failures.
03

Open Firewall Ports on HAProxy Server

bash — HAProxy server
sudo ufw allow 9000/tcp   # Cold API
sudo ufw allow 9001/tcp   # Cold Console
sudo ufw allow 9100/tcp   # Hot API
sudo ufw allow 9101/tcp   # Hot Console
sudo ufw reload
04

Start and Enable HAProxy

bash — HAProxy server
# Validate config before starting (catches typos)
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Restart HAProxy to apply new config
sudo systemctl restart haproxy

# Enable it to start on boot
sudo systemctl enable haproxy

# Confirm it's running
sudo systemctl status haproxy
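To confirm HAProxy actually bound all four listeners, check the listening sockets (assumes `ss` from iproute2 is available):

```bash
# Each of the four frontends should appear as a LISTEN socket.
ss -ltn | grep -E ':(9000|9001|9100|9101)\b' || echo "no listeners yet"
```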

SECTION 07

Final Access Endpoints

With HAProxy running, you now have a single IP address for all cluster access. Use these URLs in your applications and browser.

❄️ Cold Cluster

API http://<haproxy-ip>:9000
UI http://<haproxy-ip>:9001

⚡ Hot Cluster

API http://<haproxy-ip>:9100
UI http://<haproxy-ip>:9101
🎉
Login Credentials Username: minioadmin  |  Password: password@xyz  (as set in the env file — change this in production!)

Port Reference Summary

Port   Cluster   Purpose             Access Via
9000   ❄️ Cold    S3-compatible API   HAProxy IP
9001   ❄️ Cold    Web Console         HAProxy IP
9100   ⚡ Hot     S3-compatible API   HAProxy IP
9101   ⚡ Hot     Web Console         HAProxy IP

What You've Built

  • 4-node distributed cold cluster with erasure coding, backed by HDD/SAN storage
  • 4-node distributed hot cluster with erasure coding, backed by SSD/NVMe storage
  • Both clusters tolerate losing 1–2 nodes without data loss
  • HAProxy load balancer providing a single HA endpoint for both clusters
  • Automatic failover — if a node goes down, HAProxy routes around it
  • Both clusters run as a secure system user (not root)
  • Systemd services that auto-restart on crash and start on boot
🔜
Next Step: Data Tiering Now that both clusters are running, you can configure data tiering to automatically move infrequently accessed data from the hot cluster to the cold cluster — saving costs while keeping performance for active data.
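As a taste of what that looks like, the sketch below registers the cold cluster as a remote tier for the hot cluster and adds a transition rule with mc. The tier name, bucket names, and 30-day threshold are all assumptions, and the exact `mc ilm` flags vary between mc versions — verify against your client's documentation before running.

```bash
# Hypothetical sketch — tier/bucket names and flags are assumptions.
mc ilm tier add minio hot COLDTIER \
  --endpoint http://<haproxy-ip>:9000 \
  --access-key minioadmin --secret-key 'password@xyz' \
  --bucket archive
mc ilm rule add hot/active-data --transition-days 30 --transition-tier COLDTIER
```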