Minio | S3 compatible storage on Kubernetes

In this tutorial we will walk through deploying a multi-node, distributed, transport-encrypted Minio cluster on Kubernetes. For a little background, Minio is an open source object storage server implementing the AWS S3 API (v2/v4 signatures). As such, it can act as a private AWS S3 solution deployed into your own environment.


Using Minio as a private object storage backend provides a number of advantages, but the main one is integration: Minio works with nearly anything that consumes the S3 API, which covers a great deal of software, as AWS S3 has been widely adopted as the default storage backend for many projects.


Prerequisites

To follow along with this tutorial you can use any conformant Kubernetes cluster with support for Ingress resources (see our previous tutorial on ingress-controllers). We will use Karrier, which is our own hosted solution. With Karrier you get immediate access to pre-built and fully managed Kubernetes clusters around the globe. Visit karrier.io to learn more.


Architecture

To help visualise what we will be building today, we have created the following diagram, centered around the Kubernetes resources needed to deploy our Minio cluster.


Generating keys

First, generate an access key and a secret key for Minio. For security purposes, it is important that these keys are random. The following example command to generate random keys is far from perfect, but it is sufficient for this tutorial and conveniently happens to work on Windows, Mac, and Linux.

Run this command twice - once for the access key and again for the secret key.

date | md5sum

9479114d2f2ca38add892a0c2089e454

date | md5sum

31ce8ce38b358635a188a87481aa5e4f
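
If OpenSSL is available on your system, a cryptographically stronger way to generate each key (our suggestion; the md5sum approach above works fine for this tutorial) is:

openssl rand -hex 16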


Creating a Kubernetes Secret

Now create a Kubernetes Secret to store these keys. Run the following command, but be sure to replace the key values with the keys you generated in the previous step. Also note that there is a single space in front of this command. In Bash, provided HISTCONTROL is set to ignorespace or ignoreboth (the default on many distributions), this little trick ensures the command does not end up in your shell history.

 kubectl create secret generic minio-keys \
 --from-literal=access-key=9479114d2f2ca38add892a0c2089e454 \
 --from-literal=secret-key=31ce8ce38b358635a188a87481aa5e4f
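
You can confirm the Secret exists without printing the key values (illustrative output; your AGE will differ):

kubectl get secret minio-keys

NAME         TYPE      DATA      AGE
minio-keys   Opaque    2         10s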


Creating the Kubernetes manifest

Begin by defining a ServiceAccount, Role, and RoleBinding to ensure the Minio pods can access the Minio keys stored in the previously created Secret.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio-serviceaccount
  labels:
    app: minio

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: minio-role
  labels:
    app: minio
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  resourceNames:
  - "minio-keys"
  verbs:
  - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: minio-role-binding
  labels:
    app: minio
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: minio-role
subjects:
- kind: ServiceAccount
  name: minio-serviceaccount
  namespace: default # ServiceAccount subjects require a namespace; adjust if you deploy elsewhere
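
To sanity-check the binding, you can ask Kubernetes whether the ServiceAccount may read the Secret (a quick check, assuming everything lives in the default namespace):

kubectl auth can-i get secret/minio-keys \
  --as=system:serviceaccount:default:minio-serviceaccount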

Next, let’s define a StatefulSet to manage the Minio pods. There are a few important things to point out here.

  1. The server path “http://minio-{0...5}/data” in the “minio” container args is shorthand that the Minio daemon expands into all 6 pod hostnames that will make up the Minio cluster (see the expansion sketch after this list). To illustrate the relationship: if you were to set the serviceName value to “poodle” and change the replicas value to “10”, you would replace “http://minio-{0...5}/data” with “http://poodle-{0...9}/data”.
  2. By defining a volumeClaimTemplate and setting the storage value to “50Gi” in your StatefulSet, Kubernetes will provision and attach a 50Gi PersistentVolume for each of our 6 pods. Note that Kubernetes does not yet support resizing volumes and your Minio cluster cannot exceed 64 pods, so be sure to set the storage value to no less than your expected usage divided by 128. This calculation is intended to account for Minio’s maximum cluster size and erasure coding.
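
To make the shorthand from point 1 concrete, Minio expands “http://minio-{0...5}/data” into six endpoints, one per StatefulSet pod:

http://minio-0/data
http://minio-1/data
http://minio-2/data
http://minio-3/data
http://minio-4/data
http://minio-5/data
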
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 6
  template:
    metadata:
      labels:
        app: minio
    spec:
      serviceAccountName: minio-serviceaccount
      containers:
      - name: minio
        image: karrier/minio:RELEASE.2018-09-01T00-38-25Z
        args:
        - server
        - http://minio-{0...5}/data
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-keys
              key: access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio-keys
              key: secret-key
        ports:
        - containerPort: 9000
        resources:
          limits:
            cpu: 200m
            memory: 400Mi
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi

Next, define a Service for your Minio cluster. Note that in the example below the clusterIP value has been set to “None”. This tells Kubernetes to create a headless Service. Unlike the default Service behaviour, a headless Service does not load balance traffic over a single IP; instead, Kubernetes creates a DNS record for each of the pods. This allows the Minio pods to find each other using their native service discovery mechanism, DNS.

---
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  selector:
    app: minio
  ports:
  - port: 9000
    name: minio
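
With the headless Service in place, each pod is resolvable as <pod-name>.<service-name>. You can verify this from a throwaway pod (a quick sketch, assuming the default namespace):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup minio-0.minio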

Now define another Service. Unlike the first one, this one will provide a single load-balanced IP that clients can use to reach the Minio cluster.

---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  labels:
    app: minio
spec:
  type: ClusterIP
  selector:
    app: minio
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP

Define a NetworkPolicy to allow all inbound traffic to your Minio pods. This ingress rule can be limited as required; a tightened example follows the manifest below. More information on Network Policies can be found in the Kubernetes documentation.

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio-network-policy
  labels:
    app: minio
spec:
  podSelector:
    matchLabels:
      app: minio
  ingress:
  - {}
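
As a sketch of a tighter rule, the following variant (our illustration; adapt the name and selectors to your environment) only admits TCP traffic on Minio’s port 9000:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio-network-policy
  labels:
    app: minio
spec:
  podSelector:
    matchLabels:
      app: minio
  ingress:
  - ports:
    - protocol: TCP
      port: 9000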

Next, define a Let’s Encrypt certificate Issuer; this resource is handled by cert-manager, which must be installed in your cluster. Replace “EMAIL” with your own address. This address will be used by Let’s Encrypt to notify you about expiring certificates.

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "EMAIL"
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
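
After applying the manifest, you can confirm the Issuer has registered an ACME account with Let’s Encrypt (output will vary):

kubectl describe issuer letsencrypt-prod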

Now define a Certificate that will be fulfilled by the Let’s Encrypt Issuer defined above. Replace “s3.tuts.ninja” with your own DNS name.

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: 's3.tuts.ninja'
spec:
  secretName: 's3.tuts.ninja'
  dnsNames:
  - 's3.tuts.ninja'
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - 's3.tuts.ninja'
  issuerRef:
    name: letsencrypt-prod
    kind: Issuer
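
Issuance can take a minute or two. You can watch progress and confirm the resulting TLS Secret exists (a quick check, using the names above):

kubectl describe certificate s3.tuts.ninja
kubectl get secret s3.tuts.ninja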

Finally, let’s define the Ingress resource. Replace “s3.tuts.ninja” with your own DNS name. If you’re not automating DNS with a tool like the external-dns controller, be sure to create a corresponding DNS record with a value matching the public IP address of your Ingress controller.

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minio
  labels:
    app: minio
spec:
  rules:
  - host: "s3.tuts.ninja"
    http:
      paths:
      - path: /
        backend:
          serviceName: minio-service
          servicePort: 80
  tls:
  - secretName: "s3.tuts.ninja"
    hosts:
    - "s3.tuts.ninja"


Submit manifest to Kubernetes

With the manifest written and saved to your local directory as “minio.yaml”, run the following command to submit it to your Kubernetes cluster.

kubectl apply -f minio.yaml

serviceaccount/minio-serviceaccount created
role/minio-role created
rolebinding/minio-role-binding created
statefulset/minio created
service/minio created
service/minio-service created
networkpolicy/minio-network-policy created
issuer/letsencrypt-prod created
certificate/s3.tuts.ninja created
ingress/minio created

Now run the following command to check that your Minio pods are running.

kubectl get pods -l app=minio

NAME      READY     STATUS    RESTARTS   AGE
minio-0   1/1       Running   0          1m
minio-1   1/1       Running   0          1m
minio-2   1/1       Running   0          1m
minio-3   1/1       Running   0          1m
minio-4   1/1       Running   0          1m
minio-5   1/1       Running   0          1m

Before continuing, check the Minio logs for any errors.

kubectl logs -l app=minio


Test out Minio

To test out Minio, open your preferred web browser and visit the “host” address you set in your Ingress resource. You will be prompted for an access key and a secret key; enter the keys you generated at the beginning of this tutorial. Once logged in, you can begin testing out your new S3-compatible object storage cluster.
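
If you prefer the command line, you can also exercise the cluster with the Minio client, mc (a sketch; “mys3” is an arbitrary alias and the endpoint and keys are the ones used throughout this tutorial):

mc config host add mys3 https://s3.tuts.ninja \
  9479114d2f2ca38add892a0c2089e454 \
  31ce8ce38b358635a188a87481aa5e4f
mc mb mys3/my-first-bucket
mc ls mys3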

Happy storing!