MongoDB – How to replicate a MongoDB pod with its persistent storage (Minikube – Kubernetes)


I created the following manifests:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        name: mongo-claim0
      name: mongo-claim0
      namespace: my-app
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    status: {}

    apiVersion: v1
    kind: ReplicationController
    metadata:
      labels:
        name: mongo
      name: mongo-controller
      namespace: my-app
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: mongo
        spec:
          containers:
          - image: mongo
            name: mongo
            ports:
            - name: mongo
              containerPort: 27017
            volumeMounts:
            - mountPath: /data/db
              name: mongo-claim0
          restartPolicy: Always
          volumes:
          - name: mongo-claim0
            persistentVolumeClaim:
              claimName: mongo-claim0

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
      namespace: my-app
      labels:
        name: mongo
    spec:
      ports:
      - port: 27017
        targetPort: 27017
      selector:
        name: mongo

When I try to scale this pod, the Minikube UI shows:

mongo-controller-xr21r -> Waiting: CrashLoopBackOff Back-off
restarting failed container Error syncing pod

And I get this error on the new pod:

exception in initAndListen: 98 Unable to lock file:
/data/db/mongod.lock Resource temporarily unavailable

Could you help me scale a pod with persistent storage?

Best Answer

I believe this is because you have more than one pod trying to use the same data directory, which you cannot do when the pods are running mongo: one data directory has to be exclusive to one mongod instance.
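You can usually confirm this from the events and logs of the crashing pod; for example, with the pod name from the question (adjust to yours):

kubectl -n my-app describe pod mongo-controller-xr21r
kubectl -n my-app logs mongo-controller-xr21r

The describe output shows the back-off events and the mongo-claim0 volume; every replica from the controller mounts that same claim, and the logs show the resulting mongod.lock error.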

If the goal is to create a group of pods serving the same data, then you are looking at creating a replica set.

In order to do this, you will first need to create multiple PersistentVolumes, like so:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-persistent-0
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mongo/0"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-persistent-1
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mongo/1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-persistent-2
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mongo/2"

Then you will want to create a StatefulSet bound to the persistent volumes, which means that when a pod goes down and comes back up it reattaches to the correct persistent volume. The YAML will look something like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: replica-set-a
  namespace: my-app
spec:
  serviceName: "replica-set-a"
  replicas: 3
  selector:            # required by apps/v1; must match the template labels
    matchLabels:
      role: replica-set-a
  template:
    metadata:
      labels:
        role: replica-set-a
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: replica-set-a
          image: mongo
          resources:
            limits:
              memory: 2Gi
            requests:
              memory: 2Gi
          volumeMounts:
          - name: mongo-persistent
            mountPath: /data/db
          command:
            - mongod
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - a
            # --smallfiles was removed in MongoDB 4.2, so it is omitted here
            - "--oplogSize"
            - "1024"
          ports:
            - containerPort: 27017
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi

Your Service can remain much the same, apart from changing it to select replica-set-a; note that the Service named by serviceName should be headless (clusterIP: None) so that each StatefulSet pod gets a stable DNS name. Once they are all up you will need to issue the rs.initiate(...) command to tell the pods they are now working as one replica set; details can be found in the MongoDB documentation: https://docs.mongodb.com/manual/tutorial/deploy-replica-set/
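As a sketch, assuming everything lives in the my-app namespace from the question and reusing the labels from the StatefulSet above, the headless Service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: replica-set-a
  namespace: my-app
  labels:
    role: replica-set-a
spec:
  clusterIP: None   # headless: gives each StatefulSet pod a stable DNS name
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: replica-set-a

Once the three pods are running, the initiation could look roughly like this; the hostnames below follow the <pod>.<service>.<namespace>.svc.cluster.local pattern that the headless Service provides, and on recent mongo images the shell is mongosh rather than mongo:

kubectl -n my-app exec -it replica-set-a-0 -- mongo

Then, in the mongo shell:

rs.initiate({
  _id: "a",   // must match the --replSet name passed to mongod
  members: [
    { _id: 0, host: "replica-set-a-0.replica-set-a.my-app.svc.cluster.local:27017" },
    { _id: 1, host: "replica-set-a-1.replica-set-a.my-app.svc.cluster.local:27017" },
    { _id: 2, host: "replica-set-a-2.replica-set-a.my-app.svc.cluster.local:27017" }
  ]
})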

** Note: I have copied the example YAMLs from my sharded cluster, which was deployed through Kubernetes, and removed some parameters, so there could potentially be a typo. You might also want to scale down the persistent volume sizes and memory limits before proceeding.