Kubernetes With NFS Storage
Server Side
Install NFS Server
apt-get install nfs-kernel-server
Set up the volume location:
mkdir /var/nfs/general -p
Change the owner so NFS clients can access it:
chown nobody:nogroup /var/nfs/general
Export NFS Directory
Define the exported volume and its access rights in /etc/exports:
cat /etc/exports
/var/nfs/general 172.16.155.0/24(rw,sync,no_subtree_check)
Restart the NFS server:
systemctl restart nfs-kernel-server.service
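If you edit /etc/exports later, a quick sanity check is to re-export and list what the server is currently sharing (a minimal sketch; exportfs and showmount ship with the NFS server packages on Ubuntu):
exportfs -ra                 # re-read /etc/exports without a full restart
showmount -e localhost       # list the directories the server currently exports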
Client Side
Install the NFS Client on Each Kubernetes Minion
apt-get install nfs-common
Simple Testing
sudo mkdir -p /nfs/general
mount 172.16.155.211:/var/nfs/general /nfs/general
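A minimal write check from the client, assuming the mount above succeeded (the test file name is just an example):
df -h /nfs/general                      # confirm the NFS mount is listed
touch /nfs/general/hello-from-client    # write a test file over NFS; it should also appear in /var/nfs/general on the server
ls -l /nfs/general
umount /nfs/general                     # clean up the test mount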
This mount is only a quick sanity check; you do not need it for Kubernetes. For Kubernetes, each minion only needs the NFS client package installed.
Kubernetes NFS PV & PVC
This approach is quite different from a local PV.
You must go from PV to PVC before you can mount the NFS share into a pod.
The PV is created by an admin, who can define multiple PVs of different capacities; Kubernetes then binds user claims to a suitable PV.
# cat pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    path: /var/nfs/general
    server: 172.16.155.211
    readOnly: false
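To create the PV, a minimal sketch (assuming the manifest is saved as pv.yaml, as the listing suggests):
kubectl create -f pv.yaml
kubectl get pv nfs-pv        # STATUS should show Available until a claim binds it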
After the admin creates the PV, a user who wants to attach the NFS shared storage to a pod must first make a claim (PVC). The claim is a request for the resource, here called nfs-pvc, shown below.
For NFS you must set the access mode to ReadWriteMany, and Kubernetes will then bind a suitable PV for you.
The PV's nfs parameters correspond to the NFS client mount command:
mount 172.16.155.211:/var/nfs/general /nfs/general
Now create the PersistentVolumeClaim (that is why it is called a PV Claim): it claims a suitable PV for the pod.
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
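A minimal sketch of creating the claim and inspecting the binding (run it in the same namespace that the CLAIM column below reports, jj in this walkthrough):
kubectl create -f pvc.yaml
kubectl describe pvc nfs-pvc    # the Events section explains a Pending claim if binding fails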
After the claim, you get a usable volume named nfs-pvc.
Check the result; you will see the PV is now bound by nfs-pvc:
root@kubecontext:~/k8sdeployex/globaltest/volume/NFS-PV# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM        STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWX            Retain           Bound    jj/nfs-pvc                           9m
Launch a StatefulSet pod as shown below.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginxstor
spec:
  serviceName: "nginxstor"
  replicas: 1
  template:
    metadata:
      labels:
        app: nginxstor
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: nginxstor
        image: 172.16.155.136:5000/uwebserverv6
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: nfsvol
          mountPath: /data/db
      volumes:
      - name: nfsvol
        persistentVolumeClaim:
          claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginxstor
  labels:
    app: nginxstor
spec:
  ports:
  - port: 8000
    name: nginxstor
  clusterIP: None
  selector:
    app: nginxstor
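A sketch of launching it, assuming the StatefulSet and Service above are saved together as nginxstor.yaml (file name assumed):
kubectl create -f nginxstor.yaml
kubectl get po -l app=nginxstor -owide    # wait for nginxstor-0 to reach Running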
The key point is that you use nfs-pvc as the backend storage:
      volumes:
      - name: nfsvol
        persistentVolumeClaim:
          claimName: nfs-pvc
You will see the PVC has been bound and is now used by the pod:
root@kubecontext:~/k8sdeployex/globaltest/volume/NFS-PV# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWX                           11m
Now you can run multiple pods that all use the claim nfs-pvc as shared backend storage.
Access a pod and add some data under /data/db; if you launch more pods with the same PVC, you will see the directory is shared, for example:
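A sketch of that check (the test file name and replica count are arbitrary):
kubectl exec -it nginxstor-0 -- touch /data/db/shared-file
kubectl scale statefulset nginxstor --replicas=2
kubectl exec -it nginxstor-1 -- ls /data/db       # shared-file should be visible from the new pod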
Great!!!
Kubernetes NFS Directly
Here we simply modify the local-PV-based example to mount NFS directly. We do not recommend this approach: NFS shared storage is meant to be used by multiple pods, so it is better to use the PVC method, claiming the PV resource first and then attaching it to the pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginxstor
spec:
  serviceName: "nginxstor"
  replicas: 1
  selector:
    matchLabels:
      app: nginxstor
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginxstor
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      # hard (required) anti-affinity: never schedule two replicas on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginxstor
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginxstor
        #image: 192.168.51.130:5000/uwebserverv6
        image: 172.16.155.136:5000/uwebserverv6
        readinessProbe:
          tcpSocket:
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 5
        stdin: true
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 1
            memory: 512Mi
          requests:
            cpu: 1
            memory: 512Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: hosttime
        - mountPath: /nfs
          name: nfs-volume
      restartPolicy: Always
      #terminationGracePeriodSeconds: 10
      volumes:
      - name: hosttime
        hostPath:
          path: /etc/localtime
      - name: nfs-volume
        nfs:
          server: 172.16.155.211
          path: /var/nfs/general
---
apiVersion: v1
kind: Service
metadata:
  name: nginxstor
  labels:
    app: nginxstor
spec:
  ports:
  - name: http
    protocol: TCP
    # port is the service (load balancer) port
    port: 8001
    # targetPort is the container port
    targetPort: 8000
  selector:
    app: nginxstor
The difference is that we add the NFS volume directly in the pod spec:
      - name: nfs-volume
        nfs:
          server: 172.16.155.211
          path: /var/nfs/general
Its parameters correspond to the NFS client mount command:
mount 172.16.155.211:/var/nfs/general /nfs/general
You will get the same shared-storage result.
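A quick way to confirm the direct NFS mount from inside the pod (a sketch; the pod name comes from the StatefulSet above, the test file name is arbitrary):
kubectl exec -it nginxstor-0 -- df -h /nfs            # should list 172.16.155.211:/var/nfs/general
kubectl exec -it nginxstor-0 -- touch /nfs/from-pod   # the file should appear under /var/nfs/general on the server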
Testing Failover
Let's shut down the node that is running the target pod (the one with the NFS mount), then wait for the pod eviction timeout to be reached (more than 5 minutes).
Before the eviction timeout, the pod still shows up as Running:
root@kubecontext:~/k8sdeployex/globaltest/volume/NFS-PV# kubectl get po -owide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
nginxstor-0   1/1     Running   0          2m    192.168.58.2   172.16.155.209
After the eviction timeout, the pod state becomes Unknown:
root@kubecontext:~/k8sdeployex/globaltest/volume/NFS-PV# kubectl get po -owide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
nginxstor-0   1/1     Unknown   0          7m    192.168.58.2   172.16.155.209
Now we either start the shut-down node again or delete the stuck pod so the StatefulSet can recreate it on another node.
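If the node stays down, a StatefulSet pod stuck in Unknown usually needs a force delete before it is recreated (a sketch; use with care, since it skips graceful termination):
kubectl delete pod nginxstor-0 --grace-period=0 --force
Either way, the pod comes back on a healthy node: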
root@kubecontext:~/k8sdeployex/globaltest/volume/NFS-PV# kubectl get po -owide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
nginxstor-0   1/1     Running   0          4s    192.168.61.2   172.16.155.208
Access the pod and check that the data on the NFS share is still there:
root@nginxstor-0:/# ls /data/db/
baba  haha  testb