In this post we explore how to use the PowerScale Container Storage Interface (CSI) driver. PowerScale is a highly scalable storage solution for unstructured data, supporting file and object storage.

Dell’s CSI drivers, including PowerScale CSI, implement the Container Storage Interface, the standard for exposing storage systems to containerized workloads. This makes the Kubernetes volume layer truly extensible and combines the best of both worlds: Kubernetes-native provisioning on one side and enterprise array features on the other.

A good starting point is the PowerScale CSI GitHub repository. It provides all documentation and required software. Now, let's take a closer look at how to install and use the PowerScale CSI driver.

1. Preparation

First, download the PowerScale CSI repository. We need to prepare two YAML files ahead of the actual installation: values.yaml contains the CSI driver configuration parameters, and secret.yaml holds the PowerScale access details. You can find sample files in the repository.

https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/values.yaml
https://github.com/dell/csi-powerscale/blob/main/samples/secret/secret.yaml
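For illustration, a minimal values.yaml often only overrides a handful of parameters. This is just a sketch: the keys below (isiPath, controllerCount) are taken from the sample file linked above, and you should verify them against the values.yaml shipped with your driver version.

# values.yaml sketch - verify keys against your driver version's sample file
isiPath: "/ifs/data/csi"   # base directory for CSI volumes on the array
controllerCount: 2         # number of controller pod replicas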

For reference, this is the secret.yaml from my lab environment.

[root@DXB]# cat secret.yaml
isilonClusters:
  - clusterName: "powerscale"
    username: "root"
    password: "SecretPassword123!"
    endpoint: "192.168.1.222"
    endpointPort: 8080
    isDefault: true
    skipCertificateValidation: true
    isiPath: "/ifs/data/csi"

Another prerequisite is to have the Kubernetes snapshot custom resource definitions (CRDs) and the Common Snapshot Controller installed. These might already be present in your cluster, but if not, you can follow the steps below.

Download the repository from https://github.com/kubernetes-csi/external-snapshotter

Snapshot CRDs Install:

kubectl kustomize client/config/crd | kubectl create -f -

Snapshot Controller Install:

kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
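To verify the snapshot components before moving on, check that the CRDs exist and that the controller pods are running (the namespace depends on where you deployed the controller; kube-system here):

kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get pods -n kube-system | grep snapshot-controller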

2. Installation

First, we need to create a namespace and a secret based on our secret.yaml.
[root@DXB ~]# kubectl create namespace powerscale
namespace/powerscale created
[root@DXB ~]# kubectl create secret generic isilon-creds -n powerscale --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -
secret/isilon-creds created
If you want to validate the PowerScale API server’s certificates, follow these steps: https://dell.github.io/csm-docs/docs/csidriver/installation/helm/isilon/#certificate-validation-for-onefs-rest-api-calls
If not, create an empty secret instead; an empty certificate secret must exist for the CSI Driver for Dell PowerScale to install successfully.
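A minimal empty-secret.yaml might look like the following (modeled on the sample in the Dell CSM documentation, with the namespace matching the one created above):

apiVersion: v1
kind: Secret
metadata:
  name: isilon-certs-0
  namespace: powerscale
type: Opaque
data:
  cert-0: ""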
[root@DXB ~]# kubectl create -f empty-secret.yaml
secret/isilon-certs-0 created
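As a quick sanity check, list the secrets in the namespace; both isilon-creds and isilon-certs-0 should appear:

kubectl get secrets -n powerscale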
Now we can finally proceed to the actual CSI driver installation. I’m using the Helm installer in this example, but an offline installer is also available.
[root@DXB ~]# ./csi-install.sh --namespace powerscale --values values.yaml
-----------------------------------------------------
> Installing CSI Driver: csi-isilon on 1.20
-----------------------------------------------------
> Checking to see if CSI Driver is already installed 
-----------------------------------------------------
> Verifying Kubernetes and driver configuration
----------------------------------------------------- 
|- Kubernetes Version: 1.20 
|
|- Driver: csi-isilon 
|
|- Verifying Kubernetes version 
	|--> Verifying minimum Kubernetes version Success 
	|--> Verifying maximum Kubernetes version Success 
|
|- Verifying that required namespaces have been created Success 
|
|- Verifying that required secrets have been created Success 
|
|- Verifying that optional secrets have been created Success 
|
|- Verifying alpha snapshot resources 
	|--> Verifying that alpha snapshot CRDs are not installed Success 
|
|- Verifying snapshot support
	|--> Verifying that snapshot CRDs are available Success
	|--> Verifying that the snapshot controller is available Success
|
|- Verifying helm version Success
|
|- Verifying helm values version Success
-----------------------------------------------------
> Verification Complete Success
-----------------------------------------------------
- Installing Driver Success
	|--> Waiting for Deployment isilon-controller to be ready Success
	|--> Waiting for DaemonSet isilon-node to be ready Success
-----------------------------------------------------
> Operation complete
-----------------------------------------------------
Verify that the pods are running as expected:
[root@DXB ~]# kubectl get pods -n powerscale
NAME                        READY   STATUS    RESTARTS   AGE
isilon-controller-55c4d86   6/6     Running   58         246d
isilon-node-cmf4f           2/2     Running   22         246d
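You can also confirm that the driver object was registered with the cluster; for this driver the name should be csi-isilon.dellemc.com (verify against your installation):

kubectl get csidrivers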

3. Using the PowerScale CSI Driver

We can now create a new persistent volume claim and deploy an application using it.
[root@DXB sample_files]# cat pvcnew.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powerscalepvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: isilon-plain

[root@DXB]# kubectl create -f pvcnew.yaml
persistentvolumeclaim/powerscalepvc1 created
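The PVC above references a storage class named isilon-plain. If you have not created one yet, a storage class for this driver could look roughly like the following. This is a sketch modeled on the sample storage classes in the repository; AccessZone and IsiPath are assumptions that must match your array configuration.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon-plain
provisioner: csi-isilon.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  # assumed values - adjust to your OneFS access zone and base path
  AccessZone: System
  IsiPath: "/ifs/data/csi"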
If we now open the File System Explorer in PowerScale's OneFS GUI, we can see the newly created volume.


Let's proceed with deploying an NGINX instance to consume our PVC.
[root@DXB]# cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  containers:
    - name: task-pv-container
      image: nginx
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: powerscalepvc1
 	  
[root@DXB]# kubectl create -f nginx.yaml
If we review the pod details, we can see that the volume from our PVC was mounted successfully.
[root@DXB]# kubectl describe pod nginx-pv-pod -n default
----- Removed some output for better readability -----
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  powerscalepvc1
    ReadOnly:   false
  default-token-xrbxk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xrbxk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Successfully assigned default/nginx-pv-p
  AttachVolume.Attach succeeded for volume
  Container image "nginx" already present
  Created container task-pv-container
  kubelet  Started container task-pv-container
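To convince ourselves the mount really works, we can write a test file into the volume through the pod and read it back; this is purely illustrative:

[root@DXB]# kubectl exec nginx-pv-pod -- sh -c 'echo "hello from PowerScale" > /usr/share/nginx/html/index.html'
[root@DXB]# kubectl exec nginx-pv-pod -- cat /usr/share/nginx/html/index.html
hello from PowerScale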

4. Snapshots

One advantage of CSI is that array-native features can be exposed to containers. In this example, we are using PowerScale SnapshotIQ snapshots for our PVC.
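The VolumeSnapshot below references a VolumeSnapshotClass named isilon-snapclass. If your cluster does not have one yet, it could be defined along these lines (a sketch based on the sample files in the repository; verify the driver name for your version):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: isilon-snapclass
driver: csi-isilon.dellemc.com
deletionPolicy: Delete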
[root@DXB]# cat snap.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvcsnap
  namespace: default
spec:
  volumeSnapshotClassName: isilon-snapclass
  source:
    persistentVolumeClaimName: powerscalepvc1
[root@DXB]# kubectl create -f snap.yaml
volumesnapshot.snapshot.storage.k8s.io/pvcsnap created
[root@DXB]# kubectl get volumesnapshot
----- Removed some output for better readability -----
NAME      READYTOUSE   SOURCEPVC        SNAPSHOTCLASS
pvcsnap   true         powerscalepvc1   isilon-snapclass
The snapshot is now also visible in the File System Explorer and in the SnapshotIQ dashboard.

Depending on your requirements, you can restore from the snapshot or delete it. Restoring works by creating a new PVC that uses the snapshot as its data source, as sketched below; afterwards, we go ahead and delete the snapshot.
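A restore PVC could look like this; powerscalepvc-restore is a hypothetical name, while the dataSource mechanism itself is standard Kubernetes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powerscalepvc-restore
spec:
  storageClassName: isilon-plain
  dataSource:
    # standard Kubernetes snapshot data source
    name: pvcsnap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi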
[root@DXB]# kubectl delete volumesnapshot pvcsnap
volumesnapshot.snapshot.storage.k8s.io "pvcsnap" deleted
I hope you enjoyed this quick guide on how to install and use the PowerScale CSI driver. For further examples of how to utilize the PowerScale CSI driver, have a look at the GitHub repository.