A guest post by Martin Hayes
Interest in OpenShift, and OpenShift Virtualization in particular, has increased steadily over the last year or so. Administrators and IT decision makers are looking for more choice in the hypervisor market (we don't have to rehash the VMware/Broadcom thing), but containers have also become mainstream, and users want to consolidate their infrastructure and co-host containers alongside their virtual machines, which, by the way, are not going anywhere soon. This just makes sense from both a technical and a TCO perspective.
Technically, we are seeing the emergence of hybrid modern applications, whereby the front end may be based on a modern container architecture, but the back end may still be an old and wrinkly database running SQL. Full end-to-end application re-factoring may just not be possible, even in the long run. Financially and operationally, it is also becoming increasingly difficult to justify two independent infrastructure stacks: one for your VM estate and one for your container workloads. Enter OpenShift Virtualization and its close upstream relation, KubeVirt, which marry containers and VMs in one platform.
But what about the title of this blog? Well, of course, every solution needs a data protection and security wrapper. Container management and orchestration platforms based on Kubernetes have long since adopted data persistence and enterprise-grade data protection, while data protection, security, and availability were always essential pillars of an enterprise virtual machine architecture. Dell PowerProtect Data Manager can service the needs of both on a single platform.
In this series of blogs, we deep dive into how we make this real, through the lens and persona of an infrastructure architect coming from the world of virtualization. By the end of this series (and I don't know how long it will be yet!), we should hopefully get all the way to protecting, migrating, and securing virtualized workloads resident on the OpenShift platform. Note: as it stands, OpenShift Virtualization support is currently under limited customer beta as part of the PPDM 19.18 release. Stay tuned for more detail and demos in the coming months.
First things first though: let's spend the rest of this post standing up the base infrastructure by showcasing how we integrate Dell PowerProtect Data Manager with a Red Hat OpenShift environment.
Starting Environment:
As always, this is a 'nuts and bolts' integration summary. We will cover the integration between PPDM and the Red Hat OpenShift cluster it is protecting in detail. I'm going to cheat a little with the initial setup, in that my lab environment has already been configured as follows:
- Dell PowerProtect Data Manager running version 19.18.0-17, the latest version at the time of writing
- Dell PowerProtect Data Domain Virtual Edition (DDVE) running 8.1.0.10
- Red Hat OpenShift version 4.17.10 (Kubernetes version v1.30.7)
- Dell CSM Operator version 1.7.0 (instructions for the Operator Hub install can be found here)
- Dell CSI Driver for PowerStore version v2.12.0
- Dell PowerStore 9200T all-flash array running 4.0.0.0 (Release Build 2284811), presenting the file storage target for our OpenShift cluster
A 1,000-foot view of what we are doing:
If the diagrams are a little small, double-click and they should open in a new tab. In short, what we will demo is as follows:
1. We will present a file-based storage target to our OCP cluster via the Dell CSM/CSI module.
2. We will spin up a new OCP namespace called 'ppdm-demo'. In this namespace, we will deploy a demo application/pod (trust me, this will be really simple) and configure this pod to consume our PowerStore file storage by using a Persistent Volume Claim, or PVC.
3. At this point our cluster has no secondary storage configured, so if anything should happen to our new application, we will have no means of recovering it. Enter PPDM! We will walk through the process of attaching PPDM to our OCP cluster.
4. We will show how easy it is to configure a protection and recovery policy to enable data protection for our new application and namespace.
5. Disaster strikes!!! We will accidentally delete our application namespace from the OCP cluster.
6. Panic averted... we will recover our application workload to a net-new namespace, leveraging the backup copy stored on DDVE, with an automated workflow initiated by PPDM.
We will break this walkthrough into two parts: this post covers items 1 through 4 inclusive, and the next post will cover items 5 and 6.

Step 1: Create Test Namespace and Application Pod
As mentioned earlier, this won't be anything too arduous. Log in to your OpenShift console as normal, navigate to 'Projects', and click 'Create Project'.

I am going to give it the name 'ppdm-demo', and we are done!
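For anyone who prefers the CLI, the same result can be achieved with a plain Namespace manifest; this is a minimal sketch (in OpenShift you could equally run `oc new-project ppdm-demo`):

```yaml
# Minimal manifest equivalent to creating the project in the console
apiVersion: v1
kind: Namespace
metadata:
  name: ppdm-demo
```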

Step 2: Verify the StorageClass and VolumeSnapshotClass Are Configured
Before we create our demo pod in the new 'ppdm-demo' namespace, we want to check that the StorageClass for PowerStore File and the VolumeSnapshotClass have been configured. This was preconfigured in my environment (perhaps the job of your friendly storage admin).
Navigate to Storage -> StorageClasses. Here you can see we have two StorageClasses configured: the first for block storage, the second for file. Note my StorageClass named powerstore-nfs, provisioned by the Dell PowerStore CSI driver.
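For reference, a StorageClass/VolumeSnapshotClass pair for PowerStore NFS might look roughly like the sketch below. The provisioner/driver name comes from the Dell CSI driver, but the exact parameters (array ID, NAS server, and so on) depend on your array configuration, so treat these values as placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-nfs
provisioner: csi-powerstore.dellemc.com
parameters:
  # Placeholder value - the full parameter set depends on your PowerStore setup
  csi.storage.k8s.io/fstype: nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-snapshot
driver: csi-powerstore.dellemc.com
deletionPolicy: Delete
```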

Step 3: Configure a PersistentVolumeClaim (PVC) and Verify
Now that we have verified that our StorageClass and VolumeSnapshotClass are present, we will proceed to deploy some demo workload in our new namespace. As I said, this is going to be very basic, but it is perfectly fine for demo purposes.
First up, we will create a manual PersistentVolumeClaim. Navigate to Storage -> PersistentVolumeClaims and then click 'Create PersistentVolumeClaim'.

I have selected the NFS-backed StorageClass, given my PersistentVolumeClaim the name 'ppdm-claim-1', and selected the RWX access mode with a size of 5 GiB. Click 'Create'.
Navigating to the YAML definition of the new PVC, I can see how it is configured: the access mode, the StorageClass name, its status (immediately Bound), and its volumeMode (Filesystem). Note the new volume name 'ocp11-7f9787520a'.
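Based on the settings chosen in the console above, the resulting PVC manifest looks roughly like this (names and sizes from the steps above; the exact spec the console generates may differ slightly):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ppdm-claim-1
  namespace: ppdm-demo
spec:
  accessModes:
    - ReadWriteMany        # the RWX access mode selected in the GUI
  resources:
    requests:
      storage: 5Gi
  storageClassName: powerstore-nfs
  volumeMode: Filesystem
```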

Navigate back to PersistentVolumes and you can see that a new PersistentVolume named 'ocp11-7f9787520a' has been created and associated with the PVC we just created, 'ppdm-claim-1'.

Next, navigate to your PowerStore GUI, where we can see that we have indeed created an NFS file system using CSI.

Step 4: Create Your Pod/Container/Application and Attach It to the PVC
Now that we have our persistent storage set up, our PVC created, and our namespace configured, next up we will deploy our simple Linux-based pod. This time I will use the 'Import YAML' function in the OpenShift GUI. Navigate to the '+' icon in the top right-hand corner of the screen.

You can drag and drop a file directly into the editor, or just copy and paste.
Note that the YAML points to claimName: ppdm-claim-1 and targets the ppdm-demo namespace. Click 'Create'.
apiVersion: v1
kind: Pod
metadata:
  name: ppdm-test-pod
  namespace: ppdm-demo
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: ppdm-test-pod
      image: busybox:latest
      command: ["sh", "-c", "echo Hello, OpenShift! && sleep 3600"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      volumeMounts:
        - mountPath: /mnt/storage
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: ppdm-claim-1
Verify that your new pod is in a Running state. You should also be able to attach to the Terminal. In the video demo, I will create a test file so we can demonstrate persistence after we do the backup testing.
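The same verification can be done from the CLI; a sketch, assuming the pod and namespace names from the steps above and the mount path from the pod spec:

```shell
# Check the pod reached the Running state
oc get pod ppdm-test-pod -n ppdm-demo

# Write a test file to the PVC-backed mount so we can verify
# persistence after the restore later in the series
oc exec -n ppdm-demo ppdm-test-pod -- \
  sh -c 'echo "hello from ppdm-demo" > /mnt/storage/testfile.txt'

# Read it back
oc exec -n ppdm-demo ppdm-test-pod -- cat /mnt/storage/testfile.txt
```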

Configure PowerProtect Data Manager
So now we have our application running in our OpenShift cluster, backed by PowerStore persistent storage. Next, we want to protect this application using Dell PowerProtect Data Manager and point the backups at our PowerProtect Data Domain device.
We won't run through the initial standup of PPDM and DDVE, as I have covered this in other blogs. Link is here. We will start with a clean build, with DDVE already presented to PPDM as the backup storage repository.
Add Kubernetes as an Asset Source and Perform Discovery
Log in to PPDM and navigate to the left-hand menu. Click Infrastructure -> Asset Sources. Scroll down until you find the Kubernetes tile, then click 'Enable Source'.

You will be presented with an 'Add' option for asset sources under the new Kubernetes tab. Ignore that for now; we will come back to it once we have our credentials configured.

Now navigate to the Downloads section of the GUI, under the gear icon in the top right-hand corner.

Select Kubernetes and the RBAC tile. Click Download, then save and extract the archive to your local machine.

You will be presented with 3 files:
- ppdm-controller-rbac.yaml
- ppdm-discovery.yaml
- README
We will use the two YAML files to set up the required service accounts, roles, role bindings, and permissions that allow PPDM to discover, communicate with, and configure resources in the OpenShift environment. We will also create the secret for the 'ppdm-discovery-serviceaccount'.
Navigate back to the OpenShift console and apply both YAML files, starting with 'ppdm-discovery.yaml'. Of course, we could do this directly from the CLI too, but I like this feature of the GUI as it also allows you to drag and drop the raw files themselves.
Click 'Create', and this executes the equivalent of a 'kubectl apply' command in the background.

All going well, you will be presented with a screen confirming that all resources were successfully created.

Follow the same process using the ppdm-controller-rbac.yaml file. You may get an error pointing out that the 'powerprotect' namespace already exists; this is fine.

Next, we need to generate the secret for the 'ppdm-discovery-serviceaccount'. Using the console again, apply the following YAML (hint: if you read the README file, it is in there!):
apiVersion: v1
kind: Secret
metadata:
  name: ppdm-discovery-serviceaccount-token
  namespace: powerprotect
  annotations:
    kubernetes.io/service-account.name: ppdm-discovery-serviceaccount
type: kubernetes.io/service-account-token
Import the YAML into the console as per the previous step and click 'Create'.

OpenShift now generates the 'ppdm-discovery-serviceaccount-token' details. Scroll down to the 'TOKEN' section at the bottom of the screen and click the copy icon.
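If you would rather grab the token from the CLI than the console, a sketch (assuming the secret created in the previous step exists in the 'powerprotect' namespace; the token is stored base64-encoded in the Secret, so decode it before pasting it into PPDM):

```shell
# Extract the service account token and decode it in one step
oc get secret ppdm-discovery-serviceaccount-token \
  -n powerprotect -o jsonpath='{.data.token}' | base64 -d
```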

Now that we have retrieved the token, we can navigate back to our PPDM console and add the OpenShift Kubernetes cluster. Navigate to Infrastructure -> Asset Sources -> Add.

Follow the input form, pointing to the API endpoint of the OpenShift cluster (omit the https:// prefix). Leave the port at the standard 6443. In the Host Credentials field, select 'Add Credentials'.

Give the credential set a name and paste the token you copied earlier into the Service Account Token field and click Save.

Verify and accept the certificate, then click Save.


After a few seconds, the newly discovered Asset Source should appear and an automatic workload discovery will be initiated by PPDM.

Navigate to the Assets tab; after the automated discovery you can see the discovered namespaces in the cluster, including the 'ppdm-demo' namespace we created earlier!

Configure Protection Policy
The next logical step, of course, is to configure a Protection Policy for our OpenShift project 'ppdm-demo' and protect all the underlying Kubernetes resources, PVCs, etc. under its control.
First, navigate to Protection -> Protection Policies and click 'Add'. Follow the GUI guided path.

Select 'Crash Consistent', which snapshots the PVC bound to our application and backs it up to our target backup storage on Data Domain.

Add the asset namespace that we wish to protect.

Then step through the GUI to configure the primary backup target, which of course is our backend DDVE. I have selected a full backup every 8 hours, retained for 1 day.

Follow through to the summary menu and click Finish.

Manually run the Protection Policy
We could wait until the policy kicks off the backup at the designated time, but we want to verify it works, and besides, I am a little impatient. So we are going to take the 'Protect Now' option. Navigate back to Protection -> Protection Policies and select our new policy. Click 'Protect Now'.

Step through the GUI, selecting the full backup option, and kick off the job. Navigate to the Jobs menu on the sidebar and monitor the job as it completes. Depending on the size of the namespace being backed up, this will of course take some time.

Eventually the Job completes successfully.

Up Next
Next week, I will follow up with our orchestrated disaster, whereby I will accidentally delete my running application: namespace, pod, and associated PVC.
I think this is probably deserving of a Video demo also which will capture the whole process end to end!