
Heads Up on ESXi 6.7 Patch 02


VMware has just released a new build for customers who are on the 6.7 release (7.0 is already out!).

If you take a look at the release notes (https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202004002.html), you will see that it contains many fixes, some of them very important!

 

PR 2504887: Setting space reclamation priority on a VMFS datastore might not work for all ESXi hosts using the datastore

You can change the default space reclamation priority of a VMFS datastore by running the ESXCLI command esxcli storage vmfs reclaim config set with a value for the --reclaim-priority parameter. For example,

esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none
changes the priority of space reclamation (the unmapping of unused blocks from the datastore to the LUN backing that datastore) from the default low rate to none. However, the change might only take effect on the ESXi host on which you run the command and not on other hosts that use the same datastore.
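
If you want to confirm whether the new priority actually took effect on a given host before patching, you can query the reclaim configuration from each ESXi host that mounts the datastore. A minimal sketch, reusing the datastore label from above:

# Run on each ESXi host that mounts the datastore and compare the reported reclaim priority
esxcli storage vmfs reclaim config get --volume-label datastore_name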

This issue is resolved in this release.

  • PR 2512739: A rare race condition in a VMFS workflow might cause unresponsiveness of ESXi hosts

    A rare race condition between the volume close and unmap paths in the VMFS workflow might cause a deadlock, eventually leading to unresponsiveness of ESXi hosts.

    This issue is resolved in this release. (I have seen this issue happen many times and I highly encourage you to install this fix ASAP.)

  • PR 2387296: VMFS6 datastores fail to open with space error for journal blocks and virtual machines cannot power on

    When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.

    This issue is resolved in this release.

    If VMFS volumes are frequently opened and closed, this might result in a spew of VMkernel logs such as "does not support unmap" when a volume is opened and "Exiting async journal replay manager world" when a volume is closed.

PR 2462098: XCOPY requests to Dell/EMC VMAX storage arrays might cause VMFS datastore corruption

Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.

This issue is resolved in this release. For more information, see VMware knowledge base article 74595.

 

 

 

 



Dell EMC OpenManage Management Pack for vRealize Operations Manager, v2.1


We have just released a new management pack for VMware vROps that allows you to monitor the health of Dell servers directly from the vROps console.

The first thing you need to do is download the adapter itself, which you can do by clicking the screenshot below.

Once the adapter has been downloaded and its content extracted, navigate to Administration -> Repository and install it by clicking the Add / Upgrade button.

Once the installation is done, you will see the new management pack installed.

The next part is to configure an account to be used, so navigate to Other Accounts -> Add Accounts -> Dell EMC OpenManage Adapter.

You then want to configure the IP address of the OMIVV appliance (which needs to be installed as a prerequisite), its credentials, and the vROps credentials.

If you get an error (like the one below), you need to configure extended monitoring on the OMIVV appliance.

This is easy: navigate to its IP address and, in the Appliance Management tab, enable extended monitoring.

That's it! Now wait a bit and let vROps do its magic and gather all the data. There are many useful dashboards; here's a screenshot of the main one.


What Is PowerStore – Part 1, Overview


Dell EMC PowerStore is a next-generation midrange data storage solution targeted at customers who are looking for value, flexibility, and simplicity.

PowerStore provides our customers with data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads. We accomplish this through:
  • A purpose-built all-flash Active/Active storage appliance that supports the Non-Volatile Memory Express (NVMe) communications protocol, a protocol developed specifically for SSDs.
    • Supports NVMe Solid State Drive (SSD) and NVMe Storage Class Memory (SCM) media types for storage
    • Supports NVMe NVRAM media type for cache
    • Supports SAS SSDs by expansion
  • Consolidates storage and virtual server environments. The PowerStore platform design includes two major configurations:
    • PowerStore T
      • Can be configured for Block only or Unified (Block and File) storage
        • Block uses FC and iSCSI protocols for access
        • File uses NFS and SMB protocols (SDNAS) for access
    • PowerStore X
      • Block only storage with hypervisor installed on the system
      • Capability to run customer applications on native virtual machines (VMs) with a separate VMware license. (Built-in ESXi hypervisor deployment)
    • Both configurations support VMware virtual volumes (vVols)
  • Flexible scale-up/down and scale-out capabilities
    • Scale up: Base Enclosure and up to three Expansion Enclosures
    • Scale out: Two to four appliances (PowerStore T only in version 1)
  • Integrated data efficiencies and protection


PowerStore provides our customers with data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads. We accomplish this through:

  • Data-centric design that optimizes system performance, scalability, and storage efficiency to support any workload without compromise.
  • Intelligent automation through programmable infrastructure that simplifies management and optimizes system resources, while enabling proactive health analytics to easily monitor, analyze, and troubleshoot the environment.
  • Adaptable architecture that enables speed and application mobility, offers flexible deployment models, and provides choice, predictability, and investment protection through flexible payment solutions and data-in-place upgrades. 

    Use Cases for PowerStore

PowerStore Platform Configurations


PowerStore consists of two major configurations, also called modes or personalities: PowerStore T and PowerStore X.

PowerStore T is storage-centric and provides both Block and File services. The software stack, starting with the CoreOS, is deployed directly on bare-metal hardware.

PowerStore X is designed to run applications and provide storage. PowerStore X systems are Block-only storage with a hypervisor (ESXi) installed on the bare metal, and the software stack is deployed on the hypervisor. This design enables the deployment of customer VMs and custom applications.

The basic hardware on both configurations is called a Node. A node contains the processors and memory and is the storage processor or storage controller. Two nodes are housed in a Base Enclosure. The nodes are configured Active-Active (Each node has access to the same storage.) for high availability. You can build an Appliance that is based on a Base Enclosure. You can add Expansion Enclosures to each appliance for more storage capacity.

PowerStore Clustering

One, two, three, or four PowerStore T appliances can be connected in a cluster. The cluster can be made up of different PowerStore T models. For example, a PowerStore 1000 and a PowerStore 3000 can be combined in the same cluster.

In this initial release, PowerStore X can only have a one appliance cluster.

Scalability

Each PowerStore configuration and model provides different performance. PowerStore can be scaled up and scaled out.

Scaling up is adding more storage. An Appliance can be a single Base Enclosure or a Base Enclosure with up to three Expansion Enclosures. Each Appliance can accommodate up to 100 drives. Depending on the model, each appliance has either 2 or 4 NVMe drives used for cache, leaving 98 or 96 drives maximum available for data.

Scaling out is to add more Base Enclosures. Scaling out increases processing power and storage. One up to four Appliances can be grouped to form a four appliance cluster. PowerStore T supports up to four appliances in a cluster. In this initial release, PowerStore X only supports one appliance.

PowerStore Models

There are ten different models available within the PowerStore platform: Five PowerStore T models and five PowerStore X models. The higher the model number, the more CPU cores and memory per appliance.

The PowerStore series midrange storage solution includes:

  • PowerStore T, configured for block only or block and file storage.
  • PowerStore X, configured for block storage with hypervisor.
  • Both PowerStore T and PowerStore X come in 1000, 3000, 5000, 7000, and 9000 models. The bigger the number, the faster the processor and the better the performance.
  • PowerStore appliances are made up of a base enclosure and up to three optional expansion enclosures.
    • For PowerStore T, up to four appliances may form a cluster.
    • For PowerStore X, only one appliance is allowed in a cluster.

One of the most significant and exciting attributes of PowerStore is that it enables customers to continuously modernize their infrastructure over time as requirements change, without limits and on their terms. This allows IT organizations to eliminate future cost uncertainties and plan predictably for the future.

With PowerStore, customers can deploy the appliance and consolidate data and applications to meet their current needs. The scale-up and scale-out architecture allows them to independently add capacity and compute/performance as their workload requirements change over time.

But we don't stop there. PowerStore goes one step further to provide data-in-place upgrades, enabling the infrastructure to be modernized without a forklift upgrade, without downtime, and without impacting applications. This Adaptable architecture effectively spells the END of data migration.


PowerStore appliances offer deep integration with VMware vSphere, such as VAAI and VASA support, event notifications, snapshot management, storage containers for Virtual Volumes (vVols), and virtual machine discovery and monitoring in PowerStore Manager. By default, when a PowerStore T or PowerStore X model is initialized, a storage container is automatically created on the appliance. Storage containers are used to present vVol storage from PowerStore to vSphere. vSphere then mounts the storage container as a vVol datastore and makes it available for VM storage. This vVol datastore is accessible to internal ESXi nodes or external ESXi hosts. If an administrator has a multi-appliance cluster, the total capacity of a storage container spans the aggregation of all appliances within the cluster.

Claim Details:
The only purpose-built array with a built-in VMware ESXi hypervisor (AD #: G20000055) – Based on Dell analysis of publicly available information on current solutions from mainstream storage vendors, April 2020.

The advantage of PowerStore’s Anytime Upgrade program is significant. Customers have multiple options to upgrade and enhance their infrastructure.

  • They can upgrade to the next higher model within their current family (for example, upgrading with more powerful nodes to convert their PowerStore appliance from a 1000 to a 3000 model)
  • Or they can upgrade their existing appliance nodes from the current generation to the next generation of nodes.
  • Both the next-generation and higher model node upgrades are performed non-disruptively while preserving existing drives and expansion enclosures, without requiring new licensing or additional purchases.
  • Alternatively, a third option allows customers to scale out their existing environment with a second system equal to their current model. In short, the customer receives a discount towards their second appliance purchase (essentially, they only pay for the media in the 2nd appliance).

The three big differentiators from other upgrade programs in the market are:

  • Flexible upgrade options, beyond just a next-gen controller swap.
  • The upgrades can be done anytime in contract as opposed to waiting three years or more.
  • No renewal is required when the upgrade is performed.

Finally, Dell Technologies On Demand features several pay-per-use consumption models that scale to align spending with usage – and optimize both financial and technological outcomes.

  • Pay As You Grow: This model was designed for organizations that have stable workload environments and predictable growth. It enables organizations to match payments for committed infrastructure as it is deployed over time. So, for example, customers can opt for a deferred payment that starts on the date the equipment is deployed or provide a step payment structure that is aligned with their forecast of future usage or their deployment schedule. The key here is the payment flexibility provided around a committed infrastructure.
  • The next two offerings are Flex On Demand and Data Center Utility. Both of these provide metered (or measured) usage and are applicable across our ISG portfolio.
  • Flex On Demand: First, with Flex On Demand, the customer selects the desired total deployed capacity – consisting of Committed Capacity plus Buffer Capacity – to create the right balance of flexibility and cost. Then, they can scale elastically up and down within the Buffer Capacity, as needed. The key here is that capacity is paid for only when it is consumed.
  • Data Center Utility: Delivers the highest degree of flexibility to address business requirements within and across the IT ecosystem. Customers can scale up or down as required. Capacity is delivered as needed. Procurement is streamlined and automated. Billing is simplified. Reporting is standardized. And a delivery manager is assigned and dedicated to the customer’s success. And managed services are most often delivered as part of the total solution.

All of these OPEX-structured flexible consumption solutions help organizations more predictably budget for IT spending, pay for technology as it is used, and achieve optimal total cost of ownership over the full technology lifecycle.

** Payment solutions provided by Dell Financial Services L.L.C. (DFS) or its affiliate or designee, subject to availability and may vary in certain countries. Where available, offers may be changed without notice.


 

You can download the spec sheet from here: https://www.dellemc.com/en-au/collaterals/unauth/data-sheets/products/storage/h18143-dell-emc-powerstore-family-spec-sheet.pdf

and the data sheet from here: https://www.dellemc.com/en-au/collaterals/unauth/data-sheets/products/storage/h18234-dell-emc-powerstore-data-sheet.pdf

You can launch a virtual hands-on lab from here: https://democenter.dell.com/Event/PowerStoreOnline

and an interactive demo from here


You can also download a technical primer by clicking the screenshot below.


In the 2nd post (https://volumes.blog/2020/05/05/whats-is-powerstore-part-2-hardware/) we are going to cover some of the hardware aspects of the PowerStore family.


What Is PowerStore – Part 2, Hardware


In the first post, we gave a high-level overview of the product (https://volumes.blog/2020/05/05/what-is-powerstore-part-1-overview/).

Now, let's dive a little bit deeper.

PowerStore Models

PowerStore is designed from the ground up to utilize the latest in storage and interface technologies in order to maximize application performance and eliminate bottlenecks. Each PowerStore appliance has two nodes and uses NVMe to take full advantage of the tremendous speed and low latency of solid-state devices, with greater device bandwidth and queue depth. PowerStore has been architected to maximize performance with NVMe flash storage and supports the even greater demands of Intel Optane Storage Class Memory (SCM) which provides performance approaching the speed of DRAM.
This performance-centric design enables PowerStore to deliver 6x more IOPS and 3x lower latency for real-world workloads compared to previous generations of Dell midrange storage.

PowerStore is a flexible design built to meet the requirements of different storage applications with support for high availability. The PowerStore platform design includes two major configurations: PowerStore T and PowerStore X. The table displays the available models and specifications for each platform.

There are ten different models within the PowerStore product line: Five PowerStore T models and five PowerStore X models. The higher the model number, the more CPU cores and memory per system. PowerStore systems consist of nodes, one or more base enclosures, one or more expansion enclosures, and appliances.

For high availability, PowerStore systems have:

  • Two redundant power supplies
  • Multiple redundant network ports with system bond
  • Two redundant nodes
  • RAID-protected disk drives

PowerStore T systems support clusters of up to four appliances for:

  • Constant uptime with intracluster migrations
  • Scale up
  • Simplified management
  • Automatic data placement

PowerStore Base Enclosure – Back View

The back view shows the I/O modules and ports that provide connectivity for system management, front-end hosts, and back-end Expansion Enclosures (shelves).


The management port (in red) is used only with PowerStore T appliances. Two ports on the mezz card are used for management traffic with PowerStore X appliances.

Drive Slots

The Base Enclosure supports only NVMe devices with twenty-five (25) slots that are labeled Slots 0 to 24.

  • SAS SSDs can only be added to Expansion Enclosures.

In the Base Enclosure, the first 21 slots (slots 0 through 20) can be populated with either NVMe SSD or NVMe SCM drives for data storage.

The same drive type must be used across these 21 slots; you cannot mix NVMe SSD and NVMe SCM drives in the same Base Enclosure. A minimum of 6 SSDs must be used.

The last two or four slots (dependent on the model) must be populated with NVMe NVRAM devices and are used for write cache and vaulting.

  • On the PowerStore 1000 and 3000, the last two slots (23 and 24) are reserved for two (2) NVMe NVRAM devices. Since slots 21 and 22 are open, they can be used for data drives in the PowerStore 1000 and 3000.

Drive Offerings

PowerStore supports four types of drives: NVMe SSD (Flash), NVMe Storage Class Memory (SCM) SSD,
NVMe NVRAM, and SAS SSD (Flash). The drive types must be installed in specific locations and enclosures.


The NVMe flash and NVMe SCM drives on the left are supported in the Base Enclosure, slots 0 to 20. The NVMe NVRAM type drives are supported in the Base Enclosure, slots 21 to 24 for write cache. SAS flash drives shown on the right are only supported in the PowerStore Expansion Enclosures.

Ethernet Switches

Connect PowerStore to a pair of Ethernet switches to ensure high availability; single-switch configurations are not supported. This requirement applies to switches used for iSCSI, file, intercluster management, and intercluster data. Dell EMC does not process PowerStore orders that include only a single switch.

Each node must have at least one connection to each of the Ethernet switches. Multiple connections provide redundancy at the network adapter and switch levels.

It is recommended that you deploy the switches with a Multi-Chassis Link Aggregation Group (MC-LAG); the Dell version of this is called a Virtual Link Trunking interconnect (VLTi) topology. Alternative connectivity methods, including reliable L2 uplinks and dynamic LAG, should be used only when a solution like VLTi is not a possibility.

PowerStore supports Dell EMC Networking Top-of-Rack (ToR) switches running OS10 Enterprise Edition (OS10EE). Third-party switches with the requisite features are also supported. See the Support Matrix for a list of supported switches.

Dell EMC recommends the following supported Dell EMC PowerSwitch Ethernet switches:


(For information about OS10EE, go to Dell Support and search for OS10EE.)

PowerStore T and PowerStore X Switches

PowerStore T and PowerStore X have different switch configurations.

Considerations for OOB management configuration:

  • At least one OOB management switch is recommended for PowerStore T configurations. PowerStore X does not support OOB management.
  • Can be configured with or without a management VLAN.
  • Switch ports must support untagged native VLAN traffic for system discovery.

You can take a virtual tour by clicking the screenshot below.


You can also download the Introduction to the Platform white paper, which contains much more in-depth information, by clicking the screenshot below.


In the 3rd post, we are going to cover PowerStore with AppsON:

https://volumes.blog/2020/05/05/dell-emc-powerstore-part-3-x-appson-overview/

 


What Is PowerStore – Part 3, AppsON Overview


In the first two posts of the series,

https://volumes.blog/2020/05/05/what-is-powerstore-part-1-overview/ & https://volumes.blog/2020/05/05/whats-is-powerstore-part-2-hardware/, we gave a high-level overview of PowerStore.

PowerStore utilizes a container-based software architecture that provides unique capabilities for delivering and integrating advanced system services. The modularity of containers enables feature portability, standardization, and rapid time-to-market for new capabilities, and enables maximum deployment flexibility.

The flexibility of the architecture allows customers to use PowerStore in one of two deployment models: as a traditional external storage array that attaches to servers to provide storage (known as the PowerStore T model), or as a hypervisor-enabled appliance where the PowerStoreOS runs as a virtual machine (known as the PowerStore X model). In the latter, the ESXi hypervisor, which many companies have standardized their IT infrastructure on, is loaded onto each of the two active-active nodes, with PowerStoreOS running as a virtual machine on each node.

This allows customers to run applications directly on the same appliance as the storage without the need for external servers, a feature known as AppsON. No matter which PowerStore model a customer chooses, the exact same capabilities, data services, and fully redundant all-NVMe container-based architecture run on the same 2U hardware and are fully interoperable (for example, you can replicate between a PowerStore T model and an X model and vice versa). Not only does the onboard hypervisor provide additional isolation and abstraction of the operating system, but it enables future deployment models where the storage software can be deployed independently from the purpose-built hardware.

The ideal use cases for this are storage-intensive workloads (as opposed to compute-intensive workloads), where the workload demands are measured in terms of large numbers of IOPS and capacity, such as a database. Another is infrastructure consolidation, where IT infrastructure is required in locations that don't have data centers and are very space constrained – PowerStore with AppsON provides the ability to run applications in an active-active HA manner with enterprise data services and over 1PB of effective capacity, all in a single 2U appliance form factor. In addition, PowerStore X not only runs virtual machines locally through AppsON, it can also simultaneously act as a SAN, providing storage to external servers via FC/iSCSI as well! Talk about true flexibility!

Before we go further, let's get familiar with the basics:

  • For PowerStore X, ESXi is installed directly on the purpose-built hardware that we just reviewed. As a quick refresher, it is a 2U 2 node, All NVMe Base enclosure solution with a dual socket Intel Xeon architecture.
  • The PowerStoreOS runs inside of a Virtual machine on that ESXi, this virtual machine is referred to as the Controller VM.
  • PowerStore X is capable of supporting traditional storage resources such as SAN and vVols, while also embedding applications directly onto the array in the form of VMware virtual machines. Regardless of whether it is the X or T model, PowerStore is designed with an Active-Active architecture; both nodes have access to all of the drives and to all storage resources. With that said, PowerStore presents resources in an ALUA (active optimized / active non-optimized) manner to front-end hosts.
  • PowerStoreOS is based on a Linux operating system. PowerStoreOS runs the software stack for PowerStore, which includes management functionality and endpoints, serves the web-based management interface (no external application is needed for management), handles all of the storage functionality, and provides the serviceability components, such as staging and executing upgrades and remote support via embedded SupportAssist.
  • The PowerStoreOS is implemented through multiple different docker containers. Docker is a defined environment for running containerized solutions that many are familiar with. Containerizing PowerStoreOS allows for easier serviceability as new containers can be quickly staged and brought online, and if a container needs to be rebooted or modified the entire stack does not need to come down. It also provides greater potential for integration across the Dell portfolio, as new features can be easily deployed into the docker environment for PowerStore to leverage.
  • In the PowerStore T model, 100% of the system CPU and memory are used by the PowerStoreOS. For the X model, 50% of CPU and memory are reserved for the PowerStoreOS, ensuring there is always guaranteed resources for serving storage, while the remaining 50% is available for user space to run Virtual Machines.

The screenshot above is used to help visualize the capabilities of PowerStore X. On the left, you have a more traditional setup: a physical server running ESXi, and then either an FC or iSCSI connection to a separate storage array (in this example, it is a Unity system). You then have applications (VMs) with their compute on the server and the backend disks on the storage system. PowerStore X contains both the compute and storage components internally. The two native ESXi hosts (one per node) form an ESXi cluster for the compute layer. The Controller VM runs PowerStoreOS, which handles the storage across the backend disks for any embedded applications (VMs) or traditional storage served to an external host.

Now to showcase these capabilities. Just like with a traditional storage array, an external server can create either an FC or iSCSI connection to the PowerStore X. PowerStore X can then expose storage to the host in the form of a vVol datastore, or as individual Volumes (LUNs). In this scenario, you have an application running on an external server using PowerStore X storage, exactly the same as with any other storage system.
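
For example, connecting an external ESXi host to a PowerStore X appliance over iSCSI might look roughly like the sketch below (the adapter name and target address are illustrative, not taken from the post):

# Enable the software iSCSI initiator on the external ESXi host
esxcli iscsi software set --enabled=true
# Find the software iSCSI adapter name (vmhba64 below is just an example)
esxcli iscsi adapter list
# Point dynamic discovery at a PowerStore iSCSI target portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.20:3260
# Rescan so the Volumes (LUNs) and the vVol protocol endpoint become visible
esxcli storage core adapter rescan --all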

However, PowerStore X is also capable of running the customer apps (VMs) on itself (hence 'AppsON'). In this scenario, you deploy the entire application directly onto PowerStore X. The compute portion runs on the ESXi host, and the storage is on the backend, handled by the PowerStoreOS running inside the Controller VM.

Finally, because PowerStore X uses ESXi, it automatically inherits the services that are offered through vSphere, such as vMotion. You can see the full potential of PowerStore X by seamlessly migrating your existing applications entirely onto PowerStore X using a compute and storage vMotion, and continuing to move workloads in and out based on workload and business needs.

PowerStore X is not limited to its internal nodes that are running ESXi; should you need more compute, it can also provision volumes to external hosts, over both the FC and iSCSI protocols.

One of the key benefits of PowerStore X (and T) is the ability to create more than one storage container, which allows you to have multi-tenancy for vVols-based environments. Above, you can see a screenshot showing the different storage containers.

When you open up vCenter, this is how a typical configuration will look: you have your VMware datacenter, the PowerStore cluster (Cluster-WX-H6121), your PowerStore ESXi nodes (10.245.17.120 / 121), each PowerStore ESXi host with its Controller VM (PSTX**), and then the actual customer VMs and the datastores they reside on. In the above we see a single-appliance PowerStore X system, its two nodes exposed as vSphere hosts with the PowerStore Controller VM running on each node, and two user VMs running on these internal nodes. In addition, this PowerStore X system is also serving storage to two VMs running on external ESXi servers via iSCSI, as a regular SAN array would.

PowerStore X Use Cases

There are many use cases PowerStore X can accommodate and frankly, apart from the obvious ones, we can't wait for you, our customers and partners, to show us what YOU are using it for.

Let’s look at some deployment scenarios where PowerStore can be utilized to modernize infrastructure, starting at the Edge.

Enterprises in a variety of industries are proactively deploying a wide range of IoT use cases to enable digital transformation initiatives. They are often challenged with analyzing large volumes of real-time IoT data and information in a secure, cost-effective manner using centralized analytic solutions. IoT devices often create a deluge of structured and unstructured data, including video images and audio content at the device level, which must be evaluated at the source of the data in real-time. Companies can aggregate and filter device data to remove insignificant data points, or identify the most valuable data to transport to the cloud. Gateways can collect data from edge devices and use applications or algorithms to determine if more complex analyses are needed, or to help companies comply with regulatory requirements that dictate local storage.

Organizations with requirements for edge-based IoT data analytics seek infrastructure solutions that are simple to manage, scalable, secure, and meet their network and data retention requirements. PowerStore offers unique capabilities for environments where infrastructure simplicity and density are desirable or critical, including edge computing, ROBO, mobile and tactical deployments. Its small 2U footprint, ease of deployment, flexible architecture, ability to support multiple data types, centralized management, and advanced replication to core data centers make it an ideal solution for the Edge. Branch office and retail store locations where space and resources are at a premium will be able to take advantage of the smaller footprint resulting from PowerStore's collapsed hardware stack, where separate server and networking hardware are eliminated. These same benefits are also applicable to mobile applications including tactical, shipboard and airborne deployments.

PowerStore can also be deployed in the core data center.

With AppsOn, PowerStore provides unparalleled flexibility and mobility for application deployment. PowerStore cluster management, combined with VMware tools including vMotion and storage vMotion, enable seamless application mobility between PowerStore and other VMware targets. Using a single storage instance, applications can be deployed on networked servers, hyperconverged infrastructure, or directly on the PowerStore appliance and migrated transparently between them. This unparalleled agility enables IT and application owners to quickly and seamlessly deploy and reassign workloads to the most effective environment based on current requirements and available resources.

AppsON further benefits IT organizations by providing additional flexibility while continuing to utilize existing infrastructure investments. It complements existing platforms, including HCI, by providing a landing zone for high-capacity, high-performance, storage-intensive workloads that require superior data efficiency and always-on data reduction.

Finally, PowerStore can still be utilized as a more traditional storage appliance, providing capacity to existing networked servers.

In addition to deploying infrastructure at the Edge and Core, many organizations are utilizing public cloud for hybrid cloud solutions. PowerStore customers can easily integrate their on-premises infrastructure into these environments while maintaining operational consistency.

For VMware customers, VMware Cloud on AWS delivers a seamless hybrid cloud by extending their on-premises vSphere environment to the AWS Cloud, enabling users to modernize, protect and scale vSphere-based applications with AWS resources. With PowerStore’s AppsON capability through vSphere, users can easily migrate applications and data between PowerStore and AWS based on requirements, without requiring additional management tools for simple and consistent operations.

In addition to application mobility, PowerStore provides Cloud Data Services through Faction, a managed service provider that offers scalable, resilient cloud-attached storage with flexible multi-cloud access. A variety of public cloud options that are all continuously innovating and developing new services and capabilities creates complexity in determining which cloud is right for an organization. Cloud Storage Services offers agile, multi-cloud support allowing you to leverage multiple clouds and easily and quickly switch clouds based on applications’ needs to maximize business outcomes.

Organizations can avoid vendor lock-in by keeping data independent of the cloud, so you do not have to worry about high egress charges, migration risk, or time required to move data. Extending the data center to the cloud using enterprise-class storage empowers users to innovate in the cloud and easily scale cloud environments to hundreds of thousands of IOPS to support high-performance workloads, while reducing risk and maintaining complete control of data.

In VCF environments, PowerStore can provide external capacity (supplemental storage) for VCF workload domains and provide complementary data services for data-intensive applications. This is a perfect example of how PowerStore can be deployed along with HCI to address a wide range of applications and data requirements.

There are two configurations supported today.

Supported Config #1

Management Domain + VxRail + PowerStore (Supplemental storage)

Supported Config #2

Management Domain + vSAN Ready Nodes + PowerStore (Principal & Supplemental)

PowerStore X is deeply integrated into VMware vCenter, as we know many of our customers have standardized their virtualized infrastructure on vSphere. This means creating and managing virtual machines on PowerStore X is exactly the same as if it were an external ESXi server managed in vCenter. However, it was important for us to make PowerStore Manager VMware-aware, so that the PowerStore UI can display the VMware objects consuming PowerStore resources. Not only does it offer an extremely simple turn-key vVol setup, but it also provides a lot of information about the VMs using vVols, such as VM performance metrics, all integrated into the PowerStore management UI.

Below, you can see a high-level video explanation of AppsON

and you can see a demo of how it all looks.


What Is PowerStore – Part 4, vVols



So far, we have covered the following aspects of Dell EMC PowerStore:

High Level Overview

Hardware

AppsON

Now, let’s dive deeper into one of the features that excites me to the most, vVols. PowerStore is optimized for virtualized environments with its close integration with VMware vSphere or as Chris Mellor wrote “PowerStore.. and its integration with VMware means it should on that basis alone become the default midrange storage offering for existing customers.” & “Dell EMC has probably the best integration going with VMware..”

PowerStore provides the storage capabilities for the creation of specialized VMware datastores in the vSphere environment which are shared by ESXi hosts.

  • Volumes (Block) are discovered and mounted as VMFS datastores.
  • Exported file systems are mounted as NFS datastores (only on PowerStore T models); see the esxcli sketch just below.

    Storage containers on PowerStore appliances act as a logical grouping of vVols that enable vVols to map directly to the appliance.
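
As a rough sketch of the NFS case on PowerStore T, an ESXi host can mount an exported file system as an NFS datastore with esxcli (the NAS server address, export path, and datastore name are illustrative):

# Mount a PowerStore NFS export as a datastore on the ESXi host
esxcli storage nfs add --host=192.168.20.30 --share=/fs01 --volume-name=PowerStore-NFS-01
# Verify the NFS datastore is mounted
esxcli storage nfs list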

Discovery and connection to ESXi hosts on the network is accomplished through host configurations using storage protocols, and VASA support configuration.

The PowerStore X configuration provides a hypervisor layer in addition to block storage.

  • VMware ESXi is the base operating system running on the PowerStore X hardware.
  • PowerStore X appliance uses a portion of the system resources and storage space to run the PowerStore software VMs.

The storage system works closely with existing VMware storage management and integration features.

  • VASA is a VMware-defined API meant to provide a common way for VMware to integrate with storage vendors. The API is implemented using an out-of-band, Simple Object Access Protocol (SOAP) over HTTPS.

  • VMware vSphere APIs for Array Integration (VAAI) are a set of APIs to enable communication between VMware vSphere ESXi hosts and storage devices. The APIs define a set of storage primitives that enable the ESXi host to offload certain storage operations to the array. These APIs reduce resource overhead on the ESXi hosts and can improve performance for storage-intensive operations.
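
On an ESXi host, you can check which VAAI primitives a given device reports as supported; a minimal sketch (the device identifier below is illustrative):

# Find the naa identifier of the PowerStore LUN
esxcli storage core device list
# Show VAAI primitive support (ATS, Clone/XCOPY, Zero, Delete/UNMAP) for that device
esxcli storage core device vaai status get -d naa.600000000000000000000000000000ab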

What is Virtual Volumes (vVols) Technology?

vVols is an integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. vVols simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.

With vVols, VMware offers a paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes a unit of storage management for a storage system. vVols encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.

Overview

vVols are VMDK-granular storage entities exported by storage arrays. vVols are exported to the ESXi host through a small set of protocol endpoints (PE). Protocol Endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective vVols on demand. Storage systems enable data services on vVols. The results of these data services are new vVols. Data services, configuration, and management of virtual volume systems are exclusively done out-of-band with respect to the data path. vVols can be grouped into logical entities called storage containers (SC) for management purposes. The existence of storage containers is limited to the out-of-band management channel.

vVols and Storage Containers (SC) form the virtual storage fabric. Protocol Endpoints (PE) are part of the physical storage fabric.

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the vVols and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish two-way out-of-band communication to perform data services and offload certain virtual machine operations to the storage system. For example, operations such as snapshots and clones can be offloaded.

For in-band communication with vVols storage systems, vSphere continues to use standard SCSI and NFS protocols. This results in support with vVols for any type of storage that includes iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), and NFS.

  • vVols represent virtual disks of a virtual machine as abstract objects identified by 128-bit GUID, managed entirely by Storage hardware.
  • Model changes from managing space inside datastores to managing abstract storage objects handled by storage arrays.
  • Storage hardware gains complete control over virtual disk content, layout and management.

Many storage partners have added vVols support to their arrays. For end-to-end vVols support, HBA drivers need to support vVols-based devices. This necessitates the availability of an API to get the second-level LUN ID (SLLID) for use by the SCSI drivers.

  • Virtual disk granular external storage consumption model for VMware vSphere.
  • Two primary aspects:
    • in-band: Fibre Channel, iSCSI, NFS and NVMe-oF (in the future) for storage media access.
    • out-of-band: vSphere API for Storage Awareness (VASA) as a machine to machine communication for storage management.
  • vVols technology is vendor agnostic.
  • Vendors need to certify implementation to get onto the Hardware Compatibility List:
    • Base vVols certification with 350 individual tests
    • vVols replication certification with 70 individual tests
    • vVols SCSI-3 persistent reservations support with 20 tests

Key Concepts: VASA API Endpoint

PowerStore includes an embedded VASA 3.0 provider. VASA gathers information about the storage system, focusing on the storage container properties and data services, and displays this information in vCenter. vCenter must be registered in PowerStore Manager to enable this functionality. Each PowerStore cluster can only be registered to a single vCenter instance.

PowerStore communicates with the ESXi server through APIs based on the VASA protocol. VASA enables PowerStore appliances to request and display basic information about the storage system, and the storage resources it exposes to the virtual environment.

PowerStore Manager is capable of monitoring the status of the provisioned storage resources:

  • Monitor events without requiring continuous polling.
  • Automation for configuration process

In general, a VASA session is created when a vCenter connects to the VASA Provider using the VASA protocol. The protocol allows (and enforces) only one session per vCenter. Sessions are created with information about the client (FC and iSCSI initiators) for use in filtering results. Sessions are maintained in memory only; they are not persisted across restarts. vCenter detects a failed session and automatically starts a new session.
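
From the ESXi side, you can confirm that the VASA provider registration is visible and online; a hedged one-liner:

# List the VASA providers known to this ESXi host, with their URL and status
esxcli storage vvol vasaprovider list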

Key Concepts: Storage Container

  • Pool of capacity exposed by storage system to create vVols.
  • Mounted on ESXi hosts as vVol datastore.
  • Depending on the implementation actual storage space may or may not be allocated for the storage container. For example, physical space is allocated from a storage pool in Unity and is not allocated in PowerMax or Trident.
  • In deployments with multitenancy requirements, storage containers can be used to partition tenants (no strong tenant isolation is offered by vSphere though).

Key Concepts: Storage Policy

  • Data structure defined by vSphere admin and supplied by ESXi host to storage system for vVol creation.
  • Combines capabilities exposed by the storage system to define set of requirements for creating vVols.
  • ESXi host initially supplies storage policy to discover matching storage containers/vVol datastores. This allows the storage system to restrict which vVols can be created where.
  • Storage system monitors compliance with the policy and reports non-compliance.

Key Concepts: Virtual Volume (vVol)

  • Storage system volumes hosting data for vSphere.
  • Accompanied with metadata supplied by vSphere.
  • Config – virtual machine configuration (up to 4 GB).
  • Data – virtual disk.
  • Swap – swap space for powered on virtual machines.

Key Concepts: vVol Snapshot

  • Block sharing snapshot of a base vVol.
  • Read-only or read-write.
  • Can be bound for IO access or restored from.
  • Fast-clone – a block sharing clone used for development and test or Virtual Desktop Infrastructure.
  • Full clone – used for deploying virtual machines from templates and storage vMotion.

Key Concepts: Protocol Endpoint

  • Protocol Endpoint is the access point for IO to vVols.
  • In case of FC or iSCSI it is a T10 administrative LUN under which vVols are bound as subordinate LUNs.
  • Reduces path management scale for ESXi storage stack.
  • In case of NFS it is an NFS exported path.
  • With NVMe-oF ANA Groups will likely be used instead of Protocol Endpoints to reduce path management scale.
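
To see how these objects surface on an ESXi host, you can list the protocol endpoints and storage containers the host has discovered; a minimal sketch:

# List the protocol endpoints (T10 administrative LUNs or NFS paths) visible to this host
esxcli storage vvol protocolendpoint list
# List the vVol storage containers that can be mounted as vVol datastores
esxcli storage vvol storagecontainer list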

PowerStore discovers the details about the virtual volumes that are stored in the storage container and displays them in PowerStore Manager.

Open the Storage Container page in PowerStore Manager by expanding the Storage submenu and selecting Storage Container.

Click the name of the storage container that is provisioned to the ESXi using VASA support.

Select the Virtual Volumes tab. The page displays the list of VVols stored in the storage container.

VVols are storage objects that are provisioned automatically on a storage container and store VM data.

You can closely monitor the status of any of the objects by selecting ADD TO WATCHLIST, and gather support material.

Below, you can see the integration we have from the PowerStore GUI to the VMs that reside on vVols.

The area marked in red shows the type of (v)volumes used by that VM: it has one config file, one vswp file (which gets created when the VM is powered on and deleted when it's powered off) and one 'data' volume. You may ask why it is shown as 'clone'; the reason is that if the parent VM template resides on the same vVol datastore as the one you are deploying that VM to (deploy from template, cloning, etc.), the operation will utilize an array-based snapshot, which is also the reason it completes very fast.

The area marked in green shows the storage container this VM resides in. You may ask yourself, 'well, if there is only one storage container, why even bother?' The answer is simple (and I think unique to PowerStore): we support more than one storage container, and there may be many reasons you want to create more than one, multi-tenancy for example.

The area marked in yellow is where things start to get interesting: it provides metrics relevant to that VM from both a storage and compute perspective, so let's check them out.

The compute tab gives you information on the selected VM: CPU usage, memory usage, and system uptime, all as they appear in vCenter.

The storage performance tab will give you information about the latency, IOPS, and bandwidth consumed by that VM.

The protection tab shows information about the protection policy of that VM and its associated array-based snapshots, and it can also be used to take array-based snapshots of that VM.

Finally, the virtual volumes tab shows the associated volumes of that VM; note the added 'snapshot' type volume, which reflects the array-based snapshot I just took for that VM.

And this is how the snapshot is visible from the vCenter interface. Pretty slick!

Modify Capacity of VVol Datastores

The capacity of VVol datastores, from provisioned PowerStore storage, can be increased or decreased by following these operations:

  • Change the storage container quota in PowerStore Manager.
  • Refresh the capacity in vSphere.

Open the Storage Container page in PowerStore Manager by expanding the Storage submenu and selecting Storage Container.

Click the name of the storage container that is provisioned to the ESXi using VASA support.

The storage container properties page opens. The available information includes storage consumption and the virtual volumes stored in the storage resource.

Select any of the tabs to see more details.

Quotas can be enabled on storage containers. A high water mark determines when an alert will be generated for the storage administrator.

From the properties page of the storage container that is attached to the ESXi host, perform the following operations:

  • Select the pencil icon on the right of the storage container name.
  • Change the value of the storage container quota. In the example, the storage container quota was increased.
  • Click APPLY to save the changes. The slide-out panel closes and the information on the capacity tab is updated.

Launch a vSphere Web Client session to the vCenter Server, and open the Storage view from the menu.

Select the VVol datastore and the Configure tab.

On the General page click REFRESH on the Capacity section. The VVol datastore capacity now reflects the change to the PowerStore storage container size.
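
If you prefer the command line, the updated size can also be checked from an ESXi host after the refresh; a hedged sketch (the vVol datastore should be listed with type VVOL):

# Check the Size and Free columns of the vVol datastore after the refresh
esxcli storage filesystem list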

Below, you can see a demo that will give you a good overview of vVols when running on PowerStore

and here, you can watch a joint lightboard video we did with VMware about the PowerStore vVols architecture

and another one here

Lastly, here's a demo showing how to protect (both locally and remotely) VMs running on vVols with RP4VMs (RecoverPoint for Virtual Machines)

You can also download the vSphere best practices white paper by clicking the screenshot below

In a future post, we are going to cover how to use the new PowerProtect software to provide enterprise-grade backup for your VMs hosted on PowerStore, for both VMFS and vVols.


What Is PowerStore – Part 5, File Capabilities


So far, we have covered the following aspects of Dell EMC PowerStore:

High Level Overview

Hardware

AppsON

vVols

so now it’s time to move to something “completely different” as they once said..

Dell EMC™ PowerStore™ offers a native file solution that is designed for the modern data center. The file system architecture is designed to be highly scalable, efficient, performance-focused, and flexible.
PowerStore also includes a rich supporting feature set, enabling the ability to support a wide array of use cases such as departmental shares or home directories. These file capabilities are integrated, so no extra hardware, software, or licenses are required. File management, monitoring, and provisioning capabilities are handled through the simple and intuitive HTML5-based PowerStore Manager.

PowerStore achieves new levels of operational simplicity and agility. It uses a container-based microservices architecture, advanced storage technologies, and integrated machine learning to unlock the power of your data. PowerStore is a versatile platform with a performance-centric design that delivers multidimensional scale, always-on data reduction, and support for next-generation media.
PowerStore brings the simplicity of public cloud to on-premises infrastructure, streamlining operations with an integrated machine-learning engine and seamless automation. It also offers predictive analytics to easily monitor, analyze, and troubleshoot the environment. PowerStore is highly adaptable, providing the flexibility to host specialized workloads directly on the appliance and modernize infrastructure without disruption. It also offers investment protection through flexible payment solutions and data-in-place upgrades.

PowerStore features a native file solution that is highly scalable, efficient, performance-focused, and flexible.
This design enables accessing data over file protocols such as Server Message Block (SMB), Network File System (NFS), File Transfer Protocol (FTP), and SSH File Transfer Protocol (SFTP).
PowerStore uses virtualized NAS servers to enable access to file systems, provide data segregation, and act as the basis for multi-tenancy. File systems can be accessed through a wide range of protocols and can take advantage of advanced protocol features. Services such as anti-virus, scheduled snapshots, and Network Data Management Protocol (NDMP) backups ensure that the data on the file systems is well protected.
PowerStore file is available natively on PowerStore T model appliances, which are designed as true unified storage systems. There are no extra pieces of software, hardware, or licenses required to enable this functionality. All file management, monitoring, and provisioning capabilities are available in the HTML5 based PowerStore Manager.
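
For example, once a NAS server and file system have been configured, a Linux client could mount an NFS export roughly like this (the NAS server address and export name are illustrative):

# See which exports the PowerStore NAS server publishes
showmount -e 192.168.30.40
# Mount the export over NFSv3 (NFSv4 can also be used if enabled on the NAS server)
sudo mount -t nfs -o vers=3 192.168.30.40:/fs01 /mnt/fs01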

Why PowerStore File?

  • Natively available on the PowerStore platform
    • Takes full advantage of PowerStore architecture – inline data efficiencies, NVMe, etc.
    • Designed for high availability and consistent performance
    • Single interface for management and monitoring of your block and file environment
  • Rich and mature data services
    • Integration points with many NAS protocols and services
    • Easily shrink and grow file systems on-demand
    • Leverage quotas to limit capacity consumption
    • Improve data efficiency with always-on compression and deduplication
    • Leverage snapshots and thin clones for file restores and data repurposing

PowerStore File enables clients to access data over file protocols:

  • Server Message Block (SMB)
  • Network File System (NFS)
  • File Transfer Protocol (FTP)
  • SSH File Transfer Protocol (SFTP)

File is only available on PowerStore T model appliances, where the functionality is natively built in:

  • No additional software, hardware, or licenses are required
  • Runs as a docker container

File management, monitoring, and provisioning are done in the PowerStore Manager GUI.

File upgrades are included as part of the overall PowerStore upgrade process.

PowerStore T model appliances can be configured as Block Optimized or Unified (block and file):

  • Selection determines resource allocation on the appliance
  • PowerStore X model appliances do not have this option, as they do not support NAS

NAS Installation

This screenshot shows the NAS installation process that is started after the cluster creation completes.

  • File functionality is only available on the master appliance in the cluster
    • Remaining appliances are configured as Block Optimized
  • Only the capacity on the master appliance is available for File
    • Capacity available on other appliances within the same cluster can be used for volumes and vVols
  • Both nodes on the master appliance are used for File
    • Active/active architecture enables load balancing and high availability

NAS Servers

  • NAS servers enable access to the data on file systems
    • Contains protocol and environmental configuration
    • Required before creating file systems
  • NAS servers are used to enforce multi-tenancy
    • NAS Servers are logically segregated from each other
    • Clients of one NAS Server do not have access to data on other NAS Servers
    • IP multi-tenancy is not available
  • Each NAS server has its own independent configuration
    • E.g., DNS, LDAP, NIS, interfaces, protocols, etc.


This screenshot shows how to create a new NAS server.

NAS Server Management


After a NAS server is configured, its settings can be modified at any time. When navigating to the properties of a NAS server, multiple cards are displayed, including Network, Naming Services, Sharing Protocols, NDMP, Kerberos, Antivirus, and Alerts.

Anti-Virus


Function—When a file is written and saved (scan on update) or read for the first time (scan on read), PowerStore blocks access to that file until virus checking has been performed. It immediately issues a remote procedure call (RPC) to a virus-checking engine. This could be a single engine or many, depending on the volume of data being protected—thus providing a highly scalable solution. Because PowerStore can easily use multiple virus-checking servers, the performance impact of virus checking is a small fraction of the total throughput of the system, and much lower than with systems that use a single virus-checking server (see “Scalability” below).

On receipt of the request, an access is initiated from a filter driver, and the virus-checking server performs a standard check on the file. Understand that standard virus checkers request only a small amount of data (signatures of a few kilobytes each) to establish the presence of a virus, so the overhead is relatively small. The exception to this is with compressed files, in which case the entire file must be shipped across the network. The implementation may be through the normal user network; in the case of heavy-load environments, you may wish to dedicate a network interface to the virus-checking server farm. If a virus is detected, the user and the Administrator will see a customizable pop-up message.

The scan-on-read functionality is triggered when a file is opened for read that was last scanned before a set “access time.” This “access time” is typically set when a new virus-definition file is loaded to rescan old files (once) that may contain undetected viruses. You may also wish, under certain circumstances, to run anti-virus in scan-on-read mode—for instance, after a restore of data that may be infected with a latent virus, or following migration from a general-purpose NT server onto a PowerStore system.

Scalability—You can scale the solution by adding virus-checking servers as required. Your server vendors should be able to provide you with an understanding of how many dedicated servers you would need. You can also use different server types (e.g., McAfee, Symantec, Trend Micro, CA, Sophos and Kaspersky) concurrently, as per their original anti-virus implementation.

Performance of anti-virus solutions tends to be measured in server overhead and comes with the typical “your mileage may vary” qualification, depending on application and workload.

NAS Server Interfaces

NAS server interfaces are configured on the first two bonded ports of the 4-port card. They can be used to test connectivity to other devices for troubleshooting purposes. Also, custom host and network routes can be configured for each NAS server interface.

NAS server interfaces cannot reside on the same VLAN as the storage network.

One interface is designated as the preferred interface for outgoing communication to external services.

Supported Protocols

  • NFS
    • NFSv3
    • NFSv4 – 4.1
    • Secure NFS
  • SMB – Standalone or Domain Joined
    • SMB1
    • SMB2
    • SMB3 – 3.1.1
  • Multiprotocol – Access using both SMB and NFS simultaneously
    • Automatically enabled when both the SMB and NFS protocols are enabled on the NAS Server
  • FTP/SFTP

Secure NFS

NFSv3 is not considered to be a secure protocol since it’s designed to trust the host to authenticate users, build their credentials, and transfer them over the network in clear text.

For security-conscious customers, Secure NFS can be used instead. It enables secure data transmission by using Kerberos for authentication instead of trusting the individual clients.

There are three supported modes for Secure NFS:

  • krb5 – Use Kerberos for authentication only
  • krb5i – Use Kerberos for authentication and include a hash to ensure data integrity
  • krb5p – Use Kerberos for authentication, include a hash, and encrypt the data in-flight

In order to configure Secure NFS, the NAS server must meet these requirements:

  • DNS must be configured
  • UNIX Directory Service must be configured
  • Kerberos realm must be configured
    • If an AD-joined SMB server exists on the NAS Server, that Kerberos realm can be used

NAS Server High Availability

In the event of a PowerStore node failure, NAS servers automatically fail over from one node to the other. The failover process generally completes within 30 seconds on most moderate-sized configurations to avoid host timeouts. NAS servers are also automatically moved to the peer node and back again during the upgrade process. Note that after recovering from a reboot or failure, failing back the NAS servers is a manual process.

When creating new NAS servers, they are automatically assigned to the nodes in round-robin fashion. All file systems that are associated with a NAS server are served by the NAS server’s current node.

There are two properties available in PowerStore Manager:

  • The Current Node indicates the node that the NAS server is currently running on. Changing a NAS server’s current node moves the NAS server to run on a different node.
  • The Preferred Node indicates the node that the NAS server should ideally be running on. This acts as a marker, set by the round-robin algorithm when the NAS server was first provisioned, that can be used for failback purposes. This property never changes after a NAS server is provisioned.


This screenshot shows how to move a NAS server to run on a different node.

NDMP Backups

NDMP is a backup and recovery protocol that is used to transfer data between file systems and backup targets.

There are three components in an NDMP configuration:

  • Primary Storage – Source system to be backed up, such as PowerStore
  • Data Management Application (DMA) – Backup application that orchestrates the backup sessions, such as Dell EMC NetWorker
  • Secondary Storage – The backup target, such as PowerProtect

PowerStore supports 3-way NDMP backups. 3-way NDMP transfers both the metadata and backup data over the LAN. 2-way NDMP is not supported.

Both full and incremental backups are supported on PowerStore file systems.

3-Way NDMP


With 3-way NDMP, both the data and metadata are transferred over the LAN. The backup data is first transferred to the DMA and then sent to the Secondary Storage.

Configuring NDMP


In order to configure NDMP, navigate to the NAS server properties > Protection & Events > NDMP Backup.

File Systems

  • A file system can be created once a NAS Server is available
    • Once created, file systems cannot be moved from one NAS Server to another
  • The file system creation wizard prompts for:
    • NAS Server
    • File System Details
      • Name
      • Description (optional)
      • Size
    • NFS Export Details (if enabled on the NAS Server)
      • Name
      • Description (optional)
      • NFS access configuration
    • SMB Share Details (if enabled on the NAS Server)
      • Name
      • Description (optional)
      • Advanced SMB options
    • Protection Policy

SMB Options

  • Advanced SMB Settings (File System)
    • Sync Writes Enabled – Synchronous writes are required when using SMB shares for databases
    • Oplocks Enabled (Default) – Allows SMB clients to buffer file data locally before sending to the system
    • Notify on Write Enabled – Enables applications to be notified using the Windows API when files are written
    • Notify on Access Enabled – Enables applications to be notified using the Windows API when files are accessed
  • Advanced SMB Settings (Share)
    • Continuous Availability (SMB3) – Allows persistent access to the share without loss of the session state
    • Protocol Encryption (SMB3) – Encrypts data in-flight between clients and the system
    • Access-Based Enumeration – Restricts the display of files and folders based on the user’s access privileges
    • Branch Cache Enabled – Allows users to access data stored on a remote NAS Server without traversing the WAN
    • Offline Availability – Determines if users can cache a copy of the share for offline access
      • None (Default)
      • Manual
      • Documents
      • Programs
    • UMASK (022 Default) – Bitmask that enables the ability to control the default UNIX permissions for newly created files and folders

NFS Options

  • Minimum Security – Minimum security allowed when connecting to the NFS export
    • Sys – Allows clients with standard NFS security to connect
    • Kerberos – Allows clients using any Kerberos flavor to connect
    • Kerberos with Integrity – Allows clients that have Kerberos with data integrity or encryption to connect
    • Kerberos with Encryption – Allows clients that have Kerberos with encryption enabled to connect
  • Access Levels – The access level for NFS clients
    • No Access – Access is denied
    • Read/Write – Users have read/write access to the export
    • Read-Only – Users have read-only access to the export
    • Read/Write, allow Root – Users have read/write access and root has root privileges on the export
    • Read-Only, allow Root – Users have read-only access and root has root privileges on the export

NFS Access

When configuring access for NFS, you can configure a Default Access option. This configures the access level for clients that are not explicitly listed in the export list.

You can configure the export list exceptions using hosts. Hostnames and IP addresses can be entered directly into the export list. Multiple hosts can also be entered simultaneously by separating them with a comma.

You can also configure the export list by importing a list of hosts and their respective access levels. The system provides a template to show the expected format. The CSV file should contain a list of hostnames or IP addresses along with the access level for each host. This feature is useful when configuring the same access settings for multiple NFS exports, even if they are on different clusters.
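If you maintain the same host list for many exports, it can be convenient to generate that CSV with a small script. The sketch below is purely illustrative: the column layout shown is an assumption, so base your real file on the template that PowerStore Manager provides for the import.

import csv

# Hypothetical export-list rows: (host, access level).
# The real column layout comes from the template PowerStore Manager provides;
# the header names below are illustrative assumptions only.
rows = [
    ("10.0.1.15", "Read/Write"),
    ("nfs-client01.lab.local", "Read-Only"),
    ("nfs-admin01.lab.local", "Read/Write, allow Root"),
]

with open("nfs_export_hosts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["host", "access_level"])  # illustrative headers
    writer.writerows(rows)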

File System Creation


File System Management


This screenshot shows the options for managing and monitoring an existing file system.

File System Shrink and Extend


File systems can be shrunk and extended at any time. Note that you cannot shrink a file system to a size that is lower than its Used size.

Shrink and extend operations take effect immediately. You can see the changes as soon as you refresh the client.

The minimum size of a file system is 3GB and the maximum size is 256TB.

File System Metrics

There are file system level metrics available in PowerStore Manager and the REST API; a scripted example follows the granularity list below. These metrics include latency, IOPS, bandwidth, and I/O size.

The age of the data determines how granular the data is:

  • Last Hour – 20 seconds
  • Last Day – 5 minutes
  • Last 2 Months – 1 hour
  • Last 2 Years – 1 day
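As promised above, here is a minimal Python sketch for pulling these metrics over the REST API instead of PowerStore Manager. The login_session resource, the DELL-EMC-TOKEN header, the /metrics/generate endpoint, and the entity name are assumptions based on the public PowerStore REST API reference, so verify them against the documentation for your release.

import requests

BASE = "https://powerstore.example.com/api/rest"   # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword!")
session.verify = False  # lab only; use CA-signed certificates in production

# Assumed login flow: a GET returns a session token in the DELL-EMC-TOKEN header.
login = session.get(f"{BASE}/login_session")
login.raise_for_status()
session.headers["DELL-EMC-TOKEN"] = login.headers["DELL-EMC-TOKEN"]

# Assumed metrics endpoint and entity name - check the REST API reference.
metrics = session.post(f"{BASE}/metrics/generate", json={
    "entity": "performance_metrics_by_file_system",  # assumed entity name
    "entity_id": "<file_system_id>",                  # placeholder
    "interval": "Five_Mins",                          # corresponds to the Last Day granularity above
})
metrics.raise_for_status()
for sample in metrics.json():
    print(sample.get("timestamp"), sample.get("avg_latency"), sample.get("avg_iops"))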

PowerStore Manager Metrics


This screenshot shows the file system metrics in PowerStore Manager.

File System Quotas

Quotas are available to regulate the capacity consumption on the file system.

User quotas limit the capacity consumed by an individual user on the file system. Since PowerStore file leverages a UNIX-based file system, these users are identified by their UNIX UID regardless of the actual access protocol.

Tree quotas limit the capacity consumed on a specific directory on the file system. All files in the directory and subdirectories contribute towards the limit.

Default quotas are applied to all users on the file system automatically. This negates the need to configure a user quota for every user. You can configure exceptions to the default as well.

You can also configure user quotas inside of a tree quota. This limits the capacity that specific users can consume in a specific directory.

Soft and Hard Limits

Quotas have soft and hard limits. The soft limit is a limit that can be exceeded temporarily.

The grace period determines for how long the soft limit can be exceeded. Once the grace period expires, the user is prevented from writing any additional data. When this happens, they must free up space so that they are under the soft limit again before they are allowed to write.

The hard limit is an absolute limit on storage usage. Once the hard limit is reached, the user cannot write any additional data until space is freed up.

Quota settings can be managed on the storage system and also on Windows clients.

How Quotas Work


This screenshot shows how quotas work. In this example, data is being written to a directory with a tree quota on it. The file system usage is increasing and climbing towards the soft limit.


When the soft limit is reached, the grace period is invoked.


If the grace period expires and the soft limit is still exceeded, write requests from clients are denied. Clients must remove data to get back under the soft limit before they are allowed to write again.
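For readers who prefer code to prose, here is a tiny, purely conceptual Python model of the enforcement flow just described. It is not PowerStore code; it only illustrates how a soft limit, grace period, and hard limit interact.

from datetime import datetime, timedelta

class QuotaState:
    """Toy model of soft/hard quota enforcement (illustration only, not PowerStore code)."""

    def __init__(self, soft_gb, hard_gb, grace_days):
        self.soft_gb = soft_gb
        self.hard_gb = hard_gb
        self.grace = timedelta(days=grace_days)
        self.used_gb = 0.0
        self.grace_started = None  # set when usage first exceeds the soft limit

    def write(self, size_gb, now=None):
        now = now or datetime.now()
        new_usage = self.used_gb + size_gb
        if new_usage > self.hard_gb:
            return False  # hard limit is absolute: the write is denied
        if new_usage > self.soft_gb:
            if self.grace_started is None:
                self.grace_started = now          # soft limit crossed: grace period starts
            elif now - self.grace_started > self.grace:
                return False                      # grace period expired: deny until usage drops
        else:
            self.grace_started = None             # back under the soft limit: grace resets
        self.used_gb = new_usage
        return True

q = QuotaState(soft_gb=8, hard_gb=10, grace_days=7)
print(q.write(7))  # True  - still under the soft limit
print(q.write(2))  # True  - soft limit exceeded, grace period starts
print(q.write(2))  # False - would exceed the hard limit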

Configuring Quotas


This screenshot shows how to configure quotas, grace period, default settings, and an individual user quota.

Below, you can see a short demo of the PowerStore NAS UI

and a longer one..

A NAS MMC Plug-IN video

you can also download a white paper by clicking the screenshot below

The post What Is PowerStore – Part 5, File Capabilities appeared first on Itzikr's Blog.

What Is PowerStore – Part 6, PowerStore User Interface


So far, we have covered the following aspects of Dell EMC PowerStore:

High Level Overview

Hardware

AppsON

vVols

File Capabilities

PowerStore Management Software

Use PowerStore Manager to access, configure, and manage individual PowerStore appliances and clusters.

PowerStore Manager opens to the Dashboard page by default with three categories of information about the managed cluster/appliances. These categories are divided into tabs: Overview, Capacity, and Performance. Many tabs provide interactive filtering with persistent selections, and data refreshes automatically as status changes.

  • The Overview tab provides monitoring on critical resources, and a summary of provisioned block and file storage resources. Select a subtopic from these sections, such as Volumes, which is shown in the Volumes example below.
  • The Capacity tab displays information about how much space is being used on the cluster, including savings from data reduction. It also shows an estimate of when the system is due to reach 100% capacity.
  • The Performance tab displays performance information, as shown in the Performance example below.

A summary of the system is displayed:

  • Block
  • File (not on PowerStore X)

  • To display a list of volumes, select Storage > Volumes from the dashboard.
    • To create one or multiple volumes, select Create.
    • Selecting the Volumes widget on the Overview tab of the Dashboard page opens the Volumes page under the Storage section. The Volumes page displays a list of volumes provisioned in the cluster/appliances. Selecting one item of the list displays the volume properties information such as capacity, performance, and host mappings. You can continue to drill down by selecting a volume from the list. This procedure allows you to view information specific to that volume, including:
      • Current usage
      • Historical usage
  • To make bulk changes to multiple volumes, select the check boxes and then More Actions.

  • To view a specific area, such as Performance, click the tab.
  • The Performance tab displays a summary view of the overall system performance over the last hour. You can optimize the view for different intervals and different performance data, such as latency or IOPS.
  • Data updates automatically.

    PowerStore CLI

You can manage the PowerStore system using PowerStore CLI (PSTCLI) instead of a GUI interface.

  • Intended for advanced users who want to run scripts to automate routine tasks.
  • Supported tasks include:
    • Configuring and monitoring the system
    • Managing users
    • Provisioning storage
    • Protecting data
    • Controlling host access to storage
  • You can also use it for data exchange protocols, such as SNMP.

    REST API

REST API is another way to manage the system; a short scripted example follows the list below.

  • What is REpresentational State Transfer (REST)?
    • A web-friendly API protocol style
    • Follows standard HTTP conventions
    • Commonly used in web services
  • Use cases:
    • Automated remote management, including replication.
    • DevOps integrations
    • Third-party tool integrations
    • GUI and CLI backend—you can add your own extensions.
    • Uses:
      • Monitoring alerts—low space, performance issues, or hardware failures
      • Historical metrics collection—billing, forecasting
      • Data center integration
      • Adding common storage task options not in the standard UI
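As a concrete illustration of the points above, a minimal Python sketch using the requests library is shown below. The /api/rest/volume endpoint, the select query parameter, and the DELL-EMC-TOKEN header are assumptions based on the public PowerStore REST API reference, so verify them against the documentation for your release.

import requests

BASE = "https://powerstore.example.com/api/rest"   # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword!")
session.verify = False  # lab only; use CA-signed certificates in production

# List volumes with a few selected attributes (assumed endpoint and query parameter).
volumes = session.get(f"{BASE}/volume", params={"select": "id,name,size"})
volumes.raise_for_status()
for vol in volumes.json():
    print(vol["id"], vol["name"], vol["size"])

# Modifying calls (POST/PATCH/DELETE) are expected to also carry a session token,
# returned in the DELL-EMC-TOKEN response header of a prior request (assumption -
# check the REST API reference for your release).
token = volumes.headers.get("DELL-EMC-TOKEN")
if token:
    session.headers["DELL-EMC-TOKEN"] = token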

Below, you can see a demo of the PowerStore UI

and below, you can see a demo, showing specific volumes related operations

from the demo below, you can see specific performance information

and from the demo below, you can see how to create hosts & volumes

you can also download the white paper by clicking the screenshot below

The post What Is PowerStore – Part 6, PowerStore User Interface appeared first on Itzikr's Blog.


Accelerate your IT applications with PowerMax AI/ML and SCM persistent storage Webcast


Some days ago, Vince Westin held a webcast about PowerMax, one that I highly encourage you to watch

Abstract: Learn all about PowerMax – the world’s fastest storage array.  PowerMax delivers unprecedented levels of performance and scale via NVMe-oF, SCM persistent storage, and automated data placement through an advanced AI/ML engine.  Also, learn techniques for data consolidation at scale and data reduction to accelerate your business outcomes.

You can watch the webcast on demand by clicking the link below

https://www.delltechnologies.com/en-us/events/webinar/home.htm?commid=399500#webinar=399500


The post Accelerate your IT applications with PowerMax AI/ML and SCM persistent storage Webcast appeared first on Itzikr's Blog.

Upcoming Dell EMC PowerStore Webinars


We have just released PowerStore to the world and as such, there are many questions being asked about every aspect of the array, and there are a lot of them!

Below you can see the list of the PowerStore related Webinars for the upcoming months

May 12, 1PM CDT

Introduction to Dell EMC PowerStore

The future of storage is here! Join us for an overview of PowerStore, our new modern storage appliance that provides customers with data-centric, intelligent, and adaptable infrastructure to support both traditional and modern workloads. Built from the ground up to unlock the power of data, PowerStore is highly differentiated from competitive solutions and eliminates the typical tradeoffs in performance, scalability and storage efficiency so that IT organizations can meet the increasing demands of the data era.

May 12, 9:00 PM

Introduction to Dell EMC PowerStore

The future of storage is here! Join us for an overview of PowerStore, our new modern storage appliance that provides customers with data-centric, intelligent, and adaptable infrastructure to support both traditional and modern workloads. Built from the ground up to unlock the power of data, PowerStore is highly differentiated from competitive solutions and eliminates the typical tradeoffs in performance, scalability and storage efficiency so that IT organizations can meet the increasing demands of the data era.

May 26, 9:00 PM

Dell EMC PowerStore: Technical Deep Dive

Get a detailed view of the PowerStore architecture – including end-to-end NVMe, always-on inline data reduction, unified block/file/vVOL support, container-based architecture and an AI/ML-based Resource Balancer for clusters. Learn how Dell EMC’s revolutionary “data-first” design simplifies IT operations while transforming the agility and mobility of current and new workloads alike.

June 11, 9:00 PM

PowerStore AppsON: Virtualized Applications inside a Modern Storage Platform

Explore the new PowerStore AppsON capability, which enables deployment of data-intensive VMs and applications directly on the appliance. PowerStore brings workloads closer to the data they need, delivering enhanced performance, operational simplicity, and a reduced datacenter footprint – while at the same time supporting applications deployed on external hosts. Learn how AppsON complements existing infrastructure to enable application mobility and consolidation, while maintaining consistent operations across flexible deployment options.

June 23, 9:00 PM

Dell EMC PowerStore: Intelligent Scaling & Migration

Not only does PowerStore independently scale up and scale out, it does so intelligently, automatically discovering new appliances when they are added to a cluster. Learn how PowerStore’s onboard machine learning engine uses intelligent data placement to improve system utilization and performance through the balanced provisioning of new appliance storage volumes. By significantly reducing the IT staff time that’s required to analyze and rebalance volumes, you’re free to manage the rest of your IT infrastructure.

July 14, 9:00 PM

Dell EMC PowerStore: Management & Ecosystem Integration

Whether your IT staff focuses on managing from a storage, hypervisor or application perspective, PowerStore is integrated with all these approaches. This session will review PowerStore Manager, the intuitive and streamlined user interface for the PowerStore appliance and cluster. Learn about the comprehensive GUI functionality as well as CLI and REST API options for system management. We will also discuss PowerStore support for vVols, extensive integration with the DevOps and management environments popular in today’s data center environments, and available plugins for Kubernetes, Ansible, CSI, vRealize Orchestrator and others

July 28, 9:00 PM

Dell EMC PowerStore: Data Mobility, Disaster Recovery, and Business Continuity

PowerStore is a next-generation architecture, but it builds on a broad range of mature services proven across the Dell EMC storage portfolio. Learn how PowerStore extends advanced replication, local and remote data protection, policy-based management and broad ecosystems integrations for business continuity and disaster recovery. The discussion also covers key “Destination PowerStore” capabilities to make migration from other Dell EMC storage platforms simple and quick whenever you decide to add PowerStore to your current environment.

August 6, 9:00 PM

Dell EMC PowerStore: Hardware Overview

PowerStore is designed to take advantage of the performance and economics of the next wave of storage media. At its core, PowerStore is a highly optimized I/O stack with inline data services including deduplication, compression, as well as QOS. With the advantage of integrated data protection, the flexibility to scale-up/down and scale-out, PowerStore delivers industry leading economics and simplicity, with predictable performance as a result.

August 18, 9:00 PM

Dell EMC PowerStore: File-Based Workloads

PowerStore has a robust feature set of native file capabilities so administrators can easily implement a highly scalable, efficient, performant, and flexible solution that is designed for the modern data center. Learn more about how the rich feature set and mature architecture enables support for a wide range of use cases. PowerStore file provides immense value to environments that leverage block, file, or a mixture of both.

August 25, 9:00 PM

Dell EMC PowerStore: Ease of Use & Serviceability

PowerStore takes ease of use and simplicity to the next level, from the built in discovery process for onboarding appliances to the simplified connect home experience and seamless integration with CloudIQ. As applications and workloads become more data intensive, it’s imperative to have powerful monitoring capabilities to quickly analyze issues and provide detailed repair flows, reducing the time spent troubleshooting. With advanced AI/ML capabilities, PowerStore proactively makes recommendations and assists with forecasting, so you can better utilize your time. Join this webinar to learn more about these features and more!


The post Upcoming Dell EMC PowerStore Webinars appeared first on Itzikr's Blog.

What Is PowerStore – Part 7, Importing External Storage



So far, we have covered the following aspects of Dell EMC PowerStore:

High Level Overview

Hardware

AppsON

vVols

File Capabilities

User Interface

PowerStore has a native migration capability, known as Orchestrator, which can be used to import storage resources from Dell EMC storage systems non-disruptively. This capability is integrated into the PowerStore system without the need for any external appliance; however, it does require installing a host plug-in for a non-disruptive import to PowerStore. The host plug-in enables the Orchestrator to communicate with the host multipath software to perform import operations. The combination of native functionality with the host plug-in automates many manual operations that take place during migration. For example, the system automatically completes mapping the hosts, creating the storage resources, and performing validation checks.
Supported source block-only storage resources include the following:
• LUNs/volumes
• Thick and Thin Clones
• Consistency groups
• VMFS datastores (PS Series only)
• Windows RDM (PS Series only)
Importing file resources using the native-migration capability is not supported.
Supported source systems include the following:
• Dell EMC VNX2
• Dell EMC Unity™
• PS Series (EqualLogic)
• SC Series (Dell™ Compellent™)
For the details about supported storage resources, source systems, and system versions, see the document
Importing External Storage to PowerStore Guide on the PowerStore Info Hub.

Requirements

The following requirements must be met before migrating to PowerStore:
• The source system must be in a good state and must not be running a software upgrade.
• Software or operating environment (OE) version for the source system must be supported.
o See the document Importing External Storage to PowerStore Guide.
o A software upgrade may be required before starting the import.
• Front-end connectivity:
o Connectivity between the client and source system, and the client and PowerStore can be either iSCSI or Fibre Channel (FC).
o For FC, zoning may be required.
o The protocols must match between the source and destination

Back-end connectivity:
o iSCSI is used for the data transfer between the source storage system and PowerStore.
o No support for Fibre Channel (FC) for the data transfer between the source and PowerStore system
• A host plug-in must be installed on the clients that access the data to be migrated. A reboot may be required; for this reason, it is recommended to do the installation in conjunction with any required software upgrade of the client operating system.

Import workflow.


Step 1: Setup
The following are the actions that the user should perform before trying to import any storage resources:
1. Configure zoning for the front-end connectivity between the client and the PowerStore system (if required).
2. Add iSCSI connectivity between the source system and PowerStore system (If not available already).
3. Install the host plug-in in each of the clients that requires access to the data during the import. This ensures that the import is non-disruptive. A reboot may be required as part of the installation of the host plug-in. PowerStore supports three types of host operating systems for the host plug-in:
Linux, Windows, and VMware.

Step 2: Import
The following are the actions that make up the import step:
1. The user adds the source system to the PowerStore Manager
2. Select the added source system and click the Import Storage button, which gives the following options:
a. Select Volumes: Allows selecting the source resources, either as volumes or volume groups, to be imported.
b. Add to Volume Group: Allows the user to group the source resources into an existing volume group or to a new volume group.
c. Add Host: Allows adding the clients in which the user has already configured the host plug-in.
d. Verify Host Mapping: Verifies the host mapping between the selected source resources and the added hosts.
e. Set Import Schedule: Allows setting when the import begins, either immediately or at a scheduled date and time, with the option to enable automatic cutover.
f. Assign Protection Policy: Allows setting an existing protection policy to the selected source resources and applies it once the import completes.
g. Begin Import: The last step of the wizard that shows a summary of the selected options, gives the option to review the source array assigned policies, and shows the Begin Import button to start or schedule the import.
3. Once the Begin Import button is clicked, the following actions are taken by the system.
a. An import session is created.
b. The system requests a path flip from the host plug-in, making the paths from the client to the source system inactive and activating the paths from the client to the PowerStore system.
c. A background copy of the data from the source system to the PowerStore system starts. Any new writes are made to PowerStore and are forwarded to the source system to ensure rollback.

Step 3: Cutover
1. A cutover is allowed once the import session is in a Ready to Cutover state (the source system and PowerStore are synchronized).
a. The paths from the client to the source system are removed
b. The background copy and the forwarding of writes stops.
c. Once the systems cut over, there is no rollback.


The Internal Migration option from the PowerStore Manager can be used prior to removing or shutting down an appliance for service. This feature is used to move volumes or volume groups to another appliance in the cluster without any disruption. When you migrate a volume or volume group, all associated snapshots and thin clones also migrate with the storage resource. During the migration, additional work space is allocated on the source appliance to facilitate data movement. The amount of space that is needed depends on the number of storage objects and amount of data being migrated. This work space is released and freed up after the migration finishes.

You can either manually migrate storage resources or use the recommendations in PowerStore Manager:

  • Manual migration – You can choose to provision a volume or a volume group on a specific appliance or have it automatically get provisioned on an appliance. You can choose to migrate the storage resources to another appliance in the cluster later.
  • Assisted migration – In the background, the system periodically monitors storage resource utilization across the appliances. Migration recommendations are based on factors such as drive wear, appliance capacity, and health. If you accept a migration recommendation, a migration session gets automatically created. Migrating resources may require a rescan of all host adapters. If applicable, before you begin the migration, a prompt to acknowledge the rescan will appear and list the hosts that require a storage rescan.

Note: migration always requires user action. In no case do storage resources migrate automatically.

You may choose to manually migrate a volume to balance resources across appliances.

In the example, Vol_1 is checked. The volume resides on TwoApplianceCluster-appliance-1. Select the Migrate option to start the task.

The Migrate Volume window appears and provides the user with some useful information about the migration. The example shows TwoApplianceCluster-appliance-2 has been checked and is the destination appliance. Select Start Migration to begin the task.

The Rescan Hosts window gives you a chance to rescan the host to ensure the storage being migrated is still accessible after the migration completes. To rescan the host, use the Rescan Disks option from the Computer Management window, then check the box for Yes, the associated hosts have been rescanned. Once the box is selected, the Start Migration button is available to select.

After starting the migration, you are presented with a message stating that system performance may be impacted for several minutes during the migration. Select the Migrate Now button to begin the migration. View the migration status from the Internal Migration > Migration page.

Once the migration process starts, monitor the progress by viewing the Status columns under the migrations window. Look for a Completed status indicating the resource has been migrated.

The migration process goes through several states, the first of which is synchronizing. During this phase, the majority of the background copy is completed and there are no interruptions to any services. Sync can be run multiple times to reduce the amount of data that must be copied during the cutover.

The cutover phase is the final phase of the migration, when ownership of the volume or volume group is transferred to the new appliance. Active I/O is supported during the migration; however, as a best practice, stop I/O to the volume being migrated. Migration is asynchronous until the cutover occurs and can be paused or cancelled at any time during the migration. Before cutover, all volumes are fully synchronized.

Assisted Migration

You can manually move a storage resource. However, the PowerStore GUI can also help with this process. In the example, there are two appliances, PS-2 and PS-3. PS-2 is nearing full capacity and is forecast to run out of space in eight days. A Major Alert is generated. PS-3 still has plenty of space available. Selecting the Alert launches the Alert page.

Selecting the Assisted Migration option from the Repair Flow section returns a list of volumes recommended for migration. The process chooses volumes that impact performance and workloads the least, for example any unmapped volumes, or volumes that are mapped but are offline (in MS Disk Manager) or unmounted (Linux) from the host perspective. The recommendations are refreshed every 24 hours. Note the message warning you that a rescan of all HBAs may be required prior to the migration.

There may be a situation where after the migration, there are still capacity issues. In this case, manually migrate the storage resource to solve the issue.

To view the results of the migration, look at the capacities of the appliances. The PS-2 appliance still displays a capacity over the optimal 80 percent. In this case, further manual remediation may be required.

Once a migration is started, Vol_3 displays a blue dot telling the user a migration job is in progress on the volume.


Once the job starts, monitor the progress by navigating to the Migration > Internal Migrations > Migrations tab to display the status. Selecting the Jobs icon also displays the status of the migration.

The Delete and Pause buttons are available when the migration job starts. Select the migration session and click on the appropriate button. Both options are available while the job is in progress up to the point where the migration displays the Cutting Over status.

Below you can see a demo, showing how to import external volumes to PowerStore

and below, you can see a demo, showing how to migrate volumes internally, from one PowerStore appliance to another

you can also download the white paper by clicking the screenshot below

The post What Is PowerStore – Part 7, Importing External Storage appeared first on Itzikr's Blog.

What Is PowerStore – Part 8, Local Protection


As data becomes increasingly important to organizations of all types, these organizations continually strive to find the safest and most effective ways to protect their data. While many methods of data protection exist, one of the simplest and most-effective methods involves using snapshots. Snapshots allow recovery of data by rolling back to an older point-in-time or copying select data from the snapshot. Snapshots continue to be an essential data-protection mechanism that is used across a wide variety of industries and use cases.
Snapshots can preserve the most important mission-critical production data, sometimes in combination with other data-protection technologies.
Dell EMC™ PowerStore™ provides a simple but powerful approach to local data protection using snapshots.

PowerStore uses the same snapshot technology across all the resources within the system, including volumes, volume groups, file systems, virtual machines, and thin clones. Snapshots use thin, redirect-on-write technology to ensure that system space is used optimally and to reduce the management burden by never requiring administrators to designate protection space. Snapshots can be created manually through PowerStore Manager, PowerStore CLI, REST API, or automatically using protection policies. Protection policies can be created and assigned to quickly create local and remote protection on supported resources.
A thin clone is a read/write copy of a volume, volume group, or file system. Thin clones use the same underlying pointer-based technology that snapshots use to create multiple copies of storage resources. Thin clones support many data services, which engineers and developers can leverage in their environments.
When users create a thin clone, it acts as a regular resource and is listed with the other resources of the system. Like snapshots, users can create, manage, and destroy thin clones through PowerStore Manager, PowerStore CLI, and REST API.


Snapshots overview

Snapshots are the local data protection solution within a PowerStore system. They provide a method of recovery for data that has been corrupted or accidentally deleted. Snapshots are pointer-based objects that provide point-in-time copies of data that is stored in volumes, volume groups, file systems, thin clones, or virtual machines. As snapshots are not full copies of the original data, they should not be relied upon as a backup or as the disaster recovery solution. Snapshots also consume overall system storage capacity to preserve the point-in-time. Ensure that the appliance has enough capacity to accommodate snapshots.

Snapshots can be created either manually or automatically within a PowerStore system and are considered write-order/crash-consistent. A write-order/crash-consistent snapshot is not considered application consistent since the snapshot may not be a full representation of the application dataset at that point-in-time. Typically, a host/client caches data with the intention to write it to the storage resource. Cached data is not available within the storage when a snapshot is taken. To create application-consistent snapshots, use Dell EMC AppSync™ where supported. AppSync ensures all incoming I/O for a given application is quiesced and flushed before a snapshot is taken.
While the following sections outline the creation and management of snapshots in PowerStore Manager, snapshots can also be created and managed using the PowerStore CLI and REST API. Whether administrators take manual snapshots through PowerStore Manager, use the customizable snapshot rules, or create advanced data protection scripts, they can fully manage their storage environments using whichever method that they prefer. This ability leads to a powerful, flexible foundation for managing data protection regardless of the complexity of the use case or environment.

Redirect-on-write technology


In this example a storage resource contains four blocks of data: A, B, C, and D. A snapshot is taken of the storage resource to preserve this point-in-time, and points to blocks A, B, C, and D. When the host/client modifies blocks B, A, then D, the data is written to new locations on the system. The pointers for the storage resource are then updated to reflect the new locations for B’, A’, and D’. This example assumes that no data reduction savings are achieved. For more information about data reduction within PowerStore, view the document Dell EMC PowerStore: Data Efficiencies on Dell.com/StorageResources.
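For readers who think in code, the following is a tiny conceptual Python sketch of the pointer behavior in this example. It is not how PowerStore is implemented internally; it only shows that a snapshot copies the pointer table rather than the data, and that new writes are redirected to new locations.

# Conceptual sketch of redirect-on-write (illustration only, not PowerStore internals).
storage = {}   # physical location -> data
next_loc = 0

def write(pointers, block, data):
    """Redirect the write: store data in a new location and update the pointer."""
    global next_loc
    storage[next_loc] = data
    pointers[block] = next_loc
    next_loc += 1

# Parent resource with blocks A-D.
volume = {}
for block in "ABCD":
    write(volume, block, f"{block}-original")

# Taking a snapshot copies only the pointer table - no data is copied.
snapshot = dict(volume)

# The host overwrites B, A, and D: new locations are used; the snapshot's pointers are untouched.
for block in "BAD":
    write(volume, block, f"{block}-new")

print([storage[volume[b]] for b in "ABCD"])    # ['A-new', 'B-new', 'C-original', 'D-new']
print([storage[snapshot[b]] for b in "ABCD"])  # the original point-in-time data is preserved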

Snapshot operations
The following operations are supported on snapshots for all storage resource types unless otherwise noted.
These operations can be completed using PowerStore Manager, PowerStore CLI, or REST API. Usually, the snapshot operations below for volumes, volume groups, file systems, thin clones, and virtual machines are the same. Differences in behavior are explained.

Create
When a snapshot is created, the snapshot contains the state of the storage resource and all files and data within it at that point-in-time. A snapshot is essentially a picture of the resource at that moment in time. After creation, the space that is consumed by the snapshot is virtually zero, since pointer-based technology is used and all data within the snapshot is shared with the parent resource. The amount of data that is uniquely owned by the snapshot increases over time as overwrites to the parent resource occur, as shown in the redirect-on-write example above. In that example, after changes to the parent storage resource were made, blocks A, B, and D are owned only by the snapshot.
Users may manually create snapshots of a storage resource at any time or have them created by the system on a user-defined schedule. To have snapshots created automatically, a user must create and assign a protection policy containing a snapshot rule to a resource. Protection policies and snapshot rules are further explained in this post. The following outlines the process to manually create snapshots on the various resources within a PowerStore system. To create a snapshot on a resource within PowerStore Manager, go to the properties window of the resource, click the Protection tab, click the Snapshots tab, and click Take Snapshot. The screenshot below shows the location of the Take Snapshot button, which is used to create a manual snapshot. This process is the same for all storage resource types, whether the resource is a volume, volume group, file system, thin clone, or virtual machine within PowerStore. In this example, the properties window for a volume is displayed.
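If you script snapshot creation rather than clicking through PowerStore Manager, a hedged REST sketch might look like the following. The login_session resource, the DELL-EMC-TOKEN header, and the /volume/{id}/snapshot endpoint are assumptions from the public PowerStore REST API reference and should be verified for your release.

import requests

BASE = "https://powerstore.example.com/api/rest"   # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword!")
session.verify = False  # lab only

# Assumed login flow: a GET returns a session token in the DELL-EMC-TOKEN header.
login = session.get(f"{BASE}/login_session")
login.raise_for_status()
session.headers["DELL-EMC-TOKEN"] = login.headers["DELL-EMC-TOKEN"]

volume_id = "<volume_id>"  # placeholder - look it up first, for example via GET /volume
snap = session.post(
    f"{BASE}/volume/{volume_id}/snapshot",  # assumed endpoint
    json={"name": "Vol1-manual-snap", "description": "Before application upgrade"},
)
snap.raise_for_status()
print("Snapshot created:", snap.json().get("id"))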


Modify
The Modify option is used to update several attributes of an existing snapshot. This can be completed by going to the properties page of a resource within PowerStore Manager, selecting the Protection tab, selecting a snapshot, and clicking Modify. The specific attributes that can be edited are resource-dependent and are further detailed below. For virtual machine snapshots, edits can only be made from vCenter. For volumes, volume groups, and their thin clones, users can view and edit the details of a snapshot by selecting a specific snapshot on the Protection tab within the properties window of the parent resource and clicking Modify. This opens the Details of Snapshot page. An example of the Details of Snapshot page for a volume group snapshot is shown in the screenshot below. The user can choose to update the snapshot Name,
Description, and Local Retention Policy. For the Local Retention Policy, the user has the option of selecting No Automatic Deletion or setting a Retain until date and time. In certain situations, changing a snapshot to no automatic deletion may be required, preserving the snapshot until it is determined that it is no longer needed. For volumes and thin clones of volumes and volume groups, the same information can be changed.

Delete
A user can select one or more snapshots of a resource and delete them on demand. From PowerStore Manager, if a single snapshot is chosen within the Protection tab in the properties of a resource and Delete is selected, a confirmation window appears listing the snapshot name and asking if the user wants to delete the snapshot. When multiple snapshots are selected, the confirmation window displays a full list of all selected snapshots when the show more option is used. If the snapshot is of a virtual machine, the snapshot is also removed from vSphere. Deleting a snapshot within PowerStore may return free space back to the appliance. If the snapshot was recently created, the snapshot has pointers to most, if not all, data contained within the parent resource. Also, as PowerStore uses deduplication and compression mechanisms to reduce the amount of data stored within the system, a snapshot may not only have blocks in common with the parent resource, but also with other resources within the system. Blocks of data unique only to a given snapshot are deleted and space is returned to the system for use by other resources.

Refresh
Volumes and volume groups
The refresh operation has different meanings depending on the resource type. For volumes and their thin clones, the refresh operation replaces the contents of an object with the data of another resource within the same family. For volume groups and volume group thin clones with write-order consistency enabled, the
contents for all members of the group are replaced. When write-order consistency is disabled, individual volumes within a volume group can be refreshed. After a refresh operation is started, the process quickly completes since only pointer updates for the resource are changed. The refresh operation differs from a restore operation, which returns the object to a previous point-in-time copy of itself. A storage resource family consists of the parent storage resource, which is the original resource, any thin clones, and snapshots in the tree. An example is shown in the screenshot below.


Restore
A restore operation reverts a parent resource dataset to a previous point-in-time when a snapshot was taken.
Only snapshots directly taken of the resource can be used as the source for the restore operation. When a restore operation is started, pointer updates occur, and the entire resource dataset is reverted to the previous point-in-time contained within the snapshot. Restore is supported on volumes, volume groups, file systems, and any thin clones of these resources. The restore operation is not supported on virtual machines, but users can use the Revert option in vCenter. If you restore a volume group or volume group thin clone, all member volumes are restored to the point-in-time associated with the source snapshot. More information about volume groups can be found in the section Snapshot interoperability. As mentioned, a restore operation reverts the entire resource back to a previous point-in-time copy of itself. If only a select amount of data must be recovered from a volume or volume group snapshot, accessing a thin clone created using the snapshot in question avoids losing any data that is updated after the snapshot was created. If the resource is a file system or file system thin clone, accessing the protocol (read-only) snapshot through an SMB share or NFS export also avoids the Restore operation when only a subset of data is needed. Accessing file system and file system thin clone snapshots is discussed in detail in the section Snapshot access.
Volume shrink is not supported on PowerStore. Restoring a volume, volume group, or thin clone from a snapshot does not reduce the size of the resource even if the snapshot was taken when the resource was the previous size. Instead, the resource size remains at the current size, but with the original dataset restored. For instance, if the snapshot was taken of the parent volume when it was 500 GBs, and it is now 750 GBs, the operation restores the data to the 750 GB volume. For file systems and thin clones, this behavior is different since file system shrink is supported. The size of the object being restored changes based on the size of the resource when the snapshot was taken. For instance, if the snapshot was taken of the parent file system when it was 100 GBs, and it is now 200 GBs, the restore operation updates the size of the resource to be 100 GBs and the original data is restored.

Volume groups
Snapshots are fully supported with volume groups on a PowerStore system. A protection policy containing a snapshot rule can be assigned to the volume group to take snapshots at a defined interval. Snapshots can also be taken manually on the volume group or on individual volumes within the volume group at any time. This task can be done from the Snapshots tab within the Protection tab of the volume group or member volume.
Volumes can also be added or removed from a volume group without affecting data protection on the group. When a volume is removed from a volume group, no snapshots on the group are deleted or otherwise changed. If replication is configured, it continues and any changes to the group are propagated to the destination during the next sync. If the volume group has a protection policy that is assigned to it and a volume is removed, the policy is automatically assigned to the volume that is removed from the group to continue data protection. Replication on the volume that is being removed from the volume group will continue once a sync occurs on the volume group it was removed from. When attempting the restore or refresh operations on a snapshot of a volume group, ensure that the number of volumes that were in the group when the snapshot was taken matches the number of volumes in the volume group that is being restored or refreshed. For instance, if the snapshot was taken when the group had five members, it cannot be used for a restore if the group does not currently contain the five original members. To
access this data, you can create a thin clone from the snapshot. To view the number of members of the group when the snapshot was taken, reference the Volume Members column on the snapshot tab. The write-order consistency setting is a property of the volume group. This setting is enabled by default but can be changed at the creation of the volume group or later. The write-order consistency setting controls whether a snapshot is created at a consistent time across all members of the group. If enabled, the system takes a snapshot at the exact same time across all objects to keep the point-in-time image consistent for the entire group. If disabled, there is a chance that the snapshots on individual volumes within the volume group are taken at slightly different times with possibly newly written data. When the snapshot is taken, the write-order
consistency setting is marked as a property of the snapshot, and affects what operations can be done on the snapshot. A column in the snapshot list for volume groups exists to view the write-order consistent property for each snapshot.
When write-order consistency is Yes for a snapshot, the restore and refresh operations have different capabilities than when it is No. When enabled on the snapshot, the restore and refresh operations affect the entire volume group, regardless of the current setting on the volume group. For instance, if Restore is used, all members of the group are restored from the snapshot image. This behavior is the same for the refresh operation. If write-order consistency is No for the snapshot, the restore and refresh operations can be issued to individual volumes within a volume group. The write-order consistency setting also affects the ability to assign a protection policy to a volume group and its members. When write-order consistency is enabled on the group, users can only assign a protection policy to the volume group itself. Assigning a protection policy to an individual member is not supported. When write-order consistency is disabled on the volume group, users can choose to assign a policy to the group, or its individual members, but not both. This action provides flexibility for protecting the various members of the group with different protection policies.
When a volume group is deleted, the user can delete the volume group and retain its members or delete the group along with its members. When only the volume group is deleted, all snapshots taken of the group are also deleted. Any snapshots that are taken of the individual volumes remain. In either case, any thin clones that are created of the volume group or from a snapshot of the volume group also remain unaffected.

A protection policy is a simple, named container of protection rules.

Protection policies automatically manage snapshots or replication operations based on the included rules.

You create policies for your implementation and apply a specific policy to a storage resource based on the business need or criticality of the data.

In the end, what makes a protection policy are the rules that it contains.

Each protection policy can include up to four snapshot rules, and no more than one replication rule.

You apply a protection policy to a storage resource. For any one storage resource, you can apply only one protection policy.

Storage resources include volumes, volume groups, file systems, thin clones, and virtual machines.

You can re-use the same protection policy on many storage resources. This avoids the need to create specific snapshot and/or replication rules for each storage resource.

The ability to re-use a protection policy provides:

  • Efficiency: Create once and use everywhere.
  • Consistency: Use same policy for all objects.
  • Simplicity: Single point of management.

A protection policy must contain at least one rule. You can create rules and then add them to a policy, or you can create the rules at the same time that you create the policy.

This example shows creating a rule before a policy.

To create a snapshot rule:

  • Under Protection, select Protection Policies.
  • Select Snapshot Rules.
  • Click Create.
  • In the Create Snapshot Rule slideout, specify the details for the new rule.
    • Rule Name
    • Days
    • Frequency/Start Time
    • Retention
    • File Snapshot Access Type
  • Click Create.

    The new Snapshot Rule appears in the list:

To create a policy and add a rule:

  • Under Protection, select Protection Policies.
  • Click Create.
  • In the Create Protection Policy slideout, enter a Policy Name.
  • Enter a Description for the policy.
  • Check the box to select at least one Snapshot Rule.
  • Click Create.

To assign a protection policy to a volume (a scripted equivalent of this flow is sketched after the steps below):

  • Go to Storage > Volumes.
  • From the list of volumes, check the box next to the storage resource to be protected.
  • From the More Actions menu, select Assign Protection Policy.
  • In the Assign Protection Policy slideout, check the box next to the policy you wish to apply.
  • Click Apply.
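
The same rule-create, policy-create, and policy-assign flow shown above can also be scripted. Below is a minimal Ansible sketch using the dellemc.powerstore collection (the same collection used for the NAS playbooks later in this document). Treat the module names (snapshotrule, protectionpolicy, volume) and their parameters (interval, desired_retention, snapshotrules, protection_policy) as assumptions to be verified against the collection version you install.

# Hedged sketch: the GUI steps above expressed as Ansible tasks.
# Module and parameter names are assumptions -- verify against your collection version.
- name: Create a snapshot rule and protection policy, then assign it to a volume
  hosts: localhost
  gather_facts: false
  vars:
    array_ip: "your_array_ip"
    user: "your_user"
    password: "your_password"
  tasks:
    - name: Create a snapshot rule (every 4 hours, keep for 48 hours)
      dellemc.powerstore.snapshotrule:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: false
        name: "rule_4h_48h"
        interval: "Four_Hours"
        desired_retention: 48
        state: "present"

    - name: Create a protection policy that contains the rule
      dellemc.powerstore.protectionpolicy:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: false
        name: "policy_local_snaps"
        snapshotrules:
          - "rule_4h_48h"
        snapshotrule_state: "present-in-policy"
        state: "present"

    - name: Assign the protection policy to an existing volume
      dellemc.powerstore.volume:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: false
        vol_name: "my_volume"
        protection_policy: "policy_local_snaps"
        state: "present"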

Thin clone overview
A thin clone is a read/write copy of a volume, volume group, file system, or a snapshot of these resource types. Thin clones are essentially thin copies of the object they were created from. As with snapshots, thin clones are thin, pointer-based objects that use redirect-on-write technology, which provides immediate access to the data contained in the source of the thin clone. Thin clones are not full copies of the original source and should not be used for disaster recovery scenarios. The screenshot below shows an example of a thin clone that is created from a supported resource. When initially created, the thin clone shares all blocks with the resource it was created from. Due to redirect-on-write technology, as new writes to the original resource or the thin clone are made, new space is consumed, and the original data remains until it is no longer in use.

Thin clones also support local and remote data protection. For a thin clone to be protected, manual snapshots can be taken at any time, or a protection policy can be assigned to it. The screenshot below shows an example of a thin clone with a protection policy assigned. It contains a snapshot rule and an RPO-based replication rule. The resource is also mapped to a host for access.


Thin clones within PowerStore are treated as autonomous resources, as if they were separate volumes, volume groups, or file systems. When created, they are listed on the main resource page, such as the Volumes or File Systems page. The properties window for a thin clone contains the same information as other resources, and the method to delete a thin clone is also the same. As an added benefit, parent resources can be deleted without deleting their thin clones. This action does not impact the thin clone or any snapshots the thin clone may have.
Use thin clones to create and manage space-efficient copies of production environments, which is beneficial for the following types of activities:

  • Development and test environments: Thin clones allow test and development personnel to work with real workloads and use all data services that are associated with production storage resources without interfering with production. They also allow development personnel to promote a test thin clone to production.
  • Parallel processing: Parallel processing applications that span multiple servers can use multiple thin clones of a single production data set to achieve results more quickly.
  • Online backup: Use thin clones to maintain hot backup copies of production systems. If there is corruption in the production data set, the read/write workload can be immediately resumed using the thin clones.
  • System deployment: Use thin clones to build and deploy templates for identical or near-identical environments. For example, create a test template that is thin cloned as needed for predictable testing.

Below, you can see a demo showing how to create a protection policy with snapshots

and another demo, explaining the differences between refreshing and restoring data on PowerStore

You can also download the following white papers:

Clustering and High Availability

Snapshots and Thin Clones

In the next post, we are going to cover remote protection (replication)

The post What Is PowerStore – Part 8, Local Protection appeared first on Itzikr's Blog.

Integrating Dell PowerProtect Data Manager with Red Hat OpenShift and Dell Powerstore

A guest post by Martin Hayes

The interest in OpenShift, and OpenShift Virtualization in particular, has increased steadily over the last year or so as administrators and IT decision makers look at more choice in the hypervisor market (we don't have to rehash the VMware/Broadcom thing), but also as containers have become mainstream and users wish to consolidate their infrastructure and co-host containers with their virtual machines, which by the way are not going anywhere soon. This just makes sense from a technical and TCO perspective.

Technically, we are seeing the emergence of hybrid modern applications, whereby the front-end may be based on a modern container architecture, but the back-end may still be an old and wrinkly database running SQL. Full end-to-end application re-factoring may just not be possible, even in the long run. Also, financially and operationally it is becoming increasingly difficult to justify two independent infrastructure stacks, one for your VM estate and one for your container workloads. Enter OpenShift Virtualization and its close upstream relation, KubeVirt, which marries containers and VMs in one single platform.

But what about the title of this blog? Well, of course, every solution needs a data protection and security wrapper. Container management and orchestration platforms based on Kubernetes have long since adopted data persistence and enterprise-grade data protection. Data protection, security, and availability were always essential pillars in an enterprise virtual machine architecture. Dell PowerProtect Data Manager has the ability to service the needs of both on a single platform.

In this series of blogs we deep dive into how we make this real, through the lens and persona of the infrastructure architect coming from the world of virtualization. By the end of this series (and I don't know how long it will be yet!), we should hopefully get all the way to protecting, migrating, and securing virtualized workloads resident on the OpenShift platform. Note: as it stands, OpenShift Virtualization is currently under limited Customer Beta as part of the 19.18 PPDM release. Stay tuned for more detail/demos in the coming months.

First things first though, let's spend the rest of this post standing up the base infrastructure by showcasing how we integrate Dell PowerProtect Data Manager with a Red Hat OpenShift environment.

Starting Environment:

As always, this is a ‘bolts and all’ integration summary. We will cover the integration between PPDM and the Red Hat OpenShift cluster it is protecting in detail. I’m going to cheat a little with the initial setup, in that my lab environment has already been configured as follows:

  • Dell PowerProtect Data Manager running version 19.18.0-17, which is the latest version at the time of writing.
  • Dell PowerProtect Data Domain Virtual Edition (DDVE) running 8.1.0.10
  • Red Hat OpenShift version 4.17.10 (Kubernetes version v1.30.7)
  • Dell CSM Operator version 1.7.0 (instructions for the Operator Hub install can be found here)
  • Dell CSI Driver for PowerStore version v2.12.0
  • Dell PowerStore 9200T All-Flash array running 4.0.0.0 (Release Build 2284811) – presenting the file storage target for our OpenShift cluster

1000 Foot view of what we are doing:

If the diagrams are a little small, double-click and they should open in a new tab. In short, what we will demo is as follows:

  1. We will present a File based storage target to our OCP Cluster via the Dell CSM/CSI Module.
  2. We will spin up a new OCP namespace called ‘PPDM-Demo’. In this namespace we will deploy a demo application/pod (trust me, this will be really simple) and configure this pod to consume our PowerStore File storage by using a Persistent Volume Claim, or PVC.
  3. At this point our cluster has no secondary storage configured, so if anything should happen to our new application then we will have no means of recovering it. Enter PPDM! We will overview the process to attach PPDM to our OCP cluster.
  4. We will show how easy it is to configure a protection and recovery policy to enable Data Protection for our new application and namespace.
  5. Disaster strikes!!! We will accidentally delete our application namespace from the OCP cluster.
  6. Panic averted… we will recover our application workload to a net new namespace, leveraging the backup copy stored on DDVE with an automated workflow initiated by PPDM.

We will break this topic into two parts: this post will cover items 1 through 4 inclusive, and the next part will cover items 5 and 6.

Step 1: Create Test Namespace and Application Pod

As mentioned earlier, this won’t be anything too arduous. Log in to your OpenShift console as per normal, navigate to ‘Projects’, and click ‘Create Project’.

I am going to give it the name ‘ppdm-demo’ and we are done!
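
If you prefer to do this declaratively rather than through the console, the same project can be created as a plain Kubernetes namespace (OpenShift generates the backing Project object for it). This is a minimal sketch using the name from this walkthrough; applying it via the console's Import YAML button or kubectl apply gives the same result as the wizard.

# Declarative equivalent of the 'Create Project' step above.
apiVersion: v1
kind: Namespace
metadata:
  name: ppdm-demo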

Step 2: Verify the Storage Class & VolumeSnapshotClass are configured.

Before we create our demo pod in the new ‘PPDM-Demo’ namespace, we will want to check that the Storage Class for PowerStore File and the VolumeSnapshotClass have been configured. These were preconfigured in my environment (this should be the job of your friendly storage admin, perhaps).

Navigate to Storage -> StorageClasses. Here you can see we have two StorageClasses configured: the first for block storage, the second for file. Here you can see my Storage Class named powerstore-nfs, provisioned by the Dell PowerStore CSI driver.
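
For reference, the two objects being checked in this step typically look roughly like the sketch below. The provisioner/driver name matches the Dell PowerStore CSI driver's documented name, but the StorageClass parameter keys (fstype, nasName) are illustrative assumptions; take the exact keys and values from your own CSM/CSI driver deployment.

# Hedged sketch of a PowerStore NFS StorageClass and a matching VolumeSnapshotClass.
# Parameter keys below are assumptions -- copy the real ones from your CSI driver samples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-nfs
provisioner: csi-powerstore.dellemc.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  csi.storage.k8s.io/fstype: "nfs"   # assumption: request file (NFS) volumes
  nasName: "your-nas-server"         # assumption: NAS server defined on the array
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-snapshot-class
driver: csi-powerstore.dellemc.com
deletionPolicy: Delete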

Step 3: Configure a PersistentVolumeClaim (PVC) and Verify.

Now that we have verified that our Storage Class and VolumeSnapshotClass are present, we will proceed to deploy some demo workload in our new namespace. As I said, this is going to be very basic, but it is perfectly fine for demo purposes.

First up, we will create a manual PersistentVolumeClaim. Navigate to Storage > PersistentVolumeClaims and then to ‘Create PersistentVolumeClaim’.

I have selected the NFS-backed storage class, given my PersistentVolumeClaim the name ‘ppdm-claim-1’, and selected the RWX access mode with a size of 5 GB. Click ‘Create’.

Navigating to the YAML definition of the new PVC, I can see how it is configured: the access mode, the storage class name, its status (immediately Bound), and the volumeMode (Filesystem). Note the new volume name ‘ocp11-7f9787520a’.
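
For readers following along in YAML rather than the console, the claim created in this step is roughly the following. When creating the PVC yourself you would omit volumeName; the CSI provisioner fills it in at bind time (here it ended up as 'ocp11-7f9787520a').

# Roughly what ppdm-claim-1 looks like, minus status and provisioner-added annotations.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ppdm-claim-1
  namespace: ppdm-demo
spec:
  accessModes:
    - ReadWriteMany            # RWX, as selected in the console
  resources:
    requests:
      storage: 5Gi
  storageClassName: powerstore-nfs
  volumeMode: Filesystem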

Navigate back to Persistent Volumes and you can see that a new Persistent Volume has been created with the Volume Name of ‘ocp11-7f9787520a’ associated with the PVC we just created ‘ppdm-claim-1’.

Next navigate to your PowerStore GUI and we can see that indeed we have created an NFS filesystem using CSI.

Step 4: Create your POD/Container/Application & attach to PVC

Now that we have our persistent storage setup, our PVC created and namespace configured, next up we will deploy our simple Linux based Pod. This time I will use the ‘Import YAML’ function in the OpenShift GUI. Navigate to the ‘+’ icon on the top right hand corner of the screen.

You can drag and drop directly into the editor, or just copy and paste directly.

Note the YAML file points to the claimName:ppdm-claim-1 and the namespace is ppdm-demo. Click ‘Create’.

apiVersion: v1
kind: Pod
metadata:
  name: ppdm-test-pod
  namespace: ppdm-demo   # the project created earlier
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: ppdm-test-pod
    image: busybox:latest
    command: ["sh", "-c", "echo Hello, OpenShift! && sleep 3600"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - mountPath: /mnt/storage
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: ppdm-claim-1

Verify that your new POD is in a running state. You should also be able to attach to the Terminal. In the video demo I will create a test file so we can demonstrate persistence after we do the backup testing.

Configure PowerProtect Data Manager

So now we have our application running in our OpenShift cluster backed by PowerStore persistent storage. Next we want to protect this application using Dell PowerProtect Data Manager and point the backups to our PowerProtect Data Domain device.

We won’t run through how to do the initial standup of PPDM and DDVE as I have covered this in other blogs. Link is here. We will start with a clean build, with DDVE already presented to PPDM as the backup storage repository.

Add Kubernetes as an Asset Source and Perform Discovery

Log into PPDM, navigate to the left hand menu. Click Infrastructure -> Asset Sources. Scroll down until you find the Kubernetes Tile and then ‘Enable Source’.

You will be presented with an ‘Add’ option for Asset Sources under the new Kubernetes tab. Ignore that for now; we will come back to it once we have our credentials configured.

Now navigate to the downloads section of the GUI, under the gear icon in the top right hand corner.

Select Kubernetes and the RBAC tile. Click Download, then save and extract the archive to your local machine.

You will be presented with 3 files:

  • ppdm-controller-rbac.yaml
  • ppdm-discovery.yaml
  • README

We will use the two YAML files to set up the required service accounts, roles, role bindings, and permissions to allow PPDM to discover, communicate with, and configure resources in the OpenShift environment. We will also create the secret for the ‘ppdm-discovery-serviceaccount’.

Navigate back to the OpenShift console and apply both YAML files, starting with the ‘ppdm-discovery.yaml’ file. Of course, we could do this directly from the CLI also, but I like this feature of the GUI as it also allows you to ‘drag and drop’ the raw files themselves.

Click ‘Create’, and this executes a ‘kubectl apply’ command in the background.

All going well, you will be presented with a screen confirming that all resources were successfully created.

Follow the same process using the ppdm-controller-rbac.yaml file. You may get an error pointing to the fact that the ‘powerprotect’ namespace already exists. This is fine.

Next we need to generate the secret for the ‘ppdm-discovery-serviceaccount’. Using the console again, apply the following YAML (hint: if you read the README file, it is in there!):

apiVersion: v1
kind: Secret
metadata:
  name: ppdm-discovery-serviceaccount-token
  namespace: powerprotect
  annotations:
    kubernetes.io/service-account.name: ppdm-discovery-serviceaccount
type: kubernetes.io/service-account-token

Import the YAML file into the console as per the previous step and click ‘Create’.

OpenShift now generates the ‘ppdm-discovery-serviceaccount-token’ details. Scroll down to the bottom of the screen to the ‘TOKEN’ section and click the copy icon.

Now that we have the secret retrieved we can navigate back to our PPDM console and add the OpenShift Kubernetes cluster. Navigate to Infrastructure -> Asset Sources -> Add.

Follow the input form, pointing to the API of the OpenShift cluster. Remove the https:// from the start. Leave the port as the standard 6443. In the Host Credentials field, select Add Credentials.

Give the credential set a name and paste the token you copied earlier into the Service Account Token field and click Save.

Verify and accept the cert and then click Save.

After a few seconds, the newly discovered Asset Source should appear and an automatic workload discovery will be initiated by PPDM.

Navigate to the Assets tab and, after the automated discovery, you can see the discovered namespaces in the cluster, including the ‘ppdm-demo’ namespace we created earlier!

Configure Protection Policy

The next logical step of course is to configure a Protection Policy for our OpenShift Project ‘ppdm-demo’ and protect all the underlying Kubernetes resources, PVC’s etc. under its control.

First step navigate to Protection -> Protection Policies and click ‘Add’. Follow the GUI guided path.

Select ‘Crash Consistent’ which snapshots the PVC bound to our application and backs it up on our target backup storage on Data Domain.

Add the asset namespace that we wish to protect.

Then step through the GUI to configure the primary backup target, which of course is our backend DDVE. I have selected a Full backup every 8 hours, retained for 1 day.

Follow through to the summary menu and click Finish.

Manually run the Protection Policy

We could wait until the policy kicks off the backup at the designated time, but we want to verify it works, and besides, I am a little impatient. Thus, we are going to take the option to ‘Protect Now’. Navigate back to Protection -> Protection Policies and select our new policy. Click on ‘Protect Now’.

Step through the GUI, selecting the Full backup option, and kick off the job. Navigate to the Jobs menu on the sidebar and monitor the job as it completes. Depending on the size of the namespace to be backed up, this will of course take some time.

Eventually the Job completes successfully.

Up Next

Next week, I will follow up with our orchestrated disaster, whereby I will accidentally delete my running application, namespace, pod, and associated PVC.

I think this is probably deserving of a Video demo also which will capture the whole process end to end!

Dell PowerScale Snapshot Use Cases with Terraform

$
0
0

A post by Parasar Kodati

In version 1.6 of the Terraform provider for PowerScale, new resources (writable_snapshot, snapshot_restore) and datasources (writable_snapshot) have been introduced to support workflows with PowerScale snapshots. This blog post presents day-2 use cases for snapshot management that can be automated with Terraform configurations.

Dell PowerScale provides a highly flexible and scalable way to manage unstructured data. As organizations grow and data footprints expand, the importance of streamlined snapshot management becomes key for ensuring data protection, business continuity, and compliance. However, manually configuring and maintaining snapshots, especially across multiple filesystems, pools, and export policies, can be both time-consuming and error-prone.

In this blog post, we’ll explore four specific use cases that demonstrate how to leverage Terraform for comprehensive snapshot management on PowerScale:

1. Automated Snapshot Management with Schedules and Restores – Create and update snapshot schedules, apply new retention policies, and configure restorations for quick data recovery

2. Integration of Writable Snapshots with Access Control List (ACL) Configurations – Create writable snapshots and secure them with well-defined Access Control Lists, ensuring that only certain users can modify snapshot data

3. Snapshot Management with NFS Export Integration – Handle snapshots in environments that rely on NFS exports, from initial creation to applying specific export rules for snapshot directories

4. Snapshot Policy Automation with SmartPool Settings – Tie snapshot policies to storage tiers, including leveraging SmartPool and filepool policies to optimize data placement and retention

Let’s dive in and discover how you can automate and simplify your snapshot workflows in PowerScale using Terraform.
 

1. Automated Snapshot Management with Schedules and Restores

Use Case

In modern IT environments, managing data efficiently and reliably is crucial. Automated Snapshot Management with Schedules and Restores is designed to streamline the process of data protection for Dell PowerScale systems using Terraform. This use case entails setting up a new snapshot schedule for a filesystem, modifying existing snapshots to adhere to updated retention policies, and configuring restoration settings for specific use cases. By leveraging Terraform, you can automate these tasks, ensuring consistent and reliable snapshot management.

Resources and Data Sources

Resources

  • snapshot_schedule: Defines the schedule for automatic snapshot creation
  • snapshot: Represents the actual snapshot of the filesystem
  • snapshot_restore: Used to restore a filesystem to a previous state using a snapshot

 Data sources

  • filesystem: Provides information about the filesystem being managed
  • snapshot: Offers details about specific snapshots
  • snapshot_schedule: Supplies information about the snapshot schedules


Terraform Configuration

This configuration involves setting up a snapshot schedule, creating a snapshot, and configuring restoration, along with querying necessary data. 

# Create a snapshot schedule
resource "powerscale_snapshot_schedule" "snap_schedule" {
   name            = "automated_snap_schedule"
   path            = "/ifs/your_filesystem_path"
   retention_time  = "1 Month"
   schedule        = "Every 1st day at 2:00 AM"
   alias           = "daily_backup"
   pattern         = "backup-%Y-%m-%d"
}
 
# Create a snapshot for the filesystem
resource "powerscale_snapshot" "snap" {
   path        = "/ifs/your_filesystem_path"
   name        = "manual_snapshot"
   set_expires = "1 Day"
}
 
# Restore snapshot using snaprevert operation
resource "powerscale_snapshot_restore" "restore" {
   snaprevert_params = {
     allow_dup   = true
     snapshot_id = powerscale_snapshot.snap.id
  }
}
 
# Query information for the filesystem
data "powerscale_filesystem" "system" {
   directory_path = "/ifs/your_filesystem_path"
}
 
# Query snapshot details
data "powerscale_snapshot" "snapshot_data" {
  filter {
    path = "/ifs/your_filesystem_path"
    name = "manual_snapshot"
  }
}
 
# Query snapshot schedule details
data "powerscale_snapshot_schedule" "schedule_data" {
  filter {
    names = [powerscale_snapshot_schedule.snap_schedule.name]
  }
}
 
output "filesystem_info" {
  value = data.powerscale_filesystem.system
}
 
output "snapshot_info" {
  value = data.powerscale_snapshot.snapshot_data
}
 
output "snapshot_schedule_info" {
  value = data.powerscale_snapshot_schedule.schedule_data
}

How it works

  • Provider Configuration: The powerscale provider (dell/powerscale) is configured to interact with Dell PowerScale systems
  • Snapshot Schedule: The powerscale_snapshot_schedule resource defines a schedule for taking automated snapshots, specifying the frequency, retention, and naming pattern
  • Snapshot Creation: The powerscale_snapshot resource manually creates a snapshot with a defined expiration, providing a point-in-time backup
  • Snapshot Restoration: The powerscale_snapshot_restore resource uses the snaprevert operation to restore the filesystem from a specified snapshot, allowing duplicates if necessary
  • Data Querying: Datasources powerscale_filesystem, powerscale_snapshot, and powerscale_snapshot_schedule retrieve information about the filesystem, specific snapshots, and snapshot schedules respectively
  • Outputs: Outputs provide detailed information about the current state and configuration of the filesystem, snapshots, and schedules, aiding in monitoring and auditing
     

2. Integration of Writable Snapshots with ACL Configurations

Use Case

This use case focuses on the integration of writable snapshots with ACL configurations using Dell PowerScale and Terraform. Writable snapshots provide a mechanism to create modifiable copies of specific filesystems, allowing users to work on snapshot data without altering the original dataset. By integrating these writable snapshots with appropriate ACL configurations, we ensure secure and restricted access based on designated user permissions. The Terraform configuration provided automates the creation of writable snapshots and the modification of namespace ACLs to establish correct access rights.

Resources and Data Sources

Resources

  • writable_snapshot: Represents the writable snapshot resource that is created from a source snapshot
  • namespace_acl: Manages the ACL settings associated with a namespace, defining access permissions for users and groups

Data sources

  • filesystem: Provides details about the filesystem associated with the writable snapshot, enabling further resource configuration
  • namespace_acl: Retrieves the existing ACL configurations for a given namespace to verify and manage access controls

Terraform Configuration

variable "snap_id" {
   description = "The ID of the source snapshot"
   type        = string
}
 
resource "powerscale_writable_snapshot" "writablesnap_example" {
  dst_path = "/ifs/example_writable_snapshot"
   snap_id  = var.snap_id
}
 
resource "powerscale_namespace_acl" "example_namespace_acl" {
   namespace = powerscale_writable_snapshot.writablesnap_example.dst_path
 
   acl_custom = [
    {
       accessrights  = ["dir_gen_all"]
       accesstype    = "allow"
       inherit_flags = ["container_inherit"]
       trustee = {
        id = "UID:0"
      }
    },
    {
       accessrights  = ["dir_gen_write", "dir_gen_read", "dir_gen_execute"]
       accesstype    = "allow"
       inherit_flags = ["container_inherit"]
       trustee = {
         name = "Isilon Users"
         type = "group"
      }
    },
  ]
}
 
data "powerscale_filesystem" "example_filesystem" {
   directory_path = powerscale_writable_snapshot.writablesnap_example.dst_path
}
 
output "powerscale_filesystem_details" {
  value = data.powerscale_filesystem.example_filesystem
}
 
data "powerscale_namespace_acl" "example_acl" {
  filter {
     namespace = powerscale_writable_snapshot.writablesnap_example.dst_path
     nsaccess  = true
  }
}
 
output "powerscale_namespace_acl_example" {
  value = data.powerscale_namespace_acl.example_acl
}

How it works

  • Variable Declaration: The snap_id variable is defined to input the ID of the source snapshot from which the writable snapshot will be created
  • Writable Snapshot Resource: The powerscale_writable_snapshot resource creates a writable snapshot at the specified destination path, using the snapshot ID provided by the snap_id variable
  • Namespace ACL Resource: The powerscale_namespace_acl resource modifies the ACL configuration for the writable snapshot’s namespace. It sets custom ACL entries that define the access rights for specific users and groups, ensuring the writable snapshot is accessed securely
  • Filesystem Data Source: The powerscale_filesystem data source fetches details of the filesystem associated with the writable snapshot, useful for further configuration and verification
  • Namespace ACL Data Source: The powerscale_namespace_acl data source retrieves current ACL settings, allowing verification of the ACL configurations applied to the writable snapshot
  • Outputs: The configuration includes outputs to display filesystem details and ACL settings, aiding in validation and monitoring of the deployment
     

3. Snapshot Management with NFS Export Integration

Use Case

In the modern data management landscape, efficiently managing snapshots alongside NFS exports is crucial for data protection and accessibility. This use case focuses on creating snapshots for NFS-exported filesystems, configuring export settings to ensure snapshot compatibility, and applying export rules to govern access to snapshot directories. By leveraging Terraform, this process is automated, providing a streamlined approach to managing snapshots and NFS exports in Dell PowerScale environments.

Resources and Data Sources

Resources

  • snapshot: Manages the creation and expiration of filesystem snapshots
  • nfs_export: Configures NFS exports, specifying paths and access rules
  • nfs_export_settings: Defines settings specific to NFS export operations

Data sources

  • nfs_export: Retrieves existing NFS export configurations
  • filesystem: Accesses details about specific filesystem paths
  • snapshot: Obtains information about current snapshot configurations

Terraform Configuration 

resource "powerscale_snapshot" "managed_snapshot" {
   path        = "/ifs/nfs_exported_filesystem"
   name        = "snapshot_for_export"
   set_expires = "1 Week"
}
 
resource "powerscale_nfs_export" "export_for_snapshot" {
  paths = ["/ifs/nfs_exported_filesystem"]
 
  // Optional settings to ensure compatibility with snapshots
  snapshot = powerscale_snapshot.managed_snapshot.name
 
  // Example access rules
   clients          = ["192.168.1.10", "192.168.1.11"]
   read_only        = true
   security_flavors = ["unix"]
}
 
resource "powerscale_nfs_export_settings" "export_settings" {
  // Example settings for NFS export
   all_dirs             = false
   case_insensitive     = true
   commit_asynchronous  = false
   map_full             = true
   write_transfer_size  = 524288
   read_transfer_size   = 131072
   snapshot             = powerscale_snapshot.managed_snapshot.name
   security_flavors     = ["unix"]
}
 
data "powerscale_nfs_export" "export_data" {
  filter {
    paths = ["/ifs/nfs_exported_filesystem"]
  }
}
 
data "powerscale_filesystem" "filesystem_data" {
   directory_path = "/ifs/nfs_exported_filesystem"
}
 
data "powerscale_snapshot" "snapshot_data" {
  filter {
    path = "/ifs/nfs_exported_filesystem"
  }
}
 
output "nfs_export" {
  value = data.powerscale_nfs_export.export_data
}
 
output "filesystem_info" {
  value = data.powerscale_filesystem.filesystem_data
}
 
output "snapshot_info" {
  value = data.powerscale_snapshot.snapshot_data
}

How it works

  • Provider Configuration: The PowerScale provider is initialized, ready to manage resources
  • Snapshot Resource: A snapshot named snapshot_for_export is created for the specified filesystem path, set to expire in one week
  • NFS Export Resource: An NFS export is configured for the same filesystem path, with access restricted to specified clients and read-only permissions. The snapshot created is associated with this export
  • NFS Export Settings Resource: Additional export settings are applied to ensure compatibility and efficiency. These include transfer sizes and security flavors
  • Datasources: The configuration retrieves existing data on NFS exports, filesystems, and snapshots, which can be used for validation or reporting purposes
  • Outputs: The outputs section provides access to the NFS export data, filesystem information, and snapshot details for further use or monitoring

By utilizing this Terraform configuration, administrators can automate the setup and management of snapshots integrated with NFS exports, ensuring a robust and efficient data management strategy.
 

4. Snapshot Policy Automation with SmartPool Settings

Use Case

In the modern data management landscape, efficient storage utilization and automated data protection mechanisms are critical. The “Snapshot Policy Automation with SmartPool Settings” use case explores automating snapshot policies to optimize storage across different tiers within Dell PowerScale’s SmartPool. By leveraging Terraform, this configuration allows IT administrators to define specific snapshot policies that align with SmartPool settings, enhancing data protection while optimizing storage costs and performance. The automation includes setting up SmartPool configurations, defining filepool policies for data movement, and managing snapshots effectively.

Resources and Data Sources

Resources

  • snapshot: Manages the creation and expiration of snapshots
  • smartpool_settings: Configures the parameters for SmartPools, including spillover settings
  • filepool_policy: Defines policies for data movement and snapshot storage across tiers


Data sources

  • smartpool_settings: Retrieves current SmartPool configurations
  • storagepool_tier: Accesses information about available storage tiers
  • snapshot: Collects detailed information on snapshots for monitoring and reporting
     

Terraform Configuration

This Terraform configuration automates the deployment of snapshot policies integrated with SmartPool settings, ensuring efficient data protection and storage optimization:

resource "powerscale_smartpool_settings" "smartpool" {
   spillover_enabled = true
   spillover_target {
    name = "tier1"
    type = "storagepool"
  }
}
 
data "powerscale_storagepool_tier" "tiers" {}
 
resource "powerscale_filepool_policy" "snapshot_policy" {
   name              = "SnapshotPolicy"
   description       = "Policy for managing snapshot storage across tiers"
   apply_order       = 1
   is_default_policy = false
 
   file_matching_pattern {
     or_criteria = [
      {
         and_criteria = [
           {
             operator = ">"
             type     = "size"
             units    = "MB"
             value    = "100"
           }
        ]
      }
    ]
  }
 
  actions = [
    {
       action_type = "apply_snapshot_storage_policy"
       snapshot_storage_policy_action = {
         ssd_strategy = "metadata"
         storagepool  = data.powerscale_storagepool_tier.tiers.storagepool_tiers[0].name
      }
    }
  ]
}
 
data "powerscale_snapshot" "snapshots" {
  filter {
    path = "/ifs"
    sort = "name"
     dir  = "DESC"
  }
}
 
resource "powerscale_snapshot" "example_snapshot" {
  path = "/ifs/example_path"
  name = "example_snapshot"
   set_expires = "1 Week"
}
 
output "snapshot_details" {
  value = data.powerscale_snapshot.snapshots.snapshots_details
}

How it works

  • Provider Configuration: The powerscale provider (dell/powerscale) is initialized to interact with the PowerScale environment
  • SmartPool Settings: The powerscale_smartpool_settings resource configures the SmartPool to enable spillover, directing excess data to a specified storage tier
  • Filepool Policy: The powerscale_filepool_policy resource establishes a policy that applies snapshot storage policies based on file size, ensuring data is efficiently managed across tiers
  • Snapshot Data: The powerscale_snapshot data source retrieves existing snapshots, allowing for monitoring and analysis
  • Snapshot Resource: An example snapshot is created with an expiration of one week, showcasing how snapshots can be managed within the policy framework
  • Output: The snapshot_details output provides detailed information on snapshots for further utilization

By implementing this Terraform configuration, organizations can automate snapshot management, optimize storage usage, and enhance data protection strategies within their Dell PowerScale environments.


Conclusion

As data environments grow increasingly complex, efficient snapshot management can become a major differentiator in your organization’s ability to deliver rapid recovery, ensure compliance, and meet performance requirements. Through the use cases covered in this post, ranging from automated snapshot scheduling and restoration to sophisticated ACL integrations and advanced SmartPool optimizations, you’ve seen how Terraform can serve as a powerful ally in bringing both agility and reliability to your PowerScale ecosystem.

By codifying your snapshot policies, access control configurations, and storage tiering strategies, you not only reduce the risk of manual misconfigurations but also position your organization to scale seamlessly. Whether you are managing a handful of snapshots or hundreds, adopting an Infrastructure-as-Code approach paves the way for consistent, repeatable, and transparent processes.

I hope these examples inspire you to further explore Terraform’s capabilities in automating PowerScale workflows. As you move forward, consider extending these use cases to other aspects of your environment, from orchestration pipelines to broader disaster recovery planning. 

Dell CSM 1.13 is now Available

On the heels of our previous Kubernetes integration modules (Container Storage Modules), we have just released version 1.13.

Cross Portfolio Features:

  • PowerFlex
    • Multi-Availability Zone (AZ): multiple storage systems and dedicated storage systems in each AZ
    • Enable installation on Kubernetes cluster connected to multiple PowerFlex systems

Ecosystem Qualifications:  


Strengthening Cyber Resilience with Dell APEX AIOps Infrastructure Observability

Cyber threats are evolving at an unprecedented pace, and traditional, reactive security measures are no longer enough. Organizations need proactive, AI-driven intelligence to identify vulnerabilities, mitigate risks, and strengthen their security posture before attackers strike.

Cybersecurity in APEX AIOps Infrastructure Observability (formerly CloudIQ) provides advanced cybersecurity insights powered by Dell’s deep expertise in its proprietary products. This platform is designed to give IT and security teams real-time, automated visibility into risks, ensuring they stay ahead of emerging threats.

Key Cybersecurity Capabilities

🔹 Security Misconfigurations: Automatically detect and remediate security misconfigurations by aligning with the latest Dell product security capabilities.

🔹 Security Advisories: Gain real-time detection of Common Vulnerabilities and Exposures (CVEs) in Dell products, along with clear remediation guidance to minimize risk.

🔹 Ransomware Detection: Identify and mitigate potential ransomware attacks early, while minimizing damage.

🔹 Risk Assessment: Obtain a comprehensive, real-time security posture overview to make proactive, data-driven security decisions.

Below are real-world security challenges faced by organizations and how Dell’s cybersecurity features provide tangible solutions to mitigate risk.

Use Case 1: Preventing Security Drifts in Configurations

Challenge:

A large enterprise deploys 5,000 PowerEdge servers in a data center. At launch, all systems are securely configured, but over the next 6-12 months, updates, patches, and manual changes lead to configuration drift. The CISO has no centralized visibility into these changes, creating hidden security gaps that cybercriminals could exploit.

Solution: Cybersecurity Misconfigurations Feature

The Misconfigurations feature continuously monitors all systems against a predefined security baseline, aligned with NIST guidelines, detecting deviations in real-time and providing remediation steps. Instead of periodic, error-prone manual audits, the IT team receives automated alerts for misconfigurations, ensuring security posture remains intact throughout the system lifecycle.

🔹 Impact: Eliminates security blind spots, reducing misconfiguration risks by 40% and strengthening compliance.

Use Case 2: Automating Vulnerability Management

Challenge:

A mid-sized financial services company relies on 1,000 PowerEdge servers. Security teams manually track Dell Security Advisories to identify relevant vulnerabilities, consuming:

  • 3 hours per week for real-time updates
  • 1–2 days per advisory to assess impact
  • 3–4 days per advisory for deep analysis

This manual process leads to delays in patching critical vulnerabilities, leaving the organization exposed to potential exploits.

Solution: Cybersecurity Security Advisories Feature

The Security Advisories feature automates vulnerability tracking, instantly mapping Dell CVEs to the customer’s infrastructure. It provides prioritized remediation guidance, reducing the burden on security teams and accelerating time-to-patch.

🔹 Impact: Cuts vulnerability analysis time by 80%, ensuring faster remediation and improved risk management.

Use Case 3: Early Detection and Mitigation of Ransomware Attacks

Challenge:

A financial services company suffered a ransomware attack that targeted primary and backup storage systems, encrypting terabytes of critical financial transaction data. The company faced potential regulatory penalties for data unavailability, operational downtime exceeding 48 hours, and a ransom demand of $6 million.

Solution: Cybersecurity Ransomware Detection Feature

Cybersecurity Ransomware Detection is a built-in AI-driven solution that continuously analyses data behaviour to detect potential ransomware encryption attempts.

🔹 Real-time anomaly detection – Monitors changes in data reducibility, spotting unusual encryption activities.
🔹 Automated alerts and response – Generates instant alerts to security teams, allowing for rapid isolation of infected systems before the ransomware spreads.
🔹 Recovery point details – Provides information about the beginning of the attack, enabling organizations to revert to clean, unencrypted data without paying the ransom.

🔹 Impact: Faster threat detection and up to 70% cost savings on ransomware recovery.

Use Case 4: Prioritizing Security Risks Across a Distributed Enterprise

Challenge:

A global retail company operates hundreds of servers across multiple locations, each with its own security vulnerabilities, misconfigurations, and unresolved threats. Security teams struggle to assess risk levels across different sites, making it impossible to prioritize which systems require urgent action.

Solution: Cybersecurity Risk Assessment

The Cybersecurity Risk Assessment feature provides a holistic risk score by analysing:
✔ Unresolved misconfigurations across all systems
✔ Open vulnerabilities (CVEs) identified through Dell Security Advisories
✔ Active ransomware threats detected across locations

This centralized risk scoring allows security teams to prioritize critical issues first, ensuring that the most pressing threats are mitigated before they can be exploited.

🔹 Impact: Reduces security response time by 50%, improving risk visibility and enabling a proactive security strategy.

Proactive Security, Simplified

By integrating Security Misconfigurations, Security Advisories, Ransomware Detection, and Risk Assessment, organizations can reduce complexity, enhance security, and stay ahead of evolving threats.

Ready to take control of your security posture? Ensure your Dell product supports APEX AIOps Infrastructure Observability by following the instructions in Appendix A of this guideline. Once enabled, you can activate the Cybersecurity features for your systems.

After signing in, you can also explore an interactive simulator at this address to see it in action.

You can also watch a demo, showing how it all looks, below

Enhancing Dell PowerStore Security with ProLion CryptoSpike: A Powerful Defense Against Ransomware

A post by Jodey Hogeland

In today’s digital landscape, ransomware attacks pose a significant threat to businesses of all sizes. With the increasing sophistication of these attacks, robust security measures are crucial to protect critical data. While Dell Technologies PowerStore has industry-leading cybersecurity features, there are times when enhancing this protection might be necessary. This is where specialized solutions like ProLion CryptoSpike come into play. ProLion CryptoSpike can significantly strengthen your defenses against ransomware.

PowerStore: A Foundation for Modern Data Centers

Dell PowerStore is a high-performance, scalable storage platform that meets the demanding needs of modern enterprises. With its advanced features like high-speed NVMe, AI/ML-powered optimizations, and built-in security capabilities, PowerStore provides a robust foundation for critical business applications.

While PowerStore incorporates strong security measures, a layered approach is essential for comprehensive protection against evolving threats like ransomware. This is where the partnership between Dell and ProLion comes into play.

Introducing ProLion CryptoSpike

ProLion CryptoSpike is a specialized ransomware protection solution that seamlessly integrates with PowerStore. Leveraging the Dell Common Event Enabler (CEE) API, specifically the CEPA function, CryptoSpike taps into real-time events generated by PowerStore file systems. By analyzing these events, such as file creations, modifications, and deletions, CryptoSpike can detect and block suspicious activity, such as mass encryption or file modifications, indicative of a ransomware attack.

ProLion CryptoSpike Dashboard

Key Features of CryptoSpike

  • Real-time Threat Detection: CryptoSpike continuously monitors file access activity on PowerStore, analyzing events for anomalies and suspicious patterns.
  • Proactive Blocking: Upon detecting suspicious activity, CryptoSpike can automatically block malicious users, preventing further damage to the system.
  • Rapid Recovery: In the event of a ransomware attack, CryptoSpike assists with rapid recovery by enabling administrators to quickly identify and restore compromised files from snapshots.
  • Enhanced Visibility: CryptoSpike provides detailed insights into file access activity, enabling administrators to gain valuable insights into data usage patterns and identify potential security risks.
  • Multi-layered Protection: CryptoSpike employs a multi-layered approach, combining blocklists of known ransomware extensions, real-time anomaly detection, and user behavior analysis for comprehensive protection.
  • Data Privacy and Security: The 4-eye authentication feature enhances data privacy and security by requiring two distinct users to access sensitive information.

ProLion CryptoSpike

Benefits for PowerStore Customers

  • Enhanced Security: Significantly reduces the risk of successful ransomware attacks.
  • Improved Business Continuity: Minimizes downtime and disruption caused by ransomware incidents.
  • Faster Recovery: Enables rapid recovery from ransomware attacks with minimal data loss.
  • Increased Visibility: Provides valuable insights into data access patterns and potential security threats.
  • Simplified Management: Seamlessly integrates with PowerStore for easy deployment and management.

ProLion CryptoSpike Activity

Conclusion

By combining the power of Dell PowerStore with the advanced security capabilities of ProLion CryptoSpike, organizations can significantly enhance their data protection posture and mitigate the risks associated with ransomware attacks. This powerful combination provides a robust and comprehensive solution for safeguarding critical data in today’s challenging threat landscape.

To learn more about how Dell PowerStore and ProLion CryptoSpike can help protect your organization from ransomware, contact your Dell representative today for a consultation.

Additional Resources

Ansible playbooks for NAS file system management on Dell PowerStore

A post by Parasar Kodati

Scaling file services on your Dell PowerStore requires sharpening your automation skills for tasks like initial network-attached storage (NAS) server setup, quota management, disaster recovery, and access control. This post explores how the Ansible collection for Dell PowerStore can streamline your PowerStore file services operations. We’ll dive into practical use cases, demonstrating how Ansible modules can be combined to achieve common objectives.

Specifically, we’ll cover:

  1. Initial NAS Setup with NFS and SMB Integration – Automate the creation of a new NAS server with both NFS and SMB capabilities, including DNS setup and LDAP integration for seamless user authentication
  2. Implementing File System Quotas and Alerts – Learn how to configure file system quotas to manage storage allocation and set up email alerts for quota breaches, ensuring proactive resource management
  3. Filesystem Snapshots with Replication for Disaster Recovery – Implement a robust disaster recovery strategy by automating the creation of filesystem snapshots and replicating them to a remote system
  4. Integrating NFS Server with LDAP for Access Control – Enhance security and manageability by integrating an NFS server with LDAP to control user access, ensuring only authorized users can access NFS shares
  5. Automating SMB Share Provisioning and Management – Streamline file sharing and access control by automating the creation and management of SMB shares, including local user account management. 

By the end of this post, you’ll have a clear understanding of how Ansible can simplify and automate your PowerStore file services, saving you time and improving your overall efficiency. Let’s get started!
 

1. Initial NAS Setup with NFS and SMB Integration

This section outlines how to automate the initial setup of a NAS server, integrating both Network File System (NFS) and Server Message Block (SMB) protocols. This setup includes configuring DNS settings and integrating with LDAP (Lightweight Directory Access Protocol) for centralized user authentication. An Ansible playbook is provided to streamline and automate the entire process, ensuring consistency and reducing manual configuration errors.

Ansible Modules

The following modules from the Ansible collection are used in this playbook:

  • nasserver – Manages the creation and configuration of the NAS server itself.
  • nfs – Handles NFS export creation and configuration, defining how filesystems are shared over the network.
  • smb_server – Enables and configures the SMB server on the NAS, allowing Windows-based clients to access shared resources.
  • dns – Configures DNS settings, crucial for name resolution within the network.
  • ldap_domain – Sets up the LDAP domain integration, enabling centralized user authentication.
  • ldap_account – Creates and manages LDAP accounts, assigning them roles and permissions.


 Ansible Playbook

---
- name: Initial NAS Setup with NFS and SMB Integration
  hosts: localhost  # Replace with your target host(s)
  gather_facts: false
  vars:
    array_ip: "your_array_ip"
    validate_certs: false
    user: "your_user"
    password: "your_password"
    nas_server_name: "my_nas_server"
    nas_server_description: "NAS server for NFS and SMB"
    dns_id: "my_dns"
    dns_addresses:
      - "192.168.1.1"
      - "192.168.1.2"
    ldap_domain_name: "my_ldap_domain"
    ldap_servers:
      - "ldap.example.com"
    bind_user: "ldap_bind_user"
    bind_password: "ldap_bind_password"
    ldap_account_name: "nas_admin"
    role_name: "Administrator"
    nfs_export_name: "my_nfs_export"
    filesystem: "my_filesystem" # Replace with your filesystem name
    nfs_path: "/exports"
    smb_netbios_name: "MY_SMB_SERVER"
    smb_workgroup: "MY_WORKGROUP"
    smb_description: "SMB Server for NAS"
    smb_local_admin_password: "smb_admin_password"
 
  tasks:
    - name: Create NAS Server
      dellemc.powerstore.nasserver:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        nas_server_name: "{{ nas_server_name }}"
        description: "{{ nas_server_description }}"
        state: "present"
      register: nas_server_result
 
    - name: Configure DNS settings
      dellemc.powerstore.dns:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        dns_id: "{{ dns_id }}"
        dns_addresses: "{{ dns_addresses }}"
        dns_address_state: "present-in-dns"
        state: "present"
 
    - name: Create LDAP domain
      dellemc.powerstore.ldap_domain:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        domain_name: "{{ ldap_domain_name }}"
        ldap_servers: "{{ ldap_servers }}"
        protocol: "LDAP"
        ldap_server_type: "OpenLDAP"
        bind_user: "{{ bind_user }}"
        bind_password: "{{ bind_password }}"
        ldap_domain_user_settings:
          user_search_path: "cn=Users"
        ldap_domain_group_settings:
          group_search_path: "cn=Groups"
        ldap_server_state: "present-in-domain"
        state: "present"
      register: ldap_domain_result
 
    - name: Create LDAP account
      dellemc.powerstore.ldap_account:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        ldap_account_name: "{{ ldap_account_name }}"
        ldap_domain_id: "{{ ldap_domain_result.ldap_domain_details.id }}"
        role_name: "{{ role_name }}"
        ldap_account_type: "User"
        state: "present"
 
    - name: Create NFS export
      dellemc.powerstore.nfs:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        nfs_export_name: "{{ nfs_export_name }}"
        filesystem: "{{ filesystem }}"
        nas_server: "{{ nas_server_name }}"
        path: "{{ nfs_path }}"
        state: "present"
 
    - name: Enable SMB server
      dellemc.powerstore.smb_server:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        nas_server: "{{ nas_server_name }}"
        is_standalone: true
        netbios_name: "{{ smb_netbios_name }}"
        workgroup: "{{ smb_workgroup }}"
        description: "{{ smb_description }}"
        local_admin_password: "{{ smb_local_admin_password }}"
        state: "present"


How it works

The Ansible playbook automates the NAS setup through a series of tasks:

1.  NAS Server Creation: The nasserver module creates the NAS server with a specified name and description.  The state: “present” ensures the NAS server exists.
2.  DNS Configuration: The dns module configures the DNS settings for the NAS server, adding DNS server addresses. dns_address_state: “present-in-dns” ensures the specified DNS addresses are configured on the NAS server.
3.  LDAP Domain Integration: The ldap_domain module configures the NAS server to authenticate users against an LDAP domain.  It specifies the LDAP server details, bind user credentials, and search paths for users and groups. The ldap_server_state: “present-in-domain” ensures the specified LDAP servers are configured on the NAS server. The result of this task is registered for use in the next task.
4.  LDAP Account Creation: The ldap_account module creates an LDAP account and assigns it a specific role on the NAS server. The ldap_domain_id is retrieved from the previous task’s result, linking the account to the created LDAP domain.
5.  NFS Export Creation: The nfs module creates an NFS export, specifying the filesystem to be shared, the path, and the NAS server it belongs to.
6.  SMB Server Configuration: The smb_server module enables and configures the SMB server on the NAS, setting parameters like NetBIOS name, workgroup, and local administrator password.  is_standalone: true configures the SMB server to operate in standalone mode.

Before running the playbook, replace the placeholder values in the vars section with your actual environment details, such as array IP, credentials, DNS addresses, LDAP settings, filesystem name, and SMB configuration. The hosts value (localhost here) should also be updated to reflect the target host or group of hosts where the playbook will be executed.
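
One convenient way to supply those environment-specific values without editing the playbook is an extra-vars file passed on the command line (for example, ansible-playbook nas_setup.yml -e @nas_vars.yml, where both file names are hypothetical). The sketch below uses placeholder values only.

# Example extra-vars file (placeholder values only); file and host names are hypothetical.
array_ip: "10.0.0.50"
user: "admin"
password: "change_me"
nas_server_name: "prod_nas01"
dns_addresses:
  - "10.0.0.10"
  - "10.0.0.11"
ldap_domain_name: "corp.example.com"
ldap_servers:
  - "ldap01.corp.example.com"
filesystem: "prod_fs01"
smb_netbios_name: "PROD_NAS01"
smb_workgroup: "CORP"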


2. Implementing File System Quotas and Alerts

This section demonstrates how to automate the implementation of file system quotas and configure email alerts for quota breaches using Ansible. File system quotas are essential for managing storage allocation and preventing individual users or groups from consuming excessive disk space. This playbook leverages the dellemc.powerstore collection to interact with a Dell PowerStore storage array, creating filesystems, setting quotas, configuring SMTP settings, and setting up email alerts.

Ansible Modules

The following modules from the Ansible collection are used in this playbook:

  • filesystem – Manages file system creation and configuration on the PowerStore array.
  • quota – Creates and manages quotas for users or groups on a specified filesystem.
  • smtp_config – Configures the SMTP settings on the PowerStore array for sending email alerts.
  • email – Creates and manages destination email addresses for receiving alerts from the PowerStore array.


Ansible Playbook

---
- name: Implement File System Quotas and Alerts
  hosts: all
  gather_facts: false
  vars:
    array_ip: "your_array_ip"
    validate_certs: false
    user: "your_user"
    password: "your_password"
    filesystem_name: "your_filesystem"
    nas_server_id: "your_nas_server_id"
    email_address: "admin@example.com"
    smtp_address: "smtp.example.com"
    source_email: "noreply@example.com"
    unix_name: "test_user"
    soft_limit_tb: 5
    hard_limit_tb: 10
 
  tasks:
    - name: Create FileSystem
      dellemc.powerstore.filesystem:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        filesystem_name: "{{ filesystem_name }}"
        nas_server: "{{ nas_server_id }}"
        size: "10"
        cap_unit: "GB"
        state: "present"
      register: filesystem_result
 
    - name: Create a Quota for a User
      dellemc.powerstore.quota:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        quota_type: "user"
        unix_name: "{{ unix_name }}"
        filesystem: "{{ filesystem_name }}"
        nas_server: "{{ nas_server_id }}"
        quota:
          soft_limit: "{{ soft_limit_tb }}"
          hard_limit: "{{ hard_limit_tb }}"
        cap_unit: "TB"
        state: "present"
 
    - name: Configure SMTP Settings
      dellemc.powerstore.smtp_config:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        smtp_id: "0"
        smtp_address: "{{ smtp_address }}"
        source_email: "{{ source_email }}"
        state: "present"
 
    - name: Create Destination Email for Alerts
      dellemc.powerstore.email:
        array_ip: "{{ array_ip }}"
        user: "{{ user }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        email_address: "{{ email_address }}"
        notify:
          info: true
          critical: true
          major: true
        state: "present"


How it works

1.  Variable Definition: The playbook starts by defining several variables, including connection details for the PowerStore array (array_ip, user, password), filesystem details (filesystem_name, nas_server_id), email configuration (email_address, smtp_address, source_email), and quota limits (unix_name, soft_limit_tb, hard_limit_tb).  These variables should be modified to reflect your specific environment.
2.  Create FileSystem: The filesystem module is used to create a filesystem on the PowerStore array. The state: “present” ensures that the filesystem is created if it does not already exist.  The register keyword saves the result of the task for later use if needed.
3.  Create a Quota for a User: The quota module is used to create a quota for a specific user (unix_name) on the created filesystem. The quota_type is set to “user”, and the soft_limit and hard_limit are defined in TB.  The state: “present” ensures that the quota is created if it doesn’t exist.
4.  Configure SMTP Settings: The smtp_config module configures the SMTP settings on the PowerStore array. This allows the array to send email alerts. The smtp_address and source_email variables are used to configure the SMTP server and the sender’s email address. The smtp_id is set to “0”, which typically refers to the primary SMTP configuration.
5. Create Destination Email for Alerts: The email module creates a destination email address for receiving alerts from the PowerStore array. The email_address variable specifies the recipient’s email address. The notify parameter specifies which alert severities (info, critical, major) should be sent to the specified email address. state: “present” ensures that the email configuration exists.

This playbook provides a complete solution for automating file system quota management and setting up email alerts on a Dell EMC PowerStore array, improving storage management and proactive monitoring.
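The quota module is not limited to per-user limits. Below is a hedged sketch of a tree quota that caps an entire directory rather than a single user; the quota_type value of "tree", the path parameter, and the directory path itself are assumptions based on the module's tree-quota support, so verify them against the module documentation for your collection version before relying on them.

    - name: Create a Tree Quota on a directory (illustrative sketch)
      dellemc.powerstore.quota:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        quota_type: "tree"                  # assumed value; the playbook above uses "user"
        path: "/projects"                   # hypothetical directory inside the filesystem
        filesystem: "{{ filesystem_name }}"
        nas_server: "{{ nas_server_id }}"
        quota:
          soft_limit: 1
          hard_limit: 2
        cap_unit: "TB"
        state: "present"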


3. Filesystem Snapshots with Replication for Disaster Recovery

This section describes how to implement a disaster recovery (DR) strategy using Ansible to automate the creation of filesystem snapshots and their replication to a remote system. This approach ensures data protection and business continuity in the event of a primary site failure. The Ansible playbook provided automates the entire process, from snapshot creation to replication session establishment.

Ansible Modules

The following modules from the Ansible collection are used in this playbook:

  • filesystem_snapshot – Creates and manages filesystem snapshots on the PowerStore array.
  • remotesystem – Manages remote system configurations, enabling replication between PowerStore arrays.
  • replicationsession – Creates and manages replication sessions between volumes or consistency groups on different PowerStore arrays.


Ansible Playbook

---
- name: Filesystem Snapshots with Replication for Disaster Recovery
  hosts: powerstore
  gather_facts: false
  vars:
    array_ip: "your_powerstore_ip"
    validate_certs: false
    user: "your_powerstore_user"
    password: "your_powerstore_password"
    remote_address: "your_remote_powerstore_ip"
    remote_user: "your_remote_powerstore_user"
    remote_password: "your_remote_powerstore_password"
    filesystem_name: "your_filesystem_name"
    nas_server_name: "your_nas_server_name"
    snapshot_prefix: "dr_snapshot"
    replication_session_volume: "your_replication_volume"
 
  tasks:
    - name: Create Filesystem Snapshot
      dellemc.powerstore.filesystem_snapshot:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        snapshot_name: "{{ snapshot_prefix }}_{{ lookup('pipe', 'date +%Y-%m-%d-%H-%M-%S') }}" # controller-side timestamp; works with gather_facts disabled
        nas_server: "{{ nas_server_name }}"
        filesystem: "{{ filesystem_name }}"
        desired_retention: 7
        retention_unit: "days"
        state: "present"
      register: fs_snapshot_result
 
    - name: Add Remote System
      dellemc.powerstore.remotesystem:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        remote_address: "{{ remote_address }}"
        remote_user: "{{ remote_user }}"
        remote_password: "{{ remote_password }}"
        remote_port: 443
        network_latency: "Low"
        description: "Remote system for DR replication"
        state: "present"
      register: remote_system_result
 
    - name: Establish Replication Session
      dellemc.powerstore.replicationsession:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        volume: "{{ replication_session_volume }}"
        remote_system_id: "{{ remote_system_result.remotesystem.id | default(remote_system_result.remote_system.id, True) }}"
        session_state: "started"
      register: replication_session_result


How it works

The Ansible playbook operates in the following sequence:

1.  Filesystem Snapshot Creation: The filesystem_snapshot module creates a snapshot of the specified filesystem. The snapshot name is dynamically generated using a prefix and the current date and time.  A retention policy is also configured, defining how long the snapshot will be retained.
2.  Remote System Addition: The remotesystem module adds the remote PowerStore array as a replication target. It configures the connection parameters, including the remote array’s IP address, user credentials, and network latency settings. The register keyword saves the output of the task so it can be used in subsequent tasks.
3. Replication Session Establishment: The replicationsession module establishes a replication session between the primary volume and the remote system. It uses the remote system’s ID (obtained from the previous task’s output) to configure the replication link. The session is started to begin replicating data to the remote site.  The default filter handles potential differences in the keys returned in the registered output (remote_system vs remotesystem).

Before running the playbook, ensure that you replace the placeholder values in the vars section with your actual environment details. This includes the PowerStore IP addresses, user credentials, filesystem names, and replication volume names.
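A single snapshot has limited DR value on its own, so you will typically want to run this playbook on a recurring schedule. The sketch below shows one way to do that with the cron module on the control node; the playbook path, inventory path, schedule, and log file are all assumptions for illustration.

- name: Schedule the DR snapshot playbook on the control node (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Run the DR playbook every night at 01:00
      ansible.builtin.cron:
        name: "powerstore-dr-snapshots"
        minute: "0"
        hour: "1"
        job: "ansible-playbook -i /opt/ansible/inventory.yml /opt/ansible/dr_snapshots.yml >> /var/log/dr_snapshots.log 2>&1"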


4. Integrating NFS Server with LDAP for Access Control

This article explains how to enhance the security and manageability of NFS access by integrating an NFS server with LDAP. By leveraging Ansible automation, we can streamline the process of configuring NFS servers, creating LDAP domains, and managing user access. The provided Ansible playbook automates these tasks, ensuring consistency and reducing manual effort.

Ansible Modules

The following modules from the Ansible collection are used in this playbook:

  • nfs_server – Manages the configuration of NFS servers, including the enabled NFS versions, secure NFS, and the server hostname.
  • ldap_domain – Configures LDAP domains, defining server addresses, bind users, search paths, and other LDAP-related settings.
  • ldap_account – Creates and manages LDAP accounts, assigning them roles and associating them with specific LDAP domains.


Ansible Playbook

---
- name: Integrate NFS Server with LDAP for Access Control
  hosts: powerstore
  gather_facts: false
 
  vars:
    array_ip: "10.x.x.x"
    validate_certs: false
    user: "username"
    password: "password"
    nas_server_name: "nas_server_01"
    domain_name: "example.com"
    bind_user: "cn=admin,dc=example,dc=com"
    bind_password: "admin_password"
    ldap_servers: ["10.y.y.y"]
    ldap_account_name: "nfs_user"
    role_name: "Administrator"
    nfs_hostname: "nfs.example.com"
 
  tasks:
    - name: Ensure NFS server is present
      dellemc.powerstore.nfs_server:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        nas_server: "{{ nas_server_name }}"
        host_name: "{{ nfs_hostname }}"
        is_nfsv3_enabled: true
        is_nfsv4_enabled: true
        is_secure_enabled: true #Enabling secure NFS
        state: "present"
      register: nfs_server_result
 
    - name: Create LDAP domain
      dellemc.powerstore.ldap_domain:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        domain_name: "{{ domain_name }}"
        ldap_servers: "{{ ldap_servers }}"
        protocol: "LDAP"
        ldap_server_type: "OpenLDAP"
        bind_user: "{{ bind_user }}"
        bind_password: "{{ bind_password }}"
        ldap_domain_user_settings:
          user_search_path: "ou=users,dc=example,dc=com"
        ldap_domain_group_settings:
          group_search_path: "ou=groups,dc=example,dc=com"
        ldap_server_state: "present-in-domain"
        state: "present"
      register: ldap_domain_result
 
    - name: Create LDAP account for NFS access
      dellemc.powerstore.ldap_account:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        ldap_account_name: "{{ ldap_account_name }}"
        ldap_domain_id: "{{ ldap_domain_result.ldap_domain_details.id }}"
        role_name: "{{ role_name }}"
        ldap_account_type: "User"
        state: "present"


How it works

The Ansible playbook operates in a sequential manner, executing the following steps:

1.  NFS Server Configuration: The nfs_server module is used to ensure that the NFS server is properly configured with the specified hostname, NFS versions (v3 and v4), and secure NFS enabled. The state: “present” ensures that the NFS server exists and matches the desired configuration.
2.  LDAP Domain Creation: The ldap_domain module creates an LDAP domain with the specified parameters, including server addresses, bind user credentials, and search paths for users and groups. The ldap_server_state: “present-in-domain” ensures that the LDAP server is part of the domain. The state: “present” ensures that the LDAP domain exists and matches the desired configuration.
3. LDAP Account Creation: The ldap_account module creates an LDAP account for NFS access, associating it with the previously created LDAP domain and assigning it a specific role. The ldap_domain_id is obtained from the registered result of the ldap_domain task. The state: “present” ensures that the LDAP account exists and matches the desired configuration.

By running this playbook, you can automate the integration of an NFS server with LDAP, enhancing security and simplifying user access management.
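One practical refinement before running this in production: the array password and LDAP bind password sit in clear text in the vars section. A minimal sketch of the usual fix, assuming you keep the same variable names and an example file name of secrets.yml, is to move them into an Ansible Vault encrypted vars file.

# secrets.yml – encrypt it with: ansible-vault encrypt secrets.yml
password: "your_powerstore_password"
bind_password: "your_ldap_bind_password"

# In the playbook header, drop the plain-text entries from vars and load the file instead:
#   vars_files:
#     - secrets.yml
#
# Then run the playbook with a vault prompt (playbook name is a placeholder):
#   ansible-playbook -i inventory.yml nfs_ldap_integration.yml --ask-vault-pass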


5. Automating SMB Share Provisioning and Management

This article describes how to automate the creation and management of SMB shares using Ansible. SMB shares are a fundamental component of file sharing in many organizations and automating their provisioning and management streamlines access control and reduces manual effort. The provided Ansible playbook simplifies the process of creating, updating, and managing SMB shares, including setting permissions and configuring various share properties. This automation ensures consistency, reduces errors, and speeds up the deployment of file-sharing resources.

Ansible Modules

The following modules from the Ansible collection are used in this playbook:

  • smbshare – Manages SMB shares on a storage array. This module is used to create, update, and delete SMB shares, as well as configure their properties such as access control lists (ACLs), descriptions, and availability settings.
  • smb_server – Manages SMB server settings. This module is used to enable and configure the SMB server on the storage array, including setting the NetBIOS name, workgroup, and local administrator password.
  • local_user – Manages local users on the storage array. This module is used to create and manage local user accounts, which can then be granted access to SMB shares.


Ansible Playbook

---
- name: Automate SMB Share Provisioning and Management
  hosts: localhost
  gather_facts: false
  vars:
    array_ip: "your_array_ip"
    validate_certs: false
    user: "your_user"
    password: "your_password"
    nas_server_name: "your_nas_server"  # or nas_server_id if you have it
    smb_share_name: "my_smb_share"
    filesystem_name: "my_filesystem"
    local_username: "smb_user"
    local_password: "Password123!"
    smb_share_description: "SMB share managed by Ansible"
    smb_server_netbios_name: "MY_SMB_SERVER"
    smb_server_workgroup: "WORKGROUP"
    smb_server_local_admin_password: "AdminPassword123!"
    role_name: "SecurityAdmin"
 
  tasks:
    - name: Ensure local user exists
      dellemc.powerstore.local_user:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        user_name: "{{ local_username }}"
        user_password: "{{ local_password }}"
        role_name: "{{ role_name }}"
        state: present
 
    - name: Enable SMB server
      dellemc.powerstore.smb_server:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        nas_server: "{{ nas_server_name }}"
        is_standalone: true
        netbios_name: "{{ smb_server_netbios_name }}"
        workgroup: "{{ smb_server_workgroup }}"
        description: "SMB Server managed by Ansible"
        local_admin_password: "{{ smb_server_local_admin_password }}"
        state: present
      register: smb_server_result
 
    - name: Create SMB share
      dellemc.powerstore.smbshare:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        share_name: "{{ smb_share_name }}"
        filesystem: "{{ filesystem_name }}"
        nas_server: "{{ nas_server_name }}"
        description: "{{ smb_share_description }}"
        is_abe_enabled: true
        is_branch_cache_enabled: true
        offline_availability: "DOCUMENTS"
        is_continuous_availability_enabled: true
        is_encryption_enabled: true
        acl:
          - access_level: "Full"
            access_type: "Allow"
            trustee_name: "{{ local_username }}"
            trustee_type: "User"
            state: "present"
        state: present
 
    - name: Update SMB share description
      dellemc.powerstore.smbshare:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        share_name: "{{ smb_share_name }}"
        nas_server: "{{ nas_server_name }}"
        description: "Updated SMB share description by Ansible"
        state: present


How it works

The Ansible playbook automates SMB share provisioning and management through a series of tasks:

1.  Create a Local User: The playbook first ensures that a local user account exists on the storage array using the local_user module. This account will be granted access to the SMB share.  The state: present ensures the user is created if it doesn’t exist or remains unchanged if it does.
2.  Enable SMB Server: The playbook enables and configures the SMB server using the smb_server module. It sets properties such as the NetBIOS name, workgroup, description, and local administrator password. The state: present ensures that the SMB server is enabled and configured with the specified settings.
3.  Create SMB Share: The playbook creates an SMB share using the smbshare module. It defines the share name, filesystem, NAS server, description, and various share properties such as Access-Based Enumeration (ABE), BranchCache, offline availability, continuous availability, and encryption.  It also configures access control lists (ACLs) to grant the previously created local user “Full” access to the share. The state: present ensures the SMB share is created with the specified properties.
4. Update SMB Share Description: Finally, the playbook updates the description of the SMB share using the smbshare module. This demonstrates how to modify existing SMB share properties. The state: present ensures the SMB share’s description is updated.

By running this playbook, you can automate the entire process of provisioning and managing SMB shares, ensuring consistency and reducing manual effort. Remember to replace the placeholder values in the vars section with your actual environment details.
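The same smbshare module also covers the reverse workflow. The sketch below, reusing the variables defined above, shows how you might revoke the local user's access by marking that ACL entry absent and then remove the share entirely with state: absent; treat it as an illustrative decommission step and verify the per-entry ACL state behaviour against the module documentation for your collection version.

    - name: Revoke the local user's access to the share (illustrative sketch)
      dellemc.powerstore.smbshare:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        share_name: "{{ smb_share_name }}"
        nas_server: "{{ nas_server_name }}"
        acl:
          - access_level: "Full"
            access_type: "Allow"
            trustee_name: "{{ local_username }}"
            trustee_type: "User"
            state: "absent"            # removes only this ACL entry
        state: present                 # the share itself is kept

    - name: Delete the SMB share once it is no longer needed
      dellemc.powerstore.smbshare:
        array_ip: "{{ array_ip }}"
        validate_certs: "{{ validate_certs }}"
        user: "{{ user }}"
        password: "{{ password }}"
        share_name: "{{ smb_share_name }}"
        nas_server: "{{ nas_server_name }}"
        state: absent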


Wrapping Up: Ansible – Your Key to PowerStore File Services Automation

As we’ve explored in this post, Ansible provides a robust and efficient way to automate a wide range of file service tasks on your Dell PowerStore array. From initial NAS setup and quota management, to disaster recovery and access control, the use cases we’ve covered demonstrate the power and flexibility of Ansible in streamlining your operations.

By leveraging modules like nasserver, nfs, smbshare, filesystem_snapshot, and many more, you can transform complex, manual processes into automated workflows. This not only saves time and reduces the risk of human error but also allows you to scale your file services more effectively.

Ready to take the next step? Experiment with the examples provided, adapt them to your specific environment, and discover the many ways Ansible can simplify and enhance your PowerStore file services management. Embrace automation and unlock the full potential of your PowerStore.

Integrating Dell PowerProtect Data Manager with Red Hat OpenShift and Dell PowerStore


Part 2: Failure and Recovery

Post by Martin Hayes

In Part 1, we got to the point where we configured and manually kicked off a ‘Protection Job’. Next up, we want to see how to recover from what we will call an orchestrated ‘mishap’ rather than a disaster per se. We will cover inter-cluster recovery, where we actually lose our entire OpenShift cluster for whatever reason, in a future post. For now, though, we are going to see what happens when we introduce some human error!

As per the diagram above we have our Namespace or ‘Project’ on OpenShift with our application Pod running inside, creatively named ‘ppdm-test-pod’.

Of course, I have been busy developing my application and have written a file to the mounted volume, which is backed by PowerStore. We will see this in more detail in the video demo, when we run through the process end to end. This will serve as a simple example of data persistence post-recovery.

Navigating back to the Pod details in the GUI we can verify the mount path for the storage volume and the associated Persistent Volume Claim (PVC). This is the one we attached in the last post ‘ppdm-claim-1’. Note: the path ‘mnt/storage’ is where we have written our demo text file.

Delving a little deeper into the PVC details we can see the name of the Persistent Volume that has been created on PowerStore and the associated Storage Class.

Moving on over to PowerStore we can see that the ‘PersistentVolume’ ‘ocp11-f0e672d7f6’ is present as expected.

Orchestrated Failure

Before we orchestrate our demo failure by deleting the OpenShift project, let’s ensure we have a copy on PPDM from which to recover. We did this in the last post, but it’s worth confirming.

Next let’s go ahead and delete the namespace/project by navigating to the project and using the GUI to ‘Delete Project’.

Confirm ‘Delete Project’ when prompted.

Wait a couple of minutes as the namespace/project is deleted and its associated entities terminate and clear down.

No OpenShift project means our Pod/application has also been deleted, as have our Persistent Volume Claim (PVC) and Persistent Volume. Note that ‘ocp11-f0e672d7f6’ has disappeared.

What about PowerStore itself? We can see here that the volume is also gone! Where once we had 8 volumes present, we now have 7. The CSI driver has unbound the claim on the volume and PowerStore has deleted ‘ocp11-f0e672d7f6’.

Net result: everything is gone. The project/namespace, the PVC, the volume on PowerStore and, by extension, our application. A bit of a mini disaster if you deleted the namespace in error… it happens to the best of us!

Have no fear… PPDM and DDVE to the rescue.

Policy Driven Recovery via PPDM

Of course we have everything backed up in our DDVE instance, fully orchestrated by PPDM. Let’s head back over to the PPDM console and perform a full recovery.

Navigate to the ‘Restore’ menu and then to ‘Assets’.

The process is really very straightforward. Note that you are presented with the option to recover from multiple point-in-time copies of the data (depending on the length of your retention policy). I want to recover the latest copy. Select the namespace to recover and then click ‘Restore’.

Run through the menu. We will restore to the original cluster (In an upcoming blog we will restore to an alternate OpenShift Cluster on different hardware).

We will choose to restore everything, including cluster-scoped resources such as role bindings and custom resource definitions (CRDs).

For the restore type we will ‘Restore to a New Namespace’, giving it the name of ‘ppdm-restored’.

We have only a copy of a single PVC to restore, so we will select that copy. Click ‘Next’.

Skipping through a couple of screenshots until we get to the last step (everything will be covered in the video demo), make sure everything looks OK and then click ‘Restore’.

Navigate over to the jobs pane and monitor the status of the restore.

You can drill a little deeper into the job to monitor its progress. There is a bit going on behind the scenes in terms of the cProxy pod deployment, so be patient (this process will be the subject of another blog, when we dig into what actually happens in the background). This will also be a little clearer in the video.

Finally, after a couple of minutes, the PPDM console indicates that everything has completed successfully.

The ‘proof is in the pudding’, as they say, so let’s verify what has actually happened: have I recovered my application workload/pod?

Verification

Back in the OpenShift console, we can see that the ‘ppdm-restored’ project has been created and that the pod ‘ppdm-test-pod’ has been re-created and deployed into this namespace.

Navigating into the Pod terminal itself, let’s see if I can find the text file that I created earlier. Let’s ‘cat’ the file to have a peek inside and make sure I’m telling the truth… sure enough, here is our original file and content.

What about our Persistent Volume Claim (PVC)? As we can see, this has also been recovered and re-attached to our Pod.

Double-clicking on the ‘ppdm-claim-1’, we can see it is bound and has created a net new Persistent Volume ‘ocp11-c0857aec4d’.

And finally… back over to PowerStore, we can see our net-new volume that has been provisioned via CSI, where our restored data has been written.
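If you would rather script these checks than click through the console, a short Ansible sketch along the following lines also works; it assumes the kubernetes.core collection is installed, that a kubeconfig for the cluster is available on the control node, and that the namespace name matches the restore above.

- name: Verify the restored namespace (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: List pods in the restored namespace
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: ppdm-restored
      register: restored_pods

    - name: List PVCs in the restored namespace
      kubernetes.core.k8s_info:
        kind: PersistentVolumeClaim
        namespace: ppdm-restored
      register: restored_pvcs

    - name: Show what came back
      ansible.builtin.debug:
        msg: "Pods: {{ restored_pods.resources | map(attribute='metadata.name') | list }} / PVCs: {{ restored_pvcs.resources | map(attribute='metadata.name') | list }}"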

Video Demo

Dell PowerStore 4.1 is now Available – What’s New


Dell PowerStore is the flagship storage solution from the industry’s #1 storage provider for a reason – and I’m excited to share how it can help your business adapt to challenges, reduce costs, and simplify infrastructure for the long term.


First and foremost, PowerStore is a Future-proof platform designed to ensure you never outgrow your storage, no matter how your business evolves. Think of it as the last array you’ll need – ending the expensive and disruptive cycle of platform upgrades. Future-readiness has been PowerStore’s defining principle since Day 1, and it drives everything we’ll discuss today.

  • It starts with a high-performance unified architecture that supports block, file, vVols, and containers – all in one appliance. This flexibility gives you headroom to handle changing demands
  • Scale-up and scale-out expansion means you can add performance or capacity seamlessly and independently, with easy load balancing between arrays
  • Unlike some competitors, you can scale one drive at a time to manage growth economically. No need to purchase expensive drive packs!
  • All PowerStore models scale equally, so you’ll never need to upgrade models just to get more storage space.

Future-readiness also means resiliency – the ability to recover quickly from data loss or disruptions. With PowerStore, no matter how you grow, your data stays safe and protected. Key features include:

  • Secure & immutable snapshots that can’t be deleted, even by an administrator
  • Synchronous and asynchronous replication, including native metro sync with auto-failover — so your workloads never stop running, even during an outage.
  • Intuitive Protection Policies that let you customize and combine multiple kinds of local, remote, or cloud-based protection into reusable policies you can apply with just a few clicks

You can even control backup and recovery directly from PowerStore, thanks to PowerStore’s Storage Direct integration with Dell PowerProtect Data Domain – the industry’s #1 backup solution. It’s a great “better together” solution you can have up and running in just 90 seconds.

PowerStore also delivers advanced security to combat today’s AI-accelerated threats. From Hardware Root of Trust to multi-factor authentication, encryption and more, PowerStore is designed to meet the toughest compliance standards. It’s even listed on the US Department of Defense Approved Products List—unlike competitors like Pure Storage.

But the real game-changer is Lifecycle Extension with ProSupport. This industry-leading program leverages PowerStore’s modular hardware and all-inclusive software to put modernization on autopilot. You get seamless technology transitions, cost-saving capacity refresh credits, an appointed technical advisor and more.

Lifecycle Extension simplifies every aspect of storage ownership – ensuring your workloads are always running on the latest technology for the lowest cost – so your business stays ready for anything.

Now, let’s talk Efficiency – another of PowerStore’s core strengths. Storage efficiency is primarily about cost savings, but it’s also about optimizing performance and management so you get the most from every resource.

It’s important to realize there are fundamental differences among storage solutions – and this is an area where architecture really matters.

  • Traditional storage often requires constant manual tuning…but PowerStore’s built to work smarter.
  • Its unique “Autonomous Active/Active” architecture drives incredible efficiency without any intervention on your part.

Let’s explore two software-driven ways this happens.

First, Dynamic Node Affinity auto-tunes performance by dynamically switching host paths between controllers.

  • When you create a volume on PowerStore, it’s NOT pinned to one controller.
  • PowerStore selects a preferred path initially, but from a host perspective, both controllers have an active multipath connection.
  • This makes it easy to swap paths in software – no HA failover is required, it’s just a simple multipathing cutover.

PowerStore distributes paths among volumes automatically – and reassigns them as performance and capacity needs change over time.

  • Unlike other arrays, there’s absolutely nothing you have to do to load balance or reconfigure complex failover scenarios.
  • You simply create your volume, and PowerStore works in the background to keep the system optimized.

Next, our Dynamic Resiliency Engine extends this simplicity to drive redundancy, eliminating the need to manage RAID groups.

  • Again, nothing is pinned – both controllers have access to ALL the drives in a single virtualized pool.
  • You can add drives one at a time, and even mix-and-match drive sizes.
  • Each new resource is simply absorbed into the pool, with all services (including sparing) configured and rebuilt automatically.

PowerStore also drives HUGE cost savings with advanced data reduction – and again, this flows directly from the architecture.

Unlike basic software-only approaches that can degrade performance, PowerStore data reduction combines smart software and hardware to deliver predictable efficiency with zero impact on workloads.

  • Intel QuickAssist Technology built directly into the system, not an add-on module, offloads compression processing.
  • Variable block compression and granular global deduplication leverage the single pool architecture to maximize utilization.

PowerStore data reduction is “always-on.” There’s nothing to manage, configure, or decide – it just works. And the results are well-documented. Independent testing has shown PowerStore delivers up to 2x better data efficiency and 54% lower energy use compared to leading competitors.

Best of all, Dell backs this superior technology with the industry’s best guarantee – 5:1 on reducible data. Your 5:1 results will continue through your entire pre-paid maintenance term, or we’ll ship you actual free drives (up to 50% of your purchased capacity) – not just a “discount toward future purchase.”

  • NO assessment required
  • Valid for up to 6 years
  • We even give you industry-leading data reduction visibility and analytics so you can confirm your guarantee status.

It all adds up to an unprecedented ability to lock in long term savings for simplified planning and purchasing. And it makes owning the world’s best storage platform much more affordable than you might think.

In today’s IT environment, organizations continue to face mounting pressures—managing inefficiencies, responding to growing cyber threats, and meeting sustainability and compliance demands, all while keeping costs under control. Last year, PowerStoreOS 4.0 helped customers tackle these challenges with more performance, efficiency, resiliency and multicloud capabilities than ever before.

Building on that foundation, we’re further enhancing PowerStore’s robust platform with the latest release, PowerStoreOS 4.1! This release introduces powerful new features to help you optimize performance and efficiency, stronger security with more ways to control access and boost availability, and improved operational simplicity for file workloads, available to all new and existing PowerStore customers.

  • First, we’re using the advanced power of machine learning and AI to help you make smarter decisions in the datacenter. We’re introducing energy and CO2e forecasting to help assess and manage your environmental impact while offering performance headroom analytics to optimize your workloads for peak efficiency. Plus, machine learning-based proactive support remediation keeps your environment up and running by proactively resolving predicted issues before they disrupt operations.
  • To strengthen security, we’ve added DoD smart card support for MFA, along with automated certificate alerts and renewal to simplify maintaining secure environments. Enhanced web security tools provide even greater compliance and protection.
  • And with several improvements to file management, including secure snapshots and quality of service, we’re making it easier than ever to ensure your file workloads are secure and performant. And for our Unity customers, the new streamlined file migration with support for Unity Cloud Tiering Appliance makes transitioning to PowerStore faster and more efficient than ever.

These innovations are designed to help you meet today’s challenges while preparing for tomorrow.

Now, let’s explore these exciting new features in more depth


Achieving operational efficiency while staying on track with sustainability goals has never been more important, especially in an environment with rising energy costs. PowerStore’s ongoing advancements have helped make it more energy efficient over time, with improvements including up to 60% more performance and capacity per watt as well as ENERGY STAR certification across the entire family of products.

With PowerStoreOS 4.1, we’re introducing powerful new carbon footprint analytics/forecasting in APEX AIOps to help organizations monitor, manage, and optimize energy consumption.

This feature provides detailed energy usage and carbon footprint forecasting, showcasing metrics at both global and system levels in an intuitive graphical interface. By tracking key data points such as power (watts) and inlet temperature (Celsius) every 5 minutes, the system sends data directly to APEX AIOps.

With simplified visual displays of historical, current, and forecasted metrics for energy consumption and CO2e emissions, administrators can work to maximize efficiency and reduce costs. And, by aggregating data across Dell products within APEX AIOps’ Carbon Footprint page, you gain a complete, data center-level view of your environmental impact.

NOTE: Carbon footprint visibility in APEX AIOps is slated for wider availability in 1H 2025.

Understanding and optimizing application performance is critical for IT administrators.  With PowerStoreOS 4.1, we’ve introduced enhanced performance analytics, providing a deeper understanding of system utilization and resource consumption.

Through the PowerStore UI, administrators now have access to more detailed system utilization metrics, summarizing the utilization of several categories into a single, easily understandable metric: Appliance Utilization. By consolidating these metrics, administrators can simplify workload planning and intelligently plan when they will need to upgrade, scale out, or migrate workloads to other appliances in the cluster.

PowerStoreOS 4.1 also enhances insight into IOPS performance with clear information on the appliance’s maximum sustainable IOPS, providing the data needed to understand the performance that administrators can expect.

Additionally, a new capability provides visibility into the impact of host offload commands like XCOPY, UNMAP, and WRITE_SAME, enabling administrators to better identify bottlenecks and assess performance implications.

These enhancements build upon PowerStore’s existing performance data, elevating the capabilities for monitoring IOPS, bandwidth, CPU utilization, queue depth, and latency to a whole new level. This means better visibility, smarter decision-making, and the tools you need to optimize operations effectively.


Heightened security is a critical requirement for Federal and Department of Defense (DoD) environments. PowerStore meets the most stringent requirements with STIG certification and presence on the Federal Approved Product List (APL). With PowerStoreOS 4.1, we’re improving security for the most sensitive environments by introducing support for Smart Card multi-factor authentication (MFA), designed specifically to meet the demands of DoD and federal customers.

By supporting government-issued smart cards, including Common Access Cards (CAC) used by DoD employees and Personal Identity Verification (PIV) Cards used across Federal agencies, PowerStore helps deliver both convenience and robust security, ensuring compliance and protecting critical data in even the most sensitive environments. Users simply swipe their issued card, and authentication is conducted via the card’s certificate—eliminating the need for traditional password entry.

Managing this feature with PowerStore MFA is easy with direct integration with Windows Single Sign-On via Active Directory.

This feature is essential for zero-trust security principles that require continuous identity verification. It safeguards access to any interface an administrator can use to manage the array (PowerStore Manager, PSTCLI, and REST API), ensuring control is restricted only to authorized personnel.


Ensuring uptime is crucial in any IT environment and preventing issues before they cause disruptive and costly downtime is paramount for every IT department. With that in mind we’ve introduced a new machine-learning-based proactive support feature, designed to predict and prevent issues before they impact your operations.

By analyzing data from tens of thousands of PowerStore arrays deployed worldwide, PowerStore can proactively resolve up to 79% of predicted issues, such as hardware failures or software configuration problems, with automatic case creation for remediation. Customers are notified directly via PowerStore Manager alerts, which can include support case creation or step-by-step troubleshooting guidance through knowledge base articles.

Like everything related to PowerStore, it gets smarter over time. This approach not only reduces risks but also ensures your system stays ahead of potential problems with proactive and intelligent support you can count on.


PowerStore’s unified architecture makes consolidating workloads simple by seamlessly supporting block, file, and vVols on a single appliance.

PowerStoreOS 4.1 improves the way you deliver file storage by introducing powerful new ways to protect, prioritize, and optimize resources that enhance performance, security, and capacity management.

  • First, Quality of Service for File allows you to set bandwidth limits on file systems and NAS servers. Especially useful for service providers that offer customized SLAs, this feature prevents ‘noisy neighbor’ scenarios where resource contention during busy periods can degrade performance on critical applications. By using QoS, you can maintain performance levels and prioritize important workloads during especially busy periods.
  • Second, Secure Snapshots for File enable you to protect entire file systems by preventing unauthorized deletion or modification of file snapshots until designated retention dates. Applied via PowerStore’s easy-to-use protection policy workflow, administrators can maintain government and legal compliance requirements while also providing a reliable recovery tool for ransomware or malware attacks.
  • Lastly, Capacity Accounting for File delivers PowerStore’s industry-leading visibility into data reduction efficiency for file resources. Administrators can now monitor trends and space savings for both block and file workloads, helping you better plan, optimize, and reduce capacity costs.


PowerStore is #1 in ease of use for a reason, using automation and intelligence to help administrators simplify operations so they can be proactive in solving other business challenges. With PowerStoreOS 4.1, we’re making it even easier to keep operations secure and compliant, without unnecessary complexity, through automated certificate renewals and alerts.

Tracking and renewing certificates is a necessary but draining chore for anyone in IT. Certificates that expire without renewal can lead to serious issues, like data unavailability or the inability to access the management interface. By keeping your certificates up to date, these risks are minimized.

PowerStore relieves storage administrators of yet another headache with the new Certificate Management Engine, which automatically tracks all certificates and proactively notifies administrators of upcoming expirations, with alerts at 90-, 60-, and 30-day intervals before renewal is needed.

For PowerStore certificates, you can enable auto-renewal, which acts for you 90 days before expiration, eliminating risk and time-consuming manual tasks.

With notification and auto-renewal options, you don’t have to worry about managing certificates manually. Avoid disruptions, simplify administration, and ensure up-to-date protection against security threats caused by unverified network communications.


For customers migrating from Unity appliances, PowerStoreOS 4.1 offers enhanced file migration capabilities, making the transition smoother and enabling customers to continue using their Cloud Tiering Appliance!

The native file migration tool now supports importing dual- and multi-protocol file systems, making it easy to migrate from Unity when file systems use both NFS and SMB, whether on the same or separate file systems. Even better, if you’ve tiered data using Unity’s Cloud Tiering Appliance (CTA), PowerStore now supports importing stub files, which enables migration of file systems without rehydrating the data from the cloud back to on-premises storage.

Post-migration, PowerStore can continue to use CTA functionality with any file systems on the array. This means you can continue archiving and recalling files, managing existing policies, and even create new ones! By maintaining your tiered data structure, PowerStore reduces on-premises storage costs by seamlessly tiering cold data to cost-effective, cloud-based storage.

These enhancements make migrating to PowerStore from Unity a clear choice—streamlining your move to a modern, intelligent platform while preserving and enhancing file management capabilities.

NOTE: This applies to file data only.

What are dual and multiprotocol file systems?

Customers can use multiple protocols for file systems on their Unity arrays. PowerStore now supports the ability to import file systems that use both NFS and SMB, on either the same or separate file systems.

What are ‘stub files’ and what does it mean for PowerStore migration to be ‘stub-aware’?

Stub files are small placeholder files used to represent tiered data that has been moved to the cloud by Unity’s Cloud Tiering Appliance. They act as pointers to the tiered data, which reduces on-premises storage requirements while retaining access to the data in the cloud.

In PowerStoreOS 4.1, native file migration from Unity is ‘stub-aware’, meaning that data that is tiered to the cloud will not have to be rehydrated back on-premises prior to migration. This dramatically simplifies the migration process, reduces cloud egress costs, and retains the use of cost-effective cloud storage for cold data.

What is the Unity Cloud Tiering Appliance (CTA)?

The CTA is a software-defined solution that enables seamless data tiering to the cloud. It identifies cold or inactive data and automatically tiers it to cloud storage based on user-defined policies. Once data is tiered to the cloud, the CTA provides access to it while reducing the load on primary storage. You can read more about the Unity CTA here.

What happens with CTA after the file migration?

After updating to PowerStoreOS 4.1 and migrating data from their Unity array, customers can continue using the Unity CTA with their PowerStore system. There are some additional details and restrictions, however:

  • Full CTA functionality (tiering, recalling) continues only on imported NAS servers.
  • The CTA retains the preexisting policies for imported NAS servers on the new PowerStore system.
  • After the migration, CTA policies can be added, modified, and deleted for the imported NAS servers.
  • In CTA, the array will continue to display ‘Unity’ as the server type even when the NAS server exists on PowerStore.

PowerStoreOS 4.1 also introduces several networking-specific features designed to enhance visibility, reduce complexity, and strengthen security.

  • First, PowerStore now supports Link Layer Discovery Protocol (LLDP) discovery information, providing detailed insights into the network connections of your PowerStore appliance. Available through REST and CLI commands, LLDP delivers descriptions, capabilities, and details about connected devices and switches. This added visibility enables faster troubleshooting and helps you resolve issues with greater efficiency.
  • Next, improved volume mapping streamlines host group management by allowing you to map volumes to individual hosts within a group. It also enhances boot from SAN capabilities, reducing complexity and eliminating configuration errors, making deployment and management easier than ever.
  • Finally, PowerStoreOS 4.1 introduces web security enhancements that comply with modern standards. Support for Strict-Transport-Security (HSTS) ensures secure access to PowerStore Manager, while authentication with trusted NTP servers bolsters system integrity. These enhancements provide peace of mind, keeping your infrastructure secure and compliant.

With these advancements, PowerStore empowers you to operate with improved network transparency, reduced administrative overhead, and enhanced protection—further cementing its position as a reliable and easy-to-use storage solution.

PowerStore’s Storage Direct Protection feature, introduced in PowerStoreOS 3.5, connected two of Dell’s leading portfolio products together in a tightly integrated package. With a simple 90 second setup time, data protection is simple, efficient, and secure with no need for a dedicated backup server since PowerStore can back up data directly to industry-leading PowerProtect appliances—physical, virtual, or even in the cloud—with zero impact on application hosts. The process is highly efficient, sending only changed data over the network and leveraging PowerProtect’s typical 65:1 data reduction for cost savings.

Now in PowerStoreOS 4.1, we’re taking this game-changing capability even further. New PowerProtect appliances engineered to work seamlessly with PowerStore provide more options to choose from including a new all-flash ready node and the latest DD model, the DD6410. Additionally, improvements to the integration available in 4.1 offer up to 4x faster restore performance, empowering organizations to restore critical data and operations with greater speed and efficiency.

These updates enhance Storage Direct Protection, a unique offering in the market made possible by Dell’s robust portfolio. Streamline backups, get faster recoveries, and make the best choice for safeguarding your organization’s most valuable asset—its data.

You can view the release notes below:

https://dl.dell.com/content/manual25269339-dell-powerstore-release-notes-for-powerstoreos-version-4-1-0-0.pdf?language=en-us

You can download the new update package from Support for PowerStore 9200T | Drivers & Downloads | Dell US
