
Dell EMC At Storage Field Day (SFD) 19 – Automation


We are investing a LOT when it comes to automation and container integration across our storage products. As such, we gave a few sessions as part of Storage Field Day 19 in Santa Clara, CA on January 23, 2020.

The Evolution of Applications and the Need for Better Tools with Dell EMC

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, present DevOps for storage with Dell EMC. As one of the leading IT infrastructure providers, Dell Technologies is at the forefront of the pressures that technology and the economy put on application environments inside and outside the datacenter. In this video, the presenters review the role changes that affect personnel and the new use cases appearing for both developers and administrators. They also show how automation tools improve speed and productivity.

Power Tools and Enablers for Dell EMC Storage – for Programmers

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, present “power tools” for programmers interacting with Dell EMC storage. Dell Technologies has chosen tools for programmers and administrators to provide capabilities and consistency for using Dell EMC storage products in modern application environments. Programmatic interfaces are the historical mechanism for implementing infrastructure automation and the foundation that lets Dell EMC storage products participate in modern automation frameworks. All Dell EMC storage products support APIs for automation.
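To make this concrete, here is a rough sketch of what such an API call can look like against a Unity array. The endpoint and the X-EMC-REST-CLIENT header follow the public Unisphere REST API documentation, but treat the hostname and credentials as placeholders and check the API guide for your release:

# Minimal sketch: query basic system info over the Unity REST API.
# Hostname and credentials are placeholders; -k skips certificate checks.
curl -k -H "X-EMC-REST-CLIENT: true" \
  -u "$UNITY_USER:$UNITY_PASS" \
  "https://unity.example.com/api/types/basicSystemInfo/instances"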


Dell EMC Ansible Overview and Demonstration

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, demonstrate DevOps automation in Ansible with Dell EMC. The presenters describe the need for Ansible, its core concepts, available modules, and architecture, including success stories. They then demonstrate one-click Dell EMC PowerMax provisioning.
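For a feel of what one-click provisioning looks like in practice, here is a minimal playbook sketch. The module and parameter names follow the public dellemc.powermax Ansible collection, but treat them as assumptions and verify against the collection documentation; the Unisphere host, credentials, and array serial are placeholders:

# Write an illustrative playbook and run it.
cat > provision_powermax.yml <<'EOF'
- name: One-click PowerMax provisioning (illustrative sketch)
  hosts: localhost
  connection: local
  tasks:
    - name: Create a storage group with a new 10 GB volume
      dellemc.powermax.storagegroup:
        unispherehost: "unisphere.example.com"   # placeholder
        universion: 92                           # assumption; match your U4P
        verifycert: false
        user: "{{ u4p_user }}"
        password: "{{ u4p_password }}"
        serial_no: "000111222333"                # placeholder array serial
        sg_name: "ansible_demo_sg"
        service_level: "Diamond"
        volumes:
          - vol_name: "demo_vol"
            size: 10
            cap_unit: "GB"
        state: present
EOF
ansible-playbook provision_powermax.yml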


Dell EMC Storage Integration with VMware vRealize Suite

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, show how Dell EMC integrates with the VMware vRealize suite. The presenters describe the need for vRealize Orchestrator (vRO) and the core concepts of the integration, including architecture and success stories.

Kubernetes for Dell EMC Storage Introduction and Demo

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, demonstrate Kubernetes integration with Dell EMC storage products. The presenters describe persistent storage for Kubernetes, CSI concepts and choices, architectures, and persistent storage challenges. They then demonstrate integration with PowerMax persistent storage. Finally, they discuss how Dell EMC embraces open access and community support for developer and automation tools for storage.


Dell EMC At Storage Field Day (SFD) 19 – Unstructured Data


Store, manage and protect unstructured data with efficiency and massive scalability.

Dell EMC Isilon is the industry's #1 family of scale-out network-attached storage systems, designed for demanding enterprise file workloads, with a choice of all-flash, hybrid, and archive NAS platforms. As such, we gave a few sessions on it as part of Storage Field Day 19 in Santa Clara, CA on January 23, 2020.


Dell EMC Project Nautilus Introduction

Ted Schachter, Sr. Advisor, Product Management, introduces Dell EMC Project Nautilus. Dell EMC customers need the ability to capture and analyze fast data from live sensors in their manufacturing and prototyping phases, move it to long-term storage, and analyze petabytes of historical data to gain deeper insights from an interconnected platform. Dell EMC believes that Project Nautilus is the answer and introduces the Storage Field Day audience to this real-time analytics platform.

 

Dell EMC Isilon’s Answer to Unstructured Data in the Cloud


Kaushik Ghosh, Director, Product Management, and Callan Fox, Consultant, Product Management, present Dell EMC Isilon in the cloud. As unstructured data grows, organizations need to utilize the cloud more than ever, yet only approximately 2% of them are able to take advantage of it. The presenters discuss why the top three cloud providers came to Dell EMC to help customers get their file data into the cloud. They give the audience an overview of the Isilon unstructured data offerings for public cloud, including a preview of their Azure cloud announcement.

Dell EMC Isilon’s Answer to Infrastructure and Data Insights


Kaushik Ghosh, Director, Product Management, and Callan Fox, Consultant, Product Management, introduce Dell EMC Isilon CloudIQ and ClarityNow. These software tools put insights into the storage and the data in the right hands. They help customers gain user-friendly summaries of the health of their data center, streamlining administrative tasks and alleviating bottlenecks at the Isilon array. ClarityNow, a recent acquisition, gives customers direct insight into their data's location, value, and usage.

What Next for Unstructured Data Solutions at Dell EMC?

Kaushik Ghosh, Director, Product Management, discusses the vision for unstructured data from Dell EMC.

Dell EMC AppSync 4.0 Is Here.


CDM (Copy Data Management) is exploding: analysts predict that most of the data in your data center is derived from copies of your data. As such, we are continuing to invest in this field.

Readers of my blog know that I'm a big fan of AppSync, which lets you copy, restore, and repurpose your data with direct integration to Dell EMC storage products. So now is a great time to walk through the 4.0 version of AppSync, which we have just released.

AppSync is software that enables Integrated Copy Data Management (iCDM) with Dell EMC's primary storage systems.

AppSync simplifies and automates the process of generating and consuming copies of production data. By abstracting the underlying storage and replication technologies, and through deep application integration, AppSync empowers application owners to satisfy copy demand for operational recovery and data repurposing on their own. In turn, storage administrators need only be concerned with initial setup and policy management, resulting in an agile, frictionless environment.

AppSync automatically discovers application databases, learns the database structure, and maps it through the Virtualization Layer to the underlying storage LUN. It then orchestrates all the activities required from copy creation and validation through mounting at the target host and launching or recovering the application. Supported workflows also include refresh, expire, and restore production.

New Simplified HTML5 GUI

In AppSync 4.0, we completely overhauled the UI, which is now HTML5-based.

Below you can see a video showing how to add XtremIO and PowerMax arrays and run a discovery on the vCenter and on the application host that is running a database.

And below you can see a demo showing the options associated with creating (or subscribing to) a service plan.

Metro Re-purposing

1. Select Copy Management -> Copies -> SQL Server -> DB instance -> User Databases -> "metro" database. Most options are greyed out until the checkbox is checked.

2. Check the box next to the "metro" DB instance; the actions become available. Click "Create Copy with Plan".

3. Select "Data Re-purposing" and click "Next".

4. Select the options you desire and click "Next":

• The name is auto-generated from the database name, the current date/time, and "1.1".

• Copy location: select either local or remote to choose on which side the copy is stored. Note that only local restores are possible with SRDF/Metro.

• Mount options are the same as with normal Service Plans.

• 2nd Generation Copies: select "Yes" to create the 2nd-gen copy at the same time as the 1st-gen copy.

• You will notice the new "Array Selection"; AppSync recognizes "Metro 1" and the associated array serial numbers. The array selected to the right of Metro 1 is where the copy is taken.

5. Click "Next".

Deeper Integration with Dell EMC Storage Platforms

Unisphere for PowerMax REST API platform (VMAX v3 and later)

• All workflows previously supported with the SMI-S provider for VMAX3, VMAX All Flash, and PowerMax now utilize the Unisphere for PowerMax (U4P) REST API.
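For context, a U4P REST call looks roughly like this; the /univmax/restapi base path follows the public Unisphere for PowerMax documentation, but verify the exact endpoints against your Unisphere release:

# Illustrative U4P REST call: ask Unisphere for its API version.
# Hostname and credentials are placeholders; -k skips certificate checks.
curl -k -u "$U4P_USER:$U4P_PASS" \
  "https://unisphere.example.com:8443/univmax/restapi/version"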


Support Storage Class Memory on PowerMax

• Support for Service Level Biasing on PowerMax with SCM (Storage Class Memory)

– With the Foxtail PowerMax release, arrays support both NVMe SCM drives and NAND flash drives.

– AppSync 4.0 now allows users to set a "Diamond" SLO on PowerMax arrays after adding SCM drives.

Application Integration Improvements

• Ability to restore to any Oracle RAC node

– Previously, AppSync required restoring to the same node the copy was originally created on.

– This enhancement supports environments where the source node may have gone offline, such as during a disaster or outage.

• During the restore process, the user can select the specific RAC node to restore to.

You can download AppSync 4.0 by clicking the screenshot below

And as always, the documentation can be found here

https://www.dell.com/support/home/us/en/19/product-support/product/appsync/docs#sort=relevancy&f:versionFacet=[49124]&f:lang=[en]

The Dell ESXi 7.0 ISO is now available


vSphere 7 is finally out!

You can download the Dell Server ISO image by clicking the screenshot below


And the added components and drivers can be downloaded by clicking the screenshot below.

 

The Dell EMC OpenManage™ Integration for VMware vCenter, v5.1 is now available


Following the Day 0 support for vSphere 7, which you can read about here:

https://itzikr.wordpress.com/2020/04/02/the-dell-esxi-7-0-iso-is-now-available/

we have also just released v5.1 of the OpenManage Integration for VMware vCenter.

The OpenManage Integration for VMware vCenter is a virtual appliance that streamlines tools and tasks associated with management and deployment of Dell servers in the virtual environment.

Fixes & Enhancements

Fixes:
1. Unable to delete the directory used to create a repository profile.
2. The "PCIeSSDsecureErase" attribute is not displayed in Host inventory.
3. MX7000 chassis inventory fails after promoting the backup lead as lead.
4. Chassis inventory job fails if the role is changed for the MX7000 chassis.
5. When an option is changed in Event Posting Level, a 2000000 error is displayed.
6. Discrepancy in OMIVV appliance guest OS details shown at ESXi.
7. OMIVV connection test causes iDRAC to block the OMIVV IP.
8. Time displayed is not on par with iDRAC for Power Monitoring and Firmware inventory details.
9. Some attributes do not change in RAID attributes during System Profile deployment.
10. Considerable delay is observed in various pages in scale environments (more than 500 managed nodes).
11. Warranty query and reporting issues.

For the complete list of fixes, refer to the Release Notes.

Enhancements:
1. Support for vSphere 7.0
2. Support for VMware vSphere Lifecycle Manager with OMIVV as Hardware Support Manager
3. Support for R7525 and XR2 PowerEdge servers
4. Parallel MX7000 chassis firmware updates


VPLEX 6.2, What’s New


The VPLEX family removes physical barriers within, across, and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. With a unique scale-up/scale-out architecture, the advanced data caching and distributed cache coherency of VPLEX provide workload resiliency, automatic sharing, balancing and failover of storage domains, and enable both local and remote data access with predictable service levels.

Early in 2020, we released the 6.2 version which contains the following enhancements:

Large volume support
Starting with VPLEX GeoSynchrony 6.2, the supported capacity limit for storage volumes and virtual volumes has increased from 32 TB to 64 TB.

Digital licensing (e-License)
Starting with VPLEX GeoSynchrony 6.2, Dell EMC introduces new license activation models:
• VPLEX_AF_LOCAL: supports Local Volumes for Dell EMC All-Flash arrays.
• VPLEX_AF_METRO: supports Distributed Volumes for Dell EMC All-Flash arrays.
• VPLEX_LOCAL_SC_FRAME: supports Local Volumes for Dell SC arrays.
• VPLEX_METRO_SC_FRAME: supports Distributed Volumes for Dell SC arrays.
• VPLEX_1TO1_SC_FRAME: supports Local and Distributed Volumes for a given Dell SC array.
• VPLEX_ELA_LOCAL_CAPACITY: known as Enterprise License Activation; supports Local Volumes for all Dell EMC arrays.
• VPLEX_ELA_METRO_CAPACITY: known as Enterprise License Activation; supports Distributed Volumes for all Dell EMC arrays.
• Evaluation license support for all supported license models.

SRSv3
Starting with VPLEX GeoSynchrony 6.2, Dell EMC introduces the SRSv3 capability. SRSv3 includes a Licensing Usage Data Transfer feature, where Secure Remote Services version 3 (SRSv3) REST APIs support the transfer of licensing usage data from Dell EMC products back to Dell EMC.

Backend path management
Starting with VPLEX GeoSynchrony 6.2, VPLEX moves back-end path management from an I/O-timeout algorithm to a latency-based one. VPLEX monitors the latency of back-end paths and penalizes paths whose latency exceeds a 1-second threshold. A penalized path is used less than other healthy paths. If a path accumulates enough penalties, it is marked degraded and VPLEX automatically stops using it for host-based I/O. Once an I-T path is removed from use, VPLEX thoroughly checks the health of the path before automatically returning it to service. These changes improve VPLEX handling of back-end fabric network congestion and back-end storage array failures.


Call Home Enhancements
Starting with VPLEX GeoSynchrony 6.2, additional firmware events have been added to give customers earlier warning about events occurring in their VPLEX system and environment, and new call-home events are sent back to Dell EMC to flag critical issues.


Port Stats Monitor
Starting with VPLEX GeoSynchrony 6.2, VPLEX will monitor and log FC port statistics for potential fabric issues. This monitoring can be configured to send an email upon detection of an issue. For more details, see KB article 531987.


Hyper-V Support for VPLEX Witness
Starting with VPLEX GeoSynchrony 6.2, VPLEX Witness can be deployed on Windows Hyper-V.
The Virtual Hard Disk (VHD) file is part of the GA package as VPlex-6.2.0.00.00.xxcluster-witness-server.vhd.

HTML5 based new GUI
Starting with VPLEX GeoSynchrony 6.2, a new HTML5-based GUI is introduced. It follows Dell Clarity standards and is compatible with the latest browsers. The new HTML5-based GUI is built on the new REST API v2. For more information, see the GUI Help Pages.
Note: Starting with VPLEX GeoSynchrony 6.2, the Flash GUI is deprecated.
Note: VPLEX GeoSynchrony 6.2 is the last release to support REST API v1 and the Flash GUI.

Below you can see a demo of the new HTML5 based UI

You can download the release notes from here https://support.emc.com/docu96976_VPLEX_GeoSynchrony_6.2_Release_Notes_.pdf?language=en_US&source=Coveo

And the administration guide from here https://support.emc.com/docu96972_VPLEX_GeoSynchrony_6.2_Administration_Guide.pdf?language=en_US&source=Coveo

The CSI Plugin 1.1 for Unity is now available


Back in November 2019, we released the first version of the Kubernetes CSI plugin for the Unity array; you can read all about it here: https://itzikr.wordpress.com/2019/11/22/the-csi-plugin-1-0-for-unity-is-now-available/

Now, we have just released the 1.1 version, which includes the following updates.

The supported plugin capabilities for this version are:

  • Persistent volume (PV) capabilities: Create, List, Delete, Mount, Unmount
  • Supports mounting a volume as a file system
  • Supports snapshot creation
  • Supports creation of a volume from a snapshot
  • Supports static volumes and dynamic volumes
  • Supports the Bare Metal machine type
  • Supports the Virtual Machine type
  • Supports the SINGLE_NODE_WRITER access mode
  • Supports CentOS 7.6 as a host operating system
  • Supports Red Hat Enterprise Linux 7.6 as a host operating system
  • Supports Kubernetes version 1.14
  • Supports Unity OE 5.0
  • Supports the FC protocol
  • Supports the iSCSI protocol

Note: Volume Snapshots is an Alpha feature in Kubernetes. It is recommended for use only in short-lived testing clusters, as features in the Alpha stage have an increased risk of bugs and a lack of long-term support. See Kubernetes documentation at https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages for more information about feature stages.
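For reference, an alpha-era snapshot request looked roughly like the sketch below. The API group and version match the upstream Kubernetes 1.14 documentation, while the snapshot class and PVC names are placeholders:

# Illustrative VolumeSnapshot request against the alpha snapshot API.
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: demo-snap
spec:
  snapshotClassName: unity-snapclass   # placeholder; created at install time
  source:
    kind: PersistentVolumeClaim
    name: demo-pvc                     # placeholder PVC backed by csi-unity
EOF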

Installation

Full details are in the Installation Guide; here is a summary of the prerequisites for the driver (a quick pre-flight sketch follows the list):
• Upstream Kubernetes 1.14.x, with specific feature gates enabled
• Docker daemon running and configured with MountFlags=shared on all k8s masters/nodes
• Helm and Tiller installed on k8s masters (https://helm.sh/docs/using_helm/#quickstart)
• If the FC protocol is to be used, make sure the FC WWNs (initiators) from the Kubernetes nodes are not part of any existing Hosts on the array(s)
• For FC, make sure all the nodes in the Kubernetes cluster have network connectivity zoned with the Unity array
• If the iSCSI protocol is to be used, make sure the iSCSI IQNs (initiators) from the Kubernetes nodes are not part of any existing Hosts on the array(s)
• Make sure iscsi-initiator-utils is installed and iscsiadm works on all nodes in order to use the iSCSI protocol
• Make sure the device-mapper-multipath package is installed on all nodes and /etc/multipath.conf is present
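A quick pre-flight check along these lines can save time. The commands are standard Linux and Kubernetes tooling, assuming a RHEL/CentOS node per the support list above, so adjust to your distro:

# Illustrative pre-flight checks; run on the masters/nodes as relevant.
kubectl version --short                        # cluster should be 1.14.x
systemctl show docker --property=MountFlags    # expect MountFlags=shared
rpm -q iscsi-initiator-utils device-mapper-multipath
iscsiadm -m iface                              # iscsiadm works (iSCSI only)
test -f /etc/multipath.conf && echo "multipath.conf present"
cat /etc/iscsi/initiatorname.iscsi             # IQN to compare against array Hosts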

Full details are in the Installation Guide; a condensed shell sketch follows the steps below.
1. Clone the repository from github.com/dell/csi-unity
2. Create a myvalues.yaml file from the values.yaml file and edit a few parameters for the installation. These include:
   • The Unity URL
   • The username/password used to access Unisphere
   • The StoragePool ID
   • The Docker image for the driver
3. Run the "install.unity" shell script
4. Installation works on Helm 2 as well as Helm 3
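Condensed into a shell session, the flow looks roughly like this; the exact path to values.yaml inside the repository is an assumption, so check the repo README:

# Illustrative install flow for the CSI driver.
git clone https://github.com/dell/csi-unity.git
cd csi-unity
cp helm/csi-unity/values.yaml myvalues.yaml   # path may differ per release
vi myvalues.yaml   # set the Unity URL, Unisphere credentials, pool ID, image
sh install.unity   # script name from the steps above; works on Helm 2 and 3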

Upgrade Steps

1. Clone/update the repository from github.com/dell/csi-unity
2. The myvalues.yaml file (created from values.yaml) can be the same as the one used for csi-unity v1.0
3. Uninstall v1.0 of the driver using the "uninstall.unity" shell script
4. Optionally, upgrade Helm from v2 to v3, but only after uninstalling the v1.0 driver
5. Install v1.1 using the "install.unity" shell script (a condensed sketch follows)
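Condensed, the upgrade flow looks roughly like this (script names are taken from the steps above):

# Illustrative v1.0 -> v1.1 upgrade flow.
cd csi-unity && git pull    # refresh the repository
sh uninstall.unity          # remove the v1.0 driver first
# optionally migrate Helm v2 -> v3 here, only after the uninstall
sh install.unity            # install v1.1, reusing the v1.0 myvalues.yaml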

A correct installation should display output similar to that below: 4 containers running in the controller pod, 2 containers running in each node pod, and the unity and unity-iscsi storage classes created:

NAME                 READY   STATUS    RESTARTS   AGE
unity-controller-0   4/4     Running   0          20s
unity-node-r5kdt     2/2     Running   0          20s
unity-node-tq5tj     2/2     Running   0          20s

StorageClasses:
NAME              PROVISIONER             AGE
unity (default)   csi-unity.dellemc.com   21s
unity-iscsi       csi-unity.dellemc.com   21s

About Custom Storage Classes

A StorageClass provides a means for passing parameters to the node/controller. The Protocol parameter defines the transfer protocol used for volume provisioning; it can be "FC" or "iSCSI", and if it is not specified, it defaults to FC.
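As a sketch, a custom StorageClass selecting iSCSI might look like the following. The provisioner name matches the installation output above, while the parameter keys and values are assumptions to verify against the product documentation:

# Illustrative custom StorageClass for the csi-unity driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-iscsi-custom
provisioner: csi-unity.dellemc.com
parameters:
  protocol: "iSCSI"     # "FC" or "iSCSI"; defaults to FC when omitted
  storagepool: "pool_1" # placeholder pool ID; key name is an assumption
reclaimPolicy: Delete
EOF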

Troubleshooting
Here are some installation failures that might be encountered and how to mitigate them:
• Warning about feature gates: double-check that you have applied all the feature gates to the indicated processes, and restart the kubelet when remediated.
• "kubectl describe pods unity-controller-0 -n unity" indicates the driver image could not be loaded: you may need to put an insecure-registries entry in /etc/docker/daemon.json (see the example after this list) or log in to the Docker registry.
• "kubectl logs unity-controller-0 -n unity" shows the driver cannot authenticate: check your secret's username and password.
• "kubectl logs unity-controller-0 -n unity" shows the driver failed to connect to the Unity because it could not verify the certificates: check the unity-certs secret and ensure it is not empty and contains valid certificates. Set unityInsecure: "true" for an insecure connection.
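For the registry case, the daemon.json entry would look something like this; the registry host:port is a placeholder for wherever the driver image is hosted:

# Illustrative insecure-registries entry. Note: tee overwrites the file,
# so merge by hand if daemon.json already has other settings.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.com:5000"]
}
EOF
sudo systemctl restart docker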

A pod stuck in the ContainerCreating state for over a minute, with a pod description that says "Unable to find device after multiple discovery attempts: rpc error: code = NotFound desc = Check for disk path /dev/disk/by-id/wwn-0xxxxxxxxxxxxxxxxxxxxxx not found readlink /dev/disk/by-id/wwn-0xxxxxxxxxxxxxxxxxxxxxx: no such file or directory": check that zoning is done correctly for FC, or that at least one iSCSI target is up on the array. If the problem persists, execute /usr/bin/rescan-scsi-bus.sh -a -r; the script cleans up any stale devices present on the node. Install the sg3_utils package if the script is not present on the node.

Below you can see a demo of the plugin:

Documentation and Downloads

CSI Driver for Dell EMC Unity v1.1 downloads and documentation are available on:

Github:  https://github.com/dell/csi-unity

Heads Up on ESXi 6.7 Patch 02


VMware has just released a new build for customers who are on the 6.7 release (7.0 is already out!).

If you take a look at the release notes (https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202004002.html), you will see it contains many fixes, some of which are very important!

 

PR 2504887: Setting space reclamation priority on a VMFS datastore might not work for all ESXi hosts using the datastore

You can change the default space reclamation priority of a VMFS datastore by running the ESXCLI command esxcli storage vmfs reclaim config set with a value for the --reclaim-priority parameter. For example:

esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-priority none

This changes the priority of space reclamation (unmapping unused blocks from the datastore to the LUN backing that datastore) from the default low rate to none. However, the change might take effect only on the ESXi host on which you run the command and not on other hosts that use the same datastore.

This issue is resolved in this release.
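If you want to confirm which hosts picked up the change, the matching get command shows the active setting per host; run it on each host that mounts the datastore (standard esxcli namespace):

esxcli storage vmfs reclaim config get --volume-label datastore_name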

  • PR 2512739: A rare race condition in a VMFS workflow might cause unresponsiveness of ESXi hosts

    A rare race condition between the volume close and unmap paths in the VMFS workflow might cause a deadlock, eventually leading to unresponsiveness of ESXi hosts.

    This issue is resolved in this release. (I have seen this issue happen many times, and I highly encourage you to install this fix ASAP.)

  • PR 2387296: VMFS6 datastores fail to open with space error for journal blocks and virtual machines cannot power on

    When a large number of ESXi hosts access a VMFS6 datastore, in case of a storage or power outage, all journal blocks in the cluster might experience a memory leak. This results in failures to open or mount VMFS volumes.

    This issue is resolved in this release.

    If VMFS volumes are frequently opened and closed, this might result in a spew of VMkernel logs such as "does not support unmap" when a volume is opened and "Exiting async journal replay manager world" when a volume is closed.

PR 2462098: XCOPY requests to Dell/EMC VMAX storage arrays might cause VMFS datastore corruption

Following XCOPY requests to Dell/EMC VMAX storage arrays for migration or cloning of virtual machines, the destination VMFS datastore might become corrupt and go offline.

This issue is resolved in this release. For more information, see VMware knowledge base article 74595.


Dell EMC OpenManage Management Pack for vRealize Operations Manager, v2.1


We have just released a new management pack for VMware vROps that allows you to monitor the health of Dell servers directly from the vROps console.

The first thing you need to do is download the adapter itself, which you can do by clicking the screenshot below.

 

Once the adapter has been downloaded and its content extracted, navigate to Administration -> Repository and install it by clicking the Add/Upgrade button.

Once the installation is done, you will see a new management pack installed

The next part is to configure an account to be used, so navigate to Other Accounts -> Add Accounts -> Dell EMC OpenManage Adapter.

You then configure the IP address of the OMIVV appliance (which must be installed as a prerequisite), its credentials, and the vROps credentials.

If you get an error (like the one below), you need to configure extended monitoring on the OMIVV appliance.

This is easy: navigate to the appliance's IP address, and on the appliance management tab, enable extended monitoring.

That’s it!, you now want to wait a bit and let vROPS do it’s magic and gather all the data, there are many important dashboards, here’s a screenshot of the main one


What Is PowerStore – Part 1, Overview


Dell EMC PowerStore is a next-generation midrange data storage solution targeted at customers who are looking for value, flexibility, and simplicity.

  • PowerStore provides our customers with data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads. We accomplish this through:
  • A purpose-built all-flash Active/Active storage appliance that supports the Non-Volatile Memory Express (NVMe) communications protocol, a protocol developed specifically for SSDs.
    • Supports NVMe Solid State Drive (SSD) and NVMe Storage Class Memory (SCM) media types for storage
    • Supports NVMe NVRAM media type for cache
    • Supports SAS SSDs by expansion
  • Consolidates storage and virtual server environments. The PowerStore platform design includes two major configurations:
    • PowerStore T
      • Can be configured for Block only or Unified (Block and File) storage
        • Block uses FC and iSCSI protocols for access
        • File uses NFS and SMB protocols (SDNAS) for access
    • PowerStore X
      • Block only storage with hypervisor installed on the system
      • Capability to run customer applications on native virtual machines (VMs) with a separate VMware license. (Built-in ESXi hypervisor deployment)
    • Both configurations support VMware Virtual Volumes (vVols)
  • Flexible scale-up/down and scale-out capabilities
    • Scale up: Base Enclosure and up to three Expansion Enclosures
    • Scale out: Two (up to four) Appliances (PowerStore T only in version 1)
  • Integrated data efficiencies and protection


PowerStore provides our customers with data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads. We accomplish this through:

  • Data-centric design that optimizes system performance, scalability, and storage efficiency to support any workload without compromise.
  • Intelligent automation through programmable infrastructure that simplifies management and optimizes system resources, while enabling proactive health analytics to easily monitor, analyze, and troubleshoot the environment.
  • Adaptable architecture that enables speed and application mobility, offers flexible deployment models, and provides choice, predictability, and investment protection through flexible payment solutions and data-in-place upgrades. 

Use Cases for PowerStore

PowerStore Platform Configurations


PowerStore consists of two major configurations, also called modes or personalities: PowerStore T and PowerStore X.

PowerStore T is storage centric and provides both Block and File services. The software stack starting with a CoreOS is deployed directly on bare metal hardware.

PowerStore X is designed to run applications and provide storage. PowerStore X systems are Block-only storage with a hypervisor (ESXi) installed on the bare metal and the software stack is deployed on the hypervisor. This design enables the deployment of customer VMs and custom applications.

The basic hardware on both configurations is called a Node. A node contains the processors and memory and is the storage processor or storage controller. Two nodes are housed in a Base Enclosure. The nodes are configured Active-Active (Each node has access to the same storage.) for high availability. You can build an Appliance that is based on a Base Enclosure. You can add Expansion Enclosures to each appliance for more storage capacity.

PowerStore Clustering

One, two, three, or four PowerStore T appliances can be connected in a cluster. The cluster can be made up of different PowerStore T models. For example, a PowerStore 1000 and a PowerStore 3000 can be combined in the same cluster.

In this initial release, PowerStore X supports only a one-appliance cluster.

Scalability

Each PowerStore configuration and model provides different performance. PowerStore can be scaled up and scaled out.

Scaling up is adding more storage. An Appliance can be a single Base Enclosure or a Base Enclosure with up to three Expansion Enclosures. Each Appliance can accommodate up to 100 drives. Depending on the model, each appliance has either 2 or 4 NVMe drives used for cache, leaving 98 or 96 drives maximum available for data.

Scaling out means adding more Base Enclosures, which increases processing power and storage. One to four appliances can be grouped to form a cluster. PowerStore T supports up to four appliances in a cluster. In this initial release, PowerStore X supports only one appliance.

PowerStore Models

There are ten different models available within the PowerStore platform: Five PowerStore T models and five PowerStore X models. The higher the model number, the more CPU cores and memory per appliance.

The PowerStore series midrange storage solution includes:

  • PowerStore T, configured for block only or block and file storage.
  • PowerStore X, configured for block storage with hypervisor.
  • Both PowerStore T and PowerStore X come in 1000, 3000, 5000, 7000, and 9000 models. The bigger the number, the faster the processors and the better the performance.
  • PowerStore appliances are made up of a base enclosure and up to three optional expansion enclosures.
    • For PowerStore T, up to four appliances may form a cluster.
    • For PowerStore X, only one appliance is allowed in a cluster.

One of the most significant and exciting attributes of PowerStore is that it enables customers to continuously modernize their infrastructure over time as requirements change, without limits and on their terms. This allows IT organizations to eliminate future cost uncertainties to plan predictably for the future. ​

With PowerStore, customers can deploy the appliance and consolidate data and applications to meet their current needs. The scale up and scale out architecture allows them to independently add capacity and compute/performance as their workload requirements change over time. ​

​But we don’t stop there. PowerStore goes one step further to provide data-in-place upgrades, enabling the infrastructure to be modernized without a forklift upgrade, without downtime, and without impacting applications. This Adaptable architecture effectively spells the END of data migration.​

new03

PowerStore appliances offer deep integration with VMware vSphere such as VAAI and VASA support, event notifications, snapshot management, storage containers for Virtual Volumes (vVols), and virtual machine discovery and monitoring in PowerStore Manager. By default when a PowerStore T or PowerStore X model is initialized, a storage container is automatically created on the appliance. Storage containers are used to present vVol storage from PowerStore to vSphere. vSphere then mounts the storage container as a vVol datastore and makes it available for VM storage. This vVol datastore is then able to be accessible by internal ESXi nodes or external ESXi hosts. If an administrator has a multi-appliance cluster, the total capacity of a storage container will span the aggregation of all appliances within the cluster.

Claim Details:
The only purpose-built array with a built-in VMware ESXi hypervisor (AD #: G20000055​) – Based on Dell analysis of publicly available information on current solutions from mainstream storage vendors, April 2020.

The advantage of PowerStore’s Anytime Upgrade program is significant. Customers have multiple options to upgrade and enhance their infrastructure.

  • They can upgrade to the next higher model within their current family (for example, upgrading with more powerful nodes to convert their PowerStore appliance from a 1000 to a 3000 model)
  • Or they can upgrade their existing appliance nodes from the current generation to the next generation of nodes.
  • Both the next-generation and higher model node upgrades are performed non-disruptively while preserving existing drives and expansion enclosures, without requiring new licensing or additional purchases.
  • Alternatively, a third option allows customers to scale out their existing environment with a second system equal to their current model. In short, the customer receives a discount towards their second appliance purchase (for example, they pay only for the media in the 2nd appliance).

The three big differentiators from other upgrade programs in the market are:

  • Flexible upgrade options, beyond just a next-gen controller swap.
  • The upgrades can be done anytime in contract as opposed to waiting three years or more.
  • No renewal is required when the upgrade is performed.

Finally, Dell Technologies On Demand features several pay-per-use consumption models that scale to align spending with usage – and optimize both financial and technological outcomes.

  • Pay As You Grow: This model was designed for organizations that have stable workload environments and predictable growth. It enables organizations to match payments for committed infrastructure as it is deployed over time. So, for example, customers can opt for a deferred payment that starts on the date the equipment is deployed or provide a step payment structure that is aligned with their forecast of future usage or their deployment schedule. The key here is the payment flexibility provided around a committed infrastructure.
  • The next two offerings are Flex On Demand and Data Center Utility. Both of these provide metered (or measured) usage and are applicable across our ISG portfolio.
  • Flex On Demand: First, with Flex On Demand, the customer selects the desired total deployed capacity – consisting of Committed Capacity plus Buffer Capacity – to create the right balance of flexibility and cost. Then, they can scale elastically up and down within the Buffer Capacity, as needed. The key here is that capacity is paid for only when it is consumed.
  • Data Center Utility: Delivers the highest degree of flexibility to address business requirements within and across the IT ecosystem. Customers can scale up or down as required. Capacity is delivered as needed. Procurement is streamlined and automated. Billing is simplified. Reporting is standardized. And a delivery manager is assigned and dedicated to the customer’s success. And managed services are most often delivered as part of the total solution.

All of these OPEX-structured flexible consumption solutions help organizations more predictably budget for IT spending, pay for technology as it is used, and achieve optimal total cost of ownership over the full technology lifecycle.

** Payment solutions provided by Dell Financial Services L.L.C. (DFS) or its affiliate or designee, subject to availability and may vary in certain countries. Where available, offers may be changed without notice.


 

You can download the spec sheet from here: https://www.dellemc.com/en-au/collaterals/unauth/data-sheets/products/storage/h18143-dell-emc-powerstore-family-spec-sheet.pdf

And the data sheet from here: https://www.dellemc.com/en-au/collaterals/unauth/data-sheets/products/storage/h18234-dell-emc-powerstore-data-sheet.pdf

You can launch a virtual hands-on lab from here: https://democenter.dell.com/Event/PowerStoreOnline

And an interactive demo from here:


You can also download a technical primer by clicking the screenshot below.


In the 2nd post (https://xtremio.me/2020/05/05/whats-is-powerstore-part-2-hardware/) we are going to cover some of the hardware aspects of the PowerStore family.



What Is PowerStore – Part 2, Hardware


In the first post, we gave a high-level overview of the product (https://xtremio.me/2020/05/05/what-is-powerstore-part-1-overview/).

Now, let's dive a little deeper.

There are ten different PowerStore models.

PowerStore is designed from the ground up to utilize the latest in storage and interface technologies in order to maximize application performance and eliminate bottlenecks. Each PowerStore appliance has two nodes and uses NVMe to take full advantage of the tremendous speed and low latency of solid-state devices, with greater device bandwidth and queue depth. PowerStore has been architected to maximize performance with NVMe flash storage and supports the even greater demands of Intel Optane Storage Class Memory (SCM) which provides performance approaching the speed of DRAM.
This performance-centric design enables PowerStore to deliver 6x more IOPS and 3x lower latency for real-world workloads compared to previous generations of Dell midrange storage.

PowerStore is a flexible design built to meet the requirements of different storage applications with support for high availability. The PowerStore platform design includes two major configurations: PowerStore T and PowerStore X. The table displays the available models and specifications for each platform.

There are ten different models within the PowerStore product line: Five PowerStore T models and five PowerStore X models. The higher the model number, the more CPU cores and memory per system. PowerStore systems consist of nodes, one or more base enclosures, one or more expansion enclosures, and appliances.

For high availability, PowerStore systems have:

  • Two redundant power supplies
  • Multiple redundant network ports with system bond
  • Two redundant nodes
  • RAID-protected disk drives

PowerStore T systems support clusters of up to four appliances for:

  • Constant uptime with intracluster migrations
  • Scale up
  • Simplified management
  • Automatic data placement

PowerStore Back Enclosure – Back View

The back view shows the I/O modules and ports that provide connectivity for system management, to front-end hosts, and to back-end Expansion Enclosures (shelves).


The management port (in red) is used only with PowerStore T appliances. Two ports on the mezz card are used for management traffic with PowerStore X appliances.

Drive Slots

The Base Enclosure supports only NVMe devices with twenty-five (25) slots that are labeled Slots 0 to 24.

  • SAS SSDs can only be added to Expansion Enclosures.

In the Base Enclosure, the first 21 slots (slots 0 through 20) can be populated with either NVMe SSD or NVMe SCM drives for data storage.

The same drive type must be used across those 21 slots: you cannot mix NVMe SSD and NVMe SCM drives in the same Base Enclosure. A minimum of 6 SSDs must be used.

The last two or four slots (dependent on the model) must be populated with NVMe NVRAM devices and are used for write cache and vaulting.

  • On the PowerStore 1000 and 3000, the last two slots (23 and 24) are reserved for two NVMe NVRAM devices. Since slots 21 and 22 are open, they can be used for data drives on the PowerStore 1000 and 3000.

Drive Offerings

PowerStore supports four types of drives: NVMe SSD (Flash), NVMe Storage Class Memory (SCM) SSD,
NVMe NVRAM, and SAS SSD (Flash). The drive types must be installed in specific locations and enclosures.


The NVMe flash and NVMe SCM drives on the left are supported in the Base Enclosure, slots 0 to 20. The NVMe NVRAM type drives are supported in the Base Enclosure, slots 21 to 24 for write cache. SAS flash drives shown on the right are only supported in the PowerStore Expansion Enclosures.

Ethernet Switches

Connect PowerStore to a pair of Ethernet switches, not a single switch, to ensure high availability. This requirement applies to switches used for iSCSI, file, intercluster management, and intercluster data. Dell EMC does not process PowerStore orders that include only a single switch.

Each node must have at least one connection to each of the Ethernet switches. Multiple connections provide redundancy at the network adapter and switch levels.

It is recommended that you deploy the switches in a Multi-Chassis Link Aggregation Group (MC-LAG); the Dell version of this is the Virtual Link Trunking interconnect (VLTi) topology. Alternative connectivity methods, including reliable L2 uplinks and dynamic LAG, should be used only when a solution like VLTi is not an option.
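As a rough illustration only: on a pair of Dell EMC PowerSwitch devices running OS10EE, the VLTi peering is declared along these lines. The domain ID, port range, and peer address below are placeholders, and the exact syntax should be taken from the OS10EE and PowerStore networking guides rather than from this sketch.

    ! Sketch, not a validated template - all values are placeholders.
    configure terminal
    vlt-domain 1
     backup destination 192.0.2.2                 ! management IP of the VLT peer
     discovery-interface ethernet1/1/29-1/1/30    ! the VLTi links between the two switches
     exit

The same stanza, with mirrored values, is applied on the peer switch.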

PowerStore supports Dell EMC Networking Top-of-Rack (ToR) switches running OS10 Enterprise Edition (OS10EE). Third-party switches with the requisite features are also supported; see the Support Matrix for the list of supported switches.

Dell EMC recommends its supported Dell EMC PowerSwitch Ethernet switches; the specific recommended models are listed in the Support Matrix.


(For information about OS10EE, go to Dell Support and search for OS10EE.)

PowerStore T and PowerStore X Switches

PowerStore T and PowerStore X have different switch configurations.

Considerations for OOB management configuration:

  • At least one OOB management switch is recommended for PowerStore T configurations. PowerStore X does not support OOB management.
  • The OOB management network can be configured with or without a management VLAN.
  • Switch ports must support untagged native VLAN traffic for system discovery.

You can take a virtual tour by clicking the screenshot below.

[Screenshot: PowerStore virtual tour link]

You can also download the introduction-to-the-platform white paper, which contains much more in-depth information, by clicking the screenshot below.

[Screenshot: Introduction to the PowerStore Platform white paper link]

In the third post, we are going to cover PowerStore with AppsON:

https://xtremio.me/2020/05/05/dell-emc-powerstore-part-3-x-appson-overview/

 

The post What Is PowerStore – Part 2, Hardware appeared first on Itzikr's Blog.

Dell EMC PowerStore – Part 3, X AppsON Overview


In the first posts of the series,

https://xtremio.me/2020/05/05/what-is-powerstore-part-1-overview/ & https://xtremio.me/2020/05/05/whats-is-powerstore-part-2-hardware/, we gave a high-level overview of PowerStore.

PowerStore utilizes a container-based software architecture that provides unique capabilities for delivering and integrating advanced system services. The modularity of containers enables feature portability, standardization, and rapid time-to-market for new capabilities, and allows maximum deployment flexibility.

The flexibility of the architecture allows customers to use PowerStore in one of two deployment models: as a traditional external storage array that attaches to servers to provide storage (known as the PowerStore T model), or as a hypervisor-enabled appliance where PowerStoreOS runs as a virtual machine (known as the PowerStore X model). In the latter, the ESXi hypervisor, on which many companies have standardized their IT infrastructure, is loaded onto each of the two active-active nodes, with PowerStoreOS running as a virtual machine on each node.

This allows customers to run applications directly on the same appliance as the storage, without the need for external servers, a feature known as AppsON. No matter which PowerStore model a customer chooses, the exact same capabilities, data services, and fully redundant, all-NVMe, container-based architecture run on the same 2U hardware, and the two models are fully interoperable (for example, you can replicate between a PowerStore T model and an X model and vice versa). Not only does the onboard hypervisor provide additional isolation and abstraction of the operating system, it also enables future deployment models where the storage software can be deployed independently of the purpose-built hardware.

The ideal use cases for this are storage-intensive workloads (as opposed to compute-intensive workloads), where the workload demands are measured in terms of a large number of IOPS and capacity, such as a database. Another is infrastructure consolidation, where IT infrastructure is required in locations that don't have data centers and are very space constrained: PowerStore with AppsON provides the ability to run applications in an active-active HA manner, with enterprise data services and over 1 PB of effective capacity, all in a single 2U appliance form factor. In addition, PowerStore X not only runs virtual machines locally through AppsON, but can simultaneously act as a SAN, providing storage to external servers via FC/iSCSI as well! Talk about true flexibility!

Before we go further, let's get familiar with the basics:

  • For PowerStore X, ESXi is installed directly on the purpose-built hardware that we just reviewed. As a quick refresher, it is a 2U, two-node, all-NVMe Base Enclosure solution with a dual-socket Intel Xeon architecture.
  • PowerStoreOS runs inside a virtual machine on that ESXi host; this virtual machine is referred to as the Controller VM.
  • PowerStore X is capable of supporting traditional storage resources such as SAN and vVols while also embedding applications directly onto the array in the form of VMware virtual machines. Regardless of whether it is the X or T model, PowerStore is designed with an active-active architecture: both nodes have access to all of the drives and to all storage resources. That said, PowerStore presents resources to front-end hosts in an ALUA (active-optimized / active-non-optimized) manner.
  • PowerStoreOS is based on a Linux operating system. PowerStoreOS runs the software stack for PowerStore, which includes the management functionality and endpoints, hosts the web-based management interface (no external application is needed for management), handles all of the storage functionality, and provides the serviceability components, such as staging and executing upgrades and remote support via embedded SupportAssist.
  • PowerStoreOS is implemented through multiple Docker containers. Docker is a well-known environment for running containerized solutions. Containerizing PowerStoreOS allows for easier serviceability, as new containers can be quickly staged and brought online, and if a container needs to be rebooted or modified, the entire stack does not need to come down. It also provides greater potential for integration across the Dell portfolio, as new features can be easily deployed into the Docker environment for PowerStore to leverage.
  • In the PowerStore T model, 100% of the system CPU and memory are used by PowerStoreOS. In the X model, 50% of CPU and memory are reserved for PowerStoreOS, ensuring there are always guaranteed resources for serving storage, while the remaining 50% is available as user space to run virtual machines.

The screenshot above helps visualize the capabilities of PowerStore X. On the left, you have a more traditional setup: a physical server running ESXi, and then either an FC or iSCSI connection to a separate storage array (in this example, a Unity system). You then have applications (VMs) with their compute on the server and their backend disks on the storage system. PowerStore X contains both the compute and storage components internally. The two native ESXi hosts (one per node) form an ESXi cluster for the compute layer. The Controller VM runs PowerStoreOS, which handles the storage across the backend disks for any embedded applications (VMs) or traditional storage served to an external host.

Now to showcase these capabilities. Just like with a traditional storage array, an external server can create either an FC or iSCSI connection to the PowerStore X. PowerStore X can then expose storage to the host in the form of a vVol datastore or as individual volumes (LUNs). In this scenario, you have an application running on an external server using PowerStore X storage, exactly the same as with any other storage system.

However, PowerStore X is also capable of running the customer's apps (VMs) on itself (hence the name "AppsON"). In this scenario, you deploy the entire application directly onto PowerStore X. The compute portion runs on the ESXi host, and the storage sits on the backend, handled by the PowerStoreOS running inside the Controller VM.

Finally, because PowerStore X uses ESXi, it automatically inherits the services that are offered through vSphere, such as vMotion. You can realize the full potential of PowerStore X by seamlessly migrating your existing applications entirely onto PowerStore X using a combined compute and storage vMotion, and then continue moving workloads in and out based on workload and business needs.
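As a minimal sketch of what that migration looks like with PowerCLI, assuming a made-up VM name and vVol datastore name (the host address matches the example environment shown later in this post):

    # Sketch: combined compute + storage vMotion onto a PowerStore X node.
    # "app-vm01" and "PowerStore-vVol-DS" are placeholders for your own names.
    $vm = Get-VM -Name "app-vm01"
    Move-VM -VM $vm -Destination (Get-VMHost "10.245.17.120") -Datastore (Get-Datastore "PowerStore-vVol-DS")

Because the target is just another ESXi host in vCenter, no PowerStore-specific tooling is involved in the move.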

PowerStore X is not limited to its internal nodes running ESXi; should you need more compute, it can also provision volumes to external hosts, over both the FC and iSCSI protocols.

One of the key benefits of PowerStore X (and T) is the ability to create more than one storage container, which allows for multi-tenancy in vVol-based environments. Above, you can see a screenshot showing the different storage containers.

When you open vCenter, this is what a typical configuration looks like: you have your VMware datacenter, the PowerStore cluster (Cluster-WX-H6121), your PowerStore ESXi nodes (10.245.17.120 / 121), each PowerStore ESXi host with its Controller VM (PSTX**), and then the actual customer VMs and the datastores they reside on. In the above, we see a single-appliance PowerStore X system, its two nodes exposed as vSphere hosts with the PowerStore Controller VM running on each node, and two user VMs running on these internal nodes. In addition, this PowerStore X system is also serving storage via iSCSI to two VMs running on external ESXi servers, just as a regular SAN array would.

PowerStore X Use Cases

There are many use cases PowerStore X can accommodate, and frankly, apart from the obvious ones, we can't wait for you, our customers and partners, to show us what YOU are using it for.

Let’s look at some deployment scenarios where PowerStore can be utilized to modernize infrastructure, starting at the Edge.

Enterprises in a variety of industries are proactively deploying a wide range of IoT use cases to enable digital transformation initiatives. They are often challenged with analyzing large volumes of real-time IoT data and information in a secure, cost-effective manner using centralized analytic solutions. IoT devices often create a deluge of structured and unstructured data, including video images and audio content at the device level, which must be evaluated at the source of the data in real-time. Companies can aggregate and filter device data to remove insignificant data points, or identify the most valuable data to transport to the cloud. Gateways can collect data from edge devices and use applications or algorithms to determine if more complex analyses are needed, or to help companies comply with regulatory requirements that dictate local storage.

Organizations with requirements for edge-based IoT data analytics seek infrastructure solutions that are simple to manage, scalable, secure, and that meet their network and data retention requirements. PowerStore offers unique capabilities for environments where infrastructure simplicity and density are desirable or critical, including edge computing, ROBO, mobile, and tactical deployments. Its small 2U footprint, ease of deployment, flexible architecture, ability to support multiple data types, centralized management, and advanced replication to core data centers make it an ideal solution for the Edge. Branch office and retail store locations where space and resources are at a premium can take advantage of the smaller footprint resulting from PowerStore's collapsed hardware stack, in which separate server and networking layers are eliminated. These same benefits also apply to mobile applications, including tactical, shipboard, and airborne deployments.

PowerStore can also be deployed in the core data center.

With AppsOn, PowerStore provides unparalleled flexibility and mobility for application deployment. PowerStore cluster management, combined with VMware tools including vMotion and storage vMotion, enable seamless application mobility between PowerStore and other VMware targets. Using a single storage instance, applications can be deployed on networked servers, hyperconverged infrastructure, or directly on the PowerStore appliance and migrated transparently between them. This unparalleled agility enables IT and application owners to quickly and seamlessly deploy and reassign workloads to the most effective environment based on current requirements and available resources.

AppsON further benefits IT organizations by providing additional flexibility while continuing to utilize existing infrastructure investments. It complements existing platforms, including HCI, by providing a landing zone for high-capacity, high-performance, storage-intensive workloads that require superior data efficiency and always-on data reduction.

Finally, PowerStore can still be utilized as a more traditional storage appliance, providing capacity to existing networked servers.

In addition to deploying infrastructure at the Edge and Core, many organizations are utilizing public cloud for hybrid cloud solutions. PowerStore customers can easily integrate their on-premises infrastructure into these environments while maintaining operational consistency.

For VMware customers, VMware Cloud on AWS delivers a seamless hybrid cloud by extending their on-premises vSphere environment to the AWS Cloud, enabling users to modernize, protect and scale vSphere-based applications with AWS resources. With PowerStore’s AppsON capability through vSphere, users can easily migrate applications and data between PowerStore and AWS based on requirements, without requiring additional management tools for simple and consistent operations.

In addition to application mobility, PowerStore provides Cloud Data Services through Faction, a managed service provider that offers scalable, resilient cloud-attached storage with flexible multi-cloud access. A variety of public cloud options that are all continuously innovating and developing new services and capabilities creates complexity in determining which cloud is right for an organization. Cloud Storage Services offers agile, multi-cloud support allowing you to leverage multiple clouds and easily and quickly switch clouds based on applications’ needs to maximize business outcomes.

Organizations can avoid vendor lock-in by keeping data independent of the cloud, so you do not have to worry about high egress charges, migration risk, or time required to move data. Extending the data center to the cloud using enterprise-class storage empowers users to innovate in the cloud and easily scale cloud environments to hundreds of thousands of IOPS to support high-performance workloads, while reducing risk and maintaining complete control of data.

In VCF environments, PowerStore can provide external capacity (supplemental storage) for VCF workload domains and provide complementary data services for data-intensive applications. This is a perfect example of how PowerStore can be deployed along with HCI to address a wide range of applications and data requirements.

There are two configurations supported today.

Supported Config #1

Management Domain + VxRail + PowerStore (Supplemental storage)

Supported Config #2

Management Domain + vSAN Ready Nodes + PowerStore (Principal & Supplemental)

PowerStore X is deeply integrated into VMware vCenter, as we know many of our customers have standardized their virtualized infrastructure on vSphere. This means that creating and managing virtual machines on PowerStore X is exactly the same as if it were an external ESXi server managed in vCenter. However, it was important for us to make PowerStore Manager VMware-aware, so that the PowerStore UI can display the VMware objects consuming PowerStore resources. Not only does it offer an extremely simple, turn-key vVol setup, it also provides a lot of information about the VMs using vVols, such as VM performance metrics, all integrated into the PowerStore management UI.

Below, you can see a high-level video explanation of AppsON,

and a demo of how it all looks.

The post Dell EMC PowerStore – Part 3, X AppsON Overview appeared first on Itzikr's Blog.

The Dell EMC OpenManage™ Integration for VMware vCenter, v5.1 is now available


Following the Day 0 support for vSphere 7, which you can read about here,

https://volumes.blog/2020/04/02/the-dell-esxi-7-0-iso-is-now-available/

we have also just released v5.1 of the OpenManage Integration for VMware vCenter (OMIVV), with support for vCenter 7.

The OpenManage Integration for VMware vCenter is a virtual appliance that streamlines tools and tasks associated with management and deployment of Dell servers in the virtual environment.

Fixes & Enhancements

Fixes:
1. Unable to delete the directory used to create a repository profile.
2. The "PCIeSSDsecureErase" attribute is not displayed in host inventory.
3. MX7000 chassis inventory fails after promoting a backup lead as lead.
4. The chassis inventory job fails if the role is changed for the MX7000 chassis.
5. When an option is changed in Event Posting Level, a 2000000 error is displayed.
6. Discrepancy in the OMIVV appliance guest OS details shown at ESXi.
7. The OMIVV connection test causes iDRAC to block the OMIVV IP.
8. The time displayed is not on par with iDRAC for power monitoring and firmware inventory details.
9. A few attributes do not change among the RAID attributes during System Profile deployment.
10. Considerable delay is observed on various pages in scale environments (more than 500 managed nodes).
11. Warranty query and reporting issues.

For the complete list of fixes, refer to the Release Notes.

Enhancements:
1. Support for vSphere 7.0
2. Support for VMware vSphere Lifecycle Manager, with OMIVV as Hardware Support Manager
3. Support for R7525 and XR2 PowerEdge servers
4. Parallel MX7000 chassis firmware updates

 

 

The post The Dell EMC OpenManage™ Integration for VMware vCenter, v5.1 is now available appeared first on Itzikr's Blog.

VPLEX 6.2, What’s New


The VPLEX family removes physical barriers within, across, and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. With a unique scale-up, scale-out architecture, the advanced data caching and distributed cache coherency of VPLEX provide workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels.

Early in 2020, we released the 6.2 version which contains the following enhancements:

Large volume support
Starting with VPLEX GeoSynchrony 6.2, the supported capacity limit for storage volumes and virtual volumes has increased from 32 TB to 64 TB.

Digital licensing (e-License)
Starting with VPLEX GeoSynchrony 6.2, Dell EMC introduces new license activation models:

  • VPLEX_AF_LOCAL: Supports local volumes for Dell EMC all-flash arrays.
  • VPLEX_AF_METRO: Supports distributed volumes for Dell EMC all-flash arrays.
  • VPLEX_LOCAL_SC_FRAME: Supports local volumes for Dell SC arrays.
  • VPLEX_METRO_SC_FRAME: Supports distributed volumes for Dell SC arrays.
  • VPLEX_1TO1_SC_FRAME: Supports local and distributed volumes for a given Dell SC array.
  • VPLEX_ELA_LOCAL_CAPACITY: Known as Enterprise License Activation; supports local volumes for all Dell EMC arrays.
  • VPLEX_ELA_METRO_CAPACITY: Known as Enterprise License Activation; supports distributed volumes for all Dell EMC arrays.

Evaluation licenses are supported for all of the supported license models.

SRSv3
Starting with VPLEX GeoSynchrony 6.2, Dell EMC introduces SRSv3 support. SRSv3 includes a licensing usage data transfer feature, in which Secure Remote Services version 3 (SRSv3) REST APIs support the transfer of licensing usage data from Dell EMC products to Dell EMC.

Backend path management
Starting with VPLEX GeoSynchrony 6.2, VPLEX changes back-end path management from an I/O-timeout algorithm to a latency-based one. VPLEX monitors the latency of back-end paths and penalizes paths whose latency exceeds a 1-second threshold. A penalized path is used less than other, healthy paths. If a path accumulates enough penalties, it is marked degraded and VPLEX automatically stops using it for host-based I/O. Once an I-T path is removed from use, VPLEX thoroughly checks the health of the path before automatically returning it to service. These changes improve VPLEX's handling of back-end fabric network congestion and back-end storage array failures.
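The release notes do not publish the algorithm's internals, but the behavior described above can be sketched roughly in Python; the penalty count before degradation and all names here are invented for illustration only:

    # Rough sketch of latency-penalized path selection - not VPLEX source code.
    LATENCY_THRESHOLD_S = 1.0   # per the text: I/Os slower than 1 second earn a penalty
    DEGRADE_LIMIT = 5           # invented: penalties before a path is marked degraded

    class BackendPath:
        def __init__(self, name):
            self.name = name
            self.penalties = 0
            self.degraded = False

        def record_io(self, latency_s):
            # Penalize any I/O completing above the threshold.
            if latency_s > LATENCY_THRESHOLD_S:
                self.penalties += 1
                # Enough penalties and the path is taken out of host I/O
                # until its health is re-verified.
                if self.penalties >= DEGRADE_LIMIT:
                    self.degraded = True

    def pick_path(paths):
        # Prefer healthy paths; among them, prefer the least-penalized one,
        # so penalized paths are used less than clean ones.
        healthy = [p for p in paths if not p.degraded]
        return min(healthy, key=lambda p: p.penalties) if healthy else None

    paths = [BackendPath("IT-A"), BackendPath("IT-B")]
    paths[0].record_io(1.5)          # a slow I/O on IT-A earns a penalty
    print(pick_path(paths).name)     # -> IT-B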


Call Home Enhancements
Starting with VPLEX GeoSynchrony 6.2, additional firmware events have been added to give customers more warning notifications about events occurring in their VPLEX system and environment, and some new call-home events are sent back to Dell EMC to alert on critical issues.


Port Stats Monitor
Starting with VPLEX GeoSynchrony 6.2, VPLEX will monitor and log FC port statistics for potential fabric issues. This monitoring can be configured to send an email upon detection of an issue. For more details, see KB article 531987.


Hyper-V Support for VPLEX Witness
Starting with VPLEX GeoSynchrony 6.2, VPLEX Witness can be deployed on Windows Hyper-V.
The Virtual Hard Disk (VHD) file is part of the GA package as VPlex-6.2.0.00.00.xxcluster-witness-server.vhd.

HTML5-based new GUI
Starting with VPLEX GeoSynchrony 6.2, a new HTML5-based GUI is introduced. It follows the Dell Clarity standards and is compatible with the latest browsers. The new HTML5-based GUI is built on the new REST API v2. For more information, see the GUI help pages.
Note: Starting with VPLEX GeoSynchrony 6.2, the Flash GUI is deprecated.
Note: VPLEX GeoSynchrony 6.2 is the last release to support REST API v1 and the Flash GUI.

Below you can see a demo of the new HTML5-based UI.

You can download the release notes from here https://support.emc.com/docu96976_VPLEX_GeoSynchrony_6.2_Release_Notes_.pdf?language=en_US&source=Coveo

And the administration guide from here https://support.emc.com/docu96972_VPLEX_GeoSynchrony_6.2_Administration_Guide.pdf?language=en_US&source=Coveo

The post VPLEX 6.2, What’s New appeared first on Itzikr's Blog.

The CSI Plugin 1.1 for Unity is now available


Back in November 2019, we released the first version of the Kubernetes CSI plugin for the Unity array, you can read all about it here: https://volumes.blog/2019/11/22/the-csi-plugin-1-0-for-unity-is-now-available/

Now, we have just released the 1.1 version which includes the following updates:

The plugin capabilities supported in this version are:

  • Persistent volume (PV) capabilities:
    • Create
    • List
    • Delete
    • Mount
    • Unmount
  • Supports mounting a volume as a file system.
  • Supports snapshot creation.
  • Supports creation of a volume from a snapshot.
  • Supports static volumes and dynamic volumes.
  • Supports the Bare Metal machine type.
  • Supports the Virtual Machine type.
  • Supports the SINGLE_NODE_WRITER access mode.
  • Supports CentOS 7.6 as the host operating system.
  • Supports Red Hat Enterprise Linux 7.6 as the host operating system.
  • Supports Kubernetes version 1.14.
  • Supports Unity OE 5.0.
  • Supports the FC protocol.
  • Supports the iSCSI protocol.

Note: Volume Snapshots is an Alpha feature in Kubernetes. It is recommended for use only in short-lived testing clusters, as features in the Alpha stage have an increased risk of bugs and a lack of long-term support. See Kubernetes documentation at https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages for more information about feature stages.

Installation

Full details are in the Installation Guide; here is a summary of the prerequisites for the driver (a quick node-side sanity check follows this list):

  • Upstream Kubernetes 1.14.x, with specific feature gates enabled.
  • The Docker daemon running and configured with MountFlags=shared on all Kubernetes masters/nodes.
  • Helm and Tiller installed on the Kubernetes masters (https://helm.sh/docs/using_helm/#quickstart).
  • If the FC protocol is to be used, make sure the FC WWNs (initiators) of the Kubernetes nodes are not part of any existing hosts on the array(s).
  • For FC, make sure all the nodes in the Kubernetes cluster are zoned with the Unity array.
  • If the iSCSI protocol is to be used, make sure the iSCSI IQNs (initiators) of the Kubernetes nodes are not part of any existing hosts on the array(s).
  • For iSCSI, make sure iscsi-initiator-utils is installed and iscsiadm works on all nodes.
  • Make sure the device-mapper-multipath package is installed on all nodes and /etc/multipath.conf is present.
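A quick way to sanity-check those node-side items uses standard RHEL/CentOS commands; adjust paths and package names to your environment:

    # Run on each Kubernetes node (RHEL/CentOS 7.6):
    rpm -q iscsi-initiator-utils device-mapper-multipath   # both packages must be installed
    iscsiadm --version                                     # confirms iscsiadm works
    test -f /etc/multipath.conf && echo "multipath.conf present"
    systemctl show docker --property=MountFlags            # expect MountFlags=shared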

Full details are in the Installation Guide.

1. Clone the repository from the URL: github.com/dell/csi-unity
2. Create a myvalues.yaml file from the values.yaml file and edit some parameters for installation (a sketch follows this list). These include:
  • The Unity URL
  • The username/password used to access Unisphere
  • The StoragePool ID
  • The Docker image for the driver
3. Run the "install.unity" shell script.
4. Installation works on Helm 2 as well as Helm 3.
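For orientation, a myvalues.yaml might look roughly like the following. The key names below are stand-ins, so always start from the values.yaml shipped in the csi-unity repository; only unityInsecure is taken from the troubleshooting notes later in this post.

    # Hypothetical myvalues.yaml sketch - key names are placeholders.
    unityURL: "https://unisphere.example.local"   # Unisphere endpoint
    username: "admin"                             # Unisphere credentials
    password: "********"
    storagePool: "pool_1"                         # StoragePool ID on the array
    image: "dellemc/csi-unity:v1.1"               # driver image (placeholder tag)
    unityInsecure: "true"                         # skip certificate validation (lab use only)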

Upgrade Steps

1. Clone/update the repository from the URL: github.com/dell/csi-unity
2. The myvalues.yaml file created from the values.yaml file can be the same as the one used for csi-unity v1.0.
3. Uninstall v1.0 of the driver using the "uninstall.unity" shell script.
4. Optionally, upgrade Helm from v2 to v3, but only after uninstalling the v1.0 driver.
5. Install v1.1 using the "install.unity" shell script.

A correct installation should display text similar to that below:

  • 4 containers running in the controller pod
  • 2 containers running in each node pod
  • Storage classes unity and unity-iscsi created

NAME                 READY   STATUS    RESTARTS   AGE
unity-controller-0   4/4     Running   0          20s
unity-node-r5kdt     2/2     Running   0          20s
unity-node-tq5tj     2/2     Running   0          20s

StorageClasses:
NAME              PROVISIONER             AGE
unity (default)   csi-unity.dellemc.com   21s
unity-iscsi       csi-unity.dellemc.com   21s

About Custom Storage Classes

A StorageClass provides a means of passing parameters to the node/controller. The protocol parameter defines the transfer protocol to be used for volume provisioning; it can be "FC" or "iSCSI", and if it is not specified, the default value is FC.
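Based on that description, a custom StorageClass selecting iSCSI would look something like this. The provisioner name matches the installation output above, while the "protocol" parameter key is assumed from the text; verify it against the driver documentation:

    # Sketch of a custom StorageClass for the CSI driver for Unity.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: unity-iscsi-custom
    provisioner: csi-unity.dellemc.com   # as shown in the installation output above
    parameters:
      protocol: "iSCSI"                  # "FC" is the default when omitted
    reclaimPolicy: Delete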

Troubleshooting

Here are some installation failures that might be encountered, and how to mitigate them:

  • Warning about feature gates: double-check that you have applied all the feature gates to the indicated processes, and restart the kubelet when remediated.
  • "kubectl describe pods unity-controller-0 -n unity" indicates the driver image could not be loaded: you may need to put an insecure-registries entry in /etc/docker/daemon.json, or log in to the Docker registry.
  • "kubectl logs unity-controller-0 -n unity" shows the driver cannot authenticate: check your secret's username and password.
  • "kubectl logs unity-controller-0 -n unity" shows the driver failed to connect to the Unity because it could not verify the certificates: check the unity-certs secret and ensure it is not empty and contains valid certificates. Set unityInsecure: "true" for an insecure connection.
  • A pod is stuck in the container-creating state for over a minute and the pod description says "Unable to find device after multiple discovery attempts: rpc error: code = NotFound desc = Check for disk path /dev/disk/by-id/wwn-0xxxxxxxxxxxxxxxxxxxxxx not found readlink /dev/disk/by-id/wwn-0xxxxxxxxxxxxxxxxxxxxxx: no such file or directory": check that zoning is done correctly for FC, or that at least one iSCSI target is up on the array. If the problem persists, execute /usr/bin/rescan-scsi-bus.sh -a -r; the script cleans up any stale devices present on the node. Install the sg3_utils package if the script is not present on the node.

Below you can see a demo of the plugin:

Documentation and Downloads

CSI Driver for Dell EMC Unity v1.1 downloads and documentation are available on:

Github:  https://github.com/dell/csi-unity

The post The CSI Plugin 1.1 for Unity is now available appeared first on Itzikr's Blog.
