
VMworld 2016, Introducing the EMC XtremIO VMware vRealize Orchestrator Adapter


Hi,

The 2nd big announcement at VMworld is the vRealize Orchestrator (vRO, formerly vCenter Orchestrator, vCO) adapter for XtremIO.

This has been in the making for quite some time. As someone who is very close to the development of this, I can tell you that we have been in contact with many customers about the exact requirements, since at the end of the day vRO is a framework and, like any other framework, it is only as good (or bad) as the workflows it supports.

The adapter that we will be releasing in 09/2016 will include the majority of the XtremIO functionality: volume creation, deletion, extending a volume, snapshot creation, etc. Shortly after the 1st release, we will be adding reports and replication support via RecoverPoint.
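To give a rough sense of the operations these workflows automate, here is a minimal sketch of creating a volume directly against the XtremIO XMS REST API, which is the kind of call an orchestration adapter wraps. The host name, credentials, endpoint path and payload keys below are illustrative assumptions based on the v2 REST API, not taken from the adapter itself:

```bash
# Hedged sketch: create a 100 GB volume via the XMS REST API (v2).
# Host, credentials and payload keys are placeholder assumptions.
curl -k -u admin:password \
     -H "Content-Type: application/json" \
     -X POST "https://xms.example.com/api/json/v2/types/volumes" \
     -d '{"vol-name": "vro-demo-vol", "vol-size": "100g"}'
```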

Below you can see a demo we recorded for VMworld. It’s a little bit rough but it can give you a good overview of what it can do (thanks to Michael Johnston & Michael Cooney for recording it, you rock!).



Not All Flash Array Snapshots Are Born (Or Die) Similar.


 

Hi,

CDM (Copy Data Management) is a huge thing right now for AFA vendors; each product tries to position itself as an ideal platform for it, but like anything else, the devil is in the details.

If you are new to this concept, I would encourage you to start here:

http://xtremioblog.emc.com/the-why-what-and-how-of-integrated-copy-data-management

and then view the following 3 videos Yoav Eilat did with a great partner of ours, WWT.

Done watching all the videos and still not convinced? How should you test your AFA vs. the others? It’s pretty simple, actually:

  1. Fill your AFA with data (preferably DBs)
  2. Take plenty of snapshots of the same DB
  3. Present these snapshots to your DB host and run IO against them (using SLOB, for example; see the sketch after this list)
  4. If you see a performance hit on your parent volume compared to the snapshot volumes – red flag!
  5. Delete some snapshots and see what happens
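If you don’t have a full SLOB environment handy, a rough sketch along these lines, using fio as a stand-in load generator, is enough to compare parent and snapshot volumes (the device paths and job parameters are illustrative assumptions, so adapt them to your environment):

```bash
# Hedged sketch: run the same random 8K mixed workload against the parent
# volume and one of its snapshots, then compare IOPS/latency between the runs.
# /dev/mapper/parent_vol and /dev/mapper/snap_vol are placeholder device names.
for dev in /dev/mapper/parent_vol /dev/mapper/snap_vol; do
  fio --name="test-$(basename "$dev")" \
      --filename="$dev" \
      --rw=randrw --rwmixread=70 --bs=8k \
      --ioengine=libaio --direct=1 --iodepth=32 \
      --runtime=300 --time_based --group_reporting
done
```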

You’ll thank me later.


AppSync 3.0.2 is out 


Hi,

We’ve just GA’d AppSync 3.0.2, which includes the following new features and enhancements:

 

1. Hotfix/Patch improvement – Starting in AppSync 3.0.2, hotfix/patch is delivered as an executable like the AppSync main installer. A single patch will install both the AppSync server and the AppSync agent hotfixes. You can push the hotfix to both UNIX and Windows agents from the AppSync server.

2. Agent deployment and discovery separation – Enables the deployment of the agent even if discovery fails, across all supported AppSync applications including Microsoft applications, file systems, and Oracle.

3. Event message accuracy and usefulness

The installation and deployment messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.

All AppSync error messages have been enhanced to include a solution.

4. Unsubscribe from subscription tab – You can now easily unsubscribe applications from the Subscription tab of a service plan.

5. Storage order preference enhancement – You can now limit the copy technology preference in a service plan by clearing the storage preference options you do not want.

6. FSCK improvements – You can now skip performing a file system check (fsck) during a mount operation on UNIX hosts.

7. Improved SRM support – For RecoverPoint 4.1 and later, AppSync can now manage VMware SRM-managed RecoverPoint consistency groups without manual intervention. A mount option is now available to automatically disable the SRM flag on a consistency group before enabling image access, and to return it to the previous state after the activity.

8. XtremIO improvements

a. Reduces the application freeze time during application consistent protection.

b. Support for XtremIO versions earlier than 4.0.0 has been discontinued.

9. Eliminating ItemPoint from AppSync – ItemPoint is no longer supported with AppSync. Users cannot perform item-level restore for Microsoft Exchange using ItemPoint with AppSync.

10. XtremIO and Unity performance improvement – Improved operational performance of Unity and XtremIO.

11. Serviceability enhancements – The Windows control panel now displays the size and version of AppSync.

 

The AppSync User and Administration Guide provides more information on the new features and enhancements. The AppSync Support Matrix on https://elabnavigator.emc.com/eln/modernHomeDataProtection is the authoritative source of information on supported software and platforms.

 


VSI 7.0 Is Here, VPLEX Support Is Included!


Hi,

We have just released the 7th version of the VSI (Virtual Storage Integrator) vCenter plugin! This release includes:

  1. VPLEX VIAS Provisioning (and viewing) support. Yes, it’s been a feature that was long overdue, and I’m proud to say it’s here now. First we need to register VPLEX, which is a pretty straightforward thing to do.

    VPLEX VIAS Provisioning Support – Use Cases

    VPLEX Integrated Array Service (VIAS):
    • Create virtual volumes from pre-defined storage pools on supported arrays. These storage pools are visible to VPLEX through array management providers (AMPs)
    • Simplifies provisioning compared with traditional provisioning from storage volumes

    VPLEX VIAS Provisioning Support – Prerequisites

    Software prerequisites:
    • VMware vSphere 6.0 or 5.5
    • VMware vSphere Web Client
    • VMware ESX/ESXi hosts
    • VMware vCenter Servers (single or linked mode)

    Storage prerequisites:
    • VPLEX 5.5/6.0
    • Only XtremIO/VMAX3/VNX Block are supported
    • A storage pool is created on the array
    • The AMP is registered with VPLEX and its connectivity status is OK

    (Screenshots: VPLEX VIAS Provisioning Support – provisioning a VMFS datastore at the host level, at the cluster level, and the provisioning wizard steps.)
    Let’s see a demo of how it all works; thanks to Todd Toles, who recorded it!

    Multiple vCenters Support – Background
    • VSI 6.7 or older was designed to work with a single vCenter; multiple vCenters were not fully supported, and more and more customers requested this capability
    • VSI 6.8 added multi-vCenter support for the XtremIO use cases
    • VSI 6.9.1 & 6.9.2 added it for the RecoverPoint & SRM use cases and the Unity/UnityVSA use cases

    Quality Improvement – Batch Provisioning Use Case
    When a customer provisions 30 datastores on a cluster that has 32 ESXi hosts, the vCenter becomes unresponsive.
    Root cause: a huge number of tasks (e.g. ~900+ “Rescan all HBA” and “Rescan VMFS” tasks) are created in a short time, which generates a rescan storm and heavily impacts the vCenter.

    What have we done to fix it?
    • Code changes to optimize the host rescan operations invoked by VSI
    • Manually configure the vCenter advanced setting “config.vpxd.filter.hostRescanFilter=false”, which disables the automatic VMFS rescan on each host (under the same cluster) when a new datastore is created for the cluster. Re-enable this filter when batch provisioning is done (see the sketch below).
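One hedged way to toggle that advanced setting from the command line is with the govc CLI (from the govmomi project). This is a sketch only, assuming govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD already point at your vCenter:

```bash
# Hedged sketch: disable the automatic host rescan filter before batch
# provisioning, then re-enable it once provisioning is complete.
govc option.set config.vpxd.filter.hostRescanFilter false

# ... provision the datastores from VSI ...

govc option.set config.vpxd.filter.hostRescanFilter true
```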


RecoverPoint 5.0 – The XtremIO Enhancements


Hi,

We have just released the 5th version of RecoverPoint, which offers an even deeper integration with XtremIO. If you are not familiar with RecoverPoint, it’s the replication solution for XtremIO and basically offers a scale-out approach to replication. RecoverPoint can be used in its physical form (which is the scope of this blog post) or as a software-only solution (RecoverPoint for Virtual Machines, RP4VMs).

Expanding an XtremIO volume is very simple from the CLI or UI. To expand a volume in XtremIO simply right-click the volume, modify it and set the new size.

Before RecoverPoint 5.0, increasing the size of a volume was a manual process. While XtremIO made it very easy to expand volumes, RecoverPoint was unable to handle the change in size. To increase the size of a volume, you would have to remove the volume from RecoverPoint, log in to the XtremIO and resize the volume. Once the volume was resized, you would add the volume to RecoverPoint again, and the new volume would trigger a volume sweep in RecoverPoint.

RecoverPoint 5.0 and above allows online expansion of replication set volumes without causing a full sweep and the resulting journal loss. When both the production and copy volumes are on an XtremIO array, the replication set can be expanded. The best practice is to perform the resizing on the copy first, then change production to match (see the sketch below).
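For illustration, the expansion order could look roughly like this from the XtremIO XMCLI. Treat the command and parameter names as assumptions and check the CLI guide for your XIOS version; the volume names are placeholders:

```bash
# Hedged sketch: expand the copy volume first, then match production to it.
# Command/parameter names are illustrative assumptions for the XMCLI.
modify-volume vol-id="db01_copy" vol-size="75g"   # run against the copy cluster first
modify-volume vol-id="db01_prod" vol-size="75g"   # then resize production to match
```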

This example has a consistency group containing two replication sets. The selected replication set has a production volume and a single remote copy. Both are on XtremIO arrays and in different clusters.

Here is an example of expanding the size of a replication set. The first step is to expand the (remote) copy on the XtremIO array. The volume can be identified by the UID, which is
common to both RecoverPoint and XtremIO. Next we increase the size to 75 GB in this example.

Notice that now the Copy and Production volumes of the Replication Set are different sizes. Since we expanded the Copy volume first, the snapshots created during the time the
volumes are mismatched will still be available for Failover and Recover Production. Upon a Rescan of the SAN volumes the system will issue a warning and log an event. A rescan can be initiated by the user or it will happen during Snapshot creation.

Next we will expand the Production volume of the Replication Set. Once this is accomplished the user can initiate a rescan or wait until the next snapshot cycle.

After the rescan is complete, the replication set contains production and copy volumes of the same size. Displayed is the journal history; notice the snapshots and bookmarks are intact. Also, there is a system-generated bookmark after the resizing is complete.

RecoverPoint protection is policy-driven. A protection policy, based on the particular business needs of your company, is uniquely specified for each consistency group, each copy (and copy journal) and each link. Each policy comprises settings that collectively
govern the way in which replication is carried out. Replication behavior changes dynamically during system operation in light of the policy, the level of system activity, and the availability of network resources.
Some advanced protection policies can only be configured through the CLI.

Beginning with RecoverPoint 5.0, there is a new Snapshot Consolidation Policy for copies on the XtremIO array. The goal of this new policy is to make the consolidation of snapshots more user configurable.
The XtremIO array currently has a limitation of 8K volumes, snapshots and snapshot mount points. RecoverPoint 4.4 and below enforces the maximum number of snapshots, but in a non-changeable manner. For example, the user cannot change the time at which RecoverPoint will consolidate the snapshots on the XtremIO. This new policy gives the user more control over how long the snapshots are retained.

One to three consolidation policies can now be specified for each copy of a consistency group that resides on an XtremIO array. By default, and for simplicity, the consolidation policy is selected automatically based on the number of snaps and the required protection window.

The new CLI command config_detailed_snapshot_consolidation_policy will allow a much more detailed and concise consolidation policy for copies on an XtremIO array. The config_copy_policy command will allow for the setting of the maximum number of snapshots.
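As a purely hypothetical illustration of how these commands might be invoked (the command names come from the release notes above, but every parameter shown below is a placeholder I made up to convey the idea of per-copy settings; consult the RecoverPoint CLI reference for the real syntax):

```bash
# Hedged sketch only: all parameters are hypothetical placeholders.
config_copy_policy group=Oracle_CG copy=XIO_Copy max_num_of_snapshots=500

config_detailed_snapshot_consolidation_policy group=Oracle_CG copy=XIO_Copy \
    daily_consolidations=7 weekly_consolidations=4 monthly_consolidations=3
```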

Data Domain is an inline deduplication storage system, which has revolutionized disk-based backup, archiving, and disaster recovery that utilizes high-speed processing. Data Domain easily integrates with existing infrastructure and third-party backup solutions.
ProtectPoint is a data protection offering which brings the benefits of snapshots together with the benefits of backups. By integrating the primary storage with the protection storage, ProtectPoint reduces cost and complexity, increases speed, and maintains recoverability of backups.
RecoverPoint 4.4 introduced support for ProtectPoint 3.0 by enabling the use of Data Domain as a local copy for XtremIO-protected volumes. The Data Domain system has to be registered with RecoverPoint using IP. Data transfer can be configured to use IP or FC. The Data Domain based copy is local only, and the link policy supports two modes of replication:
Manual – bookmarks and incremental replication are initiated from the File System or Application Agents
Periodic – RecoverPoint creates regular snapshots and stores them on Data Domain.
There is no journal to configure for the Data Domain copy in the Protect Volumes or Add Copy wizard; if the add Data Domain copy checkbox is selected, users only select the Data Domain registered resource pool.

When using RecoverPoint 5.0 with ProtectPoint 3.5 and later, specific volumes of a consistency group can be selected for production recovery, while other volumes in the group are not recovered.

Displayed is an example of a use case for the new partial restore feature of RecoverPoint 5.0. In this example, the new feature allows for the recovery of only part of a database system.

During partial restore:
• Transfer is paused to all copies until the action completes
• At production, selected volumes are blocked while non-selected volumes remain accessible
• Only selected volumes are restored
• After production resumes, all volumes undergo a short initialization (in case a periodic snap-based replication policy is configured for the XIO->DD link)

Things to keep in mind about the partial restore option for ProtectPoint:
• Only selected volumes are blocked for writing
• Transfer is paused from the XIO source to the DD replica during recovery
• During transfer, marking data is collected for the non-selected volumes, in case writes are being performed to them in production


RecoverPoint For VMs (RP4VMs) 5.0 Is Here


Hi,

Carrying on from today’s announcements about RecoverPoint ( https://itzikr.wordpress.com/2016/10/10/recoverpoint-5-0-the-xtremio-enhancements/ ), I’m also super pumped to see that my brothers & sisters from the CTD group just released RP4VMs 5.0! Why do I like this version so much? In one word, “simplicity”; or in two words, “large scale”. These are the two topics that I will try to cover in this post.

The RP4VM 5.0 release is centered on the main topics of ease of use, scale, and Total Cost of Ownership (TCO). Let’s take a quick look at the enhanced scalability features of RP4VM 5.0. One enhancement is that it can now support an unlimited amount of vCenter inventory. It also supports the protection of up to 5000 virtual machines (VMs) when using 50 RP clusters, 2500 VMs with 10 RP clusters, and now up to 128 Consistency Groups (CGs) per cluster.
The major focus of this post is on the “easily deploy” and “save on network” subjects, as we discuss the simplification of configuring the vRPAs using from 1-4 IP addresses during cluster creation and the use of DHCP when configuring IP addresses.

RP4VMs 5.0 has enhanced the pre-deployment steps by allowing all the network connections to be on a single vSwitch if desired. The RP Deployer used to install and configure the cluster has also been enhanced, with a reduction in the number of IP addresses required. And lastly, improvements have been made in the RecoverPoint for VMs plug-in to improve the user experience.


With the release of RecoverPoint for VMs 5.0, the network configuration requirements have been simplified in several ways. The first enhancement is that all the RP4VM vRPA connections on each ESXi host can now be configured to run through a single vSwitch. This makes the pre-deployment steps quicker and easier for the vStorage admin to accomplish and takes up fewer resources. It can also all be run through a single vmnic, saving resources on the hosts.


With the release of RP4VM 5.0, all of the network connections can be combined onto a single vSwitch. Here you can see that the WAN, LAN, and iSCSI ports are all on vSwitch0. While two VMkernel ports are shown, only a single port is required for a successful implementation. Now for a look at the properties page of the vRPA we just created. You can see here that the four network adapters needed for vRPA communication are all successfully connected to the VM Network port group. It should be noted that while RP4VM 5.0 allows you to save on networking resources, you still need to configure the same underlying infrastructure for the iSCSI connections as before.


A major goal with the introduction of RP4VM 5.0, is to reduce the number of IPs per vRPA down to as few as 1. Achieving this allows us to reduce the required number of NICs and ports per vRPA. This will also allow for the number of iSCSI connections to be funneled
through a single port. Because of this, the IQN names for the ports will be reduced to one and the name will represent the actual NIC being used, as shown above. The reduced topology will be available when doing the actual deployment, either when running Deployer (DGUI) or when selecting the box property in boxmgmt. This will be demonstrated later.

Releases before 5.0 supported selecting a different VLAN for each network during OVF deployment. The RecoverPoint for Virtual Machines OVA package in Release 5.0 requires that only the management VLAN be selected during deployment. Configuring this management VLAN in the OVF template enables you to subsequently run the Deployer wizards to further configure the network adapter topology using 1-4 vNICs as desired.

RP4VMs 5.0 supports a choice of 5 different IP configuration options during deployment. All vRPAs in a cluster require the same configuration for it to work. The 5 options are:

  • Option 1 – a single IP address, with all the traffic flowing through eth0.
  • Option 2A – 2 IP addresses, with the WAN and LAN on one and the iSCSI ports on the other.
  • Option 2B – 2 IP addresses, with the WAN on its own and the LAN and iSCSI connections on the other.
  • Option 3 – 3 IP addresses: one for WAN, one for LAN and one for the two iSCSI ports.
  • Option 4 – all the connections separated onto their own IPs. Use this option when performance and High Availability (HA) are critical; it is the recommended practice whenever the resources are available.

It should be noted that physical RPAs only use options 1 and 2B, without iSCSI, as iSCSI is not yet supported on a physical appliance.

Observe these recommendations when making your selection:
1) In low-traffic or non-production deployments, all virtual network cards may be placed on the same virtual network (on a single vSwitch).

2) Where high availability and performance are desired, separate the LAN and WAN traffic from the Data (iSCSI) traffic. For even better performance, place each network (LAN, WAN, Data1, and Data2) on a separate virtual switch.


3) For high-availability deployments in which clients have redundant physical switches, route each Data (iSCSI) card to a different physical switch (best practice) by creating one VMkernel port on every vSwitch and vSwitch dedicated for Data (iSCSI).


4) Since the vRPA relies on virtual networks, the bandwidth that you expose to the vRPA iSCSI vSwitches will determine the performance of the vRPA. You can configure vSphere hosts with quad port NICs and present them to the vRPAs as single or dual iSCSI networks; or implement VLAN tagging to logically divide the network traffic among multiple vNICs.


The Management network will always be run through eth0. When deploying the OVF template you need to know which configuration option you will be using in Deployer and set the port accordingly. If you do not set the management traffic to the correct destination network, you may not be able to reach the vRPA to run Deployer.


To start the deployment process, enter the IP address of one of the vRPAs, which has been configured in your vCenter, into a supported browser in the following format: https://<vRPA_IP_address>/. This will open the RecoverPoint for Virtual Machines home page, where you can get documentation or start the deployment process. To start Deployer, click on the EMC RecoverPoint for VMs Deployer link.


After proceeding through the standard steps from previous releases, you will come to the Connectivity Settings screen. In our first example we will set up the system to have all the networking go through a single interface. The first item to enter is the IP address which will be used to manage the RP system. Next you will choose what kind of network infrastructure will be used for the vRPAs in the Network Adapters Configuration section. The first part is to choose the Topology for WAN and LAN in the dropdown. When selected, you will see 2 options to choose from: WAN and LAN on the same adapter, and WAN and LAN on separate adapters. In this first example we will choose WAN and LAN on same network adapter.

Depending on the option chosen, the number of fields available to configure will change, as will the choices in the Topology for Data (iSCSI) dropdown. To use a single interface we will select the Data (iSCSI) on the same network adapter as WAN and LAN option. Because we are using a single interface, the Network Mapping dropdown is grayed out and the choice we made when deploying the OVF file for the vRPA has been chosen for us. The next available field to set is the Default Gateway, which we entered to match the Cluster Management IP. Under the vRPAs Settings section there are only two IP fields. The first is for the WAN/LAN/DATA IP, which was already set as the IP used to start Deployer. The second IP is for all the connections in the second vRPA that we are creating the cluster with. This IP will also be used for LAN/WAN and DATA on the second vRPA. So there is a management IP and a single IP for each vRPA to use once completed.


Our second example is for the Data on separate connection from the WAN/LAN option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on same adapter and Data (iSCSI) on separate network adapter from WAN and LAN. Next we will have to select a network connection to use for the Data traffic from a dropdown of configured interfaces. Because we are using multiple IP addresses, we have to supply a netmask for each one, unlike the previous option where it was already determined by the first IP address we entered to start Deployer. Here we enter one for WAN/LAN and another for the Data IP address. Under the vRPAs Settings section, which is lower on the vRPA Cluster Settings page, we will have to provide an IP for the Data connection of the first vRPA, and also the two required for the second vRPA's connections.


Our third example is for the WAN is separated from LAN and Data connection option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on same network adapter as LAN. Once that option is selected the fields below will change accordingly. Next we will have to select a network connection to use for the WAN traffic from a dropdown of configured interfaces since the LAN and Data are using the connection chosen when deploying the OVF template for the vRPA. We once again need to supply two netmasks, but this time the first is for the LAN/Data connection and the second is for the WAN connection alone. Under the vRPAs Setting section which is lower down on the vRPA Cluster Settings page, you will supply an IP for the WAN alone on the first vRPA and two IPs for the second vRPA, one for the WAN and one for the LAN/Data connection.


The 4th example is for the WAN and LAN and Data all being separated option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on separate network adapters from WAN and LAN. Once that option is selected the fields below will change accordingly displaying the new configuration screens shown here. Next we will have to select a network connection to use for the WAN and the Data traffic from a dropdown of configured interfaces since the LAN is using the connection chosen when deploying the OVF template for the vRPA. There will now be three netmasks which need to be input, one for LAN, one for WAN and a single one for the Data connections. In the vRPAs Settings section which is lower down on the Cluster Settings page, you will now input a WAN IP address and a Data IP address for the first vRPA and then an IP for each of the LAN, WAN and Data connections individually on vRPA2.


The last available option is the All are separated with dual data NICs option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on two dedicated network adapters; this is used for the best available performance and is recommended as a best practice. Once those options are selected, the fields below will change to display the new configuration screens shown. Next we will have to select network interfaces to use for WAN and the two Data connections from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. This option requires 4 netmasks to be entered, one each for the WAN, LAN, Data 1 and Data 2 IPs, as all have their own connection links. Under the vRPAs Settings section, which is lower down on the Cluster Settings page, we can now see that we need to provide the full amount of IPs which can be used in any configuration per vRPA.


With the release of RP4VM 5.0, DHCP is supported for the WAN, LAN and iSCSI interfaces, but the cluster management and iSCSI VMkernel addresses must remain static. Support is also added for dynamic changes to all interfaces, unlike previous versions of the software. RP4VM 5.0 also offers full stack support for IPv6 on all interfaces except iSCSI. Another enhancement is a reduction in the amount of configuration data which is shared across the clusters; with 5.0, only the WAN addresses of all vRPAs, the LAN addresses of vRPAs 1 and 2, MTUs, the cluster name, and the cluster management IPs of all clusters will be shared.
Note that the boxmgmt option to retrieve settings from a remote RPA is unsupported as of 5.0. When IP addresses are provided by DHCP and an RPA reboots, the RPA will acquire an IP address from the DHCP server. If the DHCP server is not available, the RPA will not be able to return to the operational state; therefore it is recommended to supply redundant, highly available DHCP servers in the network when using the DHCP option.


Shown here on the Cluster Settings page of the Connectivity step of Deployer, we can see the option to select DHCP for individual interfaces. Notice that the DHCP icon does not appear for the Cluster Management IP address. This address must remain static in 5.0. If any of the other interfaces have their DHCP checkbox selected, the IP address netmasks will be removed and DHCP will be entered in its place. When you look at the vRPAs Settings window you can see that the LAN is a static address while the WAN and two Data addresses are now using DHCP. Another item to note here is that IPv6 is also available for all interfaces except for iSCSI, which currently only supports IPv4. Another important consideration to take note of is that adapter changes are only supported offline using boxmgmt. A vRPA would have to be detached from the cluster, the changes made, and then reattached to the cluster.


Let us take a closer look at the main page of Deployer. In the top center we see the vCenter IP address as well as the version of the RP4VM plugin which is installed on it. Connected to that is the RP4VMs cluster, which displays the system name and the management IP address. If the + sign is clicked, the display will change to show the vRPAs which are part of the cluster.
In the wizards ribbon along the bottom of the page we will find all the functions that can be performed on a cluster. On the far right of the ribbon are all the wizards for the 5.0 release of Deployer which includes wizards to perform vRPA cluster network modifications, replace a vRPA, add and remove vRPAs from a cluster and remove a vRPA cluster from a system.


Up to the release of RecoverPoint for VMs 5.0, clusters were limited to a minimum of two appliances with a maximum of eight. While such a topology makes RP4VMs clusters more robust and provides high availability, additional use cases exist where redundancy and availability are traded for cost reduction. RP4VMs 5.0 introduces support for a single-vRPA cluster to enable, for example, product evaluation of basic RecoverPoint for VMs functionality and operations, and to provide cloud service providers the ability to support a topology with a single-vRPA cluster per tenant. This scale-out model enables starting with a low-scale single-vRPA cluster and provides a simple scale-out process. This makes RP4VMs a low-footprint protection tool. It protects a small number of VMs with a minimal required footprint to reduce Disaster Recovery (DR) costs. The RecoverPoint for VMs environment allows scale out in order to meet sudden growth in DR needs.
RP4VMs systems can contain up to five vRPA clusters. They can be in a star, partially or fully connected formation, protecting VMs locally or remotely. All clusters in an RP4VMs system need to have the same number of vRPAs. RP4VMs 5.0 single-vRPA cluster deployments reduce the footprint for network, compute, and storage overhead for small to medium deployments. It offers a Total Cost of Ownership (TCO) reduction.
Note: The single-vRPA cluster is only supported in RecoverPoint for VMs implementations.


The RP4VMs Deployer can be used for connecting up to five clusters to the RP4VMs system. All clusters in an RP4VMs system must have the same number of vRPAs. Software upgrades can be run from the Deployer. Non-disruptive upgrades are supported for clusters with two or more vRPAs. For a single-vRPA cluster, a warning shows that the upgrade will be disruptive. The replication tasks managed by this vRPA will be stopped until the upgrade is completed. The single-vRPA cluster upgrade occurs without a full sweep or journal loss. During the vRPA reboot, the Upgrade Progress report may not update and Deployer may become unavailable. When the vRPA completes its reboot, the user can log in to Deployer and observe the upgrade process to its completion. Deployer also allows vRPA cluster network modifications, such as cluster name, time zones and so on, for single-vRPA clusters. To change network adapter settings, use advanced tools such as the Deployer API or the boxmgmt interface.


The vRPA Cluster wizard in Deployer is used to connect clusters. When adding an additional cluster to an existing system, the cluster must be clean, meaning that no configuration changes, including license installations, have been made to the new cluster. Repeat the connect cluster procedure to connect additional vRPA clusters.
Note: RP4VMs only supports clusters with the same number of vRPAs in one RP4VMs system.


New Dashboard tabs for RP4VM 5.0 will provide users an overview of system health and Consistency Group Status. The new tabs will allow the administrator quick access to the overall RP4VM system.
To access the Dashboard, in the vSphere Web Client home page, click on the RecoverPoint for VMs icon.
New Tabs include:
Overall Health – provides a summary of the overall system health including CG transfer status and system alerts
Recovery Activities – displays recovery activities for copies and group sets, provides recovery-related search functions, and enables users to select appropriate next actions
Components – displays the status of system components and a history of incoming writes or throughput for each vRPA cluster or vRPA
Events Log – displays and allows users to filter the system events


The new Dashboard for RP4VMs includes a Recovery Activities Tab. This will allow the monitoring of any active recovery actions such as Failover, Test a Copy and Recover Production. This tab allows the user to monitor and control all ongoing recovery operations.


The RecoverPoint for VMs Dashboard includes a Component Tab for viewing the status of all Clusters and vRPAs managed by the vCenter Server instance. For each component selected from the System Component window on the left, relevant statistics and information will be displayed in the right window.


Beginning with RecoverPoint for VMs 5.0 there is now an automatic RP4VM Uninstaller. Running the Uninstaller removes vRPA clusters and all of their configuration entities from vCenter Servers.
For more information on downloading and running the tool, see Appendix: Uninstaller tool in the RecoverPoint for Virtual Machines Installation and Deployment User Guide.


RecoverPoint for VMs 5.0 allows the removal of a Replication set from a Consistency Group without journal loss. Removing a Replication set does not impact the ability to perform a Failover or Recover Production to a point in time before the Rset was removed. The deleted Rset will not be restored as part of that image.
The RecoverPoint System will automatically generate a bookmark indicating the Rset removal. A point to remember is that the only Replication Set of a Consistency Group cannot be removed.


Here we see a Consistency Group that is protecting 2 VMs. Each VM has a Local copy. Starting with RP4VMs 5.0 a user can now remove a protected VM from the Consistency Group without causing a Journal history loss. Also after a VM removal the Consistency Group is still fully capable of returning back to an image, using Recover Production or Failover, that contained the VM that was removed.


Displayed is a view of the Journal Volume for the copies of a Consistency Group. There are both system made Snapshots and User generated Bookmarks. Notice that after the deleting of a Replication Set, a Bookmark is created automatically. All the snapshots created from that point will not include the volumes from the removed Virtual Machine.

Let’s see some demos:

The New Deployer

Protection of VMs running on XtremIO

 


XtremIO Directions (AKA A Sneak Peek Into Project-F)


At Dell EMC World 2016, together with Chris Ratcliffe (SVP, CTD Marketing) and Dan Inbar (GM XtremIO & FluidFS), we gave a session on what we call “XtremIO Future Directions”. We really wanted to show, without getting into too much detail, where we are heading in the next few years; think of it as a technical vision for the near-term future.

We started by giving a quick overview of the business. We think that Dell EMC XtremIO is the FASTEST growing product ever seen in the storage business; holding more than 40% of the total AFA market share since our GA in November 2013 is something that can’t be taken lightly. For me personally, I can say that the journey has been amazing so far; as a relatively young father, I think of the acceleration the product had to go through in such a short time, and the market demand is absolutely crazy! More than 3,000 unique customers and over $3Bn in cumulative revenue. From a technical perspective, if I try to explain the success of XtremIO over other AFAs, it really boils down to a “purpose-built architecture” – something which many other products now claim. When we built XtremIO, the following four pillars were (and in my opinion still are) mandatory building blocks:

  1. Purpose-Built All Flash Array

    We had the luxury of writing the code using a clean slate. That meant SSDs only in our case and many unique features, e.g. XDP, that could never have happened in the old world (you can read more about XDP here: http://www.emc.com/collateral/white-paper/h13036-wp-xtremio-data-protection.pdf)

  2. Inline All The Time Data Services

    XtremIO is a CAS (Content Aware Storage) architecture. Many customers think of dedupe (for example) as a feature; in the case of XtremIO, it really isn’t. In the old days we described deduplication as a “side effect” of the architecture, but since “side effect” is normally thought of as a bad thing, we took that terminology out 🙂. But seriously, what I mean is that we examine the data in real time and give each chunk of data a unique signature, and by doing so, when we write the data, if the signature already exists, we simply dedupe it. There is no post-process hashing like in so many products out there, no “throttling” of the CAS engine, etc. This is a killer feature not just because of data savings but rather because of HOW the data is written and evenly distributed in real time across all the available nodes and drives. Apart from the CAS/deduplication engine, we of course compress all the data in real time, no knobs needed. We also encrypt the data, and everything is done while using thin provisioning, so you really only store the unique data you are producing, again, all in real time.

  3. Scale-Out IOPS & Bandwidth

    Wow, this one is funny – I remember in the old days spending hours explaining why a scale-out architecture is needed and having to debunk competitors’ claims like “no one needs 1M IOPS” and “no one needs more than a dual-controller architecture”. While I would agree that not everyone needs it, if I look at our install base, so many DO. It’s also not just IOPS; the tightly coupled scale-out architecture is what gives you the bandwidth and low latency that your applications need.

  4. CDM (or, unique Snapshots capabilities)

    If IDC/Gartner are right by saying that more than 60% of the data stored in your datacenter is actually multiple copies of your data and if your storage array can’t cope with these copies in an efficient way (note that “efficient” is not just capacity but also without performance penalty) then, you’re not as agile as you could be and your TCO goes out of the window – read more about it here:

    http://wikibon.com/evolution-of-all-flash-array-architectures/

    Luckily, XtremIO snapshots have been specifically designed to be ultra-efficient and as a result we see many customers that are using them not just for the primary data itself but for its copies as well, you can read more about it here https://itzikr.wordpress.com/2014/05/12/xtremio-redefining-snapshots/

    https://www.emc.com/collateral/solution-overview/solution-overview-xtremio-icdm-h14591.pdf

    Moving on to our longer term vision for XtremIO, what’s interesting is that the current XtremIO array (internally known as X1) was never the endgame; it’s rather a STEP on the road to what we believe customers will need in the coming years. The issue we have faced is that to build what we want means that many new technologies need to become available. We are gradually implementing them all, each cycle, with what’s available from a hardware/software perspective.


The current architecture is doing an amazing job for:

  • Acceleration (the ability to simply accelerate your workload, e.g. your Database) by moving it to XtremIO.
  • Providing very good copy data management (iCDM)
  • Consolidation of large environments into the cluster (think virtualized environments with thousands of VMs residing in the large XtremIO cluster configurations, up to 8 X-Bricks, all 16 storage controllers running in an Active/Active fashion with global dedupe and compression)

We aren’t stopping there, we are going to take scalability to different dimensions, providing far denser configurations, very flexible cluster configurations and…new data services. One of them is the NAS add-on that was the highlight of the session we had at Dell EMC World. Note that it is only one of a number of new data services we will be adding. So why did we mention NAS specifically if we are going to introduce other data services as well? It’s very simple really, this is the first “Dell” and “EMC” World, we wanted to highlight a true UNIFIED ENGINEERED
solution coming from both companies which are one now (Dell EMC).

Before moving to read ahead about the NAS part, we also highlighted other elements of the technical roadmap e.g. the ability to really decouple compute (performance) from capacity, ability to leverage some elements of the Software Defined Storage into the solution and to really optimize the way we move data, not just in the way it lands in the array but rather going sideways to the cloud or other products. Here again, the core CAS architecture comes into play, “getting the architecture right” was our slogan in the early days, you now understand why it was so important to make it right the first time..

Ok, so back to the NAS support! We gave the first peek into something internally called “Project-F” and I must say, the response has been overwhelming, so we thought we should share it with you as well. Please note that a lot of this is still under wraps and, as always, when you deal with roadmap items the usual disclaimers apply – roadmaps can change without notice, etc.

Ok, so what is it?

During 2017, Dell EMC will release an XtremIO-based unified block and file array. By delivering XtremIO NAS and block protocol support on a unified all-flash array, we plan to deliver the transformative power and agility of flash in modern applications to NAS. XtremIO is the most widely adopted All-Flash platform for block workloads. However, we recognized an opportunity to extend our predictable performance, inline data services, and simple scalability to file based workloads. This is the first unified, content-aware, tightly-coupled, scale-out, all flash storage with inline data services to provide consistent and predictable performance for file and block.
The main objective of the Dell/EMC merger is to create an entity that is greater than the sum of its parts. Not just operational synergies, but technical synergies that enable new compelling solutions. The new XtremIO file support feature set is the first of many synergies to come. XtremIO’s NAS capabilities are based on a proven NAS file system (FluidFS) from DELL Technologies and offers:

  • Full NAS feature-set
  • Supports Multiple NAS protocols (NFS, SMB/CIFS, HDFS and NDMP)
  • Over 1M NAS IOPS, predictable sub-millisecond latency
  • Enterprise-grade scalability, and reliability

A common question that I get is “don’t you already have other NAS solutions?” To me, this question is silly. EMC, and now Dell EMC, has always been about a portfolio approach. Let’s ignore NAS for a second; wouldn’t this question be applicable to block-based protocols (and products) as well? Of course it would, and as with block, different products serve different use cases. For the XtremIO NAS solution, we were looking for a platform that can scale out in a tightly-coupled manner, where metadata is distributed, and one that can fit the use cases below. Again, there is nothing wrong with the other approaches; each has its cons/pros for different use cases, which is the beauty of the portfolio. We don’t force the use case onto one product, we tailor the best product to the specific customer use case. Regardless of the problem you are trying to solve, Dell EMC has a best-of-breed platform that can help. If you want to learn more about the storage categories, I highly encourage you to read Chad’s post here http://virtualgeek.typepad.com/virtual_geek/2014/01/understanding-storage-architectures.html

Target Use Cases

This solution targets workloads that require low latency, high performance, and multi-dimensional scalability. Transactional NAS applications are well-suited use cases for the XtremIO NAS capabilities; a few examples are VSI, OLTP and VDI, and mixed workloads including test & dev and DevOps. These workloads will also leverage XtremIO’s data services such as compression and deduplication.

A good rule of thumb is, if you are familiar with the current XtremIO use cases and want them to be applied over NAS/SMB, that is a perfect match.

File Capabilities & Benefits

With its addition of NAS protocol support, XtremIO can deliver all this with its unique scale-out architecture and inline data services for Transactional NAS and block workloads. Storage is no longer the bottleneck, but enables database lifecycle acceleration, workload consolidation, private/hybrid clouds adoption, and Transactional NAS workload optimization. The key features and benefits include:

Unified All-Flash scale-out storage for block and file
Multi-protocol support of FC, iSCSI, NFS, SMB, FTP, HDFS and NDMP
Predictable and consistent high performance
In memory metadata architecture w/ inline data service


Scalable
Over 1M NAS IOPS w/ sub-millisecond latency
Billions of files
Petabytes scalability
Elasticity – scale-out for block and file, grow when needed, the NAS part can scale from one appliance (2 Active/Active controllers) to 6 appliances (12 Active/Active controllers!)


Simple
Single unified management (scroll down to see a demo of it)

Resilience
Built-in replication for file
Enterprise grade availability
Proven technology from DELL and EMC


Efficiency
Inline deduplication and compression
Efficient virtual copy technology for block and file
Thin provision support

Both file and block will be using the XtremIO inline data services such as encryption, inline compression and deduplication. In addition, for file workloads native array based replication is available.

Other technical capabilities include:
ICAP for antivirus scan
Virtual copy technology
Remote replication
Quotas (on name, directories or users)

Below, you can see a recorded demo of the upcoming HTML5 UI. Note that it is different from the web UI tech preview that we introduced in July 2016 ( https://itzikr.wordpress.com/2016/06/30/xios-4-2-0-is-here-heres-whats-new/ ). Yes, as the name tends to suggest, during the “tech preview” we have been gathering a lot of customer feedback about what they want to see in the future UI of XtremIO (hence the name, tech preview).



If you want to participate in a tech preview then start NOW! Speak to your DellEMC SE / Account Manager and they’ll be able to help you enroll.

P.S

The reason we called the NAS integration “Project-F” is simple: the original XtremIO product had a temporary name, “Project-X” 🙂


==Update==

You can now watch a high quality recording of the session itself here.

 


vSphere 6.5 UNMAP Improvements With DellEMC XtremIO


Two years ago, my manager at the time and I visited the VMware HQ in Palo Alto. We had an agenda in mind around the UNMAP functionality found in vSphere. The title of the presentation I gave was “small problems lead to big problems” and it had a similar photo to the one above. The point we were trying to make is that the more customers use AFAs whose unused capacity VMware does not release back, the bigger the problem gets, because at a $ per GB level, each GB matters… They got the point, and we ended the conversation with a promise to ship them an XtremIO X-Brick to develop automated UNMAP on XtremIO, so that the greater good will benefit from it as well, not just XtremIO.

If you are new to the UNMAP “stuff”, I encourage you to read the multiple posts I wrote on the matter:

https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/

https://itzikr.wordpress.com/2013/06/21/vsphere-5-5-where-is-my-space-reclaim-command/

https://itzikr.wordpress.com/2015/04/28/vsphere-6-guest-unmap-support-or-the-case-for-sparse-se-part-2/

The day has come.

VMware has just released vSphere 6.5, which includes enhanced UNMAP functionality at both the volume (datastore) level and the in-guest level. Let’s examine both.

  1. Volume (datastore) level

    Using the UI, you can now set automated UNMAP at the datastore level and set it to either “low” or “none”. Low basically means that once in a 12-hour interval, the ESXi crawler will run it at the datastore level; however, you can set different thresholds using the ESXi CLI as shown below (see the sketch after this list).

    Why can’t you set it to “high” in the UI? I assume that since space reclamation is a relatively heavy IO-related operation, VMware wants you to ensure your storage array can actually cope with the load, and the CLI is less visible than the UI itself. Note that you can still run an ad hoc space reclamation at the datastore level like you could in vSphere 5.5/6.0; running it manually will finish quicker but will be the heaviest option.

    If you DO choose to run it manually, the best practice for XtremIO is to set the chunk size used for the reclamation to 20000, as seen below.

  2. In-Guest space reclamation

    In vSphere 6.0, you could run the “defrag” (optimization) at the Windows level; Windows Server 2012, 2012 R2 and 2016 were supported as long as you set the VMDK to “thin” and enabled it at the ESXi host

    and ran the latest VM hardware version, which in vSphere 6.0 was “11” and in vSphere 6.5 is “13”, as seen below.

    So, what’s new?

    Linux! In the past, VMware didn’t support the SPC-4 standard, which was required to enable space reclamation inside the Linux guest OS. Now, with vSphere 6.5, SPC-4 is fully supported, so you can run space reclamation inside the Linux OS using either the manual CLI or a cron job. In order to check that the Linux OS does indeed support space reclamation, run the “sg_vpd” command as seen below and look for the LBPU:1 output.

    Running the sg_inq command will actually show whether SPC-4 is enabled at the Linux OS level.

    In order to run the space reclamation inside the Linux guest OS, simply delete files with the “rm” command; yes, it’s that simple (see the sketch after this list).

    You can see the entire flow in the following demo that was recorded by Tomer Nahum from our QA team, thanks Tomer!
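For reference, here is a hedged sketch of the commands discussed above, covering both the datastore-level settings on the ESXi host and the in-guest checks on Linux. The datastore name, device and file path are placeholders, so adjust them for your environment:

```bash
### On the ESXi host (datastore-level reclamation, vSphere 6.5 / VMFS 6) ###
# Check the current automatic reclamation priority for a datastore.
esxcli storage vmfs reclaim config get --volume-label=DS01

# Set it to low (the only non-"none" value exposed in the UI) or back to none.
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=low

# Ad hoc manual reclamation with the XtremIO-recommended chunk size of 20000 blocks.
esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=20000

### Inside the Linux guest (thin VMDK, HW version 13) ###
# Confirm the virtual disk reports unmap support (look for LBPU: 1).
sg_vpd --page=lbpv /dev/sdb
sg_inq /dev/sdb          # shows whether SPC-4 is exposed to the guest

# Deleting files on a filesystem mounted with discard support releases the space.
rm -f /mnt/data/largefile.bin
```

Treat the exact flag spellings as assumptions and verify them against the esxcli and sg3_utils documentation for your versions.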

P.S

Note that at the time of writing this post, we have identified an issue with the Windows guest OS space reclamation; AFAIK, it doesn’t work with many (all?) of the arrays out there, and we are working with VMware to resolve it. Also note that you must use the full web client (NOT the H5 client) when formatting the VMFS 6 datastore; it appears that when using the embedded H5 client, it doesn’t align the volume with the right offset.


naa.xxxx01 was created via the web client, naa.xxxx02 was created via the embedded H5 client.

 

 

 



Copy Data Management (CDM), Or, Why It’s the Most Important Consideration For AFAs Going Forward.


A guest post by Tamir Segal

Very soon after we released Dell-EMC XtremIO’s copy technology, we were very surprised by an unexpected finding. We learned that in some cases, customers were deploying XtremIO to hold redundant data or “copies” of the production workload. We were intrigued: why would someone use a premium array to hold non-production data? And why would they dare to consolidate it with production workloads?

Because of our observations, we commissioned IDC to perform an independent study and drill into the specifics of the copy data problem. The study included responses from 513 IT leaders in large enterprises (10,000 employees or more) in North America and included 10 in-depth interviews across a spectrum of industries and perspectives.

The “copy data problem” is rapidly gaining attention among senior IT managers and CIOs as they begin to understand it and the enormous impact that it has on their organization in terms of cost, business workflow and efficiency. IDC defines copy data as any replica or copy of original data. Typical means of creating copy data include snapshots, clones, replication (local or remote), and backup/recovery (B/R) operations. IDC estimates that the copy data problem will cost IT organizations $50.63 billion by 2018 (worldwide).

One could think that copies are bad for organizations and just lead to sprawl of data and waste, and therefore the expected question to ask is: why don’t we just eliminate all those copies? The answer is straightforward: copies are important and needed for many critical processes in any organization. For example, how can you develop the next generation of your product without a copy of your production environment as a baseline for the next version? In fact, there are many significant benefits to using even more copy data in some use cases. However, legacy and inefficient copy management practices have resulted in substantial waste and a financial burden on IT (Did I mention $50.63B?).

IOUG surveyed 300 DB managers and professionals about which activities take up the most time each week. The results are somewhat surprising: Figure 1 shows that 30% spend a significant amount of their time on creating copies. However, it does not end there; Test & QA are also tasks performed on non-production copies, and patches are first tested on non-production environments.

Figure 1 – Database Management activities taking up most time each week (source: Unisphere research, efficiency isn’t enough: data centers lead the drive to innovation. 2014 IOUG survey)

What are those copies? What use cases do they support, and what are the problems today? They can be categorized under 4 main areas:

  • Innovation (testing or development)
  • Analytic and decision making (run ETL from a copy rather than production)
  • IT operations (such as pre-production simulation and patch management)
  • Data Protection.

Before getting the research results, I assumed that data protection would be the leading use case for copies. I was wrong; based on the research, there is no significant leader in data consumption.

Figure 2 – Raw Capacity Deployed by Workload (Source IDC Survey)

Another interesting data point was to see what technology is used to create copies; per the research results, 53% used custom-written scripts to create copies.


Figure 3 – what tools are used to create secondary copies (source IDC Survey)

The copy data management challenges directly impact critical business processes and therefore have a direct impact on the cost, revenue, agility and competitiveness of any organization. But the big question is: by how much? The IDC research was looking to quantify the size of the problem and how big it is. Some highlights of the research are:

  • 78% of organizations manage 200+ instances of Oracle and SQL Server databases. The mean response for the survey was 346.43 database instances.
  • 82% of organizations make more than 10 copies of each instance. In fact, the mean was 14.88 copies of each instance.
  • 71% of the organizations surveyed responded that it takes half a day or more to create or refresh a copy.
  • 32% of the organizations refresh environments every few days, whereas 42% of the organizations refresh environments every week.

Based on the research results, it was found that on average a staggering 20,619 hours are spent on wait time for instance refreshes every week by the various teams. Even a conservative estimate of 25% of instances yields more than 5,000 hours, or 208 days, of operational waiting, or waste.

The research is available for everyone, and you can view it here

These results are very clear: there is a very large ROI (more than 50%) that can be realized, probably by many organizations, since more than 78% of the companies are managing 200+ database instances and, as the research shows, the process today is wasteful and inefficient.

The Copy Data Management Challenges

It is important to understand why legacy infrastructure and improper copy data management processes have fostered the need for copy data management solutions. The need for efficient infrastructure is driven by extensive storage silos, sprawl, expensive storage and inefficient copy creation technologies. The need for efficient copy data management processes is driven by increased wait times for copies to be provisioned, low productivity and demands for self-service.

Legacy storage systems were not designed to support true mixed workload consolidation and require significant performance tuning to guarantee application SLAs. Thus, storage administrators have been conditioned to overprovision storage capacity and create dedicated storage silos for each use case and/or workload.

Furthermore, DBAs are often using their own copy technologies. It is very common for DBAs to ask storage administrators to provision capacity, and then use their native tools to create a database copy. One common practice is to use RMAN in Oracle and restore a copy from a backup.

Copy technologies, such as legacy snapshots, do not provide a solution. Snapshots are space efficient compared to full copies; however, in many cases copies created using snapshot technology are under-performing, impact production SLAs, take too long to create or refresh, have limited scale, lack modern efficient data reduction and are complex to manage and schedule.

Because of performance and SLA requirements, storage admins are forced to use full copies and clones, but these approaches result in an increase in storage sprawl as capacity is allocated upfront and each copy consumes the full size of its source. To save on capacity costs, these types of copies are created on a lower tiered storage system or lower performing media.

External appliances for copy data management lead to incremental cost and they still require a storage system to store copies. They may offer some remedy; however, they introduce more complexity in terms of additional management overhead and require substantial capacity and performance from the underlying storage system.

Due to the decentralized nature of application self-service and the multitude of applications distributed throughout organizations within a single business, a need for copy data management has developed to provide oversight into copy data processes across the data center and ensure compliance with business or regulatory objectives.

The Dell-EMC XtremIO’s integrated Copy Data Management approach

As IT leaders, how can we deliver the services needed to support efficiency, cost savings and agility in your organization? How can copy data management be addressed in a better way? This is how Dell-EMC can help you resolve the copy data challenge at its source.

Dell EMC XtremIO pioneered the concept of integrated Copy Data Management (iCDM). The concept behind iCDM is to provide nearly unlimited virtual copies of data sets, particularly databases on a scale-out All-Flash array using a self-service option to allow consumption at need for DBAs and application owners. iCDM is built on XtremIO’s scale-out architecture, XtremIO’s unique virtual copy technology and application integration and orchestration layer provided by Dell-EMC AppSync.


Figure 4 – XtremIO’s integrated Copy Data Management stack

 

 

XtremIO Virtual Copy (XVC), used with iCDM, is not a physical copy but rather a logical view of the data at a specific point in time (like a snapshot). Unlike a snapshot, XVC is both metadata and physical-capacity efficient (dedup and compression) and does not impact production SLAs. Like physical copies, XVC provides performance equal to production, but unlike physical copies, which may take a long time to create, an XVC can be created immediately. Moreover, data refreshes can be triggered as often as desired, in any direction or hierarchy, enabling flexible and powerful data movement between data sets.

The ability to provide consistent and predictable performance on a platform that can scale out is a mandatory requirement. Once you have an efficient copy service with unlimited copies, you will want to consolidate more workloads. As you consolidate more workloads into a single array, more performance may be needed, and you need to be able to add that performance to your array.

We live in a world where copies have consumers; in our case they are the DBAs and application owners. As you modernize your business, you want to empower them to create and consume copies when they need them. This is where Dell-EMC AppSync provides the application orchestration and automation for application copy creation.

iCDM is a game changer and its impact on the IT organization is tremendous; XtremIO iCDM enables significant cost savings, provides more copies when needed and supports future growth. Copies can be refreshed on demand; they are efficient, high performance and carry no SLA risks. As a result, iCDM enables DBAs and application owners to accelerate their development time and trim up to 60% off the testing process, have more test beds and improve product quality. Similarly, analytical databases can be updated frequently so that analysis is always performed on current data rather than stale data.


Figure 5 – Accelerate database development projects with XtremIO iCDM

More information on XtremIO’s iCDM can be found here.

As a bonus, I included a short checklist to help you choose your All-Flash array and copy data management solution:

  • Is your CDM solution based on an All-Flash array?
  • Can you have copies and production on the same array without SLA risks?
  • Is your CDM solution future proof? Can you scale out and add more performance and capacity when needed? Can you get a scalable number of copies?
  • Can your CDM solution immediately refresh copies from production or any other source? Can it refresh in any direction (prod to copy, copy to copy or copy to prod)?
  • Can your copies have the same performance characteristics as production?
  • Do your copies get data services like production, including compression and deduplication?
  • Can you get application integration and automation?
  • Can your DBAs and application owners get self-service options for application copy creation?

XtremIO iCDM is the most effective copy data management option available today; it enables better workflows, reduces risks, eliminates costs and ensures SLA compliance. The benefits extend to all stakeholders, who can now perform their work more efficiently with better results; the outcome is reduced waste and lower costs while providing better services, improved business workflows and greater productivity.



Dell EMC AppSync 3.1 Is Out, Here’s What’s New!


This post was written by Marco Abela and Itzik Reich

The AppSync 3.1 release (GA on 12/14/16) includes major new features enabling iCDM use cases with XtremIO. Let’s take a look at what is new for XtremIO:


Crash consistent SQL copies


- Protects the database without using VSS or VDI, relying on the array to create crash-consistent copies


The AppSync 3.0 release introduced a “Non VDI” feature, which creates copies of SQL using the Microsoft VSS freeze/thaw framework at the filesystem level. With AppSync 3.1, by selecting “Crash-Consistent”, no application-level freeze/thaw is done, resulting in less overhead when creating the copy of your SQL Server, using only XVC. This is equivalent to taking a snapshot of the relevant volumes from the XMS.

Restrictions/Limitations:
  • If VNX/VNXe/Unity, the subscribed SQL databases must all be part of the same CG or LUN Group
  • Mount with recovery will work only for the “Attach Database” Recovery Type option
  • When restoring, only the “No Recovery” type is supported
  • assqlrestore.exe is not supported for “crash-consistent” SQL copies
  • Transaction log backup is not supported
  • Not supported on ViPR

SQL Cluster Mount


Concern Statement: AppSync does not provide the ability to copy SQL clusters using mount points, nor to mount to alternate mount paths; only the same path as the original source is supported.
Solution: Ability to mount to alternate paths, including mounting back to the same production cluster as a cluster resource, under an alternate clustered SQL instance, and also to mount multiple copies simultaneously as clustered resources on a single mount host.
Restrictions/Limitations:
  • For mount points, the root drive must be present and must already be a clustered physical disk resource
  • Cannot use the default mount path, i.e. C:\AppSyncMounts, as this is a system drive


FileSystem Enhancements:

Repurposing support for File Systems:
This new version introduces the repurpose workflow for file systems compatible with AppSync. This is very exciting for supporting iCDM use cases for applications that AppSync does not directly support, allowing you to create copies for test/dev use cases using a combination of filesystem freeze/thaw and scripts, if needed, as part of the AppSync repurpose flow.

Nested Mount Points, Path Mapping, and FileSystem with PowerHA are also key new FileSystem enhancements. For a summary of what these mean, see below:


Repurpose Workflows for File Systems
Concern Statement: The repurposing workflow does not support file systems, only Oracle and SQL. Epic and other file system users need to be able to utilize the repurposing workflow.
Solution: Enable the repurpose flow (wizard) for file systems on all AppSync-supported OSes and storage, including RecoverPoint local and remote.

Unlike SQL and Oracle repurposing, multiple file systems can be repurposed together – as seen in the screenshot.


Repurpose Workflows RULES for File Systems
  • When a copy is refreshed after performing an on-demand mount, AppSync unmounts the mounted copy, refreshes it (creates a new copy and, only on successful creation of the new one, expires the old copy) and mounts the copy back to the mount host with the same mount options
  • Applicable to all storage types except for XtremIO volumes that are in a consistency group (see point below)
  • Not applicable for RecoverPoint repurposing; RecoverPoint repurposing on-demand mounts are not re-mounted
  • With VNX/RP-VNX, you cannot repurpose from a mounted copy
  • With VMAX2/RP-VMAX2, you cannot repurpose from a gen1 copy if the gen1 copy created is a snap
  • When using XtremIO CGs, the copy is unmounted (application unmount only, no storage cleanup), the storage is refreshed and the copy is mounted (application mount only) back to the mount host with the same mount options
  • Repurposing NFS file systems and/or Unity environments is not supported

Repurpose Workflows RULES for File Systems
File system repurposing workflows follow the same rules as SQL and Oracle, such as:
  • Gen 1 copies are the only copies that are considered application consistent
  • Restores from Gen 1 through RecoverPoint (intermediary copy) or SRDF are not supported
  • Restores from Gen 2 are not supported
  • Callout scripts use the Label field
  • Freeze and thaw callouts are only supported for Gen 1 copies
  • Unmount callouts are supported for Gen 1 and Gen 2 copies
  • Callout naming example (a hypothetical script sketch follows this list):
    appsync_[freeze/thaw/unmount]_filesystem_<label_name>
  • If the number of filesystems exceeds the allowable storage units (12 by default), defined in server settings for each storage type, the operation will fail:
    “max.vmax.block.affinity.storage.units” for VMAX arrays
    “max.vnx.block.affinity.storage.units” for VNX arrays
    “max.common.block.affinity.storage.units” for VPLEX, Unity, and XtremIO arrays
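As a loose illustration of the callout naming above, a freeze/thaw pair for a file system labeled finance_fs might look like the sketch below. The label, the mount point /prd01 and the use of fsfreeze are all hypothetical; the exact callout contract (location, arguments, exit-code handling) is defined in the AppSync documentation.

appsync_freeze_filesystem_finance_fs:
#!/bin/sh
# Quiesce the file system so the Gen 1 copy is taken at a consistent point (illustrative only)
fsfreeze -f /prd01

appsync_thaw_filesystem_finance_fs:
#!/bin/sh
# Resume I/O once the copy has been created (illustrative only)
fsfreeze -u /prd01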


Persistent Filesystem Mount
Concern Statement: File systems mounted with AppSync are not persistent across host reboots.
Solution: Offer persistent filesystem mount options so that filesystems mounted by AppSync are automatically re-mounted after a host reboot.
  • Applies to all file systems, including those which support Oracle DB copies
  • For AIX, ensure the mount setting in /etc/filesystems on the source host is set to TRUE (AppSync uses the same settings as on the source)
  • For Linux, AppSync modifies the /etc/fstab file:
    Entries include the notation/comment “# line added by AppSync Agent”
    Unmounting within AppSync removes the entry
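For illustration only, the resulting /etc/fstab addition might look something like the following; the device path, mount point and file system type are placeholders, and the exact placement of the marker comment may differ from this sketch:

# line added by AppSync Agent
/dev/mapper/appsync_vg-lv_sup01  /sup01  ext4  defaults  0 0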

Feature: Nested mount points
Benefit: Ability to copy and correctly mount, refresh and restore nested-mount production environments, eliminating the current restriction.

Feature: Path Mapping support
Benefit: Ability to mount specific application copies to a location specified by the user. This eliminates the restriction of allowing mounts on the original path or default path only.

Feature: FS Plan with PowerHA cluster
Benefit: AppSync will become aware of node failover within a PowerHA cluster so that all supported use cases work seamlessly in a failover scenario. Currently, when a failover happens on one cluster node and the file system is activated on another, the file system is not followed by AppSync; for that reason this configuration was previously not supported.

Feature: Repurpose Flow with FS Plan
Benefit: The popular Repurpose Wizard will become available with the file system plan. This will be supported on all AppSync supported OS and storage, including RecoverPoint.

The combination of all these new FS enhancements enables iCDM use cases for…

Epic

That’s right. As XtremIO has become the second product worldwide to be distinguished as “Very Common”, with more than 20% of Epic customers and counting, we have worked with the AppSync team to enable iCDM use cases for Epic. The filesystem enhancements above help enable these use cases, and further demonstrate the robustness XtremIO provides for Epic software.

Staggering Volume Deletes:
In order to avoid possible temporary latency increases, which can be caused by the mass deletion of multiple volumes/snapshots with a high change rate, AppSync introduces logic to delete XtremIO volumes at a rate of one volume every 60 seconds. This logic is disabled by default, and should be enabled only in the rare circumstance where this increased latency is observed. The cleanup thread is triggered every 30th minute of the hour (that is, once an hour).

The cleanup thread gets triggered every 30th minute of the hour (by default)
The cleanup thread starting Hour, Minute, and time delay can all be configured

To enable this, access the AppSync server settings by going to http://APPSYNC_SERVER_IP:8085/appsync/?manageserversettings=true, navigating to Settings > AppSync Server Settings > Manage All Settings, and changing the value of “maint.xtremio.snapcleanup.enable” from “false” to “true”.

Limitations:
All file systems from a single VG must be mounted and unmounted together (applies to nested and non-nested mount points)

XtremIO CG support for repurpose workflow:

The repurpose flow now supports awareness into applications laid out on XtremIO using Consistency Groups:

– For Windows applications (SQL, Exchange, File Systems), all of the application components (e.g. DB files, log files, etc.) should be part of the same CG (one and only one; not part of more than one CG) for AppSync to use CG-based API calls.


– For Oracle, all the DB and control files should be part of one CG and the archive logs should be part of another CG.

What is the benefit of this? It uses XtremIO Virtual Copies (XVC) to their full potential for the quickest operation time. This reduces application freeze time, as well as the overall length of the copy creation and later refresh process. During the refresh operation, AppSync will tell you whether it was able to use a CG-based refresh or not:

 

With CG:


You will notice the status screen mentioning that the refresh was done “..using the CG..”

To analyze this a little further, let's look at the REST calls issued to the XMS.

The snap operation was done with a single API call specifying the CG to snap:

2016-12-13 18:49:32,189 – RestLogger – INFO – rest_server::log_request:96 – REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'snap-suffix': u'.snap.20161213_090905.g1', u'cluster-id': u'xbricksc306', u'consistency-group-id': u'MarcoFS_CG'}

And the refresh specifying a Consistency Group-based refresh through a single API call:

2016-12-13 18:50:23,426 – RestLogger – INFO – rest_server::log_request:96 – REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'from-consistency-group-id': u'MarcoFS_CG', u'cluster-id': u'xbricksc306', u'to-snapshot-set-id': u'SnapshotSet.1481647772251'}
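If you want to reproduce the same calls outside of AppSync, a rough curl equivalent against the XMS REST API could look like the sketch below. The XMS address and credentials are placeholders, the object names are taken from the log lines above, and the URL prefix on your XMS may differ from the logged path, so treat this as an illustration rather than an API reference:

# Create a snapshot of a Consistency Group (body mirrors the logged request)
curl -k -u admin:password -X POST "https://xms.example.com/api/json/v2/types/snapshots" \
  -H "Content-Type: application/json" \
  -d '{"consistency-group-id": "MarcoFS_CG", "cluster-id": "xbricksc306", "snap-suffix": ".snap.20161213_090905.g1"}'

# Refresh an existing Snapshot Set from the Consistency Group (body mirrors the logged request)
curl -k -u admin:password -X POST "https://xms.example.com/api/json/v2/types/snapshots" \
  -H "Content-Type: application/json" \
  -d '{"from-consistency-group-id": "MarcoFS_CG", "cluster-id": "xbricksc306", "to-snapshot-set-id": "SnapshotSet.1481647772251"}'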

Without CG:

You will receive a message in the status screen stating that the volumes have not been laid out in a CG, or were not laid out as specified in the prerequisites above.

Windows Server 2016 support


Both as AppSync server & agent
Clustering support still pending qualification (check with ESM at time of GA)
Microsoft Edge as a browser is not supported for AppSync GUI


vSphere 6.5 tolerance support (ESXi 6.5) – no new functionality.


Path Mapping


AppSync currently does not allow users to modify the root path of a mount operation, a limitation that Replication Manager does not have.
Solution: Specify a complete mount path for the application copy being mounted, or change the root path for a mount host (specify a substitution) so that the layout is replicated using the substitution.
Unix example:
/prd01 can become /sup01
Windows examples:
E:\ can become H:\
F:\ can become G:\FolderName

Limitations:

Mounting Windows file system examples:
  • When E:\ is configured to become H:\, ensure that H:\ IS NOT already mounted
  • When E:\ is configured to become H:\SomeFolder (e.g. H:\myCopy), ensure that the H: drive is already available, as AppSync relies on the root mount drive letter existing in order to mount the myCopy directory (in this example), which was originally the E:\ drive and its contents
  • When no target mount path is configured, AppSync mounts as “Same as original path”; in this case, the job fails if Mount on Server is set to Original Host (E:\ cannot mount back as E:\)
  • Currently supported on File System Service Plans and repurposing of file systems only
  • Path mapping is not supported with Oracle, Exchange or SQL


Progress Dialog Monitor

Concern Statement: When the progress dialog is closed, users have to go to the event window to look for updates, manually refreshing the window.
Solution: Allow users to view and monitor the Service Plan run progress through the GUI by launching the progress dialog window, which updates automatically.


Oracle Database Restart


Concern Statement: After a mount host reboot, the Oracle database does not automatically restart.
Solution: Offers the ability to reboot a mount host on which AppSync recovered Oracle and return it to the same state as before the reboot.
  • Unlike the conventional /etc/oratab entry, AppSync creates a startup script:
    /etc/asdbora on AIX
    /etc/init.d/asdbora on Linux
    A symlink to this script, named S99asdbora, is found under /etc/rc.d
  • Configurable through the AppSync UI (disabled by default)
  • Not available for RMAN or for “Mount on standalone server and prepare scripts for manual database recovery”


File System Service Plan with AIX PowerHA


Concern Statement: Currently, AIX PowerHA clustered environments support Oracle only; there is no support for clustered file systems. Epic, and other file system plan users, need support for file system clustering. When a failover happens, the file system is not followed by AppSync.
Solution: Support PowerHA environments utilizing file system Service Plans and repurposing workflows. AppSync will become aware of node failover within the PowerHA cluster so that all supported use cases work seamlessly in a failover scenario.
Setup Details:
1. Add/register all nodes of the PowerHA cluster to AppSync before registering the virtual IP
2. The virtual IP (Service label IP/name) resource must be configured in the resource group for the clustered application, as well as added/registered to AppSync (as if it were another physical host). Each resource group must have a unique Service label IP
3. Protect the file systems belonging to that particular resource group, rather than protecting the file systems by navigating the nodes
Note: Volume groups are not concurrent after a restore; you must manually make them concurrent.

Oracle Protection on PowerHA Changes
Previously: Setting up Oracle with PowerHA only involved adding both nodes
  • There was no need for a servicelabelIP/Name, as Oracle told AppSync which active node to manage
  • Application mapping is configured through the Application (Oracle) Service Plan, and not through the node, as a file system service plan would be configured
AppSync 3.1: Now requires registering the servicelabelIP/Name of the Oracle DB resource group
  • Similar to configuring AppSync to support file system service plans under PowerHA
  • Add all physical nodes, and then register the servicelabelIP/Name
  • Configure the Oracle service plan using the servicelabelIP/Name
  • Repurposing uses the servicelabelIP/Name
  • If the servicelabelIP/Name is not registered, AppSync will not discover the Oracle databases
  • If upgrading from a previous version to 3.1, the servicelabelIP/Name must be registered, otherwise the job fails with an appropriate message; no reconfiguration is required, simply register the servicelabelIP/Name



Important Changes To ESXi and the XtremIO SATP Behavior


Hi Everyone,

Some heads-up on an important change we have been working on with VMware that is now part of vSphere 6.0/6.5 and 5.5 P8:

If you attach an ESXi host to an XtremIO array, it will AUTOMATICALLY choose Round Robin (RR) and IOPS=1 as the SATP behavior.

A little background: because XtremIO is an Active/Active array (all the storage controllers share the performance and ownership of the entire data set), the default ESXi SATP behavior was set to “Fixed”, which means only one path for every datastore was actually used. Of course, you could have changed it using the ESXi UI, the Dell EMC VSI plugin, etc.

One word of caution: this does not mean that you can skip the OTHER ESXi host best practices, which you can apply either by using the VSI plugin as shown below (HBA queue depth etc.) or by using ESXCLI, PowerShell, scripts, etc.

pic01

But it does mean that if a customer didn’t read the Host Configuration Guide, or forgot about it, many errors we have seen in the field due to multipathing misconfiguration won’t be seen again. It will also mean much better performance out of the box!

For ESXi 5.5, you must install the latest update (https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2144361&sliceId=1&docTypeID=DT_KB_1_1&dialogID=298460838&stateId=1%200%20298476103 )

(https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144359 )

pic02

For ESXi 6.0, you must install the latest update – 2145663 (https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2145664 )

(https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2145663 )

pic02

For ESXi 6.5, it’s there at the GA build (which is the only build for now)

If you want to run a query to validate this:

# esxcli storage nmp device list
pic03
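If a host is still running an older build without this behavior, you can add an equivalent claim rule yourself. The following is only a sketch of the kind of rule described in the XtremIO Host Configuration Guide; double-check the guide for the exact vendor/model strings and options for your ESXi and XIOS versions before applying it:

# Claim XtremIO devices with Round Robin and an IO operation limit of 1 (run once per host)
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO

# Verify the path selection policy on the XtremIO devices afterwards
esxcli storage nmp device list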

Lastly, the latest VSI plugin (7.0) doesn’t support vSphere 6.5 yet; the VSI team is working on it and I will update once I have more news.



EMC Storage Integrator (ESI) 5.0 Is Out


Hi,

We have just released the 5.0 version of the ESI plugin. For those of you who aren’t familiar with the plugin, it allows you to manage physical Windows, Hyper-V, or even Linux OSes from a storage perspective: things like volume provisioning, snapshot taking, etc., similar to our VSI plugin for vSphere, but in this case for everything else.

Here’s the supported systems matrix

From an XtremIO perspective, this release brings support for snapshot refresh using XtremIO tagging.


XtremIO Plugin for VMware vRealize Orchestrator 1.1.0 is out.


Hi

We have just released an update for our XtremIO vCO adapter. If you are new to the adapter, you can read about it here:

https://itzikr.wordpress.com/2016/08/26/vmworld-2016-introducing-the-emc-xtremio-vcenter-orchestrator-adaptor/

You can download the new version from here

https://download.emc.com/downloads/DL79358_XtremIO_1.1.0_plugin_for_VMware_vRealize_(vCenter)_Orchestrator.zip?source=OLS

And the documentation from here

https://support.emc.com/docu78902_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_Installation_and_Configuration_Guide.pdf?language=en_US

https://support.emc.com/docu79427_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_Workflows_User_Guide.pdf?language=en_US

 

VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks across both VMware and third-party applications.
Note: VMware vRealize Orchestrator was formerly named VMware vCenter Orchestrator.
The XtremIO plugin for vRealize Orchestrator facilitates the automation and orchestration of tasks that involve the XtremIO Storage Array. It augments the capabilities of VMware’s
vRealize Orchestrator solution by providing access to XtremIO Storage Array-specific management workflows.
This plugin provides many built-in workflows and actions that can be used by a vRealize administrator directly or for the construction of higher level custom workflows to automate storage tasks. The major workflow categories are provided by the XtremIO plugin for vRealize Orchestrator for storage configuration, provisioning, LUN and VM snapshot backup and VM recovery.

Basic Workflow Definition
A basic workflow is a workflow that for the most part represents a discrete piece of XtremIO functionality, such as creating a Volume or mapping a Volume. The workflows in the management folders for Consistency Groups, Cluster, Initiator Groups, Protection Scheduler, Snapshot Set, Tag, Volume and XMS are all examples of basic workflows.

High-level Workflow Definition
A high-level workflow is a collection of basic workflows put together in such a way as to achieve a higher level of functionality. The workflows in the XtremIO Storage Management
folder and the XtremIO VMware Storage Management folder are examples of high-level workflows. The Host Expose Storage workflow in the VMware Storage Management folder, for example, allows a user to create Volumes, create any necessary Initiator Groups and map those Volumes to a host, all from one workflow. All the input needed from this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows that are called.

Accessing XtremIO Workflows

Expand the XtremIO folder to list folders containing specific XtremIO and VMware
functionality. The following section provides information on each of the workflows.

XtremIO CG Management Folder

Workflow Name

Description

CG Add Volume

Adds one or more Volumes to a Consistency Group.

CG Copy

Creates a copy of the Volume(s) in the selected Consistency
Group.

CG Create

Allows you to create one Consistency Group and optionally supply
a Consistency Group Tag to the created Consistency Group.

CG Delete

Deletes the specified Consistency Group.

CG List

Lists all known Consistency Groups for a given cluster.

CG List Volumes

Lists all Volumes in a Consistency Group.

CG Protection

Creates a Protection of the Volume(s) in the selected Consistency
Group.

CG Refresh

Refreshes a Consistency Group from the selected copy of the
Consistency Group.

CG Remove Volume

Removes one or more Volumes from a Consistency Group.

CG Rename

Renames the specified Consistency Group.

CG Restore

Restores a Consistency Group from a selected CG Protection
workflow.

XtremIO Cluster Management Folder

Workflow Name

Description

Cluster List

Lists the cluster(s) for a given XMS Server.

XtremIO IG Management Folder

Workflow Name

Description

IG Add Initiator

Adds an Initiator to an Initiator Group.

IG Create

Allows you to create one or more Initiator Groups and optionally
supply an Initiator Group Tag to the created Initiator Groups.

IG Delete

Deletes one or more Initiator Groups.

IG List

Lists all Initiator Groups based upon the supplied input criteria.

IG List Initiators

Lists all Initiators for the supplied Initiator Group.

 

IG List Volumes

Lists Volumes for a given Initiator Group.

IG Remove Initiator

Removes an Initiator from an Initiator Group.

IG Rename

Renames a selected Initiator Group.

IG Show

Returns Initiator Group attributes as a formatted string for a given
Initiator Group, set of specified Initiator Groups or all Initiator
Groups, (if none are supplied).

XtremIO Performance Management Folder

Workflow Name

Description

Cluster Report

Lists the cluster information shown below in one of three output formats, based upon the supplied input criteria.
• Output formats: CSV, HTML, Dictionary (JSON)
• Data Reduction Ratio: Cluster’s object field data-reduction-ratio-text
• Compression Factor: Clusters object field compression-factor-text
• Dedup Ratio: Clusters object field dedup-ratio-text
• Physical Capacity: Sum of SSD objects field ssd-size-in-kb * 1024 (in bytes)
• Total Space: Clusters object field ud-ssd-space * 1024 (in bytes)
• Space Consumed: Clusters object field ud-ssd-space-in-use * 1024 (in bytes)
• Remaining Space: Total Space – Space Consumed * 1024 (in bytes)
• Thin Provisioning Savings: 1 / Cluster field thin-provisioning-ratio
• Overall Efficiency: 1 / Cluster field (thin-provisioning-ratio * data-reduction-ratio)
• Performance: Contains certain metrics for ascertaining overall cluster performance.

Datastore Report

Provides datastore performance information for a specific ESXi
host.

Host Report

Provides performance information for ESXi Hosts.

IG Report

Provides performance information for Initiator Groups

Workflow Usage Report

Returns usage count information for XtremIO Workflows that have
been successfully invoked.

 

XtremIO Protection Scheduler Folder

Workflow Name

Description

Protection Scheduler Create

Creates a Protection Scheduler, which consists of the schedule
name, optional Tag and associated input arguments.

Protection Scheduler Delete

Deletes a Protection Scheduler.

Protection Scheduler List

List the Protection Schedulers based upon the supplied input
criteria.

Protection Scheduler Modify

Allows you to modify various attributes of a Protection Scheduler.

Protection Scheduler Resume

Resumes a suspended local Protection Scheduler.

Protection Scheduler Suspend

Suspends the action of a local Protection Scheduler.

XtremIO RecoverPoint Management Folder

Workflow Name

Description

RP Add

Adds a RecoverPoint Server to the list of available RecoverPoint
Servers in the vRealize inventory, for use by the XtremIO
RecoverPoint workflows.

RP Create CG

Creates and enables a new RecoverPoint Consistency Group from
new and existing user and journal volumes.

RP Delete CG

Deletes a RecoverPoint Consistency Group but retains the user
and journal storage.

RP Delete CG Storage

Deletes a RecoverPoint Consistency Group and the associated
user and journal storage.

RP List

Produces a list of available RecoverPoint Servers that are present
in the vRealize inventory.

RP List CGs

Produces a list of RecoverPoint Consistency Groups.

RP Modify CG

Allows modification of the RecoverPoint Consistency Group
recovery point objective, compression and number of snapshots
settings.

RP Remove

Removes a RecoverPoint server from the list of available
RecoverPoint Servers in the vRealize inventory.

 

XtremIO Snapshot Set Management Folder

Workflow Name

Description

SnapshotSet Copy

Creates a copy of the Volumes in the selected Snapshot Set.

SnapshotSet Delete

Deletes a Snapshot Set.

SnapshotSet List

Returns a list of Snapshot Sets by name and/or Tag.

SnapshotSet Map

Maps a Snapshot Set to an Initiator Group.

SnapshotSet Protection

Creates a Protection of the Volumes from the selected Snapshot
Set.

SnapshotSet Refresh

Refreshes a Snapshot Set from the selected copy.

SnapshotSet Rename

Renames a Snapshot Set.

SnapshotSet Unmap

Unmaps a Snapshot Set.


XtremIO Storage Management Folder

Workflow Name

Description

Datastore Delete Storage

This workflow either:
• Shuts down and deletes all VMs associated with the datastore.
• Disconnects all the secondary datastore VMDKs from the VMs
utilizing that datastore.
The workflow then unmounts the datastore as necessary from all
ESXi hosts and deletes it. It then unmaps the XtremIO Volume
associated with the datastore and deletes it.
If the Volume to be deleted is also in an XtremIO Consistency
Group, the Consistency Group is also deleted, if the group is
empty (as a result of the Volume deletion).
If the datastore selected for deletion is used for primary storage
(i.e. VMDKs that are surfacing operating system disks, for
example, c:\ for windows) then select Yes to the question “Delete
VMs associated with the datastore” prior to running the workflow.
Note: Best practice for datastore usage is to use separate
datastores for primary and secondary VM storage.

Datastore Expose Storage

Allows you to create or use an existing XtremIO Volume to
provision a VMFS Datastore.
If an existing XtremIO Volume is used, which is a copy or
protection of an existing datastore, then the existing XtremIO
Volume needs to be assigned a new signature prior to utilizing it.
You can select to expose the storage to a single ESXi host or all
ESXi hosts in a given cluster.
If selecting existing Volume(s), either select them individually or
select a Consistency Group that contains the Volumes to use for
the datastores.

VM Clone Storage

Makes a copy of the specified production datastores and
connects those datastores copies to a set of specified
test/development VMs. If the datastore to be cloned contains a
hard disk representing the operating system disk, the VMDK is
ignored when reconnecting the VMDKs to the test/development
VMs.
The datastores to be copied can either be selected individually or
from a specified XtremIO Consistency Group.
The workflow then makes a copy of the selected Volumes. It then
clones the VM to VMDKs relationships of the production
datastores to the copied datastores for the selected
test/development VMs.
The VMs involved follow a specific naming convention for the
production and test/development VMs.
For example, if the production VM is called “Finance” then the
secondary VM name must start with “Finance” followed by a
suffix/delimiter such as “Finance_001”.
The production and test/development VMs must also be stored in
separate folders. The workflow skips over the production VMs
that do not have a matching VM in the test/development folder.

VM Delete Storage

Deletes the datastores containing the application data and
underlying XtremIO storage Volumes associated to these
datastores.
The VMs for which the datastores are deleted can be selected by
choosing either a vCenter folder that the VMs reside in or by
individually selecting the VMs.
The VM itself can also be deleted by selecting Yes to the question
“Delete VMs and remove all storage (No will only remove VMs
secondary storage)”.
If No is selected to the above question, the workflow unmounts
the secondary datastore containing the application data from all
hosts that are using the datastore prior to its deletion.
The workflow then proceeds to unmap the XtremIO Volume
associated with the datastore and then deletes it.
If the Volume to be deleted is also in an XtremIO Consistency
Group, the Consistency Group is also deleted, if the group is
empty (as a result of the Volume deletion).

VM Expose Storage

Allows you to create or use an existing XtremIO Volume to
provision a VMFS datastore and then provision either a VMDK
(Virtual Machine Disk) or RDM (Raw Disk Mappings) to a virtual
machine. This is accomplished by calling the workflows Datastore
Expose Storage and VM Add Storage from this workflow.


XtremIO Tag Management Folder

Workflow Name

Description

Tag Apply

Applies a Tag to an object.

Tag Create

Creates a Tag for a given tag type.

Tag Delete

Deletes a list of supplied Tags.

Tag List

Lists Tags for a given Tag type.

Tag Remove

Removes a Tag from an object.

Tag Rename

Renames a Tag.


XtremIO VMware Storage Management Folder

Workflow Name

Description

Datastore Copy

Makes a copy of your underlying XtremIO Volume that the
datastore is based on.

Datastore Create

Creates a VMFS datastore on a Fibre Channel, iSCSI or local SCSI
disk.

Datastore Delete

Deletes the selected datastore.
This workflow either:
• Shuts down and deletes all VMs associated with the datastore.
• Disconnects all the datastore VMDKs from all the VMs utilizing
that datastore.
The workflow then unmounts the datastore as necessary from all
ESXi hosts and deletes it. It then unmaps the XtremIO Volume
associated with the datastore and deletes it.
If the datastore selected for deletion is used wholly or partly for
primary storage (i.e. VMDKs that are surfacing operating system
disks, for example, c:\ for windows) then select Yes to the
question “Delete VMs associated with the datastore” prior to
running the workflow.
Note: Best practice for datastore usage is to use separate
datastores for primary and secondary VM storage.

Datastore Expand

Used to expand a datastore.

Datastore List

Returns the datastores known to the vCenter Server instance
selected.

Datastore Mount

Mounts the given datastore onto the selected host, or all hosts
associated with the datastore if a host is not specified.
Note: If you select to mount the datastore on all hosts and the
datastore fails to mount on at least one of the hosts, the info log
file of the workflow reports the hosts that the datastore could not
be mounted on.
For convenience, this workflow allows you to select from pre-existing datastores that have already been mounted to ESXi hosts, or datastores that have been copied (but not yet previously mounted). Such datastores are resignatured as part of the mounting process.

Datastore Protection

Makes a protection (a read only copy) of your underlying XtremIO
Volume that the datastore is based on.

Datastore Reclaim Storage

Used to reclaim unused space in a datastore, and also, to delete
the unused VMDK files.

Datastore Refresh

Refreshes a copy of the underlying XtremIO Volume that the
datastore is based on.
Once the datastore to refresh has been selected, a list of copies
or protections, from which to refresh, are displayed. These are
based on whether “Copy” or “Protection” was selected in the
answer to “Restore Snapshot Type”.
The list is displayed chronologically, with the most recent copy listed first.
If a copy to refresh is not selected, the most recent copy is
automatically refreshed.
There is an option to provide a name for the Snapshot Set and
suffix that will be created.

Datastore Restore

Restores the underlying XtremIO Volume that the datastore is
based on, from a copy that was created earlier.
Once the datastore to refresh has been selected, a list of copies
or protections, from which to refresh, are displayed. These are
based on whether “Copy” or “Protection” was selected in the
answer to “Restore Snapshot Type”.
The list is displayed chronologically, with the most recent copy listed first.
If a copy to refresh is not selected, the most recent copy is
automatically refreshed.

Datastore Show

Returns information about the supplied datastore:
• AvailableDisksByHost: List of hosts and available disks that
can be used in creating a datastore.
• Capacity: Datastore’s total capacity in bytes.
• Expandable: True if datastore is expandable.
• ExpansionSpace: Space in bytes that is left for expansion of
datastore.
• Extendable: True if datastore can be extended.
• FreeSpace: Datastore’s free capacity in bytes.
• Hosts: List of hosts known to the datastore.
• Name: Name of the datastore.
• RDMs: List of RDMs known to the datastore.
• UnusedVMDKs: List of VMDKs that are not in use by a VM.
• VMs: List of VMs known to the datastore.
• VMDKs: List of VMDKs residing in the datastore and in use by a
VM.

Datastore Show Storage

Returns the XtremIO Volumes, Consistency Groups and Snapshot
Sets that make up the datastore.
If the workflow runs successfully but none of the output variables
are populated, check the info log for messages such as this one:
[YYYY-MM-DD HH:MM:SS] [I] Unable to find one or more
corresponding Volumes to naa-names — possibly unregistered,
unreachable XMSServers, or non-XtremIO Volumes.
In addition to ensuring that all of the XMS Servers have been
added to the vRO inventory via the XMS Add workflow, it is
strongly recommended to also run the Host Conformance
workflow. This ensures that all your XtremIO Volumes that are
visible to ESXi hosts have their corresponding XMS Server known
to vRealize.

Datastore Unmount

Unmounts the given datastore from the specified host or all hosts
associated with the datastore, if a host is not specified.


Host Add SSH Connection

Adds the connection information for a given ESXi Host into the
vRealize inventory.
When running the workflow Host Modify Settings you must select
the SSH Connection to use in order for the workflow to connect to
the ESXi Host and run properly.
Note: When supplying a value for the field “The host name of the
SSH Host”, ensure this is a fully qualified domain name or IP
address that allows the workflow to connect to the ESXi host via
TCP/IP.

Host Conformance

Checks either a single ESXi host or all ESXi hosts in a vCenter
instance, to ensure that all the XtremIO Volumes that are mapped
to ESXi hosts have for this vRealize instance all of the necessary
XMS Servers connected to it.
Once the report is run, it provides a list of Volumes that are
conformant (have an associated XMS Server instance present for
those Volumes) and those Volumes that are not conformant (do
not have an associated XMS Server present for those Volumes).
For those Volumes that are found to be not conformant the XMS
Add workflow should be run so that XtremIO vRealize workflows
can work properly with these Volume(s).

Host Delete Storage

Allows you to delete a list of selected Volumes mounted to a host.
The Volumes can be specified by individual name, Consistency
Group or Snapshot Set.
You can supply a value for all three fields or just a subset of them.
As long as one Volume name, Consistency Group name or
Snapshot Set name is supplied, the workflow can be run.
The selected Volumes are unmapped and deleted, and the corresponding Consistency Group is deleted (if all of its Volumes have been removed).

Host Expose Storage

Exposes new or existing XtremIO Volumes to a standalone host or
VMware ESXi host.
Select either a list of Volumes or a Consistency Group
representing a list of Volumes, to be exposed to the host.
If new Volumes are being created you also have the option of
creating a new Consistency Group to put the Volumes into.

Host List

Lists the hosts for a given vCenter.

Host Modify Settings

Modifies the settings for a given ESXi server.
This workflow requires that an SSH Host Instance be setup for the
ESXi host to be checked, prior to running this workflow.
In order to setup this SSH Host Instance run the workflow Host
Add SSH Connection.

Host Remove SSH Connection

Removes an SSH Host configuration entry from the vRealize
inventory.

Host Rescan

Rescans one or all hosts of a given vCenter.

Host Show

This workflow returns the following:
• WWNs for the given vCenter Host.
• Available disks for the given vCenter Host that can be used to create new datastores. The output variable availableDisks can be used as input to the Datastore Create workflow input parameter diskName.
• In-use disks for the given vCenter Host.


vCenter Add

Adds a vCenter instance to the vRealize inventory.

vCenter Cluster Delete Storage

Deletes all unused XtremIO storage Volumes for each ESXi host in
a vCenter Cluster.
This workflow exits with an error when an XMS Server cannot be
located for any of the unused XtremIO storage Volumes that are to
be removed.

vCenter Cluster Expose
Storage

Exposes new or existing XtremIO Volumes to the ESXi hosts in a
VMware Cluster.

vCenter List

Lists the vCenter instances known to the vRealize inventory.

vCenter List Clusters

Returns a list of vCenter Cluster objects for the supplied vCenter.

vCenter Remove

Removes a vCenter instance from the vRealize inventory.

vCenter Show

Returns a list of vCenter Hosts for a given Cluster object.
For each host a list of available disks is also returned.

VM Add VMDK

Provisions to a VM, either a:
• New VMDK
• New RDM
• List of existing, unused, VMDKs
If the storage type selected is VMDK then a datastore must be
selected for the VMDK.
If the storage type selected is RDM then a datastore does not
need to be selected in order to create the RDM.

VM Delete

Deletes a VM and release resources, such as the primary storage
allocated to the VM.
Note: This workflow should be used to remove the primary
storage, “Hard disk 1” associated with a VM.
Primary storage cannot be removed until the VM is shutdown and
deleted. Secondary VMDK and RDM storage are preserved.
Use the VM Delete Storage workflow to delete all of the following:
• VM
• The datastore associated with the VM
• The underlying XtremIO storage making up that VM

VM List

Lists the VMs for a given vCenter or for a given vCenter host.

VM Remove VMDK

Removes and deletes an RDM or VMDK from a VM.
Note: This workflow cannot be used to remove the primary
storage, “Hard disk 1” (the VM’s primary storage drive), as this
requires the VM to be shut down and deleted prior to reclaiming
the storage for “Hard disk 1”.

VM Show

Lists the VMDKs, RDMs and RDM Physical LUN names for each
RDM for a given VM.


XtremIO Volume Management Folder

Workflow Name

Description

Volume Copy

Allows you to create one or more copies supplying a list of source
Volumes, or a list of Volume Tags.

Volume Create

Allows you to create one or more Volumes and optionally supply a Volume Tag to the created Volumes. For each Volume created, if at the cluster level the parameter show-clusters-thresholds shows that vaai-tp-alerts is set to a value of between 1-100, then the add-volume vaai-tp-alerts parameter is set to enabled.

Volume Delete

Deletes the list of selected Volumes or Volumes associated with a
particular Volume Tag.

Volume Expand

Expands a Volume to the new size supplied.

Volume List

Lists Volumes by name, size and Tag for a given cluster.

Volume Map

Allows you to map Volumes by Name or by Tag for the supplied
Initiator Group.

Volume Modify

Allows particular Volume parameters to be changed. The current Volume parameter that can be changed is vaai-tp-alerts (VAAI Thin Provisioning alert).
The selected Volumes can either be set to participate, or be disabled from participation, in thin provisioning threshold alerts.
At the cluster level, the thin provisioning alert level must be set to a value of between 1 – 100 in order for a Volume to generate an alert. The syntax is as follows:
vaai-tp-alerts=[enabled / disabled]

Volume Protection

Allows you to create one or more Protections supplying a list of
source Volumes, or a list of Volume Tags.

Volume Refresh

Refreshes a Volume from the selected copy.

Volume Rename

Renames the selected Volume.

Volume Restore

Restores a Volume from the selected Protection.

Volume Show

Returns Volume parameters as a formatted string for a given Volume, set of specified Volumes or all Volumes (if none are specified).
The values for naa-name, logical-space-in-use and vaai-tp-alerts are returned for the specified Volumes.
• The naa-name parameter represents the NAA Identifier that is assigned to a Volume only after it has been mapped to an Initiator Group.
• The logical-space-in-use parameter is the Volume logical space in use (VSG).
• The vaai-tp-alerts parameter controls whether a Volume participates in thin provisioning alerts. The value is either enabled or disabled.
At the cluster level, the thin provisioning alert level must be set to a value between 1 – 100 in order for a Volume to generate an alert, regardless of whether an individual Volume has vaai-tp-alerts enabled.

Volume Unmap

Unmaps Volumes in one of the following ways:
• Unmaps a user supplied list of Volumes.
• Unmaps all Volumes associated with a particular Initiator
Group.
• Unmaps a selected list of Volumes associated with a particular
Initiator Group.
• Unmaps all Volume associated with a particular Tag.


XtremIO XMS Management Folder

Workflow Name

Description

XMS Add

Adds an XMS server to the vRealize Inventory.

XMS List

Lists the known XMS servers in the vRealize Inventory.

XMS Remove

Removes an XMS server from the vRealize Inventory.

XMS Show

Returns XMS attributes as a formatted string for a given XMS server.
The xms-ip, server-name and cluster-names attributes are always returned for a given XMS Server.
The sw-version is also returned by default, but can be removed from the list of returned attributes if that information is not required.

 


VSI 7.1 Is Here, vSphere 6.5 supported!


Hi,

We have just released the 7.1 version of our vSphere vCenter plugin. If you are new to the VSI plugin, I highly suggest you start with these posts:

https://itzikr.wordpress.com/2015/09/21/vsi-6-6-3-is-here-grab-it-while-its-hot/

https://itzikr.wordpress.com/2016/03/31/vsi-6-8-is-here-heres-whats-new/

https://itzikr.wordpress.com/2016/10/04/vsi-7-0-is-here-vplex-is-included/

VSI enables VMware administrators to provision and manage the following EMC storage systems for VMware ESX/ESXi hosts:

  • EMC Unity™
  • EMC UnityVSA™
  • EMC ViPR® software-defined storage
  • EMC VMAX All Flash
  • EMC VMAX3™
  • EMC eNAS storage
  • EMC VNX® series storage
  • EMC VNXe1600™ and VNXe3200™ storage
  • EMC VPLEX® systems
  • EMC XtremIO® storage

Tasks that administrators can perform with VSI include storage provisioning, storage mapping, viewing information such as capacity utilization, and managing data protection systems. This release also supports EMC AppSync®, EMC RecoverPoint®, and EMC PowerPath®/VE.

New features and Changes

This release of VSI includes support for the following:

  • VMware vSphere Web Client version 6.5 tolerance
  • Multiple vCenter server IP addresses

dual-ip-vcenter

  • Restoring deleted virtual machines or datastores using AppSync software

restore-vm

restore-datastore

  • Space reclamation with no requirement to provide the ESXi host credential (vSphere Web Client 6.0 or later)

unmap-schedule-view

  • Viewing storage capacity metrics when provisioning VMFS datastores on EMC XtremIO storage systems

capacity-view

  • Enabling and disabling inline compression when creating VMFS datastores on EMC Unity storage systems version 4.1.0 or later
  • Space reclamation on Unity storage systems
  • Extending Unity VVol datastores
  • Viewing and deleting scheduled tasks, such as space reclamation, from the VSI plug-in
  • Enabling and disabling compression on NFS datastores on VMAX All Flash/VMAX3 eNAS devices
  • Viewing the compression property of a storage group when provisioning VMFS datastores on VMAX All Flash storage systems
  • Path management on Unity storage arrays using VMware NMP and EMC PowerPath/VE version 6.1 or later

You can download VSI 7.1 from here: https://download.emc.com/downloads/DL82021_VSI_for_VMware_vSphere_Web_Client_7.1.ova?source=OLS

 

 


An Important APD/PDL Change in the XtremIO Behavior


Hi

We have recently released a new target code for our arrays, known as XIOS 4.0.15.

This release contains many fixes, including a change in the APD/PDL behavior when using ESXi hosts. This is based on feedback from many customers who asked us to tweak the array’s APD/PDL behavior.

If you are new to this concept, I highly suggest you start reading about it here:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004684

And an older article from the ESXi 5 days here, https://blogs.vmware.com/vsphere/2011/08/all-path-down-apd-handling-in-50.html

Permanent Device Loss (PDL) is a condition where all paths to a device are marked as “Dead.” Because the storage adapter cannot communicate to the device, its state is “Lost Communication.” Similarly, All Paths Down (APD) is a condition where all paths to a device are also marked as “Dead.”
However, in this case, the storage adapter displays the state of the device as “Dead” or “Error.”
The purpose of differentiating PDL and APD in ESX 5.x and higher is to inform the operating system whether the paths are permanently or temporarily down. This affects whether or not ESX attempts to re-establish connectivity to the device.

A new feature in XtremIO 4.0.15 allows the storage administrator to configure the array to not send “PDL” as a response to a planned device removal. By default, XtremIO configures all ESX initiators with the PDL setting.
In planned device removals where the cluster has stopped its services and there are no responses received by the ESX host for I/O requests, ESX Datastores will respond with PDL or APD behavior depending on the XtremIO setting.

An XMCLI command is used to enable all ESX initiators as “APD” or revert back to “PDL”. A new option is available for the modify-clusters-parameters admin-level command, which modifies various cluster parameters:
device-connectivity-mode=<apd, pdl>
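As a rough sketch only (the prompt, any cluster selection argument and the confirmation output depend on your environment; the XtremIO Storage Array User Guide has the authoritative syntax), switching a cluster to APD behavior from XMCLI would look something like:

xmcli (admin)> modify-clusters-parameters device-connectivity-mode=apd

And reverting to the default behavior:

xmcli (admin)> modify-clusters-parameters device-connectivity-mode=pdl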

In any case, if you have a new or existing XtremIO cluster and you are using ESXi hosts (who doesn’t?), please set the initiator type to “ESX”.


Refer to the XtremIO Storage Array User Guide (https://support.emc.com/products/31111_XtremIO/Documentation/) or
help in XMCLI for more information on its usage.

The version that follows 4.0.15-20 will have APD turned on by default for newly installed arrays.



The First End-To-End Dell VDI Reference Architecture


What an awesome thing it is to have an end-to-end infrastructure Reference Architecture for your VDI deployment, not just the storage part; that part is easy, just add XtremIO.

This end-to-end VDI solution is currently available as a Reference Architecture. It showcases different scale points in terms of desktop/user densities, with XtremIO as the VM storage and Unity for the user shares. In the next phase, it will become a validated Nodes/Bundles solution: an orderable, pre-packaged offering that simplifies procurement for customers.

VMware Horizon RA Link: http://en.community.dell.com/techcenter/blueprints/blueprint_for_vdi/m/mediagallery/20442049/download

Citrix XenDesktop RA Link: http://en.community.dell.com/techcenter/extras/m/mediagallery/20443457/download


RecoverPoint 5.0 SP1 Is out, here’s what’s new for both the classic and the RP4VMs version, Part 2, RP4VMs


Hi, this is part 2 of the post series covering what's new in RP4VMs version 5.0 SP1. If you are looking for part 1, which covers the physical RecoverPoint, look here:

https://itzikr.wordpress.com/2017/02/22/recoverpoint-5-0-sp1-is-out-heres-whats-new-for-both-the-classic-and-the-rp4vms-version-part-1-physical-recoverpoint/

OK, so on to the new stuff in RP4VMs!

RE–IP – THE BASICS
  • RE-IP simplifications apply to systems running RPVM 5.0.1 and later
  • The main purpose is to simplify the operation; no need for glue scripts
  • Configured completely in the UI/Plugin
  • Co-exists with the previous glue-script-based re-IP
  • Supports IPv4 or IPv6
  • Users can change IPs, Masks, Gateways and DNS servers

RE-IP IMPLEMENTATION

  • Supports major VM OSes: MS Windows Server versions 2016, 2012 and 2008, Windows 8 and 10, Red Hat Linux server versions 6.x and 7.x, SUSE 12.x, Ubuntu 15.x and CentOS 7
  • The implementation differs between Windows-based and Linux-based VMs, but the outcome is the same
  • Configuration is done with the RP vCenter plugin
  • Requires VM Tools

RE-IP CONFIGURATION
  • To use the new re-IP method, which is the default setting, the Network Configuration Method is set to Automatic
  • No need to fill in the “Adapter ID” with the Automatic re-IP option
  • Supports multiple NICs per VM
  • Configuration is on a per-copy basis, which allows flexibility in multi-copy replication
  • During the recovery wizards (Test Copy, Fail-over, Recover Production), use any of the 4 network options
  • To edit the configuration, go to Protection > Consistency Groups, expand the relevant consistency group, select the relevant copy, and click Edit Copy Network Configuration (available after a VM has been protected)

RE-IP CONSIDERATIONS
  • Adding/removing NICs requires reconfiguration
  • If performing a temporary failover (and setting the new copy as production), ensure that the network configuration of the former production copy is configured, to avoid losing its network configuration

MAC REPLICATION

  • Starting with 5.0.1, MAC replication to remote copies is enabled by default
  • By default, MAC replication to local copies is disabled
  • During the Protect VM wizard, the user has the option to enable MAC replication for copies residing on the same vCenter (local copies, and remote copies when RPVM clusters share the same vCenter)
  • This can create a MAC conflict if the VM is protected back within the same vCenter/network
  • Available for different networks and/or vCenters hosting the local copy
  • When enabled, the production VM network configuration is also preserved (so there is no need to configure the re-IP)

RPVM 5.0.1 MISC. CHANGES
  • Enhanced power-up boot sequence: the VM load-up indication is obtained from the OS itself (using VM Tools); the “Critical” mechanism is still supported
  • Enhanced detection and configuration-conflict handling for duplicate UUIDs of ESXi nodes
  • In an all-5.0.1 system, RPVM uses VMware’s MoRef of the ESXi host and generates a new unique identifier to replace the BIOS UUID usage
  • A generic warning is displayed if an ESXi host has been added to an ESX cluster that is registered to RPVM, one of the entities (either a splitter or a cluster) is not 5.0 SP1, and the ESXi host has a duplicate BIOS UUID
  • RPVM 5.0.1 adds support for vSphere 6.5 (including VSAN)
  • A new message was added when attempting to protect a VM in a non-registered ESX cluster, right in the Protect VM wizard
  • Minimal changes in Deployer to enhance user experience and simplify the deployment flow, mainly around NIC topology selection

 

 

RP4VMs as part of the Dell EMC Enterprise Hybrid Cloud (EHC)


RecoverPoint 5.0 SP1 Is out, here’s what’s new for both the classic and the RP4VMs version, Part #1, Physical RecoverPoint


Hi,

This is part 1 of what’s new in the RecoverPoint 5.0 SP1 release, covering the classic (physical) version.

Partial Restore for XtremIO

  • An added option to the recover production flow for XtremIO
    • Available for DD since RP 5.0
  • Allows you to select which volumes to restore (previously, only a restore of all CG volumes was possible)
  • During partial restore:
    • Transfer is paused to all replica copies, until the action completes
    • Only selected volumes are restored
    • At production – selected volumes would be set to NoAccess, non-selected volumes remain Accessible
    • Like current recover production behavior:
      • At replica journal – images newer than the recovery image are lost
      • After production resumes all volumes undergo short init

    Limitations

  • A partial restore from an image that does not contain all of the volumes (after adding/removing RSets) will show the suitable volumes available for restore, but the restore might remove RSets that were added after that image was taken
    • Exactly the same as with a full restore
  • Both production and the selected replica clusters must be running at least 5.0.1 to enable the capability

    GUI changes

    XtremIO Snapshot Expiration

  • In previous versions, snapshot deletion and manual retention policy assignment were not possible via RecoverPoint for XtremIO
  • Snapshot deletion allows the user to manually delete a PiT which corresponds to an XtremIO snap
    • This was previously possible only with DataDomain
  • The retention policy allows the user to set up expiration for a specific PiT

Applies only to XtremIO and DD copies

Snapshot deletion – GUI

Snapshot deletion – CLI

  • Admin CLI command delete_snapshot
  • delete_snapshot –help

DESCRIPTION: Deletes the specified snapshot.

PARAMETER(S):

group=<group name>

copy=<target copy name>

NOTES ON USAGE:

This command is relevant only for DataDomain and XtremIO copies and can only be run interactively

  • delete_snapshot – interactive mode

Enter the consistency group name

Enter the name of the copy at which the snapshot resides

Select the snapshot you want to delete

  • The user can delete any user or system snapshot without setting retention
  • The oldest PiT cannot be deleted
  • Deleting a snapshot is only possible when there are XtremIO volumes at the remote copy
  • When there are connectivity issues to the XMS, the system keeps trying to delete the snapshot; it is deleted once proper connectivity is restored
  • RecoverPoint blocks the deletion of any PiT which is currently used for a recovery operation
  • In order to perform snapshot deletion, both clusters must be running at least RP 5.0.1
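
Putting the interactive flow above together, a hypothetical delete_snapshot session from the RecoverPoint admin CLI might look like the following (the group, copy, and snapshot values are made-up examples, and the exact prompt wording may differ slightly):

# Illustrative session only – run from the RecoverPoint admin CLI
delete_snapshot
Enter the consistency group name: CG_Oracle_Prod
Enter the name of the copy at which the snapshot resides: XtremIO_Remote_Copy
Select the snapshot you want to delete: 3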

    Snapshot Retention GUI option 1

    Snapshot retention – CLI

  • Admin CLI command set_single_snapshot_consolidation_policy
  • Can only be run interactively
  • set_single_snapshot_consolidation_policy – Interactive mode

    Enter the consistency group name

    Enter the name of the copy at which the snapshot resides

    Select the snapshot whose consolidation policy you want to set

    For the snapshot retention time, enter the number of units and the unit (<num>months | <num>weeks | <num>days | <num>hours | none) for the desired time.

    Enter consistency type: (Default is current value of ‘Crash consistent’)

    * None removes/cancels the retention of the snapshot

  • The retention of any user or system snapshot can be changed, provided it is on an XtremIO or DD array
  • Retention can be set in hours, days, weeks, and months.
    • The minimum is 1 hour
  • Changing the retention of a user bookmark applies to all the replica copies.
  • If a PiT has a retention longer than the protection window, it will last until its retention time is over and will not be deleted; in other words, it extends the protection window
  • If a PiT has a retention shorter than the protection window, it will last until its retention time is over and is then treated as a regular PiT, which can be removed as part of the XtremIO-specific snapshot consolidation policy
  • In order to configure retention, all participating clusters must be running at least RP 5.0.1
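
To make the retention flow concrete, here is an illustrative set_single_snapshot_consolidation_policy session (the group, copy, snapshot, and retention values are made-up examples, and the exact prompt wording may differ slightly):

# Illustrative session only – run from the RecoverPoint admin CLI
set_single_snapshot_consolidation_policy
Enter the consistency group name: CG_SQL_Prod
Enter the name of the copy at which the snapshot resides: XtremIO_DR_Copy
Select the snapshot whose consolidation policy you want to set: 5
Enter the snapshot retention time: 2weeks
Enter the consistency type: Crash consistent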

    Manual Integrity Check for Snap-Based Replication

  • In previous versions, integrity check was available only for non-SBR (Snap-Based Replication) links.
    • In RP 5.0.1, manual Integrity Check can be performed on SBR links, this currently includes:
      • XtremIO
      • DataDomain
  • This enhanced capability enables an integrity check while the transfer status is “snap-idle”
  • Manual Integrity check for SBR links – CLI
  • Integrity Check is only available via CLI
  • Interactive Mode

  • Non-Interactive Mode

  • RPO can be impacted during an integrity check and should return to normal in the next snap-shipping cycle; this is true of all integrity check operations
  • Integrity Check events remain unchanged

    This release also supports ScaleIO in a heterogeneous environment!


The First Dell EMC End To End VDI Reference Architecture


We have been working with heritage Dell teams for the past several months to bring to market end-to-end solutions built entirely on the Dell Technologies stack.

I am happy to announce that the very first solution is now available for you to leverage. It is an end-to-end VDI solution built with Dell Wyse thin/zero clients as the endpoints, Dell EMC PowerEdge servers, Dell EMC Networking switches, Dell EMC XtremIO and Dell EMC Unity for storage, and VMware Horizon as the VDI platform.

This end-to-end VDI solution is currently available as a Reference Architecture. It showcases different scale points in terms of desktop/user densities, with XtremIO as the VM storage and Unity for the user shares. In the next phase, it will become a validated Nodes/Bundles solution, an orderable, pre-packaged offering that simplifies procurement for customers.


 

VMware Horizon

http://en.community.dell.com/techcenter/extras/m/mediagallery/20443602/download

 


Citrix XenDesktop

http://en.community.dell.com/techcenter/extras/m/mediagallery/20443457/download

 

You gotta love the powerhouse!


Want Your vSphere 6.5 UNMAP Fixed? Download This Patch


 

Hi,

One of the biggest changes introduced in vSphere 6.5 was the ability to run automated UNMAP at the datastore level and inside the guest OS (for both Windows and Linux VMs that support the SCSI primitive).

Unfortunately, the GA release of vSphere 6.5 had a bug that prevented in-guest UNMAP from running properly; I wrote about it here:

https://itzikr.wordpress.com/2016/11/16/vsphere-6-5-unmap-improvements-with-dellemc-xtremio/

 

We’ve been working with VMware on resolving this issue (it’s not a storage-array-specific issue) so that everyone will benefit from the fix, and today (15/03/2017) the fix has been released!

 

https://my.vmware.com/group/vmware/patch#search

 

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2148980

 

“Tools in guest operating system might send unmap requests that are not aligned to the VMFS unmap granularity. Such requests are not passed to the storage array for space reclamation. In result, you might not be able to free space on the storage array.”
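
If you want to confirm the behavior in your own environment after applying the patch, a minimal check looks like the following (the datastore label and mount point are made-up examples; the guest virtual disk must be thin-provisioned and the VM must support the UNMAP primitive):

# On the ESXi 6.5 host: confirm automatic space reclamation is enabled on the VMFS6 datastore
esxcli storage vmfs reclaim config get -l XtremIO_DS01

# Inside a Linux guest: manually trigger TRIM/UNMAP on a mounted filesystem and report how much was reclaimed
fstrim -v /u01

On Windows guests, running Optimize-Volume with the ReTrim option from PowerShell accomplishes the same in-guest reclamation.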

 

So there you go, if you are looking for THE most compelling reason to upgrade to vSphere 6.5, this is it!

