
EMC ViPR Controller 2.4 Is Out, Now With XtremIO Support


Hi,

We have just released version 2.4 of the ViPR Controller. This release adds a lot of goodness for many scenarios, one of them being support for XtremIO XIOS 4.0/4.0.1.

For me, I think of ViPR as a manager of managers: it allows EMC and non-EMC customers to provision volumes, create snapshots, map volumes, create DR workflows and much more, all in a true self-service portal (cloud, anyone?) with true RBAC (role-based access control).

There’s a good high-level overview of why ViPR is such a critical component in today’s diverse data center, which you can view here:

and another good deep dive into ViPR (from an older version) which you can view here:

Now, let’s recap the changes in ViPR 2.4. Please note that I haven’t covered everything, as it’s a monster release; instead, I focused on the areas that involve XtremIO (either directly or via another product that integrates with XtremIO).


Block: enhancements have been made to support ScaleIO via the REST API starting with version 1.32, and to manage remote clusters with Vblock and XtremIO 4.0.
Object: Elastic Cloud Storage (ECS) Appliance support has been added through the Object Storage Services.


File: enhancements have been made to add the ingestion of file system subdirectories and shares, along with the discovery of Virtual Data Movers (vNAS) on the VNX, to intelligently place newly created file systems on the vNAS servers that provide better performance.


Data Protection: Cinder-discovered storage arrays are supported as VPLEX backends, and an administrator can increase the capacity of a RecoverPoint journal.


Product: enhancements have been made to empower the administrator to customize node names, to add different volumes to a consistency group, and to improve security through improved handling of passwords.

Some enhancements have been made to:
VCE Vblock: support is added for integrating multiple remote image servers; this improves network latency, which benefits operating system installation.
ScaleIO: while the supported functionalities remain the same, ViPR Controller is able to communicate with ScaleIO version 1.32 using the REST API
XtremIO: along with support for the software version 4.0, ViPR Controller manages multiple clusters through a single XtremIO Management Service (XMS)

The following enhancements have been made relating to data protection:
VPLEX: Cinder-discovered storage arrays are now usable as VPLEX backend storage, which enables ViPR Controller to orchestrate virtual volumes from non-EMC storage arrays behind VPLEX systems. Additional enhancements are the ingestion of backend volumes and the management of migration services using VPLEX.
RecoverPoint: this release enables the administrator to optionally increase the size of a journal volume, making certain the volumes continue to collect logs.


Security while accessing the ViPR Controller has been improved. By default, entering the password incorrectly ten consecutive times causes the system to lock that station out for 10 minutes. An administrator with REST API and/or CLI access can manage this feature. An administrator also has the capability to customize the ViPR Controller node names to meet data center specifications, which is a change from the usual “vipr1”, “vipr2” and “vipr3” naming convention.
The ViPR Controller Consistency Group has been enhanced to support VPLEX, XtremIO, and other volumes. This also includes the ability to add multiple volumes to a consistency group to ensure these volumes remain at a consistent level.
The way ViPR Controller treats existing zones has changed. When an order is placed, the infrastructure checks the fabric manager to determine whether a zone with the appropriate WWNs already exists. If it does, that zone is leveraged to process the order. If a zone does not already exist, a new zone is created to process the order. This feature makes certain that ViPR Controller creates zones only when necessary. This enhancement is available for all installations of ViPR Controller 2.4, but it must be enabled on upgrades.


Starting with ViPR Controller 2.4, support for XtremIO 4.0 is added, along with the management of multiple clusters through a single XtremIO Management Service (XMS). The Storage Provider page discovers the XMS along with its clusters. A user with administrative privileges is required for ViPR Controller to integrate with and manage these clusters. Additionally, XtremIO-based volumes can now be part of a Consistency Group in ViPR Controller, an operation that was unavailable to XtremIO volumes before this release.

After upgrading to ViPR Controller 2.4, ViPR Controller will create a storage provider entry for each XtremIO system that was previously registered.


ViPR Controller also adds support for XtremIO 4.0 snapshots. The specific supported operations are:
Read-Only: XtremIO snapshots are regular volumes and are created as writable snapshots. In order to satisfy the need for local backup and immutable copies, there is an option to create a read-only snapshot. A read-only snapshot can be mapped to an external host such as a backup application, but it is not possible to write to it.
Restore: Using a single command, it is possible to restore a production volume or a CG from one of its descendant Snapshot Sets.
Refresh: The refresh command is a powerful tool for test and development environments and for the offline processing use case. With a single command, a snapshot of the production volume or CG is taken. This allows the test and development application to work on current data without the need to copy data or to rescan.



VPLEX: this section covers the use of Cinder-discovered storage arrays as VPLEX backend storage, the ingestion of backend volumes and the management of VPLEX data migration speed.
RecoverPoint: this section covers the enhanced capability of adding more capacity to a journal.
Let us first take a look at VPLEX.

ViPR Controller 2.0 started supporting a broader set of third-party block storage arrays by leveraging OpenStack’s Cinder interface and existing drivers. ViPR Controller 2.2 added multipathing support for FC. With ViPR Controller 2.4, Cinder-discovered storage arrays can be used as VPLEX backends, as long as the Fibre Channel ports from both the VPLEX Local and the third-party storage array are connected to the same fabric. Most importantly, the third-party storage array must also be a supported VPLEX backend.
Check the ViPR Controller Support Matrix for the list of supported fabric managers, OpenStack operating systems and VPLEX-supported backends.


Listed are some of the steps necessary to provision a virtual volume using the ViPR Controller. Add FC Storage Port (step 3) is included here due to a Cinder limitation: when ViPR Controller discovers storage arrays behind Cinder, Cinder only provides ViPR Controller with one link to communicate with each storage array. It is recommended to add additional ports for the storage array to ensure that there are at least two storage ports connected to each VPLEX director. This step only needs to be performed the first time a Cinder-discovered storage array is used; thereafter, it can be skipped. Some of the steps are covered here due to their technical differences. Let us take a look at how storage ports are added and virtual pools are created.


First, the OpenStack server must be added as a Storage Provider. The process to do this is the same as before. This image shows three storage providers: two VPLEX systems and the OpenStack host identified as a Third-Party Block Storage Provider. When the OpenStack host is added to ViPR Controller for southbound integration, any storage arrays that are configured inside the Cinder configuration are automatically identified.
Also shown here are five configured storage arrays. Due to the Cinder limitation, only one storage port is identified per storage array; however, there are ways within ViPR Controller to add more storage ports.


Beginning with ViPR Controller 2.4, a Migration Services option, which leverages VPLEX, is introduced in the Service Catalog. The two tasks that can be performed within Migration Services are data migration and migration from VPLEX Local to VPLEX Metro.
In order to leverage data migration, all volumes must already have been created through the VPLEX cluster and the ViPR Controller. However, if a volume was created through the VPLEX cluster but not through ViPR Controller, it must first be ingested into ViPR Controller for management before it can be migrated.
With migration from VPLEX Local to VPLEX Metro, the virtual volume is simply moved from being a local volume to a distributed volume thus improving its availability across two VPLEX clusters instead of one.


In the VPLEX Data Migration page, the options are project, virtual pool, operation, target virtual pool and volume. Two options play a key role in the data migration task: Operation and Target Virtual Pool. Operation specifies the type of data migration while Target
Virtual Pool specifies the destination of the volume being migrated.


The speed of the migration can be configured using the Controller Configurations within ViPR Controller > VPLEX > Data Migration Speed. The possible values are: lowest, low, medium, high and highest. The corresponding transfer size can be 128 KB, 2 MB, 8 MB, 16 MB or 32 MB.
Note: If the migration value is changed during a migration operation, the newly-changed value will take effect on future migration operations. The current operation is not impacted.


Prior to ViPR Controller 2.4, VPLEX volume ingestion was only performed for the virtual volume, not for the other components, including the storage from the backend storage arrays. With ViPR Controller 2.4, this framework is improved by adding ingestion of backend volumes, clones/full copies and mirrors/continuous copies of unmanaged VPLEX volumes. With this improvement, the volumes become ViPR Controller-managed volumes along with their associated snapshots and copies.
Note: For the most up-to-date information on supported VPLEX backend arrays inside of the ViPR Controller, please refer to the ViPR Controller Support Matrix in EMC Support Zone.


Now let us take a look at the RecoverPoint related enhancements in this release.


Prior to ViPR Controller 2.4, a RecoverPoint journal created within ViPR Controller was a single volume, with no way to increase the journal size from within ViPR Controller. 80% of the journal volume is used to keep track of changes, so in a busy environment the journal could fill quickly.
ViPR Controller 2.4 now provides the ViPR Controller administrator the option to increase the journal capacity. Using the Add Journal Capacity option within the Block Protection
Services category, an administrator can increase the volume by selecting the appropriate project, consistency group, copy name (the RecoverPoint volume name), virtual array and virtual pool. The new capacity can either depend on pre-defined calculations detailed in the RecoverPoint Administration Guide or defined by the data center administrator.


ViPR Controller 2.4 enhances the way a Consistency Group operates. For VPLEX systems, to ensure that all virtual volumes get to and remain at a consistent level, volumes from different backend storage arrays can be part of the same consistency group. For RecoverPoint, ViPR Controller is able to process multiple provisioning requests against the same Consistency Group at the same time. For XtremIO, with the support of version 4.0, XtremIO volumes can be added to or deleted from a Consistency Group, and snapshots can be taken of or deleted from a Consistency Group.


As part of this release, enhancements have been made to the plug-ins that ViPR Controller works with. This table shows the enhancements related to the vRO workflow while there were no changes to Microsoft’s SCVMM and VMware’s vROps/vCOps. Let us look into how the workflow has been enhanced.


Prior to ViPR Controller 2.4, when the vRO administrator wanted to add the ViPR Controller configuration, the vRO configurator was used. While this was convenient, it also meant that every time something was updated in ViPR Controller, the service needed to be restarted, which impacted the availability of the plug-in during the restart.
With ViPR Controller 2.4, the ViPR Controller configuration is moved from the vRO configurator to a vRO workflow, and the configuration of tenants and projects is moved to the workflow as well. Because the configuration now lives in the workflow, there is no need to restart the service. Both of these enhancements make the EMC ViPR Plug-in for vRO more efficient to use and minimize the need to restart the service.

For vRO users who have upgraded to ViPR Controller 2.4, a message appears when the user accesses the VMware vRealize Orchestrator Configuration, indicating that the ViPR Controller configuration has been moved to the vRO workflow. Shown here is the vRealize Orchestrator management interface with the configuration folder selected to show the vRO workflow. Before vRO can be used, the administrator must decide either to proceed with the existing ViPR Controller configuration details or to update the ViPR Controller configuration.
– By selecting to proceed with the existing ViPR Controller configuration, the plug-in continues to work; however, tenants, projects and virtual arrays will need to be added manually.
– By choosing to update the ViPR Controller configuration, the user is presented with a series of screens to input the ViPR Controller details before being able to use vRO, as seen in the next few slides.



Host Configuration for VMware® vSphere On EMC XtremIO


Hi,

There have been a lot of questions lately around best practices when using XtremIO with vSphere, so attached below is an extract of the vSphere section of our user guide. Why am I posting it online, then? Because user guides can be somewhat difficult to find if you don’t know where to look, and Google is your best friend.

Please note that this section assumes the vSphere cluster is exclusively connected to the XtremIO array. If you are using a mixed cluster environment, some of these parameters will be different; a later post will follow up on that scenario.


Note: XtremIO Storage Array supports both ESX and ESXi. For simplification, all references to ESX server/host apply to both ESX and ESXi, unless stated otherwise.


Note: In hosts running a hypervisor, such as VMware ESX or Microsoft Hyper-V, it is important to ensure that the logical unit numbers of XtremIO volumes are consistent across all hosts in the hypervisor cluster. Inconsistent LUNs may affect operations such as VM online migration or VM power-up.
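One hedged way to spot-check LUN ID consistency is to list the paths of a given XtremIO device on each ESX host in the cluster and compare the LUN field (the naa identifier below is illustrative; substitute the device in question):

# esxcli storage core path list -d naa.514f0c5e3ca0000e | grep -E "Device:|LUN:"

The LUN value reported for the same naa device should be identical on every host in the cluster.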


Note: When using Jumbo Frames with VMware ESX, the correct MTU size must be set on the virtual switch as well.


Fibre Channel HBA Configuration

When using Fibre Channel with XtremIO, the following FC Host Bus Adapters (HBA) issues should be addressed for optimal performance.

Pre-Requisites

To install one or more EMC-approved HBAs on an ESX host, follow the procedures in one of these documents, according to the FC HBA type:

For Qlogic and Emulex HBAs – Typically the driver for these HBAs is preloaded with ESX. Therefore, no further action is required. For details, refer to the vSphere and HBA documentation.

For Cisco UCS fNIC HBAs (vSphere 5.x and above) – Refer to the Virtual Interface Card Drivers section in the Cisco UCS Manager Install and Upgrade Guides for complete driver installation instructions

(http://www.cisco.com/en/US/partner/products/ps10281/prod_installation_guides_list.html).

Queue Depth and Execution Throttle


Note: Changing the HBA queue depth is designed for advanced users. Increasing queue depth may cause hosts to over-stress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this, in mixed environments with multiple array types connected to the ESX host, compare the XtremIO recommendations with those of other platforms before applying them.


This section describes the required steps for adjusting I/O throttle and queue depth settings for Qlogic, Emulex, and Cisco UCS fNIC. Follow one of these procedures according to the vSphere version used.

The queue depth setting controls the number of outstanding I/O requests per single path. On vSphere, the HBA queue depth can be adjusted through the ESX CLI.

Execution throttle settings control the amount of outstanding I/O requests per HBA port.

The HBA execution throttle should be set to the maximum value. This can be done on the HBA firmware level using the HBA BIOS or CLI utility provided by the HBA vendor:

Qlogic – Execution Throttle – This setting is no longer read by vSphere and is therefore not relevant when configuring a vSphere host with Qlogic HBAs.

Emulex – lpfc_hba_queue_depth – No need to change the default (and maximum) value (8192).

For Cisco UCS fNIC, the I/O Throttle setting determines the total number of outstanding I/O requests per virtual HBA.

For optimal operation with XtremIO storage, it is recommended to adjust the queue depth of the FC HBA. With Cisco UCS fNIC, it is also recommended to adjust the I/O Throttle setting to 1024.


Note: For further information on adjusting HBA queue depth with ESX, refer to VMware KB article 1267 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267).


Note: If the execution throttle at the HBA level is set to a value lower than the queue depth, it may limit the queue depth to a value lower than the one set.


Note: The setting adjustments in this section for the Cisco UCS fNIC HBA apply to VMware vSphere only. Since these settings are global to the UCS chassis, they may impact other blades in the UCS chassis running a different OS (e.g. Windows).


To adjust HBA I/O throttle of the Cisco UCS fNIC HBA:

  1. In the UCSM navigation tree, click the Servers tab.
  2. In the navigation tree, expand the Policies and Adapter Policies.
  3. Click the FC Adapter Policy Linux or FC Adapter Policy VMWare.
  4. In the main window, expand the Options drop-down.
  5. Configure the I/O Throttle Count field to 1024.
  6. Click Save Changes.


Note: For more details on Cisco UCS fNIC FC adapter configuration, refer to:

https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-computing/guide-c07-730811.pdf


Fibre Channel HBA Configuration

To adjust the HBA queue depth on a host running vSphere 5.x or above:

  • Open an SSH session to the host as root.
  • Run one of the following commands to verify which HBA module is currently loaded:
Qlogic: esxcli system module list | egrep "ql|Loaded"
Emulex: esxcli system module list | egrep "lpfc|Loaded"
Cisco UCS fNIC: esxcli system module list | egrep "fnic|Loaded"

Example (for a host with Emulex HBA):

# esxcli system module list | egrep "lpfc|Loaded"
Name     Is Loaded  Is Enabled
lpfc     true       true
lpfc820  false      true

In this example the native lpfc module for the Emulex HBA is currently loaded on ESX.

  • Run one of the following commands on the currently loaded HBA module, to adjust the HBA queue depth:


    Note: The commands displayed in the table refer to the Qlogic qla2xxx/qlnativefc, Emulex lpfc and Cisco UCS fNIC modules. Use an appropriate module name based on the output of the previous step.


Qlogic (vSphere 5.x):

esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx

Qlogic (vSphere 6.x):

esxcli system module parameters set -p qlfxmaxqdepth=256 -m qlnativefc

Emulex:

esxcli system module parameters set -p lpfc0_lun_queue_depth=128 -m lpfc

Cisco UCS fNIC:

esxcli system module parameters set -p fnic_max_qdepth=128 -m fnic


Note: The command for Emulex HBA adjusts the HBA queue depth for the lpfc0 Emulex HBA. If another Emulex HBA is connected to the XtremIO storage, change lpfc0_lun_queue_depth accordingly. For example, if lpfc1 Emulex HBA is connected to XtremIO, replace lpfc0_lun_queue_depth with lpfc1_lun_queue_depth.


Note: If all Emulex HBAs on the host are connected to the XtremIO storage, replace lpfc0_lun_queue_depth with lpfc_lun_queue_depth.


  • Reboot the ESX host.
  • Open an SSH session to the host as root.
  • Run the following command to confirm that queue depth adjustment is applied:

    esxcli system module parameters list -m <driver>


    Note: When using the command, replace <driver> with the module name, as received in the output of step 2 (for example, lpfc, qla2xxx and qlnativefc).


    Examples:

    • For a vSphere 5.x host with Qlogic HBA and queue depth set to 256:

# esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth

ql2xmaxqdepth int 256 Max queue depth to report for target devices.

  • For a vSphere 6.x host with Qlogic HBA and queue depth set to 256:

# esxcli system module parameters list -m qlnativefc | grep qlfxmaxqdepth

qlfxmaxqdepth int 256 Maximum queue depth to report for target devices.

  • For a host with Emulex HBA and queue depth set to 128:

# esxcli system module parameters list -m lpfc | grep lpfc0_lun_queue_depth

lpfc0_lun_queue_depth int 128 Max number of FCP commands we can queue to a specific LUN

If queue depth is adjusted for all Emulex HBAs on the host, run the following command instead:

# esxcli system module parameters list -m lpfc | grep lun_queue_depth

Host Parameters Settings

This section details the ESX host parameters settings necessary for optimal configuration when using XtremIO storage.


Note: The following setting adjustments may cause hosts to over-stress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this, in mixed environments with multiple array types connected to the ESX host, compare these XtremIO recommendations with those of other platforms before applying them.


When using XtremIO storage with VMware vSphere, it is recommended to set the following parameters to their maximum values:

Disk.SchedNumReqOutstanding – Determines the maximum number of active storage commands (I/Os) allowed at any given time at the VMkernel. The maximum value is 256.


Note: When using vSphere 5.5 or above, the Disk.SchedNumReqOutstanding parameter can be set on a specific volume rather than on all volumes presented to the host. Therefore, it should be set only after XtremIO volumes are presented to the ESX host using ESX command line.


Disk.SchedQuantum – Determines the maximum number of consecutive “sequential” I/Os allowed from one VM before switching to another VM (unless this is the only VM on the LUN). The maximum value is 64.

In addition, the following parameter setting is required:

Disk.DiskMaxIOSize – Determines the maximum I/O request size passed to storage devices. With XtremIO, it is required to change it from 32767 (the default setting of 32 MB) to 4096 (4 MB). This adjustment allows a Windows VM to EFI boot from XtremIO storage with a supported I/O size of 4 MB.


Note: For details on adjusting the maximum I/O block size in ESX, refer to VMware KB article 1003469 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003469).
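Before and after applying these changes, the current values can be verified from the ESX command line (a quick sanity check):

# esxcfg-advcfg -g /Disk/SchedQuantum
# esxcfg-advcfg -g /Disk/DiskMaxIOSize

On vSphere 5.0/5.1 the global outstanding-request value can be checked the same way with /Disk/SchedNumReqOutstanding; on vSphere 5.5 and above the value is per device, and it appears in the "No of outstanding IOs with competing worlds" field of esxcli storage core device list.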


These setting adjustments should be carried out on each ESX host connected to the XtremIO cluster, using either the vSphere Client or the ESX command line.

To adjust ESX host parameters for XtremIO storage, follow one of these procedures:

Using the vSphere WebUI client:

  1. Launch the vSphere Web client and navigate to Home > Hosts and Clusters.
  2. In the left menu section, locate the ESX host and click it.
  3. In the right pane, click Manage > Settings.
  4. From the System section, click Advanced System Settings.
  5. Locate the Disk.SchedNumReqOutstanding parameter. Click the Edit icon and set the parameter to its maximum value (256).


Note: Do not apply step 5 in a vSphere 5.5 (or above) host, where the parameter is set on a specific volume using ESX command line.


  6. Locate the Disk.SchedQuantum parameter. Click the Edit icon and set it to its maximum value (64).
  7. Locate the Disk.DiskMaxIOSize parameter. Click the Edit icon and set it to 4096.
  8. Click OK to apply the changes.

Using the ESX host command line (for vSphere 5.0 and 5.1):

  • Open an SSH session to the host as root.
  • Run the following commands to set the SchedQuantum, SchedNumReqOutstanding, and DiskMaxIOSize parameters, respectively:

    • esxcfg-advcfg -s 64 /Disk/SchedQuantum
    • esxcfg-advcfg -s 256 /Disk/SchedNumReqOutstanding
    • esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize

Using the ESX host command line (for vSphere 5.5 or above):

  • Open an SSH session to the host as root.
  • Run the following commands to set the SchedQuantum and DiskMaxIOSize parameters, respectively:
    • esxcfg-advcfg -s 64 /Disk/SchedQuantum
    • esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize
  • Run the following command to obtain the NAA for XtremIO LUNs presented to the ESX host and locate the NAA of the XtremIO volume:
    • esxcli storage nmp path list | grep XtremIO -B1
  • Run the following command to set SchedNumReqOutstanding for the device to its maximum value (256):
    • esxcli storage core device set -d naa.xxx -O 256

vCenter Server Parameter Settings

The maximum number of concurrent full cloning operations should be adjusted based on the XtremIO cluster size. The vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost determines the maximum number of concurrent full clone operations allowed (the default value is 8). Adjust the parameter based on the XtremIO cluster size as follows:

10TB Starter X-Brick (5TB) and a single X-Brick – 8 concurrent full clone operations

Two X-Bricks – 16 concurrent full clone operations

Four X-Bricks – 32 concurrent full clone operations

Six X-Bricks – 48 concurrent full clone operations

To adjust the maximum number of concurrent full cloning operations:

  1. Launch vSphere WebUI client to log in to the vCenter Server.
  2. From the top menu, select vCenter Inventory List.
  3. From the left menu, under Resources, Click vCenter Servers.
  4. Select vCenter > Manage Tab > Settings > Advanced Settings.
  5. Click Edit.
  6. Locate the config.vpxd.ResourceManager.maxCostPerHost parameter and set it according to the XtremIO cluster size. If you cannot find the parameter, type its name in the Key field and the corresponding value in the Value field.
  7. Click Add.
  8. Click OK to apply the changes.

vStorage API for Array Integration (VAAI) Settings

VAAI is a vSphere API that offloads vSphere operations such as virtual machine provisioning, storage cloning and space reclamation to storage arrays that support VAAI. XtremIO Storage Array fully supports VAAI.

To ensure optimal performance of XtremIO storage from vSphere, VAAI must be enabled on the ESX host before using XtremIO storage from vSphere. Failing to do so may expose the XtremIO cluster to the risk of datastores becoming inaccessible to the host.

This section describes the necessary settings for configuring VAAI for XtremIO storage.

Enabling VAAI Features

Confirming that VAAI is Enabled on the ESX Host

When using vSphere version 5.x and above, VAAI is enabled by default. Before using the XtremIO storage, confirm that VAAI features are enabled on the ESX host.

To confirm that VAAI is enabled on the ESX host:

  • Launch the vSphere Web Client and navigate to Home > Hosts and Clusters.
  • In the left menu section, locate the ESX host and click it.
  • In the right pane, click Manage > Settings.
  • From the System section, click Advanced System Settings.
  • Verify that the following parameters are enabled (i.e. all are set to “1”):
    • DataMover.HardwareAcceleratedMove
    • DataMover.HardwareAcceleratedInit
    • VMFS3.HardwareAcceleratedLocking


    If any of the above parameters are not enabled, adjust them by clicking the Edit icon and click OK.
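The same verification can be done from the ESX command line; each of the three options should report a configured integer value of 1 (a minimal check, using the same esxcli advanced-settings interface shown elsewhere in this post):

# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking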

Manually Setting VAAI on Datastore

If VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO Volumes. The setting must be manually configured to avoid data unavailability to the datastore.


Perform the following procedure on all datastores created on XtremIO storage before VAAI is enabled on the ESX host.

To manually set VAAI setting on a VMFS-5 datastore created on XtremIO storage with VAAI disabled on the host:

  1. Confirm that VAAI Hardware Accelerated Locking is enabled on this host. For details, refer to “Confirming that VAAI is Enabled on the ESX Host” above.
  2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”

    # vmkfstools -Ph -v1 <path to datastore> | grep public

    In the following example, a datastore volume is configured as “public”:

    # vmkfstools -Ph -v1 /vmfs/volumes/datastore1 | grep public
    Mode: public

    In the following example, a datastore volume is configured as “public ATS-only”:

    # vmkfstools -Ph -v1 /vmfs/volumes/datastore2 | grep public
    Mode: public ATS-only

  3. If the datastore was found with mode “public”, change it to “public ATS-only” by executing the following steps:
    1. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
    2. Access the ESX host on which the datastore is still mounted.
    3. To enable ATS on the datastore, run the following vmkfstools command:

      # vmkfstools --configATSOnly 1 <path to datastore>

    4. Enter 0 to continue with ATS capability.
    5. Repeat step 2 to confirm that ATS is set on the datastore.
    6. Unmount the datastore from the last ESX host.
    7. Mount the datastore on all ESX hosts.

Tuning VAAI XCOPY with XtremIO

By default, vSphere instructs the storage array to copy data in 4MB chunks. To optimize VAAI XCOPY operation with XtremIO, it is recommended to adjust the chunk size to 256KB. The VAAI XCOPY chunk size is set using the MaxHWTransferSize parameter.

To adjust the VAAI XCOPY chunk size, run the following CLI commands according to the vSphere version running on your ESX host:

For vSphere versions earlier than 5.5:

esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

esxcli system settings advanced set --int-value 0256 --option /DataMover/MaxHWTransferSize

For vSphere version 5.5 and above:

esxcfg-advcfg -s 0256 /DataMover/MaxHWTransferSize

Disabling VAAI in ESX

In some cases (mainly for testing purposes) it is necessary to temporarily disable VAAI.

As a rule, VAAI should be enabled on an ESX host connected to XtremIO. Therefore, avoid disabling VAAI, or disable it only temporarily if required.


Note: For further information about disabling VAAI, refer to VMware KB article 1033665 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665).


As noted in the Impact/Risk section of VMware KB 1033665, disabling the ATS (Atomic Test and Set) parameter can cause data unavailability in ESXi 5.5 for volumes created natively as VMFS5 datastore.


To disable VAAI on the ESX host:

  • Browse to the host in the vSphere Web Client navigator.
  • Select the Manage tab and click Settings.
  • In the System section, click Advanced System Settings.
  • Click Edit and set the following parameters to disabled (i.e. set each to 0):
    • DataMover.HardwareAcceleratedMove
    • DataMover.HardwareAcceleratedInit
    • VMFS3.HardwareAcceleratedLocking
  • Click OK to apply the changes.

Multipathing Software Configuration


Note: You can use EMC Virtual Storage Integrator (VSI) Path Management to configure path management across EMC platforms, including XtremIO. For information on using this vSphere Client plug-in, refer to the EMC VSI Path Management Product Guide.


Configuring vSphere Native Multipathing

XtremIO supports the VMware vSphere Native Multipathing (NMP) technology. This section describes the procedure required for configuring native vSphere multipathing for XtremIO volumes.

For best performance, it is recommended to do the following:

Set the native round robin path selection policy on XtremIO volumes presented to the ESX host.


Note: With NMP in vSphere versions below 5.5, clustering is not supported when the path policy is set to Round Robin. For details, see vSphere MSCS Setup Limitations in the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x. In vSphere 5.5, Round Robin PSP (PSP_RR) support is introduced. For details, see MSCS support enhancements in vSphere 5.5 (VMware KB 2052238).


Set the vSphere NMP Round Robin path switching frequency for XtremIO volumes from the default value (1000 I/O packets) to 1.

These settings ensure optimal distribution and availability of load between I/O paths to the XtremIO storage.


Note: Use the ESX command line to adjust the path switching frequency of vSphere NMP Round Robin.


To set vSphere NMP Round-Robin configuration, it is recommended to use the ESX command line for all the XtremIO volumes presented to the host. Alternatively, for an XtremIO volume that was already presented to the host, use one of the following methods:

Per volume, using vSphere Client (for each host where the volume is presented)

Per volume, using ESX command line (for each host where the volume is presented)

The following procedures detail each of these three methods.

To configure vSphere NMP Round Robin as the default pathing policy for all XtremIO volumes, using the ESX command line:


Note: Use this method when no XtremIO volume is presented to the host. XtremIO volumes already presented to the host are not affected by this procedure (unless they are unmapped from the host).


  1. Open an SSH session to the host as root.
  2. Run the following command to configure the default pathing policy for newly defined XtremIO volumes to Round Robin with path switching after each I/O packet:

    esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO

    This command also sets the vSphere NMP Round Robin path switching frequency for newly defined XtremIO volumes to one (1).


    Note: Using this method does not impact any non-XtremIO volume presented to the ESX host.
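Once an XtremIO volume is presented to the host, the applied policy can be verified per device; the Path Selection Policy should show VMW_PSP_RR and the device config line should include iops=1 (the naa identifier is illustrative):

# esxcli storage nmp device list -d naa.514f0c5e3ca0000e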


To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host, using vSphere WebUI Client:

  1. Browse to the host in the vSphere Web Client navigator.
  2. Select the Manage tab and click Storage.
  3. Click Storage Devices.
  4. Locate the XtremIO volume and select the Properties tab.
  5. Under Multipathing Properties, click Edit Multipathing.
  6. From the Path Selection policy drop-down list, select Round Robin (VMware) policy.
  7. Click OK to apply the changes.
  8. Click Edit Multipathing and verify that all listed paths to the XtremIO Volume are set to Active (I/O) status.


To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host, using ESX command line:

  1. Open an SSH session to the host as root.
  2. Run the following command to obtain the NAA of XtremIO LUNs presented to the ESX host:

    #esxcli storage nmp path list | grep XtremIO -B1

  3. Run the following command to modify the path selection policy on the XtremIO volume to Round Robin:

    esxcli storage nmp device set --device <naa_id> --psp VMW_PSP_RR

    Example:

# esxcli storage nmp device set --device naa.514f0c5e3ca0000e --psp VMW_PSP_RR


Note: When using this method, it is not possible to adjust the vSphere NMP Round Robin path switching frequency. Adjusting the frequency changes the NMP PSP policy for the volume from round robin to iops, which is not recommended with XtremIO. As an alternative, use the first method described in this section.


For details, refer to VMware KB article 1017760 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1017760).

Configuring PowerPath Multipathing


Note: For the most updated information on PowerPath support with XtremIO storage, refer to the XtremIO Simple Support Matrix.


XtremIO supports multipathing using EMC PowerPath/VE on VMware vSphere. PowerPath provides array-customized LAMs (native class support) for XtremIO volumes. PowerPath array-customized LAMs feature optimal failover and load balancing behaviors for the XtremIO volumes managed by PowerPath.

For details on the PowerPath/VE releases supported for your VMware vSphere host, refer to the XtremIO Simple Support Matrix.

For details on native class support with XtremIO for your host, refer to the EMC PowerPath/VE release notes document for the PowerPath/VE version you are installing.

For details on installing and configuring PowerPath/VE with XtremIO native class on your host, refer to the EMC PowerPath on VMware vSphere Installation and Administration Guide for the PowerPath/VE version you are installing. This guide provides the required information for placing XtremIO volumes under PowerPath/VE control.

Post-Configuration Steps – Using the XtremIO Storage

When host configuration is completed, you can use the XtremIO storage from the host. For details on creating, presenting and managing volumes that can be accessed from the host via either GUI or CLI, refer to the XtremIO Storage Array User Guide that matches the version running on your XtremIO cluster.

EMC Virtual Storage Integrator (VSI) Unified Storage Management version 6.2 and above can be used to provision Virtual Machine File System (VMFS) datastores and Raw Device Mapping volumes on XtremIO from within the vSphere Client. Furthermore, EMC VSI Storage Viewer version 6.2 (and above) extends the vSphere Client to facilitate the discovery and identification of XtremIO storage devices allocated to VMware ESX/ESXi hosts and virtual machines.

For further information on using these two vSphere Client plug-ins, refer to the VSI Unified Storage Management product guide and the VSI Storage Viewer product guide.

Disk Formatting

When creating volumes in XtremIO for a vSphere host, the following considerations should be made:

Disk logical block size – The only logical block (LB) size supported by vSphere for volumes presented to ESX is 512 bytes.

Note: In XtremIO version 4.0.0 (and above), the Legacy Windows option is not supported.


Disk alignment – Unaligned disk partitions may substantially impact I/O to the disk.

With vSphere, data stores and virtual disks are aligned by default as they are created. Therefore, no further action is required to align these in ESX.

With virtual machine disk partitions within the virtual disk, alignment is determined by the guest OS. For virtual machines that are not aligned, consider using tools such as UBERalign to realign the disk partitions as required.

Presenting XtremIO Volumes to the ESX Host

Note: This section applies only to XtremIO version 4.0 and above.

Note: When using the iSCSI software initiator with ESX and XtremIO storage, it is recommended to use only lower-case characters in the IQN to correctly present the XtremIO volumes to ESX. For more details, refer to VMware KB article 2017582 on the VMware website: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2017582
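To review the IQN that the software iSCSI initiator currently uses (and confirm it is all lower case), one hedged option is to list the host's iSCSI adapters; the software initiator's IQN appears in the UID column (output layout varies by ESXi version):

# esxcli iscsi adapter list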


When adding Initiator Groups and Initiators to allow ESX hosts to access XtremIO volumes, specify ESX as the operating system for newly-created Initiators, as shown in the figure below.


Note: Refer to the XtremIO Storage Array User Guide that matches the version running on your XtremIO cluster.


Following a cluster upgrade from XtremIO version 3.0.x to version 4.0 (or above), make sure to modify the operating system for each initiator that is connected to an ESX host.

Creating a File System

Note: File system configuration and management are out of the scope of this document.

It is recommended to create the file system using its default block size (using a non-default block size may lead to unexpected behavior). Refer to your operating system and file system documentation.

Using LUN 0 with XtremIO Storage

This section details the considerations and steps that should be performed when using LUN 0 with vSphere.

Notes on the use of LUN numbering:

In XtremIO version 4.0.0 (or above), volumes are numbered by default starting from LUN id 1 (and not 0 as was the case in previous XtremIO versions).

Although possible, it is not recommended to manually adjust the LUN id to 0, as it may lead to issues with some operating systems.

When a cluster is updated from XtremIO version 3.0.x to 4.0.x, an XtremIO volume with a LUN id 0 remains accessible following the upgrade.

With XtremIO version 4.0.0 (or above), no further action is required if volumes are numbered starting from LUN id 1.

By default, an XtremIO volume with LUN0 is inaccessible to the ESX host.


Note: Performing the described procedure does not impact access to XtremIO volumes with LUNs other than 0.


When native multipathing is used, do not use LUN 0, or restart the ESX host if the rescan fails to find LUN 0.

Virtual Machine Formatting

For optimal performance, it is recommended to format virtual machines on XtremIO storage using Thick Provision Eager Zeroed. With this format, the required space for the virtual machine is allocated and zeroed at creation time. However, with native XtremIO data reduction, thin provisioning, and VAAI support, no actual physical capacity allocation occurs.

Thick Provision Eager Zeroed format advantages are:

Logical space is allocated and zeroed at virtual machine provisioning time, rather than scattered with each I/O sent by the virtual machine to the disk (as is the case when the Thick Provision Lazy Zeroed format is used).

Thin provisioning is managed in the XtremIO Storage Array rather than in the ESX host (when Thin Provision format is used).
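The wizard procedure below covers new virtual machines created through the vSphere Web Client; for scripted provisioning, an eager-zeroed thick virtual disk can also be created directly with vmkfstools (a hedged example; the datastore path, folder and 40g size are illustrative):

# vmkfstools -c 40g -d eagerzeroedthick /vmfs/volumes/XtremIO_DS_1/VM1/VM1.vmdk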

To format a virtual machine using Thick Provision Eager Zeroed:

  1. From vSphere Web Client launch the Create New Virtual Machine wizard.
  2. Proceed using the wizard up to the 2f Customize hardware screen.
  3. In the Customize hardware screen, click Virtual Hardware.
  4. Toggle the New Hard Disk option.
  5. Select the Thick Provision Eager Zeroed option to format the virtual machine’s virtual disk.


  6. Proceed using the wizard to complete creating the virtual machine.

Space Reclamation

This section provides a comprehensive list of capacity management steps for achieving optimal capacity utilization on the XtremIO array, when connected to an ESX host.

Data space reclamation helps to achieve optimal XtremIO capacity utilization. Space reclamation is a vSphere function that enables reclaiming used space by sending zeros to specific addresses of the volume, after being notified by the file system that the address space has been deleted.

Unlike traditional operating systems, ESX is a hypervisor, running guest operating systems on its file system (VMFS). As a result, space reclamation is divided into guest OS and ESX levels.

ESX level space reclamation should be run only when deleting multiple VMs, and space is reclaimed from the ESX datastore. Guest level space reclamation should be run as a periodic maintenance procedure to achieve optimal capacity savings.

The following figure displays a scenario in which VM2 is deleted while VM1 and VM3 remain.


Space Reclamation at Guest Level

Note: Refer to the relevant OS chapter to run space reclamation in the guest OS level.

On VSI environments, every virtual server should be treated as a unique object. When using VMDK devices, T10 trim commands are blocked; therefore, space reclamation must be run manually, as illustrated in the sketch below. RDM devices pass through T10 trim commands.
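For example, on a Linux guest with a VMDK-backed file system, a minimal manual reclamation pass (assuming the /mnt/data mount point, which is illustrative) simply fills part of the free space with zeros and then deletes the zero file:

# dd if=/dev/zero of=/mnt/data/zerofile bs=1M count=10000
# sync
# rm -f /mnt/data/zerofile

Size the zero file (the count value) to most, but not all, of the file system's free space, for the same "no free space" reason noted for the ESX-level script later in this section.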

There are two types of VDI provisioning that differ by their space reclamation guidelines:

Temporary desktop (Linked Clones) – Normally, temporary desktops are deleted once the end users log off. Therefore, running space reclamation on the guest OS is not relevant, and only ESX level space reclamation should be used.

Persistent desktop (Full Clones) – Persistent desktop contains long-term user data. Therefore, space reclamation should be run on guest OS level first, and only then on ESX level.

On large-scale VSI/VDI environments, it is recommended to divide the VMs to groups to avoid overloading the SAN fabric.

Space Reclamation at ESX Level

ESX 5.1 and below

In versions prior to ESX 5.5, the vmkfstools command is used for space-reclamation. This command supports datastores up to 2TB.

The following example describes running vmkfstools on the XtremIO_DS_1 datastore, leaving 1% free space to allow user writes.

# cd /vmfs/volumes/XtremIO_DS_1

# vmkfstools -y 99

VMFS reclamation may fail due to T10 command blocking (e.g., with VPLEX). In such cases, it is required to apply a manual copy of zeroes to the relevant free space.

The following example describes running a manual script on the X41-VMFS-3 datastore (refer to “ESX Shell Reclaim Script” below).

# ./reclaim_space.sh X41-VMFS-3

Note: The datastore name cannot include spaces.

ESX 5.5 and above

ESX 5.5 introduces a new command for space reclamation and supports datastores larger than 2TB.

The following example describes running space reclamation on a datastore XtremIO_DS_1:

# esxcli storage vmfs unmap --volume-label=XtremIO_DS_1 --reclaim-unit=20000

The reclaim-unit argument is optional; it indicates the number of VMFS blocks to UNMAP per iteration.

VMFS reclamation may fail due to T10 command blocking (e.g., with VPLEX). In such cases, it is required to apply a manual copy of zeroes to the relevant free space.

The following example describes running a manual script on the X41-VMFS-3 datastore (refer to “ESX Shell Reclaim Script” below):

# ./reclaim_space.sh X41-VMFS-3

Note: The datastore name cannot include spaces.

ESX Shell Reclaim Script

The following example describes an ESX shell reclaim script.

# Fill ~95% of the given datastore's free space with zeros, then remove the zero file
for i in $1
do
  size=$(df -m | grep $i | awk '{print $4}')
  name=$(df -m | grep $i | awk '{print $NF}')
  reclaim=$(echo $size | awk '{printf "%.f\n", $1 * 95 / 100}')
  echo $i $name $size $reclaim
  dd count=$reclaim bs=1048576 if=/dev/zero of=$name/zf
  sleep 15
  /bin/sync
  rm -rf $name/zf
done


Note: While increasing the percentage leads to higher precision, it also increases the probability of receiving a ‘no free space’ SCSI error during the reclamation.


Out of Space VM Suspend and Notification with Thin Provisioning (TPSTUN)

TPSTUN is a VAAI primitive that enables the array to notify vSphere when a LUN is running out of space due to thin provisioning over-commit. The command causes all virtual machines on that LUN to be suspended. XtremIO supports this VAAI primitive.

A virtual machine provisioned on a LUN that is approaching full capacity usage becomes suspended, and the following message appears:


At this point, the VMware administrator can resolve the out-of-space situation on the XtremIO cluster and prevent the guest OS in the VMs from crashing.
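On the ESX side, a quick hedged way to confirm which VAAI primitives are active for a given XtremIO device (ATS, Clone/XCOPY, Zero and Delete/UNMAP status) is a per-device query (naa identifier illustrative):

# esxcli storage core device vaai status get -d naa.514f0c5e3ca0000e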


EMC Storage Integrator (ESI) 3.9 Is Here


ESI is our Windows Server + Hyper-V plugin. It is very similar to our VSI plugin, but it targets our Windows-based customers who are running Windows Server either on bare metal or as Hyper-V.

For an overview about the plugin itself, please read the blog post I wrote some time ago, https://itzikr.wordpress.com/2014/11/25/esi-for-windows-suite-3-6-emc-xtremio/

OK, so what’s new in this version (3.9) for our XtremIO customers?

Support For XtremIO Snapshots


Creating a Snapshot of a Volume
Below is the wizard used for creating a snapshot of an XtremIO volume. Read-only snapshots are only supported starting with XtremIO 4.0, so for XtremIO 3.0 this snapshot type option will be disabled.


Creating a Snapshot of Another Snapshot
In the Volume & Snapshot tab for an XtremIO storage system, ESI provides an option for creating a snapshot from another snapshot. The selected snapshot will be the source for the new snapshot. On selection of a snapshot, this option is available as a context menu.


 

Refreshing an XtremIO Volume
An XtremIO volume can be refreshed from a snapshot of that particular volume. This option is available in the Volume & Snapshot tab. On selection of a volume or snapshot, this option is available as a context menu.


Below is the wizard used for Refresh/Restore. All the snapshots of the selected volume are listed.


Restoring an XtremIO Volume
An XtremIO volume can be restored from a read-only snapshot of that particular volume. This option is available in the Volume tab. On selection of a volume, this option is available as a context menu.


Below is the wizard used for Restore Snapshot. All the read-only snapshots of the volume are listed.


Best practices for Host Provisioning with XtremIO 4.0

Host best practices are the set of optimal settings and configurations recommended for achieving optimized performance.
They are divided into three categories:


1. Storage Side Settings
   a) Specifying Disk Logical Block Size

2. Hypervisor Settings (VMware ESX)
   a) Disk Settings
   b) Native Multipathing Settings

3. Host Settings
   i) Windows Host
      a) HBA Queue Depth Settings
   ii) Linux Host
      a) HBA Settings
      b) Native Multipathing Settings

 

Best practices for Host Provisioning: Setting the Disk Logical Block Size

The recommended disk logical block size is 4 KB for Windows and Linux operating systems, and 512 B for other operating systems.


Best practices for Windows Host Provisioning with XtremIO 4.0

The wizard below depicts the best practices for Windows host provisioning with the XtremIO 4.0 storage system.


Best practices for Host Provisioning with XtremIO 4.0 and VMware ESX

The wizard below depicts the best practices for host provisioning with the XtremIO 4.0 storage system and VMware ESX.


Best practices for Linux Host Provisioning with XtremIO 4.0

The wizard below depicts the best practices for Linux host provisioning with the XtremIO 4.0 storage system.



Connecting EMC XtremIO To An Heterogeneous Storage Environment


Hi,

A topic that comes and goes every once in a while is what you should do if multiple storage arrays (VNX, VMAX, etc.) are connected to the same vSphere cluster that the XtremIO array is connected to.

This is, in fact, a two-sided question.

Question number one is specific to the VAAI ATS primitive. In some specific VMAX/VNX software revisions, there was a recommendation to disable ATS because of bugs. These bugs have since been resolved, and I always encourage you to check with your VNX/VMAX team whether a recommendation was made in the past to disable ATS/XCOPY. But what happens if your ESXi host(s) are set to ATS off, you just connected an XtremIO array and mapped its volumes, and now you recall that, hey, we (XtremIO) actually always recommend enabling ATS? Well,

If the VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO volumes. The setting must be manually configured to avoid data unavailability to the datastore. Perform the following procedure on all datastores created on XtremIO storage before VAAI is enabled on the ESX host.
To manually set the VAAI setting on a VMFS-5 datastore created on XtremIO storage with VAAI disabled on the host:
1. Confirm that VAAI Hardware Accelerated Locking is enabled on this host.
2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”:
# vmkfstools -Ph -v1 <path to datastore> | grep public
• In the following example, a datastore volume is configured as “public”:


• In the following example, a datastore volume is configured as “public ATS-only”:


3. If the datastore was found with mode “public”, change it to “public ATS-only” by executing the following steps:
a. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
b. Access the ESX host on which the datastore is still mounted.
c. To enable ATS on the datastore, run the following vmkfstools command:
# vmkfstools --configATSOnly 1 <path to datastore>
d. Enter 0 to continue with ATS capability.
e. Repeat step 2 to confirm that ATS is set on the datastore.
f. Unmount the datastore from the last ESX host.
g. Mount the datastore on all ESX hosts.

Question number two is a more generic one: you have a VNX/VMAX and XtremIO all connected to the same vSphere cluster and you want to apply ESXi best practices, for example the XCOPY chunk size. What can you do if some of these best practices vary between one platform and the other? It’s easy when a best practice can be applied per specific storage array, but, as in the XCOPY example above, XCOPY is a system parameter that is applied to the entire ESXi host.

Below you can see the table we have come up with. As always, things may change, so you will want to consult with your SE before the actual deployment.

 

 

 

Parameter Name | Scope/Granularity | VMAX (1) | VNX | XtremIO | Multi-Array Resolution (vSphere 5.5) | Multi-Array Resolution (vSphere 6)
FC Adapter Policy IO Throttle Count | per vHBA | 256 (default) | 256 (default) | 1024 | 256 (2) (or per vHBA) | same as 5.5
fnic_max_qdepth | Global | 32 (default) | 32 (default) | 128 | 32 | same as 5.5
Disk.SchedNumReqOutstanding | LUN | 32 (default) | 32 (default) | 256 | Set per LUN (3) | same as 5.5
Disk.SchedQuantum | Global | 8 (default) | 8 (default) | 64 | 8 | same as 5.5
Disk.DiskMaxIOSize | Global | 32MB (default) | 32MB (default) | 4MB | 4MB | same as 5.5
XCOPY (/DataMover/MaxHWTransferSize) | Global | 16MB | 16MB | 256KB | 4MB | VAAI Filters with VMAX

 

Notes:

  1. Unless otherwise noted, the term VMAX refers to VMAX and VMAX3 platforms
  2. The setting for FC Adapter Policy IO Throttle Count can be set to the value specific to the individual storage array type if connections are segregated. If the storage arrays are connected using the same vHBAs, use the multi-array setting in the table.
  3. The value for Disk.SchedNumReqOutstanding can be set on individual LUNs and therefore the value used should be specific to the underlying individual storage array type (see the example below).
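
For example (a hedged sketch; the naa identifiers are placeholders), in a mixed cluster the outstanding-request limit can be set per device so that XtremIO LUNs get 256 while VNX/VMAX LUNs keep 32:

# esxcli storage core device set -d naa.xxx -O 256   (an XtremIO device)
# esxcli storage core device set -d naa.yyy -O 32    (a VNX/VMAX device)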

Parameters Detail

 

The sections that follow describe each parameter separately.

 

FC Adapter Policy IO Throttle Count

 

Parameter: FC Adapter Policy IO Throttle Count

Scope: UCS Fabric Interconnect level

Description: The total number of I/O requests that can be outstanding per virtual host bus adapter (vHBA). This is a “hardware” level queue.

Default UCS Setting: 2048

EMC Recommendations:

EMC recommends setting it to 1024 for vHBAs connecting to XtremIO only.

EMC recommends leaving the default of 256 for vHBAs connecting to VNX/VMAX systems only.

EMC recommends setting it to 256 for vHBAs connecting to both XtremIO and VNX/VMAX systems.

 

fnic_max_qdepth

 

Parameter: fnic_max_qdepth

Scope: Global

Description: Driver-level setting that manages the total number of I/O requests that can be outstanding on a per-LUN basis. This is a Cisco driver-level option.

Mitigation Plan (vSphere 5.5 and vSphere 6): There are options to reduce the queue size on a per-LUN basis (Disk Queue Depth; see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113):

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

EMC Recommendations:

EMC will set fnic_max_qdepth to 128 for systems with XtremIO only.

VCE will leave the default of 32 for VNX/VMAX systems adding XtremIO.

VCE will set it to 32 for XtremIO systems adding VNX/VMAX.

 

Disk.SchedNumReqOutstanding

 

Parameter: Disk.SchedNumReqOutstanding

Scope: LUN

Description: When two or more virtual machines share a LUN (logical unit number), this parameter controls the total number of outstanding commands permitted from all virtual machines collectively on the host to that LUN (this setting is not per virtual machine).

Mitigation Plan (vSphere 5.5 and vSphere 6): Both vSphere 5.5 and vSphere 6.0 permit per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.
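
Because the setting can be scoped per device, you can apply the XtremIO value only to XtremIO-backed LUNs. The sketch below is a hedged illustration: the naa identifier and host name are placeholders, and the Get-EsxCli -V2 argument name ('schednumreqoutstanding', from "esxcli storage core device set --sched-num-req-outstanding") should be verified before use.

```powershell
# Sketch: set Disk.SchedNumReqOutstanding to 256 on a single XtremIO-backed device.
$vmhost = Get-VMHost -Name 'esx01.lab.local'            # assumption: host name
$esxcli = Get-EsxCli -VMHost $vmhost -V2

$xtremioDevice = 'naa.514f0c5000000001'                 # assumption: an XtremIO LUN identifier

# Apply the per-device value recommended for XtremIO.
$esxcli.storage.core.device.set.Invoke(@{device = $xtremioDevice; schednumreqoutstanding = 256})

# Confirm the change ("No of outstanding IOs with competing worlds" in the output).
$esxcli.storage.core.device.list.Invoke(@{device = $xtremioDevice})
```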

 

Disk.SchedQuantum

 

Parameter: Disk.SchedQuantum

Scope: Global

Description: The maximum number of consecutive "sequential" I/Os allowed from one VM before the scheduler forces a switch to another VM (unless it is the only VM on the LUN). Disk.SchedQuantum is set to a default value of 8.

EMC Recommendations:

  • Set to 64 for systems with XtremIO only.
  • Leave at the default of 8 for VNX/VMAX systems adding XtremIO.
  • Set to 8 for XtremIO systems adding VNX/VMAX.

 

Disk.DiskMaxIOSize

 

Parameter: Disk.DiskMaxIOSize

Scope: Global

Description: ESX can pass I/O requests as large as 32767 KB directly to the storage device. I/O requests larger than this are split into several smaller I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB, or 512KB, depending on the array and configuration). As a fix for this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

EMC Recommendations:

  • Set to 4096 for systems connected only to XtremIO.
  • Leave at the default of 32768 for systems connected only to VNX or VMAX.
  • Set to 4096 for systems with VMAX + XtremIO.
  • Set to 4096 for XtremIO systems adding VNX.
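
Both Disk.SchedQuantum and Disk.DiskMaxIOSize are ordinary host-level advanced settings, so they can be applied with PowerCLI's Get-AdvancedSetting/Set-AdvancedSetting. The sketch below shows the XtremIO-only values; the host name is an assumption, and for mixed-array hosts you would substitute the values from the table.

```powershell
# Sketch: apply the XtremIO-only values for the two global advanced settings above.
$vmhost = Get-VMHost -Name 'esx01.lab.local'            # assumption: host name

# Disk.SchedQuantum: 64 for XtremIO-only hosts (default is 8).
Get-AdvancedSetting -Entity $vmhost -Name 'Disk.SchedQuantum' |
    Set-AdvancedSetting -Value 64 -Confirm:$false

# Disk.DiskMaxIOSize: 4096 KB (4MB) for hosts connected to XtremIO (default is 32768 KB).
Get-AdvancedSetting -Entity $vmhost -Name 'Disk.DiskMaxIOSize' |
    Set-AdvancedSetting -Value 4096 -Confirm:$false
```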

 

XCOPY (/DataMover/MaxHWTransferSize)

 

Parameter: XCOPY (/DataMover/MaxHWTransferSize)

Scope: Global

Description: Maximum number of blocks used for XCOPY operations.

EMC Recommendations:

vSphere 5.5:

  • Set to 256 for systems connected only to XtremIO.
  • Set to 16384 for systems connected only to VNX or VMAX.
  • Leave the default of 4096 for systems with VMAX or VNX adding XtremIO.
  • Leave the default of 4096 for XtremIO systems adding VNX or VMAX.

vSphere 6:

  • Enable the VAAI claim rule for systems connected to VMAX to override the system setting and use a 240MB transfer size.
  • Set to 256KB for systems connected only to XtremIO.
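
For the vSphere 5.5 style of tuning, the transfer size is again a host advanced setting (DataMover.MaxHWTransferSize). A hedged PowerCLI sketch for an XtremIO-only host follows; the host name is an assumption, and the vSphere 6 VMAX approach mentioned above is configured through VAAI claim rules rather than this setting, so it is not shown here.

```powershell
# Sketch: XCOPY transfer size (in KB) for a vSphere 5.5 host connected only to XtremIO.
# Use 16384 for VNX/VMAX-only hosts, or leave the default of 4096 for mixed hosts.
$vmhost = Get-VMHost -Name 'esx01.lab.local'            # assumption: host name

Get-AdvancedSetting -Entity $vmhost -Name 'DataMover.MaxHWTransferSize' |
    Set-AdvancedSetting -Value 256 -Confirm:$false
```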

 

vCenter Concurrent Full Clones

 

Parameter: config.vpxd.ResourceManager.maxCostPerHost

Scope: vCenter

Description: Determines the maximum number of concurrent full clone operations allowed (the default value is 8).

EMC Recommendations:

  • Set to 8 per X-Brick (up to 48) for systems connected only to XtremIO.
  • Leave the default for systems connected only to VNX or VMAX.
  • Set to 8 for systems with VMAX + XtremIO.
  • Set to 8 for systems with VNX + XtremIO.
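
Since this parameter lives in vCenter rather than on the ESXi hosts, it can be changed against the vCenter connection itself. The sketch below is an assumption-laden illustration (the vCenter name, and a value of 24 assuming a three X-Brick XtremIO-only cluster at 8 per X-Brick); if the key does not exist yet on your vCenter, it may need to be created first (for example with New-AdvancedSetting) or added through the vCenter advanced settings UI.

```powershell
# Sketch: raise the concurrent full-clone limit on the vCenter Server itself.
$vc = Connect-VIServer -Server 'vcenter.lab.local'      # assumption: vCenter name

Get-AdvancedSetting -Entity $vc -Name 'config.vpxd.ResourceManager.maxCostPerHost' |
    Set-AdvancedSetting -Value 24 -Confirm:$false       # assumption: 3 X-Bricks, 8 per X-Brick
```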

 

 


The Interesting Case of 2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED


Hi,

Some heads up if you are using an Active/Active array (XtremIO, VMAX, HDS and maybe more..)

Lately, I have been working with VMware on a strange support case: basically, if you are using any A/A array and a path fails, the failover to the remaining paths will not always happen. This can affect not only path failover/failback but also NDU procedures etc', where you are purposely failing paths during storage controller upgrades.

The VMware KB for this is:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003433

and if you look under the notes section:

  • Path Failover is triggered only if other paths to the LUN do not return this sense code. The device is marked as PDL after all paths to the LUN return this sense code.
  • A problem with this process has been identified in ESXi 6.0 where, for Active/Active arrays, failover is not triggered when other paths to the LUN are available. A fix is being investigated by VMware. This article will be updated when the fix is available.

2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED

It's important to understand that this is not an XtremIO issue; it's an issue that started with vSphere 6.0, as the 6.0 code changed the way ESXi senses a PDL scenario. If you are using vSphere 5.5 (and all of its updates), you are not affected.

As of today, if you are using vSphere 6 Update 1, you can ask VMware for a specific hotfix, referencing the KB I mentioned above. This hotfix will not go public, so you have to be specific when asking about it. Why won't it go public?

Because in the upcoming version of vSphere 6, VMware will include this "fix" as part of the ESXi kernel.

So, if you are anxious to solve it now and are using vSphere 6, just call VMware support; if you can wait a little bit longer and prefer a fix that has gone through more rigid QA, please wait for the upcoming vSphere 6 release.

Lastly, if you are using EMC PowerPath/VE, you are not impacted by this, as PP/VE takes over path management from the NMP.


Faster Cloning Time Or Semi-Automated Space Reclamation – Revisited.


Readers of my blog know that we typically recommend using eager zeroed thick VMDKs for performance; the performance aspect manifests as faster cloning times and faster performance when blocks inside the VM are first written.


 

It's a good thing that times are changing; recommended practices are always being revised because technology itself is changing. How does this relate to eager zeroed thick VMs?

With vSphere 6, VMware introduced a semi-automated mechanism to reclaim capacity from VMs. If you are not familiar with space reclamation and why it is an absolute must on AFAs, a good place to start is a post I wrote here:

https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/

So again, in vSphere 6 there is a way to run space reclamation on VMs

https://itzikr.wordpress.com/2015/04/28/vsphere-6-guest-unmap-support-or-the-case-for-sparse-se-part-2/

but the BIG "but" is that guest UNMAP comes with some caveats:

* It does not cover unmap of deleted blocks within VMFS itself (i.e. VMDK deletion, swap file deletion, etc.); for this you can use the EMC VSI plugin.

* Unmap granularity is still coarse; with the existing VMFS version, the minimum unmap granularity is 1MB.

* There is no automatic UNMAP support for Windows versions older than Windows 8 / Server 2012 (and no Linux support).

* The device at the VM level (the drive) must be configured as "thin", not "eager zeroed thick" or "lazy zeroed thick".

So while it is awesome that you can now almost forget about running space reclamation manually, you are still faced with a big dilemma: should I change my VMs' VMDK format to "thin", which is a prerequisite for automated UNMAP, or should I stick with eager zeroed thick for performance?

You CAN have your cake and eat it too; here's what you need to do.

Below you can see a windows 10 VM that I prepared,

The Windows 10 VM was allocated a 40GB drive

 

Out of the 40GB, only 16.6GB were actually used.

I then cloned the VM to two templates: the first was an eager zeroed thick template, and the second was based on a "thin" VMDK.

I then had two templates looking like this

I then deployed a VM from the eager zeroed thick template, the cloning took:

14 seconds

You can also see the IOPS used, red means the beginning of the cloning operation, green means the time it ended.

And the bandwidth that was used

With that I moved on deploying a VM from the thin VMDK based template

the cloning took:

09 seconds

You can also see the IOPS used, red means the beginning of the cloning operation, green means the time it ended.

 

And the bandwidth that was used

 

 

And here's how it looks if you compare the bandwidth between the two cloning operations. Again, they were both deployed from the same Windows template; the only difference is that one template was thin and the other was eager zeroed thick.

 

So the question is why. Why would a thin-based deployment take LESS time than an eager zeroed thick one? It used to be the exact opposite!

The "magic" happens because after I initially installed Windows on that VM, I CLONED it to a template, which defrags the data inside the VM. Yes, I also cloned that VM to an eager zeroed thick template, but now the data was structured in a much better way, so the fact that the thin template holds less capacity (as it is thin) now means less time to deploy!

So..

Clone your VMs to a thin template, enable space reclamation on the OSs that support it (Windows 8/8.1/10, Server 2012/2012 R2/2016) and enjoy both worlds!
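
If you want to script that flow, a minimal PowerCLI sketch is shown below. All of the names (template, VM, host, datastore) are assumptions, and the in-guest command applies only to Windows 8 / Server 2012 and later:

```powershell
# Sketch: deploy a VM from the thin template so that guest UNMAP can do its job.
New-VM -Name 'Win10-Desktop-01' -Template (Get-Template -Name 'Win10-Thin-Template') `
       -VMHost (Get-VMHost -Name 'esx01.lab.local') -Datastore 'XtremIO-DS01' `
       -DiskStorageFormat Thin

# Inside the Windows guest, an on-demand retrim pass can be run like this
# (Windows also issues UNMAP automatically on deletes when the virtual disk is thin):
# Optimize-Volume -DriveLetter C -ReTrim -Verbose
```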


RecoverPoint For Virtual Machines (RP4VMs) 4.3 SP1 Is Out!


Hi, One of my favorite products in the EMC portfolio has just GA’d with a new version, one that I’m truly excited about! If you are not familiar with what RP4VMs is, a good place to start will be here

https://itzikr.wordpress.com/2015/07/14/recoverpoint-for-virtual-machines-rp4vms-4-3-is-here-and-its-awesome/

got it? Still here? Great, here’s what’s new

The first thing that was completely re-done is the deployment wizard. It replaces the classic Deployment Manager, is fully web based and is integrated into the vRPA; to start the deployment, you simply browse to the vRPA at https://IP/WDM. Instead of bothering you with more text / slides, you can see a demo I recorded here

Some of the plugin enhancements are the ability to go back after the deployment and validate the registered ESXi hosts for potential issues; RP4VMs will even go the extra mile and try to resolve these issues for you.

Hmm.. this one is interesting: RP4VMs currently supports the vSCSI API for splitting the IO, but with the upcoming version of vSphere and a small upgrade to RP4VMs (coming very soon as well), it will support the new VMware IO Filters API (VAIO).


Providing a persistent data volume to EMC XtremIO using ClusterHQ Flocker, Docker And Marathon


Hi,

Containers are huge; that's not a secret to anyone in the IT industry. Customers are testing the waters and looking for many ways to utilize container technologies, and it is also not a secret that the leading vendor in this space is Docker.

But Docker itself isn't perfect yet; while it's as trendy as trendy can get, there are many things built around the Docker runtime to provide cluster management etc'..

It all started with a customer request some weeks ago. Their request was "can you show us how you integrate with Docker, Marathon (to provide container orchestration) and ClusterHQ Flocker to provide a persistent data volume".. sounds easy, right?

Wait a second, aren't container technologies supposed to be completely stateless, designed to fail, and not in need of any persistent data that survives when a container dies??

Well, that's exactly where customers are asking for things that aren't always part of the original design of container technologies, and where there is a gap, there is a solution..


Enter ClusterHQ with their Flocker product, taken from their website

What is Flocker?

Flocker is an open-source container data volume manager for your Dockerized applications.

By providing tools for data migrations, Flocker gives ops teams the tools they need to run containerized stateful services like databases in production.

Unlike a Docker data volume which is tied to a single server, a Flocker data volume, called a dataset, is portable and can be used with any container in your cluster.

Flocker manages Docker containers and data volumes together. When you use Flocker to manage your stateful microservice, your volumes will follow your containers when they move between different hosts in your cluster.

Container Manager and Storage Integrations

Flocker is designed to work with the other tools you are using to build and run your distributed applications. Flocker can be used with popular container managers or orchestration tools like Docker, Kubernetes, Mesos.

For storage, Flocker supports block-based shared storage such as Amazon EBS, or OpenStack Cinder so you can choose the storage backend that is best for your application. Read more about choosing the best storage backend for your application. You can also use Flocker to take advantage of pre-defined storage profiles that are offered by many storage providers. Find out more about Flocker’s storage profile support.

How is this related to EMC XtremIO, you might wonder? Well, as the #1 selling AFA in the market, we are starting to get many requests like these, and so, together with ClusterHQ, there is now support for EMC XtremIO to provide this functionality.

If you want to see a full demo of a MySQL app failing over, look no further

I also want to give a huge thank you to Dadee Birgher from my team, who set it all up in no time.



EMC World 2016 – The XtremIO Sessions


Wow, I can't believe another year has passed since the last EMC World; time is flying by, folks, and that is a fact!

Here at XtremIO we are very busy putting together a solid agenda for you, our customers, partners and SEs, so you can leverage the event to come and hear about different topics, talk to the engineers and see how it all comes together.

Below you can see all the XtremIO sessions we have published so far and their dates. Please note that the session dates can still change; looking forward to seeing you ALL there!

Registration for the event can be done from this url :

http://www.emcworld.com/registration.htm

my session is the one highlighted in yellow.

Each session below is listed with its title, level (Beginner / Intermediate / Advanced), its two scheduled time slots, its length, and its abstract.

2016 All-Flash State of the Union: What’s New with XtremIO and How Customers Are Using It

Beginner

Monday 8:30 – 9:30

Wed. 1:30 – 2:30

60 min.

This session provides an update on the latest XtremIO capabilities and how it is transforming customers’ workloads for agility, tremendous TCO and business benefits. With customer examples via real time data, we will discuss the use cases and benefits for consolidating mixed workloads across database, analytics, business apps.

Best Practices for Running Virtualized Workloads & Containers in the All-Flash Data Center

Intermediate

Monday 4:30 – 5:30

Wed 1:30 – 2:30

60 min.

Great, your customer has just purchased a shiny new all flash array (AFA), now what? In this session we will learn the reasons for one of the quickest revolutions the storage industry has seen in the last 20 years, and how XtremIO can enable breakthrough capabilities for your server virtualization and private cloud deployments. We will go through specific use case issues and how to overcome them. With lots of real-world tips and tricks, you can consolidate your most demanding virtualization workloads successfully and gracefully.

Building the Ultimate Database as a Service with XtremIO

Intermediate

Tuesday 3:00 – 4:00

Wed Noon – 1:00

60 min.

Database as a Service creates exciting new possibilities for developers, testers, business analysts and application owners. However, a successful deployment requires careful planning around database standardization and service automation, and an infrastructure that can safely consolidate multiple production and non-production workloads. This session will provide step-by-step guidance on getting your DBaaS project deployed on EMC XtremIO, the ideal platform for massive database consolidation.

Business Continuity, Disaster Recovery, and Data Protection for the All-Flash Data Center

Intermediate

Monday 3:00 – 4:00

Thurs 1:00 – 2:00

60 min.

Everybody knows that Flash is fast, but do you know the best way to protect your workload on a media that is capable of servicing more than 1M IOPS with tight RPO and RTO? This session will provide details of all the business continuity and disaster recovery solutions available for XtremIO customers. It will also include customer examples and recommend the best approaches for different scenarios. This session will cover the integration of XtremIO with ProtectPoint/Data Domain, VPLEX, RecoverPoint, AppSync and more.

Broken Promises, Buyer Beware: Special considerations for evaluating AFAs

Advanced

Tuesday Noon – 1:00

Thurs 11:30 – 12:30

60 min.

Capacity & Performance are key factors in any storage purchase decision. But special consideration must be paid to them during any evaluation. In this session we’ll share best practices to be followed during evaluating an AFA to ensure a hiccup free production deployment.

Deployment Best Practices for Consolidating SQL Server and Integrated Copy Data Management

Advanced

Monday 4:30 – 5:30

Wed. 8:30 – 9:30

60 min.

Examine SQL Server behavior on XtremIO’s all-flash array vs. traditional storage, explore use cases, see demos of XtremIO’s unique integrated copy data management (iCDM) stack which increases DBA productivity, and accelerates SQL Server database application lifecycle management. See why traditional SQL Server best practices can be significantly simplified with the deployment of XtremIO. We will focus on areas like storage deployment, queue depth, multi-pathing, number of LUNs, database file layout, tempdb placement, and more!

Learn from real life experts on how to deploy Big Data Analytics On an All-Flash Array implementation

Intermediate

Monday Noon – 1:00

Wed. Noon – 1:00

60 min.

This session provides an overview of the different architectural options available for customers to streamline their big data applications on the XtremIO All-Flash Array. Learn from customers on the tradeoffs and benefits of the different options. Understand how the different architectures can help customers scale, protect and simplify the storage infrastructure required to support their analytics workload. The presentation will discuss practical implications along with proven recommendations that storage professionals can walk away with.

Desktop Virtualization Best Practices: A Customer Perspective

Intermediate

Monday 8:30 – 9:30

Tues. 8:30 – 9:30

60 min.

Health Network Laboratories (HNL), a leading diagnostic test lab, leveraged a best-of-breed technology approach to produce stellar results moving its nearly 1,000 pathologists and scientists to VDI. Architected for a 10-year growth plan, the new VDI solution uses Citrix XenDesktop for brokering, EMC XtremIO for high performing, no-compromise desktops, Dell Wyse thin clients for access from anywhere, Imprivata for one-click single sign-on, Unidesk for application layering, and active-active data center with EMC VPLEX for business continuity. In this session, HNL CIO and his team will share what they learned in their journey to desktop virtualization nirvana.

Introducing XtremIO Integrated Copy Data Management and How It will transform Your Infrastructure, Your Operations, and Your Business Process Agility

Beginner

Monday 1:30 – 2:30

Wed. 3:00 – 4:00

60 min.

Everybody knows that all-flash arrays are ideal for running database workloads – this is because flash is fast. But a database application is much more than just the production instance. Considerations in storage planning need to be made for additional copies of the database or application for things development and test, analytics and reporting, operations, and local recovery. Historically all-flash arrays have been too expensive to house these additional copies. Now, with innovations built into the EMC XtremIO all-flash array, these copies can be space-efficiently consolidated, operating at production levels of high performance, and managed easily in an automated fashion. The result is simplicity, lower cost application development, and faster software development cycles bringing agility and true transformation into your IT. Learn all about it with real customer examples in this session.

Next-Generation OpenStack Private Clouds with VMware and EMC XtremIO All-Flash Array

Intermediate

Monday Noon – 1:00

Thurs 8:30 – 9:30

60 min.

OpenStack Cloud deployments have long been notorious for being too complex and unpredictable, often requiring long implementation times and ongoing optimization. In this session, experts from VMware and EMC will discuss how VMware Integrated OpenStack (VIO), coupled with EMC XtremIO all-flash scale-out storage can dramatically simplify OpenStack implementations while delivering enterprise-class reliability, consistent predictable workload performance, and easy linear scalability.

Oracle Integrated Copy Data Management: Realizing the Power of XtremIO Virtual Copy Technology and EMC AppSync

Advanced

Tuesday Noon – 1:00

Thurs. 10:00 – 11:00

60 min.

On average enterprise Oracle Database users create between 8 and 10 functional copies of each production database—and no enterprise Oracle Database user has but a single production database. Copy Data Management is one of the most troubling technical requirements in the enterprise today and it is first and foremost on the minds of Oracle Database Administrators. XtremIO Virtual Copy Technology leads the industry in terms of copy data management performance, flexibility and ease of use. To extend these XtremIO attributes to the self-service user, customers choose EMC AppSync. This session will introduce EMC AppSync and XtremIO Virtual Copy Technology and provide a case study covering ease of use and consistent, predictable performance across source volumes and virtual copies alike.

Game Changing SAP Best Practices for HANA and Traditional SAP, Consolidation, Converged Infrastructure, and iCDM on XtremIO

Intermediate

Monday 1:30 – 2:30

Thurs. 1:00 – 2:00

60 min.

99% of the Fortune 100 run their business on SAP. Almost 90% of those SAP architectures are built on an EMC SAP data platform. Consolidation, reduced complexity & performance are primary focal points for these businesses, as is reducing cycles spent on infrastructure management to empower more focus on business innovation via SAP. Callaway Golf and Mohawk Industries are perfect examples of this new game-changing All-Flash driven SAP mantra. Callaway will show how they accelerated SAP performance in production with EMC XtremIO All Flash Storage by as much as 110%. Mohawk will show one of the world's most advanced virtualized vHANA architectures on VCE Converged Infrastructure. Both customer teams will share best practices & lessons learned on reducing SAP costs via XtremIO Virtual Copies (XVC snapshots) & deduplication, accelerating performance, and how EMC empowers their NextGen SAP strategies.

Tales From The Trenches: End-User Computing Architect Perspectives on Next Generation VDI

Advanced

Tuesday 3:00 – 4:00

Thurs. 10:00 – 11:00

60 min.

In this session, VDI architects from VMware and EMC XtremIO will present the recent trends in desktop virtualization and role of all-flash storage in ushering the new era of VDI. Topics covered will include: (1) Just-in-Time desktops: How to orchestrate disparate desktop building blocks and application delivery schemas to deliver personalized desktops and the role of all-flash storage. (2) Application layering with VMware App Volumes: At scale performance with EMC XtremIO. (3) VDI with Windows 10: Key differences in storage access profiles from earlier Windows desktop OSes, optimization recommendations, and implications for all-flash storage.

XtremIO 101: All-Flash Architecture, Benefits, and Use Cases for Mixed Workload Consolidation

Beginner

Tuesday 8:30 – 9:30

Thurs. 11:30 – 12:30

60 min.

This session provides an overview to the EMC XtremIO all-flash scale-out array and its design objectives. The architecture will be discussed and compared to other flash arrays in the market with the goal of helping the audience understand the unique requirements of building an all-flash array, the proper methodology for testing all-flash arrays, and architectural differentiation among flash array features that affect endurance, performance, and consistency across the most demanding mixed workload consolidations.

XtremIO iCDM Best Practices: Customer Panel

Beginner

Monday 4:30 – 5:30

Wed. 8:30 – 9:30

60 min.

Everybody knows that all-flash arrays are fast, but XtremIO enables much more than speeds and feeds. Considerations in storage planning need to be made for additional copies of the database or application for things like development and test, analytics and reporting, operations, and local recovery. Historically, all-flash arrays have been too expensive to house these multiple copies of your production data. Now, with innovations built into the EMC XtremIO all-flash array, these copies can be space-efficiently consolidated, operate at production levels of high performance, and be managed easily in an automated fashion. The result is simplicity, lower cost application development, reduced resource requirements and faster software development cycles. Learn from XtremIO customers how they implemented iCDM and how it transformed their IT to gain business agility and a competitive edge.


VSI 6.8 Is Here, Here’s what’s new


Hi,

I'm very happy to announce we have just released Virtual Storage Integrator (VSI) 6.8!

If you have been living under a rock for the past 6 years, this is our vCenter (web) plugin that you can use to manage all of the EMC arrays. In the context of XtremIO there are so many things you can do with it that I use it on a daily basis to manage my vSphere lab; oh, and it's free, which is always good.

If you want an overview of what the plugin does, you can start by reading here:

https://itzikr.wordpress.com/2015/07/17/virtual-storage-integrator-6-6-is-almost-here/

https://itzikr.wordpress.com/2015/09/21/vsi-6-6-3-is-here-grab-it-while-its-hot/

so, what’s new in 6.8

Yep, the number 1 request was to support VSI in a vCenter linked mode configuration; that is now supported.

Multiple vCenters Support – VSI 6.8 Scope

  • All features related to XtremIO

Multiple vCenters Support – Preconditions

  • vCenters are configured in linked mode.
  • Deploy the VSI plugin to every single vCenter in the linked mode group.


Quality Improvements

Viewing XtremIO-based Datastore/RDM Disk Properties

  • Symptom: It could take several minutes to retrieve XtremIO-based datastore/RDM disk properties.
  • Improvement: The algorithm was optimized to quickly match the underlying registered XtremIO array for a datastore/RDM disk, and batch API calls / multiple threads are now used to retrieve volumes/snapshots from the XtremIO array.

Space Reclamation gets stalled

  • Symptom: A space reclamation task is seen as ongoing in the vCenter tasks.
  • Improvement: This has now been resolved.

VMware SRM Pairing fails

  • Symptom: Upon trying to pair the SRM servers, you get a "token" error.
  • Improvement: This has now been resolved.


EMC AppSync 2.2 SP3 Is Out, Another Must Upgrade For XtremIO Customers


Hi,

We have just GA'd a new service pack for EMC AppSync. As you probably already know, Copy Data Management (CDM) is a key component of the XtremIO architecture; it is so big and so different from anything else out there that many customers are using XtremIO just for that.

But what good are a CDM architecture and its features if you don't have software that links the storage array technology to the applications it needs to protect & repurpose? This is where AppSync comes in. If you are new to AppSync, I encourage you to start reading about it here first

https://itzikr.wordpress.com/2014/12/19/protecting-your-vms-with-emc-appsync-xtremio-snapshots/

https://itzikr.wordpress.com/2015/10/01/emc-appsync-2-2-sp2-is-out/

EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft and Oracle applications and VMware environments. After
defining service plans (such as Gold, Silver, and Bronze), application owners can protect, restore, and clone production data quickly with item-level granularity by using the
underlying EMC replication technologies. AppSync also provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and NFS datastores, file systems, and Oracle application for NFS.
Storage — VMAX, VMAX 3, VNX (Block and File), VNXe, XtremIO, and ViPR Controller
Replication Technologies—VNX Advanced Snapshots, VNXe Unified Snapshot, SRDF, TimeFinder, SnapVX, RecoverPoint, XtremIO Snapshot, and ViPR Snapshot

ok, here’s what’s new in 2.2 SP3


  • Service pack full install. Until AppSync 2.2.2, a service pack was always an upgrade install; AppSync 2.2.3 supports a full install.
  • Unmount callout script for AIX file systems.
  • CLI enhancements, including:
    – Repurposing refresh
    – Mount option for mounting all file system copies that are protected together (include_all_copies=true)
    – Expire option to remove a copy which has multiple associated copies
    – Unmount option to specify the latest or oldest mounted copy
  • XtremIO specific fixes. If you are on XtremIO 4.0.2, it is recommended that you upgrade to AppSync 2.2.3 because it includes critical XtremIO specific fixes.
  • Improved support for Linux MPIO user friendly names.
  • Supports VMAX Hudson HYPERMAX OS: 5977.809.784 with SMI-S 8.2.
  • Supports VMAX Indus HYPERMAX OS: 5977.691.684 with SMI-S 8.1.2.
  • Supports VMAX All Flash Arrays – 450F and 850F models.
  • Supports RecoverPoint with the SRM flag set in RecoverPoint Consistency Groups.
  • Supports RecoverPoint 4.4.
  • Supports RedHat Enterprise Linux, Oracle Linux, and CentOS 7.0, 7.1, and 7.2.
  • Supports IBM AIX 7.2.
  • Supports VERITAS VxVM 7.0, VCS 7.0, and VxFS 6.2, 7.0.

Fixed issues (in 2.2 SP3)

URM00105205: Fixed issues with SUDO user permissions that were leading to host deployment failure.

URM00104744: Resolves the Mail Recipient field becoming null after rediscovery of Exchange host.

URM00104571: Added support to show Phase pits and corresponding events for the time period as configured by the user.

URM00105382: Addressed an issue of mount failure when multiple initiators from different ESX servers are placed into the same initiator group for XtremIO.

URM00105041: Fixes the issue during create copy of exchange differential copy in VMAX V3 storage arrays.

URM00104870: This fixes timeout issue while creating remote copies of VNX file.

URM00105329: Addressed an issue where 1st gen copy of RAC database cannot be mounted if redo logs in separate ASM disk group.

URM00105066: Addresses an issue of RP bookmark copy not getting deleted from AppSync, even when the bookmark gets deleted from RPA.

URM00105450: Fixes an optimistic lock exception when expiring VNX snapshots.

URM00105360: Fixes the issue of unable to get options “data only” and “log only” of Exchange database restore wizard.

URM00104609: Provided a fix to avoid indefinite wait by AppSync server for a response from Array.

URM00104799: Fixes a problem of AppSync Host Plugin hotfix hang on non-English machines.

URM00105077: Fixed an issue with SUDO user that leads to append extra sudo before the command execution.

URM00104906: Resolved timeout issue during RP bookmark creation.

URM00105258: Rectified a problem of Virtual Machine being detected as physical host.

URM00105278: Added a fix that will remove extra blank lines from the command output of powermt display.

URM00105281: Fixed an issue of Oracle 12c agent installation prevents discovery of hosts.

URM00105342: Fixed the mapping issue in case of Oracle DB created with UDEV rules.

URM00105629: Fix provided to validate RecoverPoint bookmarks for a CG before restore operation proceeds.

URM00105464: Fixes a device cleanup issue after unmounting a copy on AIX machine.

URM00105759: Fixed timeout issue while running fsck at mount time.

URM00105546: Fixed the discovery issue during mount.

URM00105501: Fixed PowerPath cleanup during unmount and enhanced CLI to mount all filesystem copies that are protected together.

URM00105607: Special characters in XtremIO 3.0 for folder name is handled for expire.

URM00105798: Addressed the unexpected Error while trying to delete an RPA.

URM00105538: Addressed performance issue with Oracle ASM mount with recove


 


VPLEX GeoSynchrony 5.5 SP2 Is Out


 

I'm so happy this release is finally out. During the 3 years since XtremIO went GA (time has passed by so fast..) we have had many customers who wanted to leverage XtremIO with VPLEX. There are many reasons to do so, but my favorite one is an Active/Active virtualized data center based on VPLEX. As such, customers wanted the full feature set of XtremIO, such as UNMAP, and the ease of management, aka VIAS; happy to report these (and more) are now in!

VAAI UNMAP ENHANCEMENTS

Thin provisioning awareness for mirrors:
  • Enables host-based storage reclamation for local and distributed mirrors using UNMAP
  • Enables UNMAP during thin-to-thin migration
  • Restricted to XtremIO and VMware ESXi (same as SP1)

NEW/CHANGED CUSTOMER USE CASES

  • Reclaiming storage from the host using UNMAP to mirrored volumes
  • Creating thin-provisioned mirrored virtual volumes
  • Migrating thin-provisioned storage
  • Noticing soft threshold crossing
  • Handling out-of-space condition on thin mirrors

CREATING THIN MIRRORED VOLUMES

  • The device must be thin-capable:
    – 1:1 device geometry
    – local and distributed RAID-1 mirrors are now supported
    – the underlying storage volumes must all come from XtremIO
  • The virtual volume must still be thin-enabled
  • NEW: The "virtual-volume provision --thin" command can now be used to create thin-enabled virtual volumes. It:
    – provisions XtremIO storage volumes
    – claims these storage volumes in VPLEX
    – creates extents and devices atop the storage volumes
    – creates virtual volumes and makes them thin-enabled

MIGRATING THIN-PROVISIONED STORAGE

  • VPLEX uses a temporary mirror during migration
  • The thin-enabled setting from the source volume is transferred to the target volume when the migration is committed
  • UNMAP is supported during and after migration when:
    – the source and target are both thin-capable (XtremIO), and
    – the source is thin-enabled

NOTICING SOFT THRESHOLD CROSSING

  • If an XtremIO array crosses its configured resource utilization soft threshold, it raises a Unit Attention: THIN PROVISIONING SOFT THRESHOLD REACHED (38/07h)
  • In SP2, VPLEX notices this Unit Attention and emits a call-home (limited to once every 24 hours): scsi/170 WARNING: Thin Provisioning Soft Threshold reached – vol <volName>
  • This Unit Attention is not propagated to hosts on the VPLEX front-end

HANDLING OUT-OF-SPACE ON MIRRORS

  • When an out-of-space error is seen on the last healthy mirror leg of a RAID-1, VPLEX propagates the failure to the host
  • When an out-of-space error is seen on a mirror leg while there are other healthy mirror legs, VPLEX:
    – Marks the affected mirror leg dead
    – Prevents the affected mirror leg from being automatically resurrected
    – Decreases RAID-1 redundancy, but the host is unaffected
  • After resolving the resource exhaustion on XtremIO, the admin must manually resurrect the affected devices in VPLEX

USABILITY: DEVICE RESURRECTION

A new convenience command can be used to trigger resurrection of dead storage volumes:

device resurrect-dead-storage-volumes --devices []

USABILITY: THIN-ENABLED PROPERTY OF VIRTUAL VOLUMES

  • The virtual volume listing contains a thin-enabled property that was a true/false boolean in SP1
  • In SP2, this property has three possible states:
    – enabled
    – disabled (thin-capable but not enabled)
    – unavailable (not thin-capable)

USABILITY: UNMAP MONITORS

  • Two new statistics: unmap-ops and unmap-avg-lat
  • Each is applicable to the following targets: host-init (I), fe-prt (T), fe-lu (L), and fe-director (in the default monitors)

USABILITY: REVERT THIN-REBUILD DEFAULT

  • In SP1, thin-capable storage volumes were automatically configured to use thin-rebuild semantics (i.e. skip the write to a rebuild target if the source and target both hold zero data)
  • In SP2, this behavior has been reverted; for XtremIO, a normal rebuild gives better performance

VPLEX DATA PATH – UNMAP HANDLING

  • In SP1, UNMAP was converted into a WRITE SAME 16 (identical semantics for XtremIO)
  • In SP2, the VPLEX I/O is tagged internally as an UNMAP, and VPLEX initiates an UNMAP to the XtremIO

VPLEX / XtremIO, VIAS SUPPORT

  • VIAS now supports registering an XMS as an AMP managing multiple XtremIO arrays; you no longer have to create one REST AMP per array as in VPLEX 5.5 and 5.5.1
  • During REST AMP registration, it is no longer required to select the array; VIAS will automatically discover all the arrays
  • Under the covers, VIAS will use either XtremIO REST API v1 or v2 for provisioning, based on the XMS version

Select the AMP to see the Managed Arrays on the right side.


Once the new REST AMP has been created, managing virtual volumes with VIAS stays the same; there is no impact anywhere else in either the VPLEX GUI or CLI.

GUI Support for Thin Virtual Volume

Overview

  • A new attribute "Thin Enabled" was added to the virtual volume type. It can be seen in the 'Virtual Volumes' view and the property dialog, and its value can be unavailable, enabled or disabled. Currently, only virtual volumes created from XtremIO storage volumes can be Thin Enabled.
  • A new attribute "Thin Capable" (Yes/No value) was added to the following object types: Storage Volume, Device, and Extent.

Changes in the GUI

  1. Views – Storage Volumes views
  2. Property dialogs
  3. Dialogs – A "Create thin virtual volumes" option was added to the "Create Virtual Volumes" dialog. If the virtual volumes can't be created as thin, the operation will still succeed but they will be created thick instead.
  4. Wizards – In the Provision from Pools/Storage Volumes wizards, if all selected arrays are XtremIO, the user will have the option to create the virtual volumes as thin enabled via the addition of a new page in the wizard.


Below you can see a recorded demo of the new UNMAP functionality


VMware Horizon 7 Instant Clones Technology on XtremIO


One of the cool new technologies that VMware has come up with recently is Instant Cloning, aka "forking".

It was actually talked about way back in 2014 & 2015, but it finally saw the light of day in Horizon 7 as part of the Just-in-Time technology for VDI VMs.

You can read about it here

http://blogs.vmware.com/euc/2016/02/horizon-7-view-instant-clone-technology-linked-clone-just-in-time-desktop.html

As someone who works a lot with VDI and helps our XtremIO customers leverage the array in the best way they can, I wanted to examine the workload of instant clone VMs on a single XtremIO X-Brick. For this I provisioned 2500 VDI VMs running Windows 10 (which in itself is very new to VDI, as I am only starting to see customers deploying it now) and Office 2016. I didn't take the easy route, as both Windows 10 + Office 2016 add a significant load on the infrastructure (compute + storage), but since I want to be able to help with future deployments on XtremIO, I chose these two.

In order to generate the workload on the 2500 VMs, I used LoginVSI which is the industry standard to simulate VDI workloads.

The results can be seen below.


Please Vote for the EMC XtremIO Sessions At VMworld 2016


It's hard to believe a year has passed so quickly since last year's VMworld. This year, we at EMC XtremIO have submitted many good sessions for the submission process; please vote for the ones you are interested in, thanks!

http://www.vmworld.com/uscatalog.jspa?

 

Best Practices for Running Virtualized Workloads on An All-Flash Array [7574]

All-Flash Arrays (AFAs) are taking the storage industry by storm. Many customers are leveraging AFAs for virtualizing their data centers, with a majority of them using EMC XtremIO. In this session, we will examine the reasons behind strong adoption of AFAs, discuss similarities and differences between different AFA architectures, and deep dive into specific best practices for the following virtualized use cases: 1. Databases: how they can benefit from being virtualized on an AFA; 2. EUC: how VDI 1.0 started, what VDI 3.0 means and how it is applicable for an AFA; and 3. Generic workloads being migrated to AFAs. If topics like UNMAP, queue depths etc' ring a bell, this is the session for you!

 

 

Demystifying OpenStack Private Clouds with VMware and EMC XtremIO All-Flash Array [8330]

OpenStack Cloud deployments have long been notorious for being too complex and unpredictable, often requiring long implementation times and ongoing optimization. In this session, experts from VMware and EMC will discuss how VMware Integrated OpenStack (VIO), coupled with EMC XtremIO all-flash scale-out storage can dramatically simplify OpenStack implementations while delivering enterprise-class reliability, consistent predictable workload performance, and easy linear scalability.

 

 
 

Best Practices for Deploying Virtualized SQL Server on EMC XtremIO All-Flash Storage [8746]

All-flash storage opens up many new ways of managing the SQL Server database lifecycle and can impact your existing virtualized SQL Server design and deployment methodologies. Come to this session to see how XtremIO's all-flash array compares to traditional storage, explore use cases, see demos of XtremIO's unique integrated copy data management (iCDM) stack integrated with vRealize Automation to accelerate SQL Server database application lifecycle management, and increase DBA productivity. See why traditional SQL Server best practices can be significantly simplified on XtremIO. We will focus on areas like VMFS/RDM, vSCSI adapters, queue depth, multi-pathing, number of LUNs, database file layout, tempdb placement, and more!

 

Automating Rapid and Space Efficient Provisioning of Oracle Database as a Service (DBaaS) using VMware vRealize Automation & EMC XtremIO for Mission [7569]

Automating Oracle Database as a Service (DBaaS) using VMware vRealize Automation Suite and EMC XtremIO provides accelerated deployment, elastic capacity, greater consolidation efficiency, higher availability, and lower overall operational cost and complexity for deploying Oracle Single Instance and Oracle RAC on VMware vSphere platform.

 

Automating Rapid and Space Efficient Provisioning of Oracle Database as a Service (DBaaS) using VMware vRealize Automation & EMC XtremIO [7572]

Automating Oracle Database as a Service (DBaaS) using VMware vRealize Automation Suite and EMC XtremIO provides accelerated deployment, elastic capacity, greater consolidation efficiency, higher availability, and lower overall operational cost and complexity for deploying Oracle Single Instance and Oracle RAC on VMware vSphere platform.

 

Achieving performance, scalability and efficiency for Oracle DBaaS infrastructure with vRealize Automation and EMC XtremIO all-flash scale-out storag [7626]

DBaaS has tremendous potential to transform how databases are provisioned, consumed, managed and decommissioned – simplifying day to day operations of DBAs, application owners, and developers. However, careful planning and agile infrastructure are the must for delivering DBaaS that performs consistently and scales predictably. Experts from VMware and EMC will discuss key considerations and best practices for successfully delivering Oracle DBaaS.

 

 

Achieving scalability, performance and efficiency for Oracle DBaaS infrastructure with vRealize Automation and EMC XtremIO all-flash storage [7599]

DBaaS has tremendous potential to transform how databases are provisioned, consumed, managed and decommissioned – simplifying day to day operations of DBAs, application owners, and developers. However, careful planning and agile infrastructure are the must for delivering DBaaS that performs consistently and scales predictably. Experts from VMware and EMC will discuss key considerations and best practices for successfully delivering Oracle DBaaS.

 

 

Tales from the Trenches: End-User Computing Architect Perspectives On Next Generation VDI [8222]

VDI architects from VMware and EMC XtremIO discuss the recent trends in desktop virtualization and role of all-flash storage in ushering in the new era of VDI. This technical session will examine Just-in-Time desktops and how to orchestrate desktop building blocks including Instant Clone Technology, AppVolumes and User Environment Management customizations to deliver personalized desktops. The role of XtremIO all-flash storage as the underlying high-speed engine will be discussed. Performance data for VDI with Windows 10 will be looked at in-depth with tips and recommendations for deployment and optimization.

 

 

Hardware as a Service: Case Study of automated provisioning of hardware into vSphere and Cloud Foundry environments [8609]

The Pivotal cloud building team called PEZ, has built an automated system where users submit a request in one end and fully configured vSphere with Pivotal Cloud Foundry on top comes out the other end. Full control is handed over from the bare metal hardware all the way up to the platform on top. All of this built using components of the EMC federation including vSphere, NSX (Network), PCF (PaaS) and XtremIO (Storage). This will be an exploration in automation showing how to inventory hardware, provision hardware via IPMI & PXE, install ESXi over PXE, install vCenter via OVFtool, configure vSphere components with PowerCLI, build networks with NSX over an API, provision storage over an API, and install PCF. All automated by robots. Technology covered will include IPMI, PXE, Foreman, ESXi 6.0, vCenter 6.0, NSX, vSAN, EMC ScaleIO, EMC XtremIO, and Pivotal Cloud Foundry. Tools mentioned include Python, PowerCLI, and lots and lots of magic. All materials will be published with details included. Portions of the project may be available as open source available on Github.

 

 

Cracking the Healthcare Code: The Avera Health Perspective [8828]

Healthcare IT organizations are facing a constant challenge of meeting provider needs to deliver quality patient care through always accessible systems. Healthcare providers require transformative solutions to innovate the desktop and application delivery using virtualization. Avera Health, a regional health system comprising more than 300 locations in 100 communities throughout US mid-west, faced several business challenges of aging PC’s in patient care areas, standardizing nurses’ stations, supporting BYO program with diverse devices, and remote access.  Partnering with VMware and EMC, Avera leveraged a combined solution of Horizon View, App Volumes, and User Environment Manager along with EMC XtremIO for a high performance desktop experience.  This strategy enabled Avera to create a standardized virtual desktop solution and consistent delivery of applications to providers allowing them to be prepared for a large scale deployment of Meditech to their hospitals and clinics. Providers are given easy access to applications and can enter data while preserving their need for mobility.  The result was extremely efficient workflows, improved patient safety, and reduced struggles with technology.   Attend this session to learn how VMware Horizon, App Volumes, User Environment Manager and EMC XtremIO can be implemented together to deliver a highly-available and scalable architecture worthy of the most demanding healthcare environments.

 

Under the covers of the virtualized data center [9011]

A modern businesses’ success is intrinsically linked to the operation of their End User Computing and Business Critical Application environments. The vast majority have already embraced the virtualized data center model and now enjoy the efficiencies realized through the decoupling of server hardware, operating system, and applications. The journey to further optimize your VMware hosted environment does not stop here though! In this session, product experts from Brocade and EMC will discuss what options are available to Solutions Architects and Data Center administrators to achieve further optimization and improvement in the delivery of virtualized IT services. This technical overview will focus on how to ensure you can achieve maximum usage of back-end hardware resources without compromising application performance. Verified vendor best practices to be used in the deployment of both virtual database and virtual desktop infrastructures will be presented and discussed. Opportunities for increased instrumentation across the entire I/O stack (storage and fabric) will be highlighted and examined – delivering performance you can trust with visibility into I/O performance from the application to virtual machine and through the physical layers. Vendor technologies to be discussed will include; Brocade (FC SAN, FabricVision, VM-ID, AMP), EMC (XtremIO All-Flash Storage), VMware (ESXi, Horizon, vROps


AppSync 3.0 Is Here, What’s New


Hi, we have just released AppSync 3.0, if you are new to AppSync, I encourage you to first read about it here

https://itzikr.wordpress.com/2014/12/19/protecting-your-vms-with-emc-appsync-xtremio-snapshots/

and here, https://itzikr.wordpress.com/2016/03/31/emc-appsync-2-2-sp3-is-out-another-must-upgrade-for-xtremio-customers/

In a nutshell, AppSync is our (Integrated) Copy Data Management (iCDM) tool. IDC & Gartner predict that the bulk of data growth is coming from copies of the data itself, and AppSync is our glue for managing it at the array level. Traditionally, snapshots are always associated with degraded performance, either at the array controller level or at the volume level; even the majority of the modern All Flash Arrays can't sustain a large number of snapshots because of architecture deficiencies.

This is not the case with XtremIO, so the "glue" here is very important. So, what's new with 3.0?

VPLEX SUPPORT

Yep, it's finally here. Many customer requests came in for this one, as many customers are leveraging XtremIO + VPLEX either for a true Active/Active datacenter(s) topology or for the VPLEX availability features. AppSync now fully supports VPLEX; here are the supported modes of operation.

You need to be on the following versions for it to work:

  • VPLEX firmware version 5.4.1.03 – 5.5.1.1
  • XtremIO firmware version 4.0.1, 4.0.2, and 4.0.4

Configuring a service plan requires selecting:

  • The preferred cluster (metro only) – Cluster 1 is chosen by default
  • Array preferences (when R1) – a required selection

A VPLEX distributed virtual volume has storage devices on two clusters; the leg to be protected is determined by the cluster that is selected in the service plan storage options.

VPLEX – APPSYNC MOUNT OPTIONS

  • Mount as VPLEX virtual volumes – the provisioned virtual volumes are added to the storage view of the mount host and are torn down during unmount
  • Mount as native array volumes – the mount host must be zoned to the native array where the snapshot is created

VPLEX – APPSYNC MOUNT TO ESX CLUSTER

When the mount host is part of an ESX cluster, target LUNs are presented to all nodes of the cluster.

Clear this option if you do not want to perform an ESX cluster mount.

RESTORE CONSIDERATIONS WITH VPLEX

  • The VPLEX production virtual volume layout must be the same as it was when the copy was created
  • R1 and distributed devices are restored on the one leg of the mirror for which the copy was created; the other leg will be rebuilt and synchronized after the restore is complete (the user has the option to wait or to run this in the background)
  • AppSync does not support restore of volumes protected by RecoverPoint

APPSYNC – VPLEX LIMITATIONS

  • Supports XtremIO backend only
  • Supports the Bronze service plan only (local copies)
  • VPLEX virtual volumes must be mapped 1:1 to an array volume
  • Concatenated devices (RAID-C) are not supported
  • Nested devices are not supported
  • Exchange is not supported
  • Tolerant of a non-XtremIO array on the second leg only

SERVICEABILITY AND WORKFLOW ENHANCEMENTS

You can now perform a consistency check on only the log files of an Exchange database copy in a DAG environment.

below you can see a demo of AppSync 3.0 / VPLEX / XtremIO working as one.

 

UNMOUNT CALLOUT SCRIPTS

  • Provide a scripting opportunity during the "Unmount previous copy" and "Unmount copy" phases
  • Supported on all operating systems, in addition to AIX
  • The unmount callout scripts are placed in a predefined location with a predefined name
  • The scripts are run with the user credentials used to register the host

REPURPOSING ENHANCEMENTS

  • Enhanced scheduling ability to repurpose and refresh a copy, including the ability to schedule repurposing jobs for 2nd Gen copies
  • Ability to view and edit the values of various fields in the repurpose workflow
  • Centralized view of all database repurposed copies

VMWARE SERVICEABILITY ENHANCEMENTS

  • New VMware Service Plan options
  • Create VM snapshots for selected VMs only – ignoring newly added VMs
  • Can now "Exclude VMs"

APPSYNC 3.0 – SQL ENHANCEMENTS

  • Supports taking a cluster copy and mounting it into an alternate cluster, attaching/recovering databases as clustered resources if an alternate cluster SQL Server instance is selected during recovery
  • Supports restoring back as a clustered or non-clustered resource
  • SQL Server databases hosted on mount points are not supported – mount to "Default Path" is not supported due to Microsoft restrictions

SQL NON-VDI COPIES

  • Similar to crash consistent, but still utilizing Microsoft VSS on the file system (does not integrate with SQL)
  • Mount with attaching the database (recovery) on the target host is not supported for non-VDI copies – all other SQL recovery options are not supported
  • Restoring as a file system is supported – reattaching and recovering the database is a manual process

here’s a demo about the NON-VDI integration



VMware horizon 7.0.1 / Instant Clones / App Volumes 2.11 / Windows 10 + Office 2016 On XtremIO


Wow, that was a long title for a blog post but then again, there are many moving parts in this cutting edge mixture of technologies..

Instant Clone Technology (that is, vmFork) uses rapid in-memory cloning of a running parent virtual machine, and copy-on-write to rapidly deploy the virtual machines.

Copy-on-write, or COW, is an optimization strategy. This means that whenever a task attempts to make a change to the shared information, it should first create a separate (private) copy of that information to prevent its changes from becoming visible to all the other tasks.

A running parent virtual machine is brought to a quiescent, or quiet, state and then forked (when a piece of software or other work is split into two branches or variations of development), and the resulting clones are then customized (receiving unique MAC addresses, UUIDs, and other information) and powered on. These clones share the disk and memory of the parent for Reads and are always ready for login instantly. After the clone is created, Writes are placed in delta disks. Both memory and disk are copy-on-write, so if a new clone modifies bits of its memory or disk, a separate copy is made for that virtual machine, thus preserving security and isolation between virtual machines.

Instant clones are installed as part of the Horizon broker. There is no install option to select; it simply installs with the Horizon 7 broker.

The instant clone engine is a Java app that runs on the Tomcat server.

API calls are made to VMFork on vSphere.

The Master Image has the Horizon Agent installed with the Instant Clone option selected. This contains the code for instant clones and the instant clone customization engine.

Any broker is capable of running the NGVC engine, but only one of them should be running NGVC. The existing Federated Tasks are used to manage this. If the broker running NGVC were to crash or shut down, the task would move to another broker in the cluster. In-flight operations can error out in that scenario, but newer cloning operations should work fine.

It is stateless because all configuration information is stored in vCenter. There is no multiple-database scenario (Composer database / LDAP / vCenter) getting out of sync; there is one source of truth.

Master VM

  • "Golden Image"
  • Follow build Guidelines in KB 2007319 – See Hotfix for Win7 in Step 10
  • Optimizations – OS Optimization Tool
  • Horizon Agent with Instant Clones
  • Other Agents – AppVolumes, FlexEngine
  • Snapshot(s)

Template

  • Linked clone of Master VM – Based on chosen snapshot
  • Low disk space Usage
  • Named as <cp-template-GUID> in vCenter in <ClonePrepInternalTemplateFolder>
  • Defaults to same datastore as Master VM
  • Linked to Master VM.

Replica

  • Full Clone of Template – Thin provisioned – Less total disk space than Master VM
  • Has a Digest
  • Placed on the selected datastore(s) for desktops. Can be placed on a single specific datastore in case of tiered storage selection.
  • Named as <cp-replica-GUID> in vCenter in <ClonePrepReplicaVmFolder>
  • Shared read disk for desktop VMs

Parent

  • Used to fork Desktop VMs
  • Placed on the selected datastore(s) for desktops. One per host per datastore
  • Named as <cp-parent-GUID> in vCenter in <ClonePrepParentVmFolder>

Desktop VMs – Instant Clones

  • Placed on selected datastore(s) for desktops.
  • Named as defined in Horizon Administrator Pool Settings
  • Very small disk space usage – Can grow over time but limited by delete on logoff

Instant Clone Components in vCenter

Here’s a deeper dive session into the concepts (and more!) detailed above

OK, so now that you are familiar with Instant Clones, let's see how they behave on XtremIO!


XIOS 4.0.10 And XMS 4.2.0 Are Here, Here’s What’s New


Hi,

I'm so proud to finally announce the GA of XIOS 4.0.10 / XMS 4.2.0. This release contains a couple of features that so many of you, our customers, have been asking for, so let's highlight them and then dive a little deeper into each one of them:

ŸXtremIO PowerShell support
ŸSMI-S support
ŸServiceability enhancements
ŸREST API Improvements
ŸVSS enhancements
ŸXMS Simulator
ŸWebUI Tech Preview
Ÿthe usual bug fixes.

XtremIO PowerShell support

  • Supported with XMS 4.2.0
  • PowerShell supported versions: 4.0 and 5.0
  • Based on XtremIO REST API version 2.0
  • Supports all storage management commands

Installation

  • XtremIOPowerShell.msi package – available on the support page
  • Verify that PowerShell ISE version 4 or above is installed
  • The installation package imports the module into the PowerShell path


Connecting To The XMS

New-XtremSession
Connects to a single cluster or to all clusters managed by the XMS
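As a quick illustration, a first session might look something like the sketch below. The module name and the parameter names are my assumptions based on the cmdlet structure described in this post, so verify them with Get-Help New-XtremSession once the module is installed.

Import-Module XtremIO        # module name is an assumption, based on the XtremIOPowerShell.msi package name

# Parameter names below are assumptions – check Get-Help New-XtremSession for the exact syntax
$cred = Get-Credential       # your standard XMS user credentials
New-XtremSession -XmsName 'xms01.lab.local' -Username $cred.UserName -Password $cred.GetNetworkCredential().Password

# Once a session exists, the Get/New/Set/Remove verbs described below become available, e.g.:
Get-XtremVolumes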


PowerShell Commands

Supports the storage management REST commands.
XtremIO command structure: Action-Xtrem<Object>
Supported actions: Get, New, Set, Remove
Example: Get-XtremVolumes

Capabilities

  • Ability to list a subset of attributes (-Properties)
  • Filtering logic support (-Filters), with the operators gt, lt, like, eq and ne
  • -CLImode: avoid user confirmation at the session level (for scripting)
  • -Confirm: user confirmation for a single command
  • -ShowRest: returns the command in JSON format
  • -Full: lists all object attributes
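To give a feel for how these switches combine, here is a hedged sketch. The property names and the filter string syntax are assumptions (loosely following the REST v2 attribute naming), so adjust them to whatever your module version expects.

# Property names and filter syntax are assumptions – confirm with Get-Help Get-XtremVolumes -Full

# List only selected attributes of volumes whose name contains 'Prod'
Get-XtremVolumes -Properties 'name','vol-size' -Filters 'name:like:Prod'

# Return every attribute of every volume
Get-XtremVolumes -Full

# Show the underlying REST call in JSON format for the same query
Get-XtremVolumes -Properties 'name' -ShowRest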


PROPERTIES: EXAMPLE


FILTERS EXAMPLE


SHOWREST: EXAMPLE


FULL: EXAMPLE


SMI-S Integration

What is an SMI-S Provider?


  • An SMI-S provider implements a SNIA-developed standard intended to facilitate the management of storage devices from multiple vendors in storage area networks
  • It enables broad, interoperable management of heterogeneous storage vendor systems
  • Multiple 'profiles' can be implemented
  • We have implemented the profiles required for the Microsoft SCVMM and Microsoft Azure platforms
  • All SCVMM operations can be done in the GUI or through cmdlets
  • More profiles will be implemented in the future, based on field requirements and roadmap features

SMI-S PROVIDER – IMPLEMENTATION DETAILS

  • SMI-S Provider implemented directly on the XMS – no external component needs to be installed
  • Better and consistent performance guaranteed
  • Array operations possible: create/delete a volume, create an XtremIO Virtual Copy – XVC (aka Snapshot), and mount the volume on a host
  • The entire array is presented as a single pool
  • Internally uses REST API calls (see the sketch below)
  • Completely stateless and does not cache any data
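For the curious, the underlying REST API v2 calls the provider (and the PowerShell module) issue look roughly like the sketch below; the URL path, host name, and certificate note are assumptions for illustration only.

# Hedged sketch of an XtremIO REST API v2 call; URL path and host name are assumptions.
# With a self-signed XMS certificate you may need to trust the certificate first.
$xms  = 'xms01.lab.local'        # placeholder XMS host name
$cred = Get-Credential           # a read-only XMS user is enough for GET requests

# List all volumes known to the XMS
Invoke-RestMethod -Uri "https://$xms/api/json/v2/types/volumes" -Method Get -Credential $cred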

SMI-S PROVIDER – USER CREATION
  • ECOM needs a new read-only user on the XMS
  • The user needs to be defined on ECOM (for non-default password usage)
  • The same user must be defined on both ECOM and the XMS before adding the provider to SCVMM
  • Provide the same credentials in SCVMM
  • LDAP users are also supported

SMI-S PROVIDER – IMPLEMENTED PROFILES
The following profiles have been implemented:
Array
Masking and Mapping
Software
Disk Drive Lite
FC Target Ports
iSCSI Target Ports
Physical Package
Multiple Computer System
Location
Block Services Package
Thin Provisioning
Replication Services (Snapshots)

SMI-S PROVIDER – COMMAND FLOW

ADDING XMS IN SCVMM

  • Fabric tab -> Add Resources -> Storage Devices
  • Specify the "Run As" account as defined in ECOM and the XMS
  • Go to the 'Jobs' tab to see the operation status
  • The 'Providers' option will show the XMS information and current status
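Since all SCVMM operations can also be scripted, here is a hedged sketch of registering the XMS-based SMI-S provider with the SCVMM cmdlets. The server names, the Run As account name, and the SMI-S port are placeholders, and your SCVMM version may require additional parameters, so treat this as a starting point rather than a recipe and check Get-Help Add-SCStorageProvider first.

Import-Module VirtualMachineManager
Get-SCVMMServer -ComputerName 'vmm01.lab.local' | Out-Null   # connect to the VMM management server (name is a placeholder)

# Same credentials as defined on ECOM and the XMS; account name is a placeholder
$runAs = Get-SCRunAsAccount -Name 'XtremIO-SMIS'

# Register the XMS as an SMI-S storage provider (port 5989 assumed for SSL;
# depending on the SCVMM version an additional parameter-set switch may be required)
Add-SCStorageProvider -Name 'XtremIO-XMS' -RunAsAccount $runAs -NetworkDeviceName 'https://xms01.lab.local' -TCPPort 5989

# Refresh the provider and list the discovered arrays
Get-SCStorageProvider -Name 'XtremIO-XMS' | Read-SCStorageProvider
Get-SCStorageArray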


VIEWING STORAGE DETAILS

ADDING LUN IN SCVMM
  • 'Create Logical Unit' allows you to create new volumes
  • Right-click on a volume name to delete it


MOUNTING LUN IN SCVMM
Right-click on the host -> Properties


SNMP HEARTBEAT
  • Allows active clusters to send an SNMP keep-alive trap to the SNMP manager
  • The interval is customer-configurable, from 5 minutes to 24 hours
  • Enabling it via the CLI requires two commands:
1. Enable the feature and its frequency at the XMS level:
modify-snmp-notifier enable-heartbeat heartbeat-frequency=15
2. Enable the feature at the cluster level (the default is enabled for all clusters):
modify-clusters-parameters enable-heartbeat

EXPORT/IMPORT XMS CONFIGURATION

ŸGUI and CLI now support the option to export and
XMS configuration for back-up and restoration
ŸThe following configuration elements are exported
XMS: User Account, Ldap Config, Syr Notifier, Syslog Notifier, Snmp Notifier, Email Notifier, Alert Definition
Cluster: Cluster Name, X-Brick Name, SC Name, Target Ports, iSCSI Portal & Routes, IG, Initiator, Volume, CG, LUN Mapping, Tag, Scheduler


Snapshot Enhancements

  • Native VSS support for application-aware snapshots
  • The new VSS support works inside the VM guest using RDM volumes


WebUI Technical Preview

Yeah, that's the one you have all been waiting for, but a couple of disclaimers: this is a technical preview, which means we ask you to test it and provide feedback; it's not the final Web UI and it's likely to change before GA. The reason we are releasing it now is so that you can contact us and let us know your opinion about it – good, bad, ugly, it's all good! Please note that the classic Java GUI still works and provides the full functionality, so you can keep using it alongside the Web UI.

WEB BASED BROWSER ACCESS
  • 100% pure HTML5 (no Java)!
  • Just enter the XMS WebUI URL
  • Enter your standard XMS user credentials
  • Nothing to install


HOMEPAGE OF WEBUI
  • In a multi-cluster setup: Multi-Cluster Overview
  • In a single-cluster setup: Single-Cluster Overview
  • To access the WebUI homepage, click on the WebUI logo


CLUSTER/S HOMEPAGE
Single cluster homepage


MULTIPLE CLUSTERS HOMEPAGE

NAVIGATION
Two main navigation components:
  • Menu bar
  • Context selector


CONTEXT SELECTOR
Only object types supported by the selected menu item appear in the Context Selector.
Filtering capabilities:
  • Direct: text, selected properties, tags
  • Indirect: filter based on relationships to other objects


CLUSTER/S OVERVIEW

 


DRILL DOWN TO MOST ACTIVE INITIATOR GROUP


VIEW ALL VOLUMES OF AN APPLICATION



Reports

For each object type in the Context selector there is a list of supported reports
ŸAll reports support single/multiple object reports, for example:


Troubleshoot an object with all available data:
One-click navigation between pre-defined reports


Track capacity and data savings over time


Track endurance and SSD utilization

Notifications & Events


Events screen

Alerts
ŸDrill down to Critical/Major Alerts from Cluster Overview

Storage Configuration & mapping

Configuration Screen
Create/delete, perform all actions on selected objects.


Storage configuration & mappings
Configuration Screen
No context selector
Indirect cluster filtering


Mappings Screen
Map between selected Volumes/CGs/Snapsets and selected Initiator Groups (many-to-many object mappings supported)


Main provisioning flow

(1) Create Volumes  (2) Create Initiator Groups  (3) Map


3. Map

Add volumes to Consistency Group


Local Protection

Create 1-time local protection or define local protection policy


Refresh copy


EXPORTING DATA TO CSV FILE


  • Export configuration/inventory object data


Hardware view



Inventory




A New VDI Reference Architecture


Hi,

We have just released a new VDI Reference Architecture based on all the latest and greatest:

This reference architecture discusses the design considerations that give you a reference point for deploying a successful Virtual Desktop Infrastructure (VDI) project using EMC XtremIO. Based on the evidence, we firmly establish the value of XtremIO as the best-in-class all-flash array for VMware Horizon Enterprise 7.0 deployments. The reference architecture presents a complete VDI solution for VMware Horizon 7.0, delivering virtualized 64-bit Windows 10 desktops. The solution also factors in VMware App Volumes 2.10 for policy-driven application delivery, which includes Microsoft Office 2016, Adobe Reader 11, and other common desktop user applications.

You can download the RA from here:

http://www.emc.com/collateral/white-papers/h15250-vdi-with-xtremio-and-horizon-enterprise-7.pdf

and here’s a demo to show it all for you!

Virtualization of Windows 10, Office 2016 desktop environment on XtremIO

  • Deploying 3000 virtualized desktops (linked clones and full clones) managed by VMware Horizon 7 on EMC XtremIO 4.0.10

  • On-demand application delivery using VMware App Volumes 2.10 to 3000 desktops on EMC XtremIO 4.0.10

  • Performance evaluation of virtualized desktops deployed at scale (3000 desktops) using Login VSI on EMC XtremIO 4.0.10

  • Common considerations for deploying VDI at scale using EMC XtremIO 4.0.10

  • XMS Web GUI technical preview


EMC AppSync 3.0.1 Is Out, Here’s what’s new


Hi,

Building on top of the 3.0 release of AppSync, we have just GA'd the first service pack for it.

AppSync 3.0.1 includes the following new features and enhancements:
Agent robustness – Allows you to configure a common time-out value for commands on the UNIX agent, for time-out flexibility.


Error reporting – The UNIX host based error messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.


Event message accuracy – The event messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.


Error logging – Enhanced logging for workflows and an enhanced UNIX plug-in log for readability.


Configurable retry for failed jobs – Allows you to set a retry count and retry interval for the VSS freeze/thaw operation in the case of VSS failures (for example, the VSS 10-second timeout issue) encountered during copy creation.


The AppSync User and Administration Guide provides more information on how to set a retry count and interval for failed jobs.


Mount multiple copies to the same AIX server – Allows you to mount multiple copies of the protected application consecutively on the same AIX server used as the mount host.

Exchange support for VPLEX – Allows you to create and manage copies of your Exchange data. This addition completes VPLEX support, covering all five AppSync-supported applications (Oracle, Microsoft SQL, Microsoft Exchange, VMware Datastore, and File System).


Selective array registration – For XtremIO and VMAX, you can now select the arrays that you want AppSync to manage when the XMS and SMI-S provider are managing multiple arrays.


 


VMworld 2016, EMC AppSync & XtremIO Integration with VMware vCenter


What an awesome year for us at EMC XtremIO, so many things going on!

One of the announcements we are making at VMworld is the integration between AppSync and vCenter, but what does it actually mean?

Well, if you are new to AppSync, I suggest you start here:

http://xtremioblog.emc.com/xtremio-appsync-magic-for-application-owners

so still, what’s new?

We are now offering you, the vCenter / vSphere / cloud administrator, the ability to operate AppSync from the vCenter Web UI. Why? Because we acknowledge that every admin is used to working with one tool as the “portal” to his or her world, and instead of forcing you to learn another UI (in our case, the AppSync UI), you can now do it all from the vCenter UI.

Apart from repurposing your test/dev environment, which AppSync is known for (utilizing the amazing XtremIO CDM engine), I want to take a step back and focus on one use case that is relevant for EVERY vSphere / XtremIO customer out there: backing up and restoring VMs for free. No, really. You can now take as many snapshots as you want and restore from each one. You can either:

  1. Restore a VM or VMs

  2. Restore a full volume (datastore) and the VMs that were in it

  3. Restore a file from within the VM (the C:, D:, etc. drives) – no agent required!

    This is a very powerful engine, since the majority of restore requests are for recent data (from the last week or so), so you can happily use the XtremIO XVC engine to restore from – easy, powerful and, again, free!

    See the short demo here:


    See a full demo here:

