Channel: itzikr – Itzikr's Blog

EMC World 2015 – The XtremIO Sessions



Wow, I can't believe EMC World 2015 is just around the corner. This year will be super big for us at XtremIO: there are so many awesome sessions and, who knows, maybe a new product announcement. For your convenience, I have put all the sessions related to us, along with the labs, below; you can schedule them here:

https://www.emcworldonline.com/2015/connect/search.ww

Transform Your Real Time Analytics With XtremIO & Splunk In this session, we will discuss how you can transform your real time analytics with Splunk using EMC storage solutions. We will provide technical validation and customer proof points along with a reference architecture using XtremIO and Isilon products. We will discuss benefits of using flash storage for improving performance by placing hot/warm buckets while optimizing the cost using Isilon for cold/frozen buckets.



What’s New With XtremIO For 2015 This session discusses what’s new with XtremIO for 2015 as announced in our 4.0 release. We also discuss the coolest new capabilities and how our beta customers leveraged them, including replication, online cluster expansion, sophisticated new snapshot management, new X-Brick options, expanded cluster options, and new portfolio integrations.


XtremIO: Transforming Your Workloads, Enabling The Agile Data Center This session provides an update on XtremIO and how it is transforming customers’ workloads for agility and tremendous TCO and business benefits. With customer examples and best practices, we will discuss the use cases and benefits for consolidating mixed workloads across database, analytics, business apps, and hybrid cloud. And we will explore customer examples of how these transformed workflows for real-time analytics and agile dev/ops. Specific best practices include converged infrastructure VSPEX & Vblocks, in-memory copy services, data reduction, virtualization, and QoS.



Transforming Agile Software DevOps With XtremIO More than ever, agile development and software DevOps drive critical top-line business impact for customers across a broad range of industries. Learn how XtremIO is fundamentally enabling the next generation of agile DevOps, with customer use cases that:

  • Improve developer productivity and overall product quality by providing full copies of production applications & datasets to all developers with zero-overhead XtremIO in-memory copy services
  • Dramatically accelerate performance across the entire DevOps ecosystem, enabling thousands of developers in real time
  • Deliver consistent, predictable performance for developers, automated build systems, and automated QA via sub-millisecond all-flash DevOps storage
  • Accelerate the adoption of continual experimentation and learning through rapid prototyping iterations.



Transformational Technology Inside XtremIO This session provides an overview to the EMC XtremIO all-flash scale-out array and its design objectives. The architecture is discussed and compared to other flash arrays in the market with the goal of helping the audience understand the unique requirements of building an all-flash array, the proper methodology for testing all-flash arrays, and architectural differentiation among flash array features that affect endurance, performance, and consistency across the most demanding mixed workload consolidations.



Data Protection For EMC XtremIO With EMC RecoverPoint (Hands-on Lab) XtremIO is the only scale-out storage array designed from the ground up for flash. In this lab, see how EMC is bringing business continuity and data protection to XtremIO through EMC RecoverPoint: get an overview of the RecoverPoint 4.1.2 Unisphere interface, learn about RecoverPoint integration with XtremIO, and learn how to protect applications running on XtremIO arrays.



XtremIO In The Wild: Insights, Data Reduction, Performance & Operational Best Practices As the market’s #1 all-flash array, XtremIO has been deployed to almost 2000 customers to date. This session looks at some of the most interesting insights from that install base to help discern the future of storage and all-flash application workloads. We will spotlight general and specific results for data reduction, performance, and operational best practices.



Transforming End-User Computing With XtremIO: DaaS & New Use Cases For Graphics & Beyond Learn how XtremIO is transforming the End User Computing landscape by offering a full range of desktops – non-persistent/persistent, high-performance, and NEW graphics-intensive – with consistent and predictable performance at a low $/desktop and total cost of ownership. Going beyond VDI, XtremIO is enabling newer EUC use cases like Desktop-as-a-Service where best of both worlds meet – uncompromising experience for the end-users and radically simple deployment model with breakthrough costs for the IT organizations. Customers are sharing their EUC transformational journey with XtremIO giving you the opportunity to apply the lessons learned in your EUC environments.    



XtremIO For SAP: Game Changer For SAP Landscapes Learn how EMC is helping customers increase SAP performance by 70% and reduce TCO dramatically using XtremIO All-Flash Arrays. This customer panel teaches you how to move your SAP applications from traditional infrastructure to all flash arrays at incredible speed, while consolidating multiple copies for test and pre-production landscapes onto one XtremIO platform.



Business Continuity For XtremIO All Flash Array The session discusses the range of XtremIO high availability / business continuity solutions that enable XtremIO customers to protect mission-critical workloads. EMC offers…


Brocade Communication Systems, Inc.: Redefining Storage Connectivity For 3rd Platform From Isilon To XtremIO EMC and Brocade are redefining storage connectivity for Fibre Channel and IP storage to enable the journey to the 3rd platform. Whether it is an XtremIO AFA with FC SANs or a Big Data Hadoop application with Isilon scale-out NAS, a Brocade Storage Fabric with Fabric Vision is required.



XtremIO Native Replication With RecoverPoint The session provides a deep dive of the new native replication for XtremIO. It articulates the innovative unique replication technique that leverages the power of XtremIO’s in-memory snapshot architecture to perform replication with breakthrough efficiency and RPOs. We will discuss the different design considerations when applying replication to all flash workloads and how XtremIO and RecoverPoint successfully implement those considerations. We will show how customers can achieve best in class replication while keeping XtremIO’s consistent high performance. The session includes a demonstration of the solution.  



XtremIO For Microsoft Workloads This session explores how the transformational capabilities of XtremIO can benefit your SQL Server, Hyper-V and Exchange deployments. We also explore consolidation and agility across SQL Server environments enabled by zero cost writeable snapshots and the game-changing benefits of deduplication and compression within Exchange and Hyper-V VDI deployments.  


XtremIO v3.0 GUI & CLI Simulator (Hands-on Lab) Use this simulator as an introduction for a look and feel of the XtremIO XMS GUI Dashboard and CLI. You will be able to navigate through the dashboard as if it is a real environment, including provisioning storage, managing performance, and creating monitors.


Top 10 Tips & Tricks To Rock Your XtremIO World Learn the top 10 things to get the most out of your XtremIO system. Our panel of technical experts will share best practices of configuration, operation and management, and will  provide insights into your specific questions during a live ‘Ask-The-Expert’ session.


Accelerating SQL Server With XtremIO This session provides a deep dive into XtremIO all-flash technologies for SQL Server. We begin by looking at the XtremIO features that are unique for SQL Server deployments, going over some test results we are seeing in the lab with different feature implementations. We then switch gears into a demo and dive into detailed use case studies of where you could benefit from deploying your SQL Servers on XtremIO. We wrap up the session by discussing some common pitfalls and deployment considerations, and provide some best-practices guidance.



No Storage Tuning & Complete Operational Simplicity For Databases Running XtremIO In a 2014 IT Resource Strategies survey, DBAs and storage teams reported spending nearly 70% of their time maintaining their database environments versus focusing on application integration. This session reviews detailed solutions and case studies to showcase the value of EMC XtremIO all-flash arrays in dramatically reducing the steps needed for database infrastructure performance tuning, provisioning, and replication management.


SAS Analytics: Transformed With XtremIO & Isilon This session provides an overview of how your mixed analytics SAS workloads can be transformed on XtremIO & Isilon scale-out platforms. Whether you're working with one of the three primary SAS file systems or SAS Grid, performance and capacity will scale linearly, significantly reducing application latency and removing the complexity of storage tuning from the equation. Learn about best practices from fellow customers and how they deliver new levels of SAS business value.


Intel: Intel Technologies To Optimize Storage System Data Movement & Operations Faster response times for storage applications and deployments (such as big data and hybrid cloud) are becoming a critical requirement. Intel's hardware and software technologies can help product developers and IT managers build the infrastructure needed to successfully deploy services. Intel's hardware technologies, including CPU, networking, and SSD, together with its software technologies, provide faster data movement through storage systems such as EMC VMAX, VPLEX, VNX, XtremIO, and Isilon, and faster data operations for storage functions such as compression, deduplication, encryption, and RAID in these storage systems.


RecoverPoint: What’s New In 2015 Protection and mobility are essential in the datacenter, across datacenters, and to the cloud. Learn about new ways introduced in 2015 to protect your applications and data with RecoverPoint and RecoverPoint for Virtual Machines – EMC’s software-only, hypervisor-based VM-level disaster recovery solution. New enhancements and integrations include replication for VSPEX Blue, XtremIO, ScaleIO, and EMC Enterprise Hybrid Cloud, and protection of virtual environments.


FC SAN: Should I Stay Or Should I Go… Refreshing your infrastructure?  Wondering “should I converge my network now and go with an IP based Storage Area Network or should I wait just a little longer to see what happens in the industry”?  Have you started the process of migrating to an IP based SAN and uncovered some of the IP SAN pitfalls?  Wondering why you can’t connect your vmknics to your NSX Logical Switches??  Thinking about how to support the connectivity needs of ScaleIO, VMware Virtual SAN or XtremIO?  If you answered yes to any of these questions, join us for a candid discussion that will be focused on storage networks and the intersections with Ethernet, IP and Network Virtualization.  We’ll provide some insight into the trends we’re seeing and you can feel free to share your thoughts and concerns or just take the opportunity to vent.


Best Practices For Running Virtualized Workloads On XtremIO Great, your customer has just purchased a shiny new all-flash array (AFA); now what? In this session, learn the reasons behind one of the quickest revolutions the storage industry has seen in the last 10 years. We will cover the different architectures and use cases, go through specific use-case pain points and how to overcome them, and lastly put an emphasis on specific gotchas that AFAs possess and how to overcome these, so your customer won't have to.

VPLEX: Advanced Configuration & Design – Performance, Design, Failure Modes & More This session covers advanced aspects of using VPLEX, such as performance optimization, scalability, configuration, and failure modes and will provide customers with deeper understanding of its architecture to design for optimal behavior. This session also discusses in details best practices of architecting and deploying VPLEX with all-flash XtremIO environments for performance and availability, and virtual-environments for maximum flexibility.  


ViPR Controller: Automate Delivery Of Storage Services (Hands-on Lab) ViPR Controller is storage automation software which transforms multi-vendor storage into a simple, extensible and open platform.  During this lab you will learn how to provision XtremIO and Hitachi storage, VPLEX volumes and create snapshots with ViPR Controller.  You will also learn how to provision ViPR Controller managed storage via the plug-in to VMware vRealize Orchestrator.



Global Partner Summit – Flash Breakout for Partners Did you know  XtremIO is the fastest growing product in EMC’s history and is running away with the market share in this emerging space?  In this  session, led by Mike Wing, Global VP, XtremIO, you’ll learn about this transformational story, how to position yourself as a leader, and how to help your customers Redefine Possible with the power of Flash.

Tuesday, May 5, 3:40 PM – 4:30 PM


EMC IT: Leveraging The Power Of Flash In The Data Center How did EMC IT get breakthrough shared-storage benefits for application acceleration, consolidation, and agility with a reduction in cost plus scale-out architecture, consistent low latency and IOPs, deduplication, and compression? This session provides an overview, discussion, use cases, benefits and lessons learned from EMC IT’s implementation of Flash storage/XtremIO.


Evaluating All-Flash Arrays AFA’s are evaluated differently from hybrid arrays. Learn the latest tools and techniques from XtremIO engineers and each other. Share your challenges and successes to help fellow attendees with their testing. Bring your ideas to advance the state of the art and improve available tools. This is a no-holds-barred session aimed to get a clear picture of AFA capabilities.


Enterprise Hybrid Cloud & Database-as-a-Service Best Practices Learn best practices for Enterprise Hybrid Cloud deployments with Tier 0 storage services built on scale-out all-flash arrays. This session discusses key use cases and benefits such as Database-as-a-Service, leveraging pre-defined service catalog capabilities; empowering VM Admins, Application Admins, and Infrastructure Admins with self-service; streamlining EHC management; eliminating silos; accelerating workflows; and handling cloud bursting. Customer examples show how XtremIO can take your hybrid cloud to the next level for all workloads, with breakthrough consolidation, data reduction, workload agility, and elastic capacity.


OpenStack Enablement With EMC This session provides an overview of different data storage services in OpenStack, then details how various EMC products (VMAX, VNX, XtremIO, Isilon, Scale IO, etc.) integrate with OpenStack, covering best practices, use cases, and unique value-added benefits that can be derived from each.





vSphere 6 guest unmap support (or the case for sparse se, part 2)


Hi,

If you are a follower of my blog, you know how much space reclamation is a topic that is close to my heart. In fact, almost exactly a year ago, I wrote this post: https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/ which I highly recommend you read if you want a full understanding of space reclamation, UNMAP, and where VMware stands in this regard.


Times are changing, and I'm so happy that in one year VMware has made some changes to UNMAP as well. It's only a start, but it's a good one.

So what's so different?

Well, in vSphere 6, a guest VM can now issue an UNMAP and the ESXi host will not block it. Is this a new thing?

The config option, EnableBlockDelete, is not new; it existed even prior to vSphere 6.0 (however, the description of that config option was changed in vSphere 6.0 to reflect the actual implementation). Prior to vSphere 6.0, Windows guests did not issue automatic UNMAPs because of the lack of support for the B2 mode page on the vSphere side, which is required by Windows 8 and above. vSphere 6.0 implements the B2 mode page.

As long as the guest OS follows the alignment requirements reported to it in READ CAPACITY (16) and the B0 mode page, it will work.


Caveats / Limitations

It's not perfect; there are several cases where this will not work:

  • UNMAP of blocks deleted within VMFS (i.e., VMDK deletion, swap file deletion, etc.) is not covered; for this you can use the EMC VSI plugin.
  • Unmap granularity: with the current VMFS version, the minimum unmap granularity is 1MB.
  • No automatic UNMAP support for Windows versions older than Windows 8 / Server 2012 (and no Linux support).
  • The device at the VM level (the drive) must be configured as “thin”, not “eager zeroed thick” or “lazy zeroed thick”.

Still OK with you? You just upgraded to vSphere 6 and you are running modern guest OSs like Windows Server 2012 and Windows 8? Great! See the video I made below.
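To summarize the requirements, here's a toy Python checklist mirroring the caveats above (the function and value names are mine, purely illustrative; this is not a VMware API):

```python
# Toy checklist mirroring the in-guest UNMAP caveats; names are illustrative.
def guest_unmap_supported(vsphere_version, guest_os, disk_type, enable_block_delete):
    """Return True if automatic in-guest UNMAP can be expected to work."""
    if vsphere_version < (6, 0):
        return False  # the B2 mode page is only implemented in vSphere 6.0
    if not enable_block_delete:
        return False  # the EnableBlockDelete config option must be enabled on the host
    if disk_type != "thin":
        return False  # eager/lazy zeroed thick virtual disks are not eligible
    # only Windows 8 / Server 2012 and later issue automatic UNMAPs (no Linux support)
    return guest_os in {"windows8", "windows2012", "windows2012r2"}

print(guest_unmap_supported((6, 0), "windows2012", "thin", True))   # True
print(guest_unmap_supported((5, 5), "windows2012", "thin", True))   # False
```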



XtremIO : Integrating With VMware Storage IO Control (SIOC) , vSphere 5.5 and 6.0


Hi,

A common question that I get is how to apply QoS at the VM level if and when it's needed. This can come up for many reasons:

  • You are a service provider and you want to cap your clients.
  • You are a customer and you want to prevent a VM from going wild (the “noisy neighbor”).
  • You have a snapshot volume/VM to which you don't want to allocate the same amount of storage resources as you allocate to your “production” VM.

Luckily, VMware has made a lot of investment on this front in both vSphere 5.5 and 6.0.

In vSphere 5.5, the default scheduler was changed to the mClock scheduler, which you can read more about here:

http://cormachogan.com/2014/09/16/new-mclock-io-scheduler-in-vsphere-5-5-some-details/

And in vSphere 6.0, a per-VM "reservation" was added; the "reservation" part still needs to be configured in the VM's .vmx file.
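For intuition, here's a rough Python sketch of how a shares/reservation/limit scheduler divides IOPS among VMs. This is a simplified model of the concept only, not VMware's actual mClock algorithm:

```python
def allocate_iops(total_iops, vms):
    """Toy allocator: satisfy each VM's reservation first, then split the
    remaining IOPS proportionally to shares, capping at each VM's limit.
    (Single pass; capped surplus is not redistributed, unlike a real scheduler.)"""
    alloc = {name: cfg["reservation"] for name, cfg in vms.items()}
    remaining = total_iops - sum(alloc.values())
    total_shares = sum(cfg["shares"] for cfg in vms.values())
    for name, cfg in vms.items():
        alloc[name] = min(cfg["limit"], alloc[name] + remaining * cfg["shares"] / total_shares)
    return alloc

vms = {
    "prod": {"shares": 2000, "reservation": 300, "limit": 10000},
    "dev":  {"shares": 1000, "reservation": 0,   "limit": 10000},
}
alloc = allocate_iops(1000, vms)  # prod gets its reservation plus 2/3 of the rest
print({name: round(v) for name, v in alloc.items()})  # {'prod': 767, 'dev': 233}
```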

I made a video showing all of this. Enjoy, and I hope you'll use this feature; it's really cool!


XtremIO 4.0 – XDP Enhancements (part1)


One of the best parts of this release is the effort we put toward everything related to data protection. XtremIO is becoming a general, all-purpose flash array and, as such, customers demand nothing but the best when it comes to protecting their data. Let's examine the features:

COPY Data Management


To improve the ease of use and productivity around snapshots, XtremIO introduces:

•Read only snapshots

•Consistency management (CG)

•Snapshot set

•Local protection using scheduler

•Application aware (Native VSS)

•Refresh and recovery


Snapshots can now be:

•Read-Write (default)

•*NEW* Read-Only

Read-only snapshots are immutable and cannot be changed. The main use case is protection from logical data corruption.

New entity: Consistency Group


A Consistency Group manages snapshots on a set of volumes.

Members: volumes or snapshots

A snapshot on a CG creates cross-consistent, point-in-time snapshots of all members.

The result of a snapshot on a CG is a Snapshot Set that points to the created snapshots.

Snapshot Sets are managed by the CG.


New entity: Snapshot Set


An entity that points to related snapshots, correlating a set of snapshots to a single snapshot operation.

Created as a result of a snapshot on:

•A Consistency Group

•A Snapshot Set

•A set of volumes


Snapshot SCHEDULER


•Auto-creation of snapshots based on policy:

–Supports definition of jobs (creation, deletion, suspension, modification)

–Each job defines the scheduling of:

Snapshot creation

•At an exact time

•Or at an interval/frequency

Retention policy (either by age or by number of snapshots)

–Supports: CGs, Snapshot Sets, or a Volume
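As a thought experiment, the retention side of such a policy could look like this in Python (a hypothetical sketch of the idea, not the XMS implementation):

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshots, now, max_count=None, max_age=None):
    """snapshots: list of (name, created_at) tuples.
    Keep at most max_count newest snapshots and/or drop those older than max_age.
    Returns (kept, deleted) name lists."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)  # newest first
    kept, deleted = [], []
    for i, (name, created) in enumerate(ordered):
        too_many = max_count is not None and i >= max_count
        too_old = max_age is not None and now - created > max_age
        (deleted if (too_many or too_old) else kept).append(name)
    return kept, deleted

now = datetime(2015, 5, 4)
snaps = [("s1", now - timedelta(hours=3)),
         ("s2", now - timedelta(hours=2)),
         ("s3", now - timedelta(hours=1))]
print(prune_snapshots(snaps, now, max_count=2))  # (['s3', 's2'], ['s1'])
```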

REFRESH & RESTORE


•Restore operation

–Enables recovery of data from snapshots

–Allows restore from snapshots that are direct descendants

–Provides an immediate Recovery Time Objective (RTO)

•Refresh operation

–Enables refreshing/reassigning any entity (within a VSG) to any other entity

–Supports refresh in any direction

»Prod to snap

»Snap to snap

»Snap to prod

–Instantaneous refresh

•Immediate refresh (no copy of data or metadata)

•Keeps the SCSI identity of the refreshed entities

•No change in the SCSI serial ID (NAA)

•No re-mapping

•No OS-side device rescan

•Supports volume resize (if the size was changed)

•Only requires an unmount before the refresh and a remount after
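To illustrate why no host-side rescan is needed, here's a toy Python model (my own illustration, not XtremIO internals): refresh repoints the volume's contents while its SCSI identity stays put.

```python
class Volume:
    """Toy model of a volume: an NAA identity plus a stand-in for block contents."""
    def __init__(self, naa, data):
        self.naa = naa    # SCSI serial ID presented to the host
        self.data = data  # stand-in for the point-in-time contents

def refresh(target, source):
    """Repoint target at source's point-in-time contents; no data copy,
    and target keeps its NAA, so the host still sees the same device."""
    target.data = source.data

prod = Volume("naa.514f0c500001", "corrupted-db")
snap = Volume("naa.514f0c500002", "last-good-db")
refresh(prod, snap)
assert prod.naa == "naa.514f0c500001"  # identity unchanged: no re-mapping or rescan
assert prod.data == "last-good-db"     # contents now come from the snapshot
```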


•Create-snapshot command


•Create Snapshot and Reassign


VSS Hardware Provider

Native VSS support for application aware snapshots


•Native support for Volume Shadow Copy Service using the XtremIO provider

–Uses the MS framework to pause applications

–Enables the creation of application-aware snapshots

–Supports Windows Server 2012R2/2008

–Supports all VSS writers

•Simple MSI installation

•Simple configuration


•In the control panel find the XtremIO VSS Panel

•Configure the XMS address and the user

Writers and Requestors


App-Consistent Copy Services


SIMPLE & COMPREHENSIVE

–Full scheduling and automation

POWERFUL COPY MANAGEMENT

–Full orchestration of XtremIO snapshots

–Empowers application administrators

USE IT FOR

–Policy-based protection

–Database repurposing

–Integration with Microsoft SQL Server/Exchange

–Integration to Oracle

–Integration to VMware data stores

Here's a demo I recorded showing some of what we discussed above:

XtremIO–XDP Redefined

XtremIO 4.0 – XDP Enhancements–Replication (part2)


Remote Protection Challenge


Let's talk about remote replication challenges. Traditionally, you would need different tools to protect different applications and/or arrays. How do you replicate to more than one site? How can you achieve application consistency? And lastly, how can you utilize your WAN link in the best possible way?

XtremIO Native Replication With RecoverPoint

Think about the following question:


I'm so happy to announce the native integration of XtremIO with RecoverPoint! This is not your grandfather's traditional replication solution. Why do we need it?


How can you take array storage controllers that are so busy serving primary I/Os and hammer them with replication as well? Well, you don't! You really need to offload that task to an entity that can scale out for replication purposes as well. Think about it this way: you don't fly abroad every time you want to send a package; however, you DO prepare the package and "offload" it to UPS/FedEx, etc. This is EXACTLY how this replication works. We didn't just utilize the existing RecoverPoint technology; our goal was to make it better, and as someone who has tested this quite heavily, here's how it works:


1. A host connected to XtremIO issues a write request to a volume that was defined as part of a replicated Consistency Group. Because of the CG, the array (XtremIO) will take a snapshot of that volume (the snapshot interval will be explained later on).


2. RecoverPoint will then deliver this snapshot to a remote RecoverPoint Appliance (RPA) or RPAs (scale-out for specific volumes as well!), which will then deliver it to the remote array. The remote array can of course be XtremIO, BUT it can be ANY array that RecoverPoint supports (heterogeneous support; we don't lock you into buying XtremIO for the remote site as well).

The remote array will receive the I/O from the RPA based on the technology it supports: if it's XtremIO, it will be a snapshot; if it's a VNX/VMAX, it can be either a splitter or a snapshot.


3. RecoverPoint will continue sending these XtremIO-based snapshots to the remote site until the initial replication is finished.


4. Our first replication is done. Great! Now let's assume our RPO is 60 seconds, so RP will ask the array for the SCSI diff (the deltas between what it already sent and what the volume now contains). This is where the beauty of the amazing XtremIO snapshot technology kicks in: for us, snapshots are just like any other volume; we dedupe and compress them globally!


5. So in our case, we send the diff answer to the RPAs, which then build the bitmap of changes and store it in their "journals". This is why, in this integration, the RP journal can be very small: it doesn't store anything but metadata. This is a radical shift from the traditional splitter-based RecoverPoint, where the journal was used to actually store the data diffs as well. Remember, we let each "entity" do what it does best: we (XtremIO) create and store the snapshots, and RecoverPoint sends the data and stores the journal. Scale-out for primary I/Os and scale-out for replication!


6. The snapshots are then delivered to the remote site and stored as snapshots. This process repeats based on the RPO settings, so let's dive into those as well:


In this version of the RecoverPoint/XtremIO integration we have two best-of-breed RPO modes:

· Periodic – sets the minimum time between cycles; the minimum is 60 seconds, the maximum is 1 day.

· Continuous – no wait time; provides the best RPO on the market (for an all-flash array): guaranteed 60 seconds!
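To make the snapshot-shipping cycle above concrete, here is a toy Python sketch of periodic, diff-based replication. It is entirely illustrative (volumes and snapshots modeled as dicts of block -> data); none of these names come from the RecoverPoint API:

```python
def snapshot_diff(prev, curr):
    """Return only the blocks that changed between two snapshots."""
    return {blk: data for blk, data in curr.items() if prev.get(blk) != data}

def replicate_cycle(prev_replicated, source, target):
    """One periodic cycle: snapshot the source, ship only the deltas, apply remotely."""
    snap = dict(source)                    # snapshot: a point-in-time copy of the volume
    delta = snapshot_diff(prev_replicated, snap)
    target.update(delta)                   # remote side applies only the changed blocks
    return snap                            # baseline for the next cycle's diff

source, target = {0: "a", 1: "b"}, {}
baseline = replicate_cycle({}, source, target)   # first cycle ships everything
source[1] = "c"                                  # host writes during the next RPO window
replicate_cycle(baseline, source, target)        # second cycle ships only block 1
print(target)  # {0: 'a', 1: 'c'}
```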


Lastly, management of the entire solution is really easy. Below you can see the entire workflow in a long video I made that shows many of the "advanced" parameters. One of the things I like about the RecoverPoint solution is that it has both "easy" and "advanced" buttons: an entire replication can be configured in 10 seconds, and on the other hand, you can go in and push knobs to your liking as well. I really can't wait for customers to start using it!


XtremIO 4.0–XMS / Management Improvements


The fourth release of XtremIO is all about scalability, management, and maturity. This post covers the management aspects of the array.


We started to have many customers with multiple XtremIO clusters; they loved the XMS concept but asked if one XMS could manage more than one array. Version 4.0 delivers that! A single XMS can now manage multiple clusters, and much more.


Up to 8 clusters are supported in this release. The managed arrays will have to be upgraded (via NDU) to version 4.0.

Multi-Cluster Management


Here's how it looks: you can simply click the array you want to manage / view.


..or you can simply see it in the main console: aggregated performance metrics across your datacenter, data reduction, and more. Again, you can view it as an aggregated view or per cluster; this really gives you a good overview of your XtremIO clusters from a centralized management console.

 

Tags Management


Think about Gmail: it uses labels for easy searching. We have something similar in mind, and it can align storage management with business needs.


For example, you can tag volumes as "production" and "test". Many entities can be tagged: applications, clusters, consistency groups, snapshot sets, and more.

• Flexible tagging – create tags for any object

• An object can have multiple tags

• Filter objects using tags for reports or operations

• Model hierarchy in tags, if a folder-like hierarchy is required
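Conceptually, tag-based filtering is just set membership. A minimal Python illustration (object and tag names are hypothetical):

```python
# Hypothetical illustration of tag-based filtering; names are mine, not XMS objects.
def filter_by_tags(objects, *required):
    """objects: name -> set of tags; return names carrying all required tags."""
    return sorted(name for name, tags in objects.items() if set(required) <= tags)

volumes = {
    "vol1": {"production", "oracle"},
    "vol2": {"test", "oracle"},
    "vol3": {"production"},
}
print(filter_by_tags(volumes, "production"))            # ['vol1', 'vol3']
print(filter_by_tags(volumes, "production", "oracle"))  # ['vol1']
```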


Making Use of Tags


You can then of course, filter your results based on the tags you input.


Managing tags


Tags are managed per object type.

However, in the GUI you can create a tag for multiple object types.

A tag has the following properties:

Name

Object Type

Hierarchy (default is root)

Color

User role (read-only and up)

Associating tags to objects


 

 

 

XtremIO 4.0 Reporting

Two years of historical data for clusters & objects

GUI/CLI/REST interfaces to access historical data

Multi-cluster reporting

Security – reports for a single user or publicly available to all users

Tagging support

Report templates


• Pre-loaded reports for common queries

• Use templates as the basis for new reports

What can you do with reports?


Provides both real-time & historical data

Stores up to 2 years of data with variable granularity

Supports reporting at different levels: cluster, volumes, initiator groups, groups of objects, etc.

Aggregate objects based on business needs (using tags)

Multiple modes of data access

– Online GUI, printed report, png/csv files

– APIs are available to consume data programmatically

Key Benefits

Provides better visibility of cluster performance and capacity usage over time

Use cases

– Better planning and monitoring

– Trend analysis of performance and capacity

– SLA tracking

– ROI and TCO performance

– Forensics/analytics of performance issues

– Analysis of a single application workload or consolidated workloads

What’s New In Version 4.0?

• What did we have in Version 3.0?

– Real-time monitoring data of clusters & objects

– GUI-based real-time monitor generation

– Limited historical data (7 days of cluster-level performance data)

• What's new in Version 4.0?

– 2 years of historical data for clusters & objects

– GUI/CLI/REST interfaces to access historical data

– Cluster-level capacity tracking

– Multi-cluster reporting

– Improved security – enable reports for a single user or make them publicly available to all cluster users

Reports data: Avg, Max, Min time aggregations


Average, Maximum and Minimum historical data values are calculated for the different time aggregation units

Default is Average
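A simple Python sketch of how such time aggregation works (illustrative only; the bucketing and function names are mine):

```python
# Illustrative sketch: samples are (timestamp_seconds, value) pairs.
def aggregate(samples, bucket_seconds, how="avg"):
    """Group samples into time buckets and reduce each with avg/max/min
    (average is the default, as in the reports GUI)."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(t // bucket_seconds, []).append(v)
    reduce_fn = {"avg": lambda vs: sum(vs) / len(vs), "max": max, "min": min}[how]
    return {b * bucket_seconds: reduce_fn(vs) for b, vs in sorted(buckets.items())}

iops = [(0, 10), (30, 20), (60, 30)]
print(aggregate(iops, 60))          # {0: 15.0, 60: 30.0}
print(aggregate(iops, 60, "max"))   # {0: 20, 60: 30}
```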

Reports Definition


Public/Private reports


Public/private reports:

– Private reports are visible only to the user who created the report

– Public reports are visible to all users; editable only by the user who created the report

– By default, all reports are Private

– Report names do not have to be unique

Reports time definition


Time definitions:

– Real-time (real-time monitoring data display; like in version 3.0)

– Last hour (60 min); Last day (24 hours); Last week (7 days); Last year (365 days)

– Custom time – any From date-time to any To date-time



EMCWorld 2015–Best Practices for running virtualized workloads on XtremIO


 

Hi,

As part of this year's EMC World, I ran a best-practices session on XtremIO & vSphere. The session was full of useful data, an amazing customer testimonial (VMware), and some really bad jokes ;)

This year I had the pleasure of co-hosting with a special guest, Shane from VMware; Shane covered some of the use cases VMware has been leveraging XtremIO for.

I got many requests to post the deck online and, luckily, a friend also recorded the session, so here it is:

 

 


Configuring XtremIO & RecoverPoint for VMware SRM


One piece of software that VMware released back in 2008, and that was always my favorite after vSphere itself, is SRM (Site Recovery Manager). Back then, I installed it at many customer sites and even presented a session about it at VMworld 2009 as one of the first production installations at a customer site. Back then, configuring the SRA (the "communicator/translator" between vCenter and the storage array) was a pretty difficult task, and I'm so glad things have changed so drastically over the years. I'm also very happy to see that as of SRM 5.8 (and of course 6.0), it has been fully merged into the vCenter web interface, as seen below.


Another new feature: there is no need to manage IP address changes at an individual level anymore (though those options do remain if needed). Addresses can now be mapped from one subnet to another and applied at the Site > Network Mapping level. There is the option of using both, e.g., subnet mapping for the subnet and individual mapping for specific VMs within that subnet.
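The subnet-mapping idea is easy to see with Python's stdlib ipaddress module. This is a hypothetical helper showing the concept only, not SRM's API: the host part of the address is preserved while the network part is swapped.

```python
import ipaddress

def map_ip(ip, src_net, dst_net):
    """Re-map an address from one subnet to another, keeping the host part."""
    src = ipaddress.ip_network(src_net)
    dst = ipaddress.ip_network(dst_net)
    host_offset = int(ipaddress.ip_address(ip)) - int(src.network_address)
    return str(ipaddress.ip_address(int(dst.network_address) + host_offset))

# a VM at .25 on the protected site keeps .25 on the recovery site's subnet
print(map_ip("10.1.0.25", "10.1.0.0/24", "192.168.5.0/24"))  # 192.168.5.25
```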

image

Also, as part of VMware's initiative to not force you, the customer, to use MS SQL or Oracle databases, you can now use the embedded vPostgres database option that is built into the SRM installer. It is an additional option beyond the currently available databases and is supported, though not tested, up to the SRM maximums. There is no way to convert or migrate an existing database to vPostgres.

image

SRM Architecture

•Site Recovery Manager is designed for virtual-to-virtual recovery for the VMware vSphere environment

•Built for a two-site scenario, but can protect bi-directionally. Can also protect multiple production sites and recover them into a single, "shared recovery site".

•Site Recovery Manager integrates with third-party storage-based replication (also known as array-based replication) to move data to the remote site; our focus in this post is the RecoverPoint/XtremIO SRA


Site Recovery Manager is designed for the scenario that we see our customers most commonly implementing for disaster recovery—two datacenters. Site Recovery Manager supports both bi-directional failover as well as failover in a single direction. In addition, there is also support for a “shared recovery site”, allowing customers to failover multiple protected sites into a single, shared recovery site.

The key elements that make up a Site Recovery Manager deployment:

-VMware vSphere: Site Recovery Manager is designed for virtual-to-virtual disaster recovery. It works with many versions of ESX and ESXi (consult product documentation for more details). Site Recovery Manager also requires that you have a vCenter Server management server at each site; these two vCenter Servers are independent, each managing its own site, but Site Recovery Manager makes them aware of the virtual machines that they will need to recover if a disaster occurs.

-Site Recovery Manager service: the Site Recovery Manager service is the disaster recovery brain of the deployment and takes care of managing, updating, and executing disaster recovery plans. Site Recovery Manager ties in very tightly with vCenter Server — in fact, Site Recovery Manager is managed via a vCenter Server plug-in.

-Storage: Site Recovery Manager requires iSCSI, Fibre Channel, or NFS storage that supports replication at the block level. In our case, FC/iSCSI is supported.

-Storage-based (also called array-based) replication: Site Recovery Manager relies on storage vendors’ array-based replication to get the important data from the protected site to the recovery site. Site Recovery Manager communicates with the replication via storage replication adapters that the storage vendor creates and certifies for Site Recovery Manager. VMware is working with a broad range of storage partners to ensure that support for Site Recovery Manager will be available regardless of what storage a customer chooses, so expect the list to continue to grow.

-vSphere Replication has no such restrictions on use of storage-type or adapters.

image

User Interface

image

Users can manage both protected and recovery SRM instances from a single UI interface, obviating the need to open multiple clients or run particular management tasks from a specific location.

This is completely independent of vCenter Linked Mode. Linked mode is still helpful, because it will automatically migrate SRM licenses from site to site as VMs are migrated or fail, and also for standard non-SRM related infrastructure management.

SRM 5.8/6.0 is fully supported with the vSphere Web Client and is no longer available for use with the vSphere Client.

In the case of RecoverPoint/XtremIO, there is a special UI to cover two very specific features. SRM itself can only fail over to the last point in time, which isn't that helpful, especially in our case: the value of RecoverPoint/XtremIO is the ability to go to ANY point in time. We can leverage the vCenter plugin to select the point in time you want to fail over to, and SRM then "thinks" it's the last point in time (see the screenshot below).

image

The other special feature is the ability to give you, the vSphere admin or the storage admin, the insight to see which VMs are protected, which VMs aren't, and which VMs are partially protected. This is done with the unique integration of the RecoverPoint GUI into vCenter (as seen below).

image

Array Replication

image

If using storage-based replication, integration with the arrays' vendor-specific replication and protection engines is fundamental. This integration is provided via code written by the array vendors themselves. The SRA for RecoverPoint that supports XtremIO is version 2.0.2.

SRAs have advanced for SRM 5, improving the integration with array-replication software for functionality like reprotect/replication reversal and failback.

image

SRA information is enhanced within SRM 5 and shows not only information about paired remote devices, datastores, and relevant protection groups, but also an arrow indicating the direction of replication for each device.

This gives very quick visibility into what is being protected and to where. This is particularly important during reprotect and failback operations.

Installing & Configuring the XtremIO / RecoverPoint SRA

The installation itself is pretty trivial: just download the SRA from the VMware SRM web site and install the executable on both SRM servers (or, in my lab's case, the vCenter servers, which also act as the SRM servers). Once the SRA has been installed, you will have to restart the SRM service.

image

Once everything is installed, you will have to configure the SRA using the SRM web interface. Configuration is as simple as it gets: basically, you point the SRA at the RecoverPoint virtual management IP and feed it the username/password used to manage the RPA cluster. You will then need to repeat this at the recovery site as well.

image

Lastly, in order for the SRM SRA to control RecoverPoint, you need to change the management of the consistency group (CG) to SRM. This allows RecoverPoint to be managed by an "external application," which in our case is VMware SRM.

Storage Layout

image

Let's take a look at the example above. At the protected site I have a couple of datastores, each of which can contain one or more VMs, and each datastore (a LUN at the storage level) can be part of a protection group. However, if you look at the purple example, a VM CAN span multiple datastores, and hence the protection group can span multiple datastores (LUNs).

Then, on the right side (my recovery side), I define the recovery plan, which is really a logical container for the protection groups I put inside of it.
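The layout rule above, that any datastores sharing a VM must replicate and fail over together, can be made concrete with a small, hypothetical model (the names and structure are illustrative only, not SRM's API):

```python
# Hypothetical model of SRM-style protection-group layout (illustrative only).
from collections import defaultdict

# VM -> datastores (LUNs) its disks live on
vm_datastores = {
    "web01": ["ds1"],
    "db01":  ["ds2", "ds3"],   # spans two datastores (the "purple" case)
    "app01": ["ds3"],
}

def protection_groups(vm_datastores):
    """Union datastores that share a VM: they must fail over together."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for stores in vm_datastores.values():
        for ds in stores[1:]:
            union(stores[0], ds)
    groups = defaultdict(set)
    for stores in vm_datastores.values():
        for ds in stores:
            groups[find(ds)].add(ds)
    return [sorted(g) for g in groups.values()]

# db01 spans ds2 and ds3, so those two LUNs land in the same group
print(protection_groups(vm_datastores))
```

This is why storing VMs of similar priority on the same datastores keeps protection groups small and tidy: any shared datastore drags all of its VMs into the same failover unit.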

image

By ensuring virtual machines are stored in a logical fashion on disk according to their protection group, administrators can minimize “shuffling” of VMs to fit optimal layouts for SRM.

VMDKs of a similar priority, or that will belong to the same protection group should be stored in the same datastores to minimize the amount of replication required to create efficient protection groups and thereby recovery plans.

Ensuring that your storage layout and VM placement has been organized with this in mind will mitigate many issues.

Workflows and Use Cases

Planned Migration

Allows for data synchronization as part of the process. Will stop on errors and allow you to resolve them before continuing. Since it shuts down the virtual machines being migrated, application-consistent VMs are recovered on the recovery side!

DR Event

Allows for data synchronization as part of the process. Will not stop on errors. If the protected site is available, the virtual machines being migrated will be application-consistent at the recovery side; if the protected site is not available, the consistency state will be whatever was designed into the solution.

Test Recovery

Allows for data synchronization as part of the process. Supports a recovery that uses a different network, and uses a clone or snapshot for the test.

Reprotect

Can be run following a successful recovery. Reverses the direction of replication, and protects virtual machines back to the original site. This enables a failback to recover the environment back to the primary site.

Cleanup

This is done following a test recovery. Removes the snapshot or clone created during the test. Powers off and deletes test VMs. Recreates the shadow VM indicating protection of the relevant VM from the primary site. The cleanup creates its own history report. Following a cleanup, the relevant plan is once again ready to be run.

Use Cases

Unplanned Failover

Recover from an unexpected site failure, full or partial. This is the most critical but least frequent use case: unexpected site failures do not happen often, but when they do, fast recovery is critical to the business.

Preventive Failover

Anticipate potential datacenter outages, for example an approaching hurricane, floods, forced evacuation, etc. Initiate a preventive failover for smooth migration, with a graceful shutdown of VMs at the protected site, and leverage SRM's 'planned migration' capability to ensure no data loss.

Planned Migration

The most frequent SRM use case: planned datacenter maintenance and global load balancing. Ensure smooth site migrations, test to minimize risk, and execute partial failovers. Use SRM planned migration to minimize data loss; automated failback enables bi-directional migrations.

Running a Test Recovery Plan

image

SRM offers two UI buttons to run test recoveries, or a test may alternatively be initiated through a call to the API. Note the "Synchronize storage" option; this ensures very current copies of the VMs for the test.

image

This is a test recovery ready for users to test. Cleanup would occur after testing is complete by simply pressing the “cleanup” button. The virtual machines run from the cloned / snapshot environment at the recovery site, and replication and protection of the protected environment is not impacted during tests.

Following a cleanup, there are no running virtual machines associated with the recovery plan that was tested, and the associated snapshots/clones created by the test plan have been eliminated.

Shadow VMs have been recreated on the recovery site to indicate those VMs that are protected on the primary site and will be instantiated on the recovery site when a recovery plan is run.

Running a Recovery Plan

image

Two different UI buttons can start recoveries, or alternately it may be executed by an API call.

A recovery plan can be run as either a Planned Migration or a DR event. Note that both types of execution will attempt to synchronize storage early in the recovery. The data synchronization attempt ensures application consistency: it executes as an early step in the recovery plan, after an attempt to shut down the protected VMs, so that data is recent and synchronized once the VMs are quiescent.

Planned Migration

image

The difference between a Planned Migration and a Disaster Recovery is that a Planned Migration will automatically stop on errors and allow the administrator to fix the problem. A Planned Migration is designed to ensure maximum consistency of data and availability of the environment. A DR scenario is instead designed to return the environment to operation as rapidly as possible, regardless of errors.
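The behavioral difference between the two run modes can be sketched as pseudocode (the step names and structure are a hypothetical illustration, not SRM's actual engine):

```python
# Illustrative sketch of the two recovery-plan run modes (not SRM's real engine).
def run_recovery_plan(steps, mode):
    """mode='planned' halts on the first error; mode='dr' logs it and continues."""
    assert mode in ("planned", "dr")
    errors = []
    for step in steps:
        try:
            step()
        except Exception as e:
            if mode == "planned":
                # Planned Migration: stop so the admin can fix the problem,
                # preserving maximum data consistency.
                raise
            # DR: keep going -- returning to operation fast beats halting on errors.
            errors.append(str(e))
    return errors

def sync_storage():     pass   # succeeds
def shutdown_vms():     raise RuntimeError("protected site unreachable")
def promote_replicas(): pass   # succeeds

# In DR mode the plan finishes despite the failed shutdown step:
print(run_recovery_plan([sync_storage, shutdown_vms, promote_replicas], "dr"))
# -> ['protected site unreachable']
```

Running the same step list in "planned" mode would raise at the shutdown step and wait for the administrator, which is exactly the trade-off described above.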

Disaster Recovery

image

If a Recovery Plan is run as a disaster recovery, the goal is an aggressive Recovery Time Objective, and SRM will not halt the plan from continuing regardless of any errors that might be encountered.

Running a Recovery Plan – Storage Layer

image

Notice that during a recovery plan execution, replication is interrupted. The mirror image, or replication destination datastore, is now promoted and made read/write. The virtual machines in it are registered in vCenter in place of the shadow VM placeholders.

Failback is a process of “Reverse Recovery”

image

Failback combines recovery plans and reprotect.

“Failback” is the capability of running a recovery plan *after* an environment has been migrated or failed-over to a recovery site, to return the environment back to its starting site.

After a failover has occurred, the environment can be reprotected back to the original environment once it is again safe. Following this reprotect the recovery plan can be run once more, moving the environment back to its initial primary site.

Next it is imperative to reprotect once more, to ensure the environment is once again protected and ready to failover.

With SRM 5 VMware introduced the “Reprotect” and failback workflows that allowed storage replication to be automatically reversed, protection of VMs to be automatically configured from the “failed over” site back to the “primary site” and thereby allowing a failover to be run that moved the environment back to the original site.

After running a *planned failover only*, the SRM user can now reprotect back to the primary environment:

Planned failover shuts down production VMs at the protected site cleanly, and disables their use via GUI. This ensures the VM is a static object and not powered on or running, which is why we have the requirement for planned migration to fully automate the process.

Once the reprotect is complete a failback is simply the process of running the recovery plan that was used to failover initially.

OK, if you have read this far, you probably want to see it all in action. Please see a demo I made showing the integration of VMware SRM and XtremIO/RecoverPoint:



VMworld 2015 – Please Vote for the EMC XtremIO Sessions


 

clip_image002

Hi,

It's that time of the year again when EMC World is behind us and VMworld is around the corner.

Here at XtremIO, we have put forward a lot of submissions this year; I would appreciate your voting for the ones you like.

You can vote here:

https://vmworld2015.lanyonevents.com/scheduler/publicVoting.do

5431 Scalable Database as a Service for the All-Flash Software-Defined Datacenter with VMware Integrated OpenStack (VIO) and Trove

Abstract

 

Redefine your virtualized environment or Infrastructure-as-a-Service cloud into a scalable and highly elastic database as a service platform. Deliver a fully managed and scalable relational or NoSQL database-as-a-service by easily integrating Trove with VIO and EMC XtremIO. Large Enterprise customers that have grown in a non structured approach require a common interface to the myriad internal cloud platforms. The OpenStack interface provides a set of common APIs which developers can programmatically deploy and manage virtual workloads. The VMware Integrated OpenStack provides an orchestrated workflow for the deployment and configuration of the various capability areas within Openstack. Trove is OpenStack’s implementation of a scalable and reliable cloud database service providing functionality for both relational and non-relational database engines. We will present best practices for deploying the Cassandra NoSQL engine on this platform as well as testing results demonstrating key metrics that will be of interest to customers and service providers alike. Trove automates the configuration and provisioning of new database servers. This includes deployment, configuration, patching, backups, restores, and monitoring. The XtremIO all flash storage rounds out the capabilities by providing superior performance, rich set of copy services and the ability for failover and replication.

5720 Architecting All-Flash Storage for Private Cloud- a Clean-Slate Approach to Realize Unprecedented Agility, Efficiency and Performance

 

Flash is rapidly gaining popularity as viable high-performance data center storage. But not all flash storage platforms are created equal. Many vendors replace spinning media with flash drives in a legacy storage architecture. This is an easy but sub-optimal approach. Flash is fundamentally different than spinning media and needs a fresh approach to take benefits of flash technology capabilities. In this session we will explore some of the critical architectural tenets for getting the most out of flash storage. You will learn how scale-out architecture, in-memory metadata, smart data reduction, and inline always-on data services can help transform your infrastructure and ensure consistent high IO performance for all workloads, lower your infrastructure footprint and TCO, and enable agile service delivery.

5731 Reimagine Mixed Workload Consolidation- Break the Traditional Infrastructure Barriers

Virtualizing all of the applications, mission critical and non-mission critical, has been an unattainable goal for many IT organizations. But with the right storage architecture and hypervisor integrations, you now can. You can realize exceptional infrastructure and application agility by consolidating workloads of mixed characteristics and of varying criticality. You can scale your environment non-disruptively and cost-efficiently to meet the needs of enterprises small and large. You can safely consolidate production, test, development, QA and analytics environments on a common platform while compromising no production performance and with marginally incremental capacity consumption. We will explore real life examples demonstrating how many IT organizations have broken the traditional storage barriers. You will learn about the key architectural elements that can ensure consistent predictable IO performance to help you meet and beat the strictest performance SLAs while lowering the total cost and complexity of your infrastructure and application lifecycle

5765 Transforming Agile Software DevOps with Flash

More than ever, agile development and software DevOps drives critical top-line business impact for customers across a broad range of industries. Learn how XtremIO is fundamentally enabling the next generation of agile DevOps, with customer use cases to:
• Improve developer and overall product quality by providing full copies of production applications & datasets to all developers with zero-overhead XtremIO in-memory copy services
• Dramatically accelerate performance across the entire DevOps ecosystem, enabling thousands of developers in real time
• Deliver consistent, predictable performance for developers, automated build systems, and automated QA via sub-millisecond all-flash DevOps storage
• Accelerate adoption of continual experimentation and learning through rapid repetition of prototyping

5740 Best Practices for Running Virtualized Workloads on XtremIO

Great, your customer has just purchased a shiny new all-flash array (AFA); now what? In this session we will learn the reasons for one of the quickest revolutions the storage industry has seen in the last 20 years, and how XtremIO can enable breakthrough capabilities for your server virtualization and private cloud deployments. We will go through specific use-case issues and how to overcome them. With lots of real-world tips and tricks, you can consolidate your most demanding virtualization workloads successfully and gracefully.

5747 VMware’s Internal Private Cloud – Choosing the Right Storage to Meet User Demands

VMware has embarked on a Private Cloud initiative during the last 5 years. From a humble infrastructure, hosting a few VMs it has now grown to host more than 200k VMs, that is supported 24 x7, year around, across multiple geographies. A very diverse sets of business units consume a smorgasbord of Applications & Data from the platform. The internal private cloud initiative at Vmware is at the core of providing an agile datacenter to meet the various demanding business needs of our end users. This session will explore the lessons learned in selecting the right storage infrastructure. It will dive into the role cutting edge technologies like EMC XtremIO (All Flash Array) plays to meet the demanding requirements. Hands on experts who build, operate and manage the private cloud will share their best practices, tricks and techniques they have perfected over the last 5 years. Learn from the architects the storage optimizations, fine-tuning and configurations that you will need to scale your Private Cloud.

5755 Revisiting EUC Storage Design in 2015 – Perspectives from VMware EUC Office of the CTO

Storage landscape of EUC has changed a lot in the last 5 years. The latest generation of EUC software, along with the advent of purpose-built all-flash arrays, software-defined storage, latest generations of hybrid arrays, have all helped mainstream the EUC/VDI technology in IT.

4560 Re-Imagining your Virtualized Workloads with FLASH

Moore's law changed everything: compute and networking kept getting faster and faster, and the only one left behind was storage. Storage was always considered the last frontier for virtualized workloads; as a customer, you had to adapt to its limitations instead of it answering your business demands. No more! In this session you will hear why customers decided to go all-in on an all-flash array, what their motivations and even concerns were in doing so, and how it all changed for them once they did.

4916 Virtualized Oracle On All-Flash: A Customer's Perspective on Database Performance and Operations

In the virtualized infrastructure, the new technology wave is all-flash arrays. But today all administrators (virtual, storage, DBA) need to know how changing an essential part of the virtual infrastructure impacts critical applications like Oracle databases. This joint customer and XtremIO presentation acts as a practical guide to using all-flash storage in a virtualized infrastructure. The emphasis will be on the value realized by a customer using all-flash, together with findings from third-party test reports by Principled Technologies. You will learn how all-flash storage is changing performance-intensive applications like virtualized databases.

5652 Best Practices & Considerations for the Sizing and Deployment of 3D Graphics Intensive Horizon View 6 Desktops in an All Flash Datacenter

An all-flash datacenter hosting a VMware vSphere infrastructure with NVIDIA GRID can be used to deliver a robust and wide-ranging set of graphics-accelerated capabilities to end users. NVIDIA GRID technology offers maximal choice in terms of both user density and GPU resource allocation. GRID vGPU can be used to handle the vast majority of both vSGA and vDGA use cases, allowing the infrastructure administrator to support a wide spectrum of virtualized resource options on a much-reduced hardware set. To ensure the best end-user experience, and maintain an effective and efficient deployment, it is required that the infrastructure administrator be knowledgeable of the various graphics-accelerated user profiles with their corresponding application and vGPU requirements. Information detailing the server-side considerations involved in the sizing exercise is generally available and, so far, somewhat understood. The corresponding storage-side requirements for the various configuration options and use cases are not. This presentation will address both server and storage sizing considerations, and aims to offer a more complete knowledge set to be used in the successful sizing of graphics-accelerated VMware Horizon 6 DaaS projects.


The Evolution Of VDI – Part 1


This Blog Post is based on the work of Chhandomay Mandal, Michael Cooney and myself.

This is the first part of a two-part series blog post around evolution of VDI over the past years and how EMC XtremIO works across all of them. So let’s start.

How many of us remember Terminal Services?

Terminal Services was quite popular from the late nineties through the early 2000s. It provided the ability to host multiple, simultaneous client sessions on a single Windows Server OS instance.

It was cost effective, easy to deliver, mature technology reaching north of 80 million users in its heyday.

However, it had limited use cases and application compatibility issues. The most problematic of all was the fact that application instances were shared: if somebody could crash Office, it was down for everyone on the system, and if someone could get the OS to panic, then everybody was presented with the blue screen of death.

On the storage side, the hard disk drive or HDD arrays of the day with ho-hum performance and limited data services were sufficient to host the Terminal Services.


As server virtualization gained a strong foothold in the data center, desktop virtualization became the next logical candidate. We could decouple the software from its hardware dependencies and run desktops as VMs.

As data-center-grade storage costs an order of magnitude more than desktop storage, we were introduced to the concepts of gold images and differential data to make the storage economics work. Instead of everybody having their own desktop image, all desktops share the same read-only gold image, and the individual desktop writes destined for the OS get written to each desktop's unique differential data space.

So, in this model, you need, say, 40 GB for the gold image and 2 GB of differential data per desktop. For a 10,000-desktop deployment, your capacity requirement is roughly 20 TB, which is reasonable.

On the flip side, differential data gets discarded when users log off or desktops reboot. So we solved the capacity problem at the expense of user personalization. This type of desktop is commonly referred to as non-persistent, as it doesn't retain changes made by the user, and is commonly deployed for task-worker use cases like call centers.

Around the same time, hybrid arrays came to market that added SSDs to HDD arrays. Hybrid arrays leverage SSDs as a cache or a tier in front of HDDs to provide some performance acceleration. Now, VDI desktops create periodic I/O storms; for example, the gold image gets hammered with read I/O requests when many desktops boot simultaneously. Although hybrid arrays provide some relief from these IO storms because of the SSDs in their stack, they neither scale to thousands of desktops nor have the agile, efficient data services needed in a modern data center.

Persistent desktop, where you take everything you have in your physical desktop – OS, applications, user settings, data – and just make it a VM, is an intuitive solution.

In this model, users are happy because they retain all their personalization including the applications they installed. Moreover, desktop admins are happy too because all the existing desktop management tools like System Center Configuration Manager, Patch Manager and other agents continue to work exactly as before.

Now, persistent desktops generate higher IOPS per desktop than their non-persistent counterparts, but, most importantly, this VDI model has extremely large capacity needs. Continuing with the same example as before, for 10,000 users at 40 GB per persistent desktop, your capacity need is 400 TB. So the capacity requirement increased 20x, from 20 TB for non-persistent desktops to 400 TB for persistent desktops, for the same number of desktops. A lot of this data is common, as the Windows OS and applications constitute the bulk of it. A highly efficient data reduction technology at the heart of the storage layer is the most critical component for any persistent desktop model to be even remotely economically feasible.
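The back-of-the-envelope math can be written out explicitly (a simple sketch using the example figures from the text: 40 GB gold image, 2 GB differential data, 10,000 desktops):

```python
desktops = 10_000
gold_image_gb = 40
diff_per_desktop_gb = 2

# Non-persistent: one shared gold image + per-desktop differential data
non_persistent_tb = (gold_image_gb + desktops * diff_per_desktop_gb) / 1000

# Persistent: every desktop carries a full image
persistent_tb = desktops * gold_image_gb / 1000

print(non_persistent_tb)                   # ~20 TB
print(persistent_tb)                       # 400 TB
print(persistent_tb / non_persistent_tb)   # ~20x more raw capacity
```

This 20x gap is exactly the capacity that an efficient data reduction engine has to claw back for persistent desktops to be economical.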

On the storage side, around 2012, All-Flash arrays started to come to the market. Some of them provided high performance with zero data services; others provided performance improvements beyond what the hybrid arrays could deliver along with limited data services including some data reduction. However, as these All-Flash arrays are based on scale-up architectures, their controllers become the bottleneck for performance much earlier than their backend SSDs. Moreover, data reduction service, like data deduplication, is a post-process activity for these All-Flash arrays. So a typical scale-up architecture based All-Flash array in the market can’t deliver the performance or the high data reduction efficiencies that are critical for a successful persistent desktop deployment at large scale.


In recent months, newer technologies have been emerging on the VDI platform software side that hold the promise of delivering a no-compromise, all-inclusive desktop experience when paired with the right storage platform. We can now deliver desktops at scale that can easily handle graphics-rich applications, seamlessly work across all use cases, and be better than physical desktops for both user experience and cost.

Storage is the critical component in delivering the promise of next-gen desktops. A scale-out, truly N-way active-active architecture is needed to deliver the high performance at a consistently low latency needed to scale your virtual desktop environment while maintaining a great end-user experience. Inline, all-the time data reduction technologies are critical to reduce the capacity footprint. XtremIO is the only solution in the market today that can not only satisfy all the requirements of non-persistent and persistent desktops at scale but also deliver on the emerging VDI platform software technologies.

As many of you can attest, XtremIO runs non-persistent desktops very well. It delivers high performance with consistently low latency, thereby enabling you to host a large number of desktops on a single X-Brick, the basic building block of an XtremIO cluster, with two controllers and a DAE that can hold 25 SSDs. You can run storage-intensive operations like desktop refresh and recompose in a non-persistent desktop environment while other active desktops are running, and the array will continue to deliver a high level of user experience.

However, if you are running non-persistent desktops only because of capacity savings you need, then XtremIO has good news for you.

The same single X-Brick can host a high number of persistent desktops as well. Your users can have a highly responsive desktop with all their personalization and your desktop admins can continue to use the same set of desktop administration tools while enjoying the superior storage capacity reduction from XtremIO to make the solution very cost-effective.

What makes XtremIO unique for VDI?

  1. In a nutshell, XtremIO’s unique content-based metadata engine coupled with its scale-out architecture delivers the unique value for VDI. You grow an XtremIO cluster by non-disruptively adding additional X-Bricks, linearly increasing both capacity and performance at the same time.

2. The volumes are always thin-provisioned, the only writes performed to the disks are for data that are globally unique across the entire cluster.

3. All the data services are inline, all the time. So as the writes come in, the metadata engine looks at the content of the data blocks, performing the writes to the SSDs only if the cluster didn’t see the data before; for duplicate data, it just acknowledges the writes to the host with appropriate updates to in-memory metadata without actually performing any writes to the SSDs. This inline deduplication saves tremendous capacity for persistent desktops without affecting performance at all.

4. Inline compression adds to XtremIO’s data reduction efficiencies.

5. XtremIO has a proprietary Flash-based data protection algorithm that offers better than RAID-10 performance with RAID 5 capacity savings.

6. Finally, XtremIO’s differentiated agile copy data service helps you provision and deploy desktops at a much faster rate than any other Flash-based solution in the market.
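Point 3 above, content-based inline deduplication, can be sketched conceptually (a toy model using SHA-1 fingerprints; XtremIO's real engine is far more sophisticated, and the class below is purely illustrative):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: a write is acknowledged without touching
    the 'SSD' when the block's fingerprint has been seen before."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> data ("SSD" contents)
        self.volume = {}   # logical block address -> fingerprint (in-memory metadata)

    def write(self, lba, data: bytes):
        fp = hashlib.sha1(data).hexdigest()
        self.volume[lba] = fp        # duplicate data: metadata update only
        if fp not in self.blocks:
            self.blocks[fp] = data   # unique data: actually written, once

    def read(self, lba):
        return self.blocks[self.volume[lba]]

store = DedupStore()
os_block = b"windows-os-bits" * 256
for lba in range(1000):              # 1,000 desktops writing the same OS block
    store.write(lba, os_block)
print(len(store.blocks))             # one physical copy backs 1,000 logical blocks
```

This is why persistent desktops, whose capacity is dominated by identical OS and application blocks, deduplicate so dramatically when the reduction happens inline rather than as a post-process.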

XtremIO’s architectural advantages translate into very significant savings for our VDI customers. Today we have over 2.5 million virtual desktops running on XtremIO, and our customers typically see 10:1 or more data reduction ratios for persistent desktops with 50% less cost per desktop than traditional storage.

A single X-Brick can host up to 2500 persistent desktops and up to 3500 non-persistent desktops.

I want to add that beyond storage, XtremIO helps in reducing server infrastructure footprint as well for VDI. As an example, we have had customers who could reduce the RAM allocation from 2 GB/desktop to 1.5 GB/desktop, that’s a 25% reduction in RAM, and XtremIO handled the resulting increased IOPS due to more swapping by the OS with the same latency for the same number of hosted desktops on the array.
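A quick back-of-the-envelope look at that RAM saving across a full 2,500-desktop deployment; the 2,500-desktop count and the 2 GB to 1.5 GB reduction come from the text, while the 128 GB-per-host figure is an illustrative assumption.

```python
# Arithmetic behind the RAM-reduction example above.
desktops = 2500
ram_before_gb = 2.0
ram_after_gb = 1.5

saved_gb = desktops * (ram_before_gb - ram_after_gb)
reduction_pct = (ram_before_gb - ram_after_gb) / ram_before_gb * 100
print(saved_gb)        # 1250.0 GB of RAM saved across the estate
print(reduction_pct)   # 25.0 percent reduction per desktop

# Assuming 128 GB of usable RAM per host (illustrative), that is roughly
# how many fewer hosts are needed just for memory capacity:
hosts_saved = saved_gb / 128
print(round(hosts_saved, 1))
```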

With XtremIO, Desktop Virtualization has experienced some breakthroughs including:

  • 10:1 Reduction in capacity requirements per desktop
  • 50% Reduction in Storage Cost Per Desktop
  • 25% Lower RAM requirements
    • Based on reducing RAM needed per desktop from 2GB to 1.5GB
      • XtremIO delivers the performance experience of a 2GB desktop using only 1.5GB of RAM
      • Not all IO performance comes from RAM. Flash is faster than the desktop O/S, desktop kernel & hypervisor expect disk to be. Previously, adding desktop RAM solved some IO problems; this is no longer as necessary. The disk performance now comes directly from XtremIO, not virtual desktop “RAM trickery”.
  • 40% Reduction in Server Infrastructure
    • If you need to support all desktop services at scale
    • Example: Suspend & Resume desktop services for 100% of desktops = a lot of extra storage sitting around idle 99% of the time. With XtremIO, admins can oversubscribe the storage needed for desktop services & provision less XtremIO storage (no need for 1:1 mapping).

  • Each X-Brick can support up to 2,500 persistent virtual desktops and up to 3,500 non-persistent desktops.

Now let’s see how a single X-Brick handles a boot storm of 2,500 VDI VMs!

Now let’s see how the same single X-Brick handles the load of 2,500 full-clone VMs (Office is installed locally).

Note that I’m using the LoginVSI 4.1 Knowledge Worker workload, which creates additional load across the board (everyone else out there is still using the “medium” workload, which is lighter on the CPUs and the storage array).

See the full differences here: http://www.loginvsi.com/documentation/Changes_old_and_new_workloads


Now let’s take a look at some of the emerging trends in VDI. First, just-in-time desktops.

In a traditional desktop model, applications, user data, profile settings, and the OS are all intermingled. When you deliver persistent desktops using this traditional model, it is old wine in a new bottle, and you miss out on the potential of improving the desktop management workflow itself.

Instead, what if you could virtualize each application, or a set of related applications, separately in its own container? Alongside, each user gets their own container for their specific data and applications. Think of each of these containers as a VMDK file. Then you can put everything together at run time by collecting the appropriate set of containers for a specific user as and when they need access to their desktop. Let’s illustrate this further.

You have a pool of stateless desktops with nothing but OS. Applications have their own containers; same for each user specific data.

Office worker Joe comes in. He gets one of the stateless desktops, Adobe & Office applications get added, Joe’s personal data along with the applications he installed on his desktop get mounted from Joe’s user data volume, and he gets his own customized desktop.

Now let’s consider designer Bob. When he logs in, it is the same process but this time, based on Bob’s profile, the graphics-intensive design applications get added too for Bob’s personalized desktop.

So this combines the best of both the non-persistent and persistent worlds. Users get customizable desktops and apps with a consistent experience across sessions. IT can easily update and deliver apps, secure the data, and enjoy the better economics.

The chart shows the results of application response times, as measured by the industry-standard real-world VDI load generator tool LoginVSI, for 2,500 desktops running on a single 10 TB X-Brick. First, LoginVSI generated the VDI load for 2,500 persistent (full clone) desktops. The application response times, as the number of desktops is increased, are shown in blue. Then the 2,500 persistent (full clone) desktops were converted into layered desktops. In this specific example, VMware App Volumes was used to create the application containers and deliver the applications. When the same 2,500 layered desktops were run, the application response times showed similar patterns but were roughly 15% higher throughout the test than before.

The takeaways here are:

  1. The layered desktops create more IOPS/desktop than their persistent full clone counterparts. So a platform that can deliver high IOPS with consistently low latency is more important than ever in this type of next-gen desktops.
  2. The I/O profiles change from the typical VDI I/O patterns that we are used to. For example, application container volumes will see high I/Os as the application volumes are effectively read-only. However, there will be variations of the read I/O intensity; e.g. the Microsoft Office container volume will likely see higher read I/Os than a specialty application as there are many more users of Office than a department-specific specialty application. On the other hand, containers for user specific information will see high write I/O profile. Unless the storage array balances the load uniformly and automatically across all controllers and all SSDs in a true N-way active-active scale-out architecture with radically simple array management, the SAN administrators will be hard pressed to optimize the storage platform to deliver best user experience with these next-gen desktops.
  3. Finally, you can see there were some high utilization spikes with the persistent (full clone) desktops in blue; the spikes with layered desktops (in purple) were lower in magnitude but higher in number (occurring fairly consistently throughout the tests). It shows that the underlying storage platform needs to deliver high performance with consistently low latencies irrespective of the VDI I/O load for these next-gen desktops.

High IOPS is a critical need for any VDI deployment. Here is an example of what XtremIO can deliver for VDI. In this example, the XMS dashboard is showing that the X-Brick is delivering nearly 125K IOPS when 1,000 persistent (full-clone) desktops are booted simultaneously.
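A quick sanity check on those boot-storm numbers, using only the figures quoted above: dividing the array IOPS by the number of desktops booting gives the per-desktop demand the array must absorb.

```python
# Per-desktop IOPS demand during the boot storm described above.
array_iops = 125_000       # observed on the XMS dashboard
booting_desktops = 1_000   # persistent (full clone) desktops booted at once

iops_per_desktop = array_iops / booting_desktops
print(iops_per_desktop)    # 125.0 IOPS per desktop at boot
```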

High throughput is another critical need for any VDI deployment. Time needed to clone many desktops from a template is dependent on the throughput that an array can deliver. Here is an example of what XtremIO can do for VDI. In this example, the XMS dashboard is showing that the X-Brick is delivering nearly 45 GB/s throughput when 1,000 persistent (full clone) desktops are cloned simultaneously from a template. 45GB/s is an insanely high throughput, the array is able to deliver this throughput at the target side due to XtremIO’s in-memory metadata architecture, and unique in-memory metadata based implementation of VMware’s VAAI Copy Offload API.
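Here is a rough clone-time estimate built from the throughput figure above. The 45 GB/s number comes from the text; the 40 GB logical size per full clone is an illustrative assumption, and with VAAI copy offload most of that "movement" is metadata work on the array rather than physical data copies.

```python
# Back-of-envelope clone-time estimate from the throughput above.
throughput_gb_per_s = 45   # logical GB/s observed on the XMS dashboard
desktops = 1_000
gb_per_clone = 40          # assumed logical size of each full-clone desktop

total_gb = desktops * gb_per_clone
seconds = total_gb / throughput_gb_per_s
print(round(seconds / 60, 1))   # roughly 14.8 minutes for 1,000 full clones
```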

Consistently low latency is another important criterion for the array to deliver a highly responsive, better-than-physical user experience to all the users, all the time, at any scale. Here is another example of what XtremIO can deliver for VDI. In this example, the XMS dashboard is showing that the X-Brick is delivering sub-millisecond latency for 1,000 persistent (full clone) desktops at steady state.

Below you can see a test I ran with Horizon 6 + App Volumes for 2,500 users running on a single 10TB X-Brick!

In part 2, I will discuss Desktop as a Service (DaaS) and virtualizing GPUs.


Upcoming Joint Citrix / XtremIO Webcast


Citrix XenDesktop, a leading desktop virtualization platform, offers a variety of desktop image management options including Provisioning Services (PVS) and Machine Creation Services (MCS) for different VDI use cases. EMC XtremIO, the industry’s #1 All-Flash array, delivers uncompromising end-user desktop experience – at scale, all the time, and for any user type – with its scale-out architecture, intelligent content-based addressing, in-memory metadata, inline all the time data reduction technology, unique copy data management and advanced XenDesktop integration.
In this session, we will be discussing how XenDesktop’s advanced image management efficiencies coupled with XtremIO’s unique architectural advantages translate into best price per desktop and drastically lower TCO for VDI deployments large and small. We will also be presenting customer case studies highlighting how organizations can start small and grow incrementally to nearly any scale without any service disruption while maintaining the same end-user experience with scale-out storage. Together, Citrix and XtremIO deliver an unparalleled VDI experience to end-users and administrators alike.

Register here:

http://www.citrixreadypromo.com/partners/webinar/cr-webinar-emc/?pcode=emc

Psst… wanna see some demos that will be shown in the webcast?


EMC XtremIO Sessions At VMworld 2015


Wow! This year, we at XtremIO have some really good sessions lined up at VMworld. I’m so happy that 3 out of the 4 sessions are actually coming from the solutions team, which I lead as part of the CTO role. The sessions will go really deep into the weeds, so if you are expecting the usual vendor marketing, look elsewhere. :)

EUC4879 – Horizon View Storage – Let’s Dive Deep!

Some people say that if you get the storage right in your VDI environment, everything else is easy! In this fun-filled technical workshop, attendees will receive a wealth of knowledge on one of the trickiest technical elements of a Horizon deployment. This session will cover storage architecture, planning, and operations in Horizon. The session will also profile a new, innovative reference architecture from the team at EMC XtremIO.

My take on this session:
Michael is in this for the long run; he has invested a lot of time writing the VDI white paper, and together with the rest of the team at this session, it’s a must if you are planning to do VDI this year. You can see some of the work that led to this session here:

https://itzikr.wordpress.com/2015/06/26/the-evolution-of-vdi-part-1/

 

VAPP4916 – Virtualized Oracle On All-Flash: A Customer’s Perspective on Database Performance and Operations

In the virtualized infrastructure the new technology wave is all-flash arrays. But today all administrators (virtual, storage, DBA) need to know how changing an essential part of the virtual infrastructure impacts critical applications like Oracle databases. This joint customer and XtremIO presentation acts as a practical guide to using all-flash storage in a virtualized infrastructure. The emphasis will be on value realized by a customer using all-flash together with findings from third party test reports by Principled Technologies. You will learn how all-flash storage is changing performance intensive applications like virtualized databases.

 

My take on this session: Vinay and Sam are the go-to people when it comes to DBs. There is so much stuff going on in the universe of XtremIO and DBs that you’ll be amazed how much you can learn just by attending this session!

VAPP5598 – Advanced SQL Server on vSphere

Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how to build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.

  • Scott Salyer – Director, Enterprise Application Architecture, VMware

  • My take on this session: Wanda is a SQL guru; she could spend an entire day just explaining how to properly virtualize MS SQL with XtremIO and vSphere. The foundation for this session is a white paper she’s working on as well. I also know Scott from VMware, and I even had the pleasure of co-presenting with him on the same topic some years ago. Again, databases and XtremIO are like peanut butter and chocolate; do not miss it if you care about compression, performance, snapshots, etc.!

VAPP6646-SPO – Best Practices for Running Virtualized Workloads on All-Flash Array

All-Flash Arrays are taking the storage industry by storm, and many customers are leveraging them to virtualize their data centers. Through the use of EMC XtremIO, we will start by examining the reasons this is happening and the similarities and differences between AFA architectures. We’ll then go deep into specific best practices for the following virtualized use cases: 1. Databases: can they benefit from being virtualized on an AFA? 2. EUC: how VDI 1.0 started, what VDI 3.0 means, and how it is applicable to an AFA. 3. Generic workloads being migrated to AFAs.


  • My take on this session: what can I say, this is me I guess. But seriously, this session has two parts. The first goes really deep into the different types of AFA architectures, so if you are considering evaluating or buying one, I highly encourage you to attend. The second part goes into the dirty tricks that have to do with AFAs and vSphere and, more importantly, how to overcome them. Even if you are not an XtremIO customer, you will benefit from it as well.

     

    I hope to see you ALL at VMworld!

    Itzik


RecoverPoint For Virtual Machines (RP4VMs) 4.3 is here and it’s awesome!!


I’m super excited today about a new product that we have just released: RecoverPoint for VMs 4.3 (RP4VMs) is out. This release is a substantial one, since it contains so many new features that were requested by you, our customers. So before we talk about these features, let’s do a quick recap of what RP4VMs is.


The RecoverPoint product family is comprised of two different products that are powered by the same technology but targeted at two different audiences, solving different business problems.

The traditional RecoverPoint is targeted at storage administrators that want to protect array LUNs on over 50 supported EMC and non-EMC storage arrays.

RecoverPoint for Virtual Machines is geared towards vAdmins of VMware environments for protecting individual VMs or multiple VMs in consistency groups for crash consistency protection.


RecoverPoint for VMs is targeted at the VMware admin (vAdmin) and protects at the virtual machine level. It is fully managed through vCenter, is deployed as a virtual appliance on existing ESXi hosts, has an embedded I/O splitter within the ESXi host kernel, and is storage agnostic, supporting any SAN, vSAN, NAS or DAS storage certified by VMware’s Hardware Compatibility List (HCL).

RecoverPoint for VMs is a completely separate product from the traditional RecoverPoint product. There is no upgrade, no downgrade and no interoperability with the traditional RecoverPoint products.

RecoverPoint for Virtual Machines simplifies operational recovery and disaster recovery with built-in orchestration and automation capabilities accessible via VMware vCenter. vAdministrators can manage the lifecycle of VMs directly from the vSphere Web Client GUI and perform Test Copy, Recover Production, and Fail Over to any point in time.

RecoverPoint for VMs is a software-only solution that is installed in a VMware vSphere environment. The virtual RecoverPoint Appliances (vRPAs) are delivered in an OVA format. A RecoverPoint for VMs cluster can have 2 – 8 vRPAs.

The RecoverPoint for VMs splitter is installed on all the ESXi hosts involved in VM replication and is delivered as a vSphere Installation Bundle (VIB). The ESXi software iSCSI adapter is utilized to communicate with the vRPAs.

Management for RecoverPoint for VMs is fully integrated as a plugin into the VMware vSphere Web Client.

Now, let’s talk about the enhancements in the 4.3 release, and boy, do we have many of these!

Multi Site Support


RecoverPoint for VMs 4.3 supports concurrent local and remote replication for production VMs.

Multisite configurations of 2:1 (fan-in) or 1:2 (fan-out) enable flexible disaster recovery topologies, and the scalability of inventory and protected VMs has increased.

RecoverPoint for VMs delivers data protection for VMs. It is designed with vAdmins in mind to enable them with the proper tool. With RecoverPoint for VMs, vAdministrators can take a more active role in data protection which has traditionally been served by storage administrators. This tool allows them to protect and recover VMs in which their mission critical applications are deployed. Data protection Service Level Agreements (SLAs) can be met, VMs are protected at the hypervisor level with individual VM granularity, and the tool now offers better visibility and recoverability in the data protection processes. RecoverPoint for VMs 4.3 can have up to three (3) RecoverPoint for VMs clusters in a RecoverPoint for VMs system. In this star topology, cluster 1 is connected to cluster 2 and cluster 3, but there is no connection between cluster 2 and cluster 3. One site is connected to all the remote sites.

Increased Scalability


RecoverPoint for VMs 4.3 supports a vCenter inventory of up to 5,000 registered VMs.

Large numbers of protected VMs and VMDKs can be supported and VMDKs can be as large as 40 TB.

A consistency group can have up to 32 VMs while a RecoverPoint virtual appliance cluster can support up to 128 consistency groups.

VMware clusters can have up to 32 ESXi hosts and up to four (4) ESXi clusters can connect to a single vRPA cluster.
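The limits above lend themselves to a quick pre-deployment check. Here is a small sketch that validates a planned layout against the 4.3 numbers quoted in the text; the limit values come from this section, while the layout dictionary and its keys are illustrative.

```python
# Sketch: validate a planned RP4VMs layout against the 4.3 limits above.
LIMITS = {
    "vms_per_cg": 32,                    # VMs per consistency group
    "cgs_per_vrpa_cluster": 128,         # CGs per vRPA cluster
    "esxi_clusters_per_vrpa_cluster": 4, # ESXi clusters per vRPA cluster
}

def validate(layout: dict) -> list:
    """Return a list of human-readable limit violations (empty = OK)."""
    problems = []
    for cg, vm_count in layout.get("cgs", {}).items():
        if vm_count > LIMITS["vms_per_cg"]:
            problems.append(f"{cg}: {vm_count} VMs exceeds {LIMITS['vms_per_cg']}")
    if len(layout.get("cgs", {})) > LIMITS["cgs_per_vrpa_cluster"]:
        problems.append("too many consistency groups for one vRPA cluster")
    if layout.get("esxi_clusters", 0) > LIMITS["esxi_clusters_per_vrpa_cluster"]:
        problems.append("too many ESXi clusters attached to one vRPA cluster")
    return problems

# Example plan: one CG is oversized, so one violation is reported.
print(validate({"cgs": {"cg-sql": 20, "cg-exch": 40}, "esxi_clusters": 2}))
```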

Orchestration enhancements

RecoverPoint for Virtual Machines simplifies operational recovery and disaster recovery with built-in orchestration and automation capabilities accessible via VMware vCenter.

Virtual machine hardware changes are now replicated to its copies. RecoverPoint for VMs 4.3 extends the use of wizard guided work-flow for recovery orchestration and prioritization. It determines the VM start-up priority as well as the integration of scripts for the start-up sequence.

If necessary, replica VMs can now have their network automatically re-configured when they come online in a disaster recovery situation.

Real time and historical activity reporting for the recovery activities are available.


RecoverPoint for VMs 4.3 automatically replicates virtual hardware changes made to protected Production VMs. The motivation is to keep the Replica VMs in line with the Production VMs in terms of virtual hardware configuration. Replication of hardware changes occurs periodically. The Replica VM hardware configuration is changed when performing a Test Copy. Select the Virtual Machine, Consistency Group, or Group Set that you want to use for testing a copy. The Test a Copy Wizard is started and, when the image is selected, the hardware configuration is applied as well.


Use the Protect VM Wizard to protect a Virtual Machine (VM). Under Advanced Options you will find the configuration parameters for how a VM and its virtual hardware are protected. By default, the Replicate Hardware Changes option is enabled. If this is not desirable, then uncheck it.

RecoverPoint for VMs 4.3 has multiple enhancements and new features around virtual machine automation in protected and disaster recovery environments. Built-in orchestration and automation capabilities are integrated with VMware vCenter at no additional cost. These features work on Replica VMs when performing Test Copy, Recover Production, and Failover.

Examples of use cases for why this automation is provided: modifying parameters within the VMs to match the different environments, pausing a start-up process and allowing manual intervention, or configuring network parameters like DNS, Firewall, DHCP and so on before starting up the VM copy.

The Startup priority is improved for recovery orchestration and prioritization through a wizard guided workflow. It supports application dependencies with a complex power-up sequence priority for each VM within a Consistency Group (CG) and to each CG within a group set.

Another option of the RecoverPoint for VMs 4.3 orchestration features is the before-power-on and after-power-on scripts. An external host can run scripts when executing a Test Copy, Recover Production or Failover to ensure environment or application consistency.

Furthermore, prompted messages provide the possibility of administrator configurable messages during the recovery flow.

The Re-IP information for Replica VMs is automated to avoid IP duplication and can be applied through scripts and network information in CSV format.

RecoverPoint for Virtual Machines does not work with VMware Site Recovery Manager (SRM). SRM should only be used with traditional RecoverPoint and the data protection for storage arrays.


All of the orchestration features just listed can be used for Replica VMs when performing Test Copy, Recover Production, and Failover. Here is an example showing how these advanced configuration options could all work together. Two VMs with the highest start-up priority 1 in a consistency group start first when, for example, a Test Copy on this CG is performed.

Each VM can have a Pre-Power-on external script assigned and a Pre-Power-on prompt configured. User prompts define a message to be displayed in vCenter to prompt the administrator to perform specified tasks before continuing with the start-up sequence.

The power-up process then continues. If a RecoverPoint for VMs system environment does not have an L2 network between sites, Re-IP of the VMs is necessary when the Replica VMs are brought up on the secondary site (for testing copies) in order to avoid IP conflicts. Additionally, a Post-Power-on external script and prompt with stop can be set per VM.

All these advanced configuration steps can be set for the highest priority VMs in the highest priority consistency groups within a group set.

These VMs could have network services (like DHCP, DNS, FTP and so on) running on them that need to come up before application servers or other VMs.

After the highest ranked VMs in the highest ranked CGs have started, all the VMs with Priority 2 in the highest ranked CGs will start. These VMs can again have all the advanced configuration possibilities configured that were just described for the Priority 1 VMs.

After all the VMs in the highest ranked CGs have started, VMs in CGs with Priority 2 would be next and so forth.
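The start-up ordering described above boils down to a two-level sort: higher-priority CGs first, and within each CG, higher-priority VMs first (priority 1 boots first). A minimal sketch, with illustrative VM names and priorities:

```python
# Sketch of the recovery start-up ordering described above.
vms = [
    # (cg_priority, vm_priority, vm_name) -- priority 1 boots first
    (1, 2, "app-server-1"),
    (1, 1, "dns-server"),
    (2, 1, "web-frontend"),
    (1, 1, "dhcp-server"),
]

# Sort by CG priority first, then VM priority within the CG.
boot_order = sorted(vms, key=lambda v: (v[0], v[1]))
print([name for _, _, name in boot_order])
# Network services (DNS/DHCP) come up first, then the app server,
# then the lower-priority CG's VMs.
```

Pre/post-power-on scripts and administrator prompts would hook in between these sorted steps, pausing the sequence until each gate clears.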


We configure the start-up priority for VMs by selecting RecoverPoint for VMs Management in the VMware vSphere Web Client. Here, select Protection and then Consistency Groups. Select the CG to be configured from the list of CGs. The Edit Start-Up Sequence button can now be selected, which opens the Start-up Sequence of VMs in this Group window. Here you can mark whether the VM is critical (the default) or not. The start-up priority default setting is 3. Open the dropdown list and select a setting from 1 to 5.


Before- and After-power-on scripts can be configured. The purpose of the scripts is to tie Disaster Recovery (DR) and Business Continuity (BC) tasks into the recovery flow.

Select RecoverPoint for VMs Management and the CG to be configured. Open the Start-up Sequence of VMs in this Group window. These are the same windows where the start-up priority for VMs was set before.

The additional RecoverPoint for VMs 4.3 automation enhancements allow administrator scripts to run before power-on and directly after power-on on virtual machines. The scripts are executed with SSH on an external host. Check Run script to activate this feature, enter the script name and command for the script. Each script also has a mandatory timeout period. The recovery flow is blocked until the script has successfully executed. If the script does not execute within the set time or the script fails, the system will retry the script a pre-defined number of times. The administrator receives a prompt indicating if the script failed.
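The timeout-and-retry behavior described above can be sketched as follows. This is not RecoverPoint’s implementation; the real product runs the scripts over SSH on an external host, so the `ssh` command in the comment and the helper name are illustrative.

```python
# Sketch of running a recovery script with a timeout and retries.
import subprocess

def run_recovery_script(command: list, timeout_s: int, retries: int) -> bool:
    """Run an external script, retrying on failure or timeout; return
    True only if one attempt completes successfully in time."""
    for _attempt in range(retries):
        try:
            result = subprocess.run(command, timeout=timeout_s)
            if result.returncode == 0:
                return True
        except subprocess.TimeoutExpired:
            pass  # treat a hung script like a failure and retry
    return False  # the system would then prompt the administrator

# e.g. run_recovery_script(["ssh", "scripthost", "/opt/dr/pre_power_on.sh"],
#                          timeout_s=120, retries=3)
```

The recovery flow stays blocked until the script succeeds or the retries are exhausted, which matches the gating behavior the text describes.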

The after-power-on script can be entered here as well.


The Dashboard > Overall Health window displays Recovery Activities. The activity report shows the involved consistency group, the activity and state, if a User prompt is waiting, the involved copy, the transfer state, and the image access progress. The User prompt number is linked to the Recover Production wizard. If selected, the wizard will open. Here it shows that a User prompt is waiting and the activity Recovering Production has been paused by the system as it awaits the administrator to dismiss the prompt.

RE-IP

IP information can be imported and exported for automated IP addressing to avoid IP duplication between Production VMs and Replica VMs when testing a copy, recovering production, and failing over.

The Re-IP information can be applied to Replica VMs through scripts and CSV file network information. The scripts are executed directly on the Replica VMs and parameters are passed to the script via VMware tools. In case VMware tools are not installed, a warning is displayed and Re-IP does not take place.

The Re-IP can be set on a batch of VMs. The virtual network interfaces are used in the reconfiguration.

The supported changes can be IPv4 and IPv6 addresses, subnet masks, gateway, DNS, and NTP information.

The RecoverPoint for VMs 4.3 software release also provides CSV template files, Linux bash and Python scripts, and Windows batch and Python scripts. The user can choose to use his own scripts and leverage the parameters passed by RecoverPoint for VMs. The user is responsible to place the scripts in the appropriate location for the changes to be persistent across reboots (OS dependent).


Go to RecoverPoint for VMs Management > Administration > vRPA > Networking. Here you can import a CSV file for a system-wide network configuration and export a CSV file with the current network configuration.

In this example CSV file, VM names are highlighted. Multiple vCenter servers and their names are displayed as well. This is an RP system-wide network file. There is an option to generate an empty template CSV file with all VMs and NICs in order to import them again.
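To illustrate how such a Re-IP CSV might be consumed, here is a small sketch. The column names and values below are assumptions for the example, not the exact RP4VMs template schema.

```python
# Sketch: parse a Re-IP CSV and index rows by (VM name, NIC).
import csv
import io

template = """vm_name,nic,ipv4_address,subnet_mask,gateway,dns
sql-01,nic0,10.1.0.11,255.255.255.0,10.1.0.1,10.1.0.53
web-01,nic0,10.1.0.12,255.255.255.0,10.1.0.1,10.1.0.53
"""

def load_reip(csv_text: str) -> dict:
    """Index the Re-IP rows by (vm_name, nic) for quick lookup."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {(r["vm_name"], r["nic"]): r for r in rows}

mapping = load_reip(template)
print(mapping[("sql-01", "nic0")]["ipv4_address"])  # 10.1.0.11
```

A per-VM script running inside the Replica VM would then take its row from a mapping like this and apply the new addresses to the guest OS.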

Reports


The reports for recovery activities can be viewed in the RecoverPoint for VMs Management window under Reports. Select the relevant consistency group on the left, and then scroll through the list of recovery activity reports on the right.

Clicking on a report displays the details of the specific activity report underneath in the same window. It is an overview of the type of activity, and the start and complete time. This is where a report can be exported in CSV format.

Within each activity report, you can expand the steps to view more details.

Deployments Enhancements

The vRPA and RecoverPoint cluster deployment has been simplified and now runs faster, and reboots have been eliminated.

This is also true for the ESXi splitter installation.

The RP for VMs 4.3 release supports the VMware 6.0 environment and vCenter servers can run in linked mode.

Support for a deployment server is added to RecoverPoint for VMs 4.3 which allows automated deployments using API calls.

The deployment process remains mostly the same in this release of RecoverPoint for VMs 4.3 compared to the 4.2 release. Changes, enhancements, and critical steps are covered in this module.

The deployment needs a VMware vSphere environment 5.1 U1 and above with ESXi hosts 5.1, 5.5 and/or 6.0. A minimum of two ESXi hosts are needed since the RecoverPoint for VMs cluster has a minimum of two virtual RecoverPoint Appliances that need to run on different physical ESXi hosts for high availability.

First, install the RecoverPoint for VMs ESXi host splitter on all the ESXi hosts in a VMware cluster that hosts protected Production VMs or their Replicas.

Prepare the virtual network configuration (WAN, LAN, iSCSI1, and iSCSI2) on the ESXi hosts in the VMware cluster where the vRPAs are deployed. The iSCSI network is required between the vRPAs and ESXi host splitters. They are not visible to VMs or storage and require the same broadcast domain as the VMkernel ports for the Software iSCSI Adapter enabled on each ESXi host. For best practice, separate iSCSI traffic from LAN/WAN traffic. It is recommended that each network maps to a separate physical NIC. NIC teaming is not supported.

After you have deployed the planned number of vRPAs using the OVA file, run the Deployment Manager and configure a RecoverPoint for VMs cluster, including the installation of the vSphere Web Client plugin for RecoverPoint for VMs. The Deployment Manager is also used to connect a new RP cluster to an existing RecoverPoint for VMs system.

The RecoverPoint for VMs splitter comes as a vSphere Installation Bundle (VIB) file and is installed as the root user through SSH and the esxcli command.

The RecoverPoint for VMs ESXi host splitter is to be installed on all ESXi hosts in the VMware cluster. If a new ESXi host is added without the splitter to an ESXi cluster, a warning is displayed and protected VMs will not be able to use this ESXi host.

The splitter installation in RecoverPoint for VMs 4.3 no longer makes a reboot of the ESXi host necessary, simplifying the splitter installation.
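Since the splitter must land on every host in the cluster, the install lends itself to a simple loop. The `esxcli software vib install -v <path>` syntax is standard ESXi; the host names and VIB path below are placeholders, and in practice you would run the command over SSH as root, as described above.

```python
# Sketch: build the per-host splitter install command for every ESXi host.
def vib_install_command(vib_path: str) -> str:
    """Standard ESXi command to install a VIB from a local path."""
    return f"esxcli software vib install -v {vib_path}"

hosts = ["esxi-01.lab.local", "esxi-02.lab.local"]  # placeholder host names
for host in hosts:
    # In practice: subprocess.run(["ssh", f"root@{host}", vib_install_command(...)])
    print(f"{host}: {vib_install_command('/tmp/rp4vms-splitter.vib')}")
```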

Additional Product Enhancements

RecoverPoint for VMs 4.3 has added and simplified functionality and management for VMDKs, RDMs, thick and thin provisioning, and resizing of VMDKs.

The license is based on the production site vCenter ID. Furthermore, application consistent bookmarks with RecoverPoint KVSS are supported.

The upgrade from RecoverPoint 4.2 to 4.3 can be performed online.

With VMware thin provisioning operating at the virtual disk level, VM administrators gain the ability to allocate virtual disk files (VMDKs) as thick or thin. Thin provisioning of virtual disks allows virtual machines on VMware ESXi hosts to provision the entire space required for the disk’s current and future activities, while initially committing only as much storage space as the disk needs for its first operations.

RecoverPoint for VMs 4.3 allows you to replicate data in a mixed environment of thick-thin configurations across sites, including thin-to-thick and thin-to-thin.

An advanced option in the Protect VM wizard allows you to choose a different configuration for the Replica VMDK copies.

When setting up a Replica VM with automatic provisioning, it will have the same provisioning as the Production VM.


When the Protect VM wizard is used to setup protection for a VM, the Advanced Options are available under the Configure production settings. The Advanced Options offer settings for the Protection Policy for new VMDKs, Disk Provisioning settings, and Hardware Changes. These options are set within the VM production copy settings and are valid for all the Replica copies of this Production VM.

Notice the option Disk Provisioning with its default setting Same as source. Clicking the drop down list shows the available options.

Application Consistency with VSS

Point-in-Time snapshots with RecoverPoint for VMs systems are typically crash consistent.

Application consistency can be achieved, for example, by leveraging the RecoverPoint KVSS command-line utility. It enables application-consistent bookmarks for a given VM. This utility can be installed on Windows 2008, 2008 R2, and 2012 R2. The commands are issued against the CG, not the VM.

Recovery execution is possible for any application supporting Microsoft Volume Shadow Copy Service (VSS). Validated applications include MS SQL 2012 and later and MS Exchange 2010 and later.

You can download RP4VMs 4.3 here: http://www.emc.com/products-solutions/trial-software-download/recoverpointforvms.htm


OK, that was a lot to cover; time to show you some demos.

Let’s start with the actual deployment

Now, let's look at a basic protection workflow

And here’s an advanced protection workflow

And finally, the recovery (failover) workflow



Virtual Storage Integrator 6.6 Is (almost) Here !


Hi,

One of the things I've always been passionate about is integration. You see, no man is an island, and when it comes to storage management that rule certainly applies: we always aspire to bridge the gap between the storage administrator and the application owner. One of these integration points is a vCenter plugin known as VSI (short for "Virtual Storage Integrator").

It's one of those plugins we released back in 2009 and have kept improving. The latest version, 6.6, adds so much in the context of XtremIO that I would like to take a step back and explain its architecture again, how to install it, and of course describe the features.

Architecture

At its core, VSI is a vCenter plugin. It's an OVA that gets deployed and registered against the vCenter server. It supports the entire EMC storage portfolio, providing viewing capabilities for your storage array on the one hand and provisioning and management capabilities on the other. Don't get confused here, though: the "provisioning" part is not limited to basic VMFS/RDM provisioning. We have special integration with VDI (both Citrix XenDesktop and VMware Horizon View), and we also have special integration with VMware Site Recovery Manager (SRM) in the context of being able to fail over to a specific Point in Time (PiT) image set. So you see, it's becoming a centralized place to do everything related to XtremIO / vSphere.

If you just want to see a demo of how VSI 6.6 works with XtremIO, please watch this recorded demo I made:

Deploying VSI 6.6



Deploying the OVA appliance (Solutions Integration Service) is very easy. First, download it from https://support.emc.com (look for Virtual Storage Integrator). Once downloaded, just use "Deploy OVF Template" from vCenter and give it an IP address (or use a DHCP one if it won't change). That's it in terms of deployment; you now need to register it.


Once deployed, browse to the Solutions Integration Service on port 9443, for example https://192.168.0.2:9443/vsi_usm/admin, then log in and change the password (the default username is "admin" and the default password is "ChangMe").


If you are upgrading an existing VSI deployment, please take a backup of the old one first: click Database in the left panel, select Take a Backup, click Submit, and save the output file.


Then, once you have deployed VSI 6.6, you can migrate the VSI 5.5 DB to it by selecting Data Migration, then Choose File and Submit.


If you have migrated from VSI 5.5, there's no need to re-register it against the vCenter. However, if it's a new VSI deployment, you now need to register it with the vCenter server: go to "VSI Setup" and enter the vCenter IP/FQDN in the "Host" box, and fill in the "Username" and "Password" fields where appropriate. Lastly, press the "Register" button. You should get a confirmation in the "Status" window if the registration went OK.


You can now go ahead and open the vCenter web interface and log in to it.

When logging in, the Web Client will download the registered VSI plugin; this may take several minutes.


Click vCenter in the left navigation panel and you should see the category EMC VSI at the bottom; VSI 6.6 has been successfully installed.

Registering Arrays

OK, at this stage we need to register the XtremIO array's XMS. We can register one, many, or one XMS that manages multiple XtremIO clusters (version 4.0 and above).


Click Solutions Integration Service and select Register Solutions Integration Service from the Action menu.

Complete the information to register the current VMware account with a Solutions Integration Service account.


You should now see this screen, which indicates that the Solutions Integration Service has been successfully registered.


Similar to the Solutions Integration Service registration, you now need to register the storage systems.


As explained earlier, XtremIO v4 has multi-cluster support, meaning a single XMS can manage multiple clusters. VSI 6.6 supports this feature: use the same storage registration wizard, and VSI will auto-discover all the clusters and register them all, showing them as separate arrays. During provisioning, you can choose which array to use (cool!).


This is how a single XMS connected to multiple XtremIO clusters looks after being registered with VSI 6.6. Note that the management IP looks the same, but it actually details the multiple clusters it manages (in this example, two).

Registering RecoverPoint

One of the most important features of XtremIO 4.0 is the ability to support native replication with RecoverPoint (which is, by far, my favorite EMC replication technology). This phase is of course not mandatory if you aren't using RecoverPoint, but if you are, please read ahead.


Registration is very easy. You can either use the SIS interface (the one you first used to log in to the appliance, screenshot below) or the vCenter SIS plugin (screenshot above). I vote for the second option: you simply add the RecoverPoint management cluster IP, and it will automatically detect all the other RecoverPoint clusters connected to the cluster IP you specified!


Remember the VMware SRM PiT integration? Now it's time to register VMware SRM with the plugin itself. This will allow us to specify a specific Point in Time to be used when running an SRM failover. This is very important, since the entire design center of RecoverPoint is the ability to recover from almost any point in time, while VMware SRM only allows you to fail over to the last point in time. By registering the RecoverPoint cluster IP and the SRM IP, we can now achieve this functionality! Very, very cool.


SRM registration can simply be done using the "Data Protection" part of the VSI plugin.

OK, now comes the really important stuff. Below you can see a demo I recorded that shows all the features of VSI 6.6. If you have any questions, please drop me an email; I really hope you will use and love the VSI plugin as much as I do.

Features

VSI offers many features, so let's recap everything it can do in the context of XtremIO:

Setting ESXi Host / Cluster Best Practices


There are many best practices when it comes to XtremIO running with vSphere. You could always apply them using the CLI or PowerShell scripts, but you can now do it all from the VSI plugin. Moreover, you can apply these best practices at the ESXi cluster level instead of manually pointing at each ESXi host. This version also has a new XCOPY optimization, so please make sure to leverage it.
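For context, this is roughly what applying a couple of such host settings by hand with esxcli looks like. The specific values here are illustrative only; always take the current recommended values from the XtremIO host configuration guide:

```shell
# Illustrative only -- consult the XtremIO host configuration guide for current values.

# Use Round Robin path selection for XtremIO devices, switching paths every I/O
esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA \
    --psp VMW_PSP_RR --psp-option iops=1 --vendor XtremIO

# Tune the VAAI XCOPY (Full Copy) transfer size, in KB (example value)
esxcli system settings advanced set --option /DataMover/MaxHWTransferSize --int-value 256
```

VSI applies the equivalent settings for you, host by host or across an entire ESXi cluster.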

VMFS / RDM Provisioning


You can provision VMFS and/or RDM volumes, and you can provision multiple datastores at the same time. Do not forget to also select the specific XtremIO best practices, like VAAI TP alerts (if the array runs out of space, vCenter will pop up an error that you ran out of space). As documented before, if you are using XIOS 4.0 with a single XMS managing multiple arrays, VSI understands this and will ask you which XtremIO cluster to provision the datastores from.

Extending a Volume / VMFS Datastore.


In case you actually run out of space at the datastore level, and assuming you DO have enough capacity at the array level, you can now use VSI to extend both the physical LUN size and the datastore size, saving you the two different places where you previously had to do it manually.

Reclaiming Unused Space at the Datastore Level


VSI can now also run space reclamation at the datastore level and schedule the operation. Moreover, it can be applied at the folder level, running space reclamation on all the datastores grouped under that folder, one datastore after the other. Space reclamation is very important when running with vSphere, since you want to utilize every bit of the flash media you are using.

Note that the VSI space reclamation functionality covers only the datastore level and doesn't perform in-guest space reclamation. Datastore space reclamation is useful in the following cases: VM deletion, VM migration to another datastore, VM-based snapshot deletion or refresh, and VM power-off and power-on (which creates a new .vswp file). For more comprehensive details about the differences between volume-level space reclamation and the in-guest one, please see the following post: https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/
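Under the hood, datastore-level reclamation corresponds to the VMFS UNMAP primitive, which you can also trigger manually on an ESXi host; the datastore label below is a hypothetical example:

```shell
# Reclaim dead space on a VMFS datastore ("XtremIO_DS01" is a hypothetical label);
# -n controls how many VMFS blocks are unmapped per iteration
esxcli storage vmfs unmap -l XtremIO_DS01 -n 200
```

VSI wraps this per-datastore operation and adds the folder-level looping and scheduling described above.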

Viewing XtremIO used Logical Capacity at the VMFS level


VSI can also show you the logical capacity a VMFS datastore consumes at the array level. This is very useful in the following scenario: let's assume you have provisioned a 5TB datastore and deployed 2 VMs of 40GB each in terms of provisioned capacity, but each consumes only 20GB (assuming no deduplication or compression, for the sake of simplicity). vSphere will still show 80GB while the array will show 40GB, because XtremIO discards zeroes and does not count them as part of the data reduction numbers and/or the consumed logical capacity. You can now view the logical capacity from the vCenter "Datastores" tab.
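The arithmetic above can be sketched in a few lines of shell (all numbers are the hypothetical example values from this paragraph):

```shell
# Two VMs, each provisioned at 40 GB but with only 20 GB of non-zero data written.
# vSphere counts the provisioned total; XtremIO discards zeroes and counts written data.
provisioned_gb=$((40 + 40))
written_gb=$((20 + 20))
echo "vSphere sees: ${provisioned_gb}GB"   # → vSphere sees: 80GB
echo "Array sees: ${written_gb}GB"         # → Array sees: 40GB
```

With deduplication or compression in play, the array-side number would shrink further, widening the gap VSI surfaces for you.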

Taking An Array Lun Based Snapshot


In VSI 6.6 we also introduced the ability to take array-based snapshots of a VMFS datastore. This allows you to invoke a LUN-level snapshot using the vCenter plugin.


Once the snapshot has been taken by VSI, you can distinguish it from snapshots that may have been taken by someone else by the fact that it will have a tag named "VSISnapshots" associated with it.


You can also distinguish snapshots that were taken by VSI by viewing the "VSI Created Snapshots" tab.


When you want to restore VMs from a VSI-created snapshot, simply press the "Mount" button and select the ESXi host you want VSI to mount this volume to. VSI will do the LUN mapping for you and will resignature and present the VMFS datastore to the ESXi host; you can then copy VMs from the snapshot VMFS datastore to other datastores.
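VSI automates the resignature step for you; for reference, the manual ESXi equivalent looks roughly like this (the volume label is a hypothetical example):

```shell
# List unresolved (snapshot) VMFS volumes visible to this host
esxcli storage vmfs snapshot list

# Resignature the snapshot copy so it can be mounted alongside the original
esxcli storage vmfs snapshot resignature -l XtremIO_DS01
```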

Viewing RecoverPoint Information


If you are using RecoverPoint to replicate volumes from your XtremIO array, you can now view the consistency group details, such as how many points in time this VM can recover from (2)


and how many other VMs are part of the same consistency group (2)


If you are using VMware Site Recovery Manager (SRM), you can now instruct SRM to use a specific RecoverPoint Point in Time to recover from; simply press the "Apply" button. This ensures that this specific PiT will be used for the SRM failover.

AppSync Integration

While VSI is capable of taking LUN-based snapshots by itself, you may want to leverage EMC AppSync for more complex scenarios where VM backups and restores are needed at the application level (application-level consistency), and where you want automated snapshot refresh. For example, you have your production DBs and want to refresh your test/dev environment with the production copy on a scheduled interval. AppSync also supports physical environments in addition to virtual ones, whereas the VSI plugin only supports VMs running on VMware vSphere.


Subscribing (adding) a datastore to an existing or new AppSync SLO is very easy: simply right-click the datastore and press "Subscribe" or "Create and Subscribe". In my case, I had already created the SLO, so I chose "Subscribe".



Once you have subscribed the datastore, you can see its subscription details under the "Manage" tab, "EMC VSI" tab, "AppSync Management" tab. In my example, you can see that a local snapshot will be taken every 24 hours and that it will include a VM snapshot as part of the backup (you could disable that and just go with a volume-only, crash-consistent backup).


Once I ran the job (or waited for the next interval to invoke the run), I could check under the "Copies" tab whether the backup was successful. Here you will also see if more than one backup ran; remember, in my example I chose to run a backup every 24 hours, so the "Copies" tab should be populated with more and more backups every 24 hours.


Here you can see that, since I have run two backup jobs so far, the XtremIO array lists these snapshots under the "AppSyncSnapshots" tag.


The restore operation is also very straightforward. You can restore a VM, multiple VMs, or files from within a VM, or mount a datastore. In my case I just want to restore a VM, so I right-click a VM, "All EMC VSI Plugin Actions", "AppSync VM Restore".

That's it. As you can see, the VSI 6.6 plugin release is very rich in terms of the capabilities it offers. Below you can see a demo showing everything we discussed in this post.



VPLEX GeoSynchrony 5.5 Is Here


Hi,

VPLEX has been a huge success for EMC customers, and I have to admit, when we started shipping XtremIO I didn't expect it to be such a popular solution for our XtremIO customers as well. I was wrong. It turned out that many of the customers who had already taken a leap of faith with us took another one and decided to go Active/Active with their datacenters. Come to think about it, it makes sense: you are already investing money in flash technologies, so why wouldn't you ensure it is actually being utilized across both of your datacenters?

Since day 1 we heard you, our customers, loud and clear. You loved the technology, but you were asking us to integrate it even further: you didn't want to provision the LUNs and the extents manually, you wanted to do it all from the same place. So today I'm really happy to see this feedback manifest itself as a GA feature (along with many others) in the VPLEX 5.5 release!

VPLEX VIAS (Integrated Array Services Enhancements)

The VPLEX Integrated Array Services (VIAS) feature enables VPLEX to provision storage for EMC VMAX, VNX, and now also XtremIO storage arrays directly from the VPLEX CLI, UI, and REST API.
VPLEX uses Array Management Providers (AMPs) to streamline provisioning and allows you to provision a VPLEX virtual volume from a pool on the storage array.
The VIAS feature uses the Storage Management Initiative-Specification (SMI-S) provider to communicate with the arrays that support integrated services to enable provisioning. The SMI-S
provider is used for VMAX and VNX. After the SMI-S provider is configured, you can register the SMI-S provider with VPLEX as the
Array Management Provider (AMP). When the registration is complete, the managed arrays, pools and storage groups are visible in VPLEX, and you can provision virtual volumes from those pools
and assign storage groups. The pools used for provisioning must have been previously created on the storage array, as VIAS does not create the pools for provisioning.
VPLEX 5.5 also supports a REST AMP used with XtremIO arrays.

VIAS allows administrators to provision storage using a single management interface. The VPLEX 5.5 release continues to build on the VIAS momentum from the VPLEX 5.3 and 5.4 releases and
enables more VIAS usability enhancements along with support for VMAX3 Service Level objectives (SLO) and XtremIO provisioning.
Provisioning storage from VPLEX is made easier for the administrator when using VMAX, VNX and
XtremIO arrays. It is time-efficient and allows an end-to-end stack integration.
VPLEX must be zoned to the arrays. Administrators can provision volumes from existing pools directly from VPLEX. VPLEX communicates with arrays to create volumes from pools and masks
them to VPLEX. Volumes are imported into VPLEX. Once imported, VPLEX can present these volumes to host(s) as virtual volumes.

To use the VIAS feature, first register an array management provider with VPLEX. Second, register the initiators that access VPLEX storage. Third, create a storage view that includes virtual
volumes, initiators and VPLEX ports to control host access to the virtual volumes. Fourth, use the Provision-from-Pools wizard to select storage pools from which to create the virtual volumes and
storage groups for the virtual volumes. These steps can be performed in Unisphere for VPLEX or the VPLEX CLI.
Consult the VPLEX Administration Guide and supporting white papers for planning,
implementation and best practices. Make sure that you are familiar with the pools on the array before provisioning storage.


The VPLEX Integrated Array Service (VIAS) uses the REST AMP type for the XtremIO array support. The AMP is a VPLEX construct created for each XtremIO array. As a prerequisite, the
VPLEX and XtremIO array need to be zoned following best practices. The REST AMP supporting VIAS with XtremIO arrays does not require additional software. The
provider is on the XtremIO array itself. You need to register the AMP as type REST within VPLEX.
Each XtremIO array is registered for VIAS in a 1-to-1 relationship with a VPLEX cluster. Multiple XtremIO arrays need to be individually registered in VPLEX. This is different from SMI-S AMPs
where multiple storage arrays are managed by the SMI-S provider, then the SMI-S provider is registered with VPLEX.
Once successfully registered provisioning jobs can be created. VPLEX with VIAS can provision XtremIO volumes, import them, then create and provide virtual volumes to application hosts.
The Integrated Array Services of VPLEX 5.5 support the XtremIO 3.x and 4.x releases, although new 4.x features are not supported.


Use Unisphere for VPLEX and select Register under Array Management Providers in Provision
Storage. This opens the Register Array Management Provider window. When registering the REST AMP
for an XtremIO array, select the correct Provider Type from the drop down list. Select an array from the Array drop down menu.
Select the XtremIO array and fill in the other mandatory fields as well. Remember that each XtremIO AMP needs to be individually registered.


Next, during the REST AMP registration process, the user is required to accept the array certificate. The SSL certificate is mandatory for VIAS to work with the XtremIO array as HTTPS is
used to communicate.


After accepting the certificate, you should see the REST AMP provider in the list of registered Array Management Providers. Continue with the following steps to set up VPLEX to provision storage. Register Initiators and Create Storage View might already have been done before the configuration of array management providers. Manual provisioning on the storage arrays is possible, but would need several more steps; VIAS simplifies the storage provisioning. Step 2 in Step-by-Step: Using VPLEX to Provision Storage shows Register Initiators: the initiators (application hosts) that will access VPLEX storage need to be registered. Step 3 is used to create storage views; storage views include the initiators and VPLEX ports to control host access to virtual volumes. Step 4 launches Provision Storage; this provisioning wizard guides you through provisioning storage and exposing the storage to hosts.


In the Provision Storage Wizard you need to select an existing VPLEX consistency group or create a new one. If the provisioning is done in the CLI, the CG has to already exist when running the
command. The storage being provisioned through this wizard is added to the selected or newly created CG.
The selection of the CG determines where and how the Storage Provision wizard will create storage.


Depending on the consistency group selected, volumes are local or distributed. Additionally, local mirroring can be selected.
Other details entered in this step are capacity per volume, a volume base name and the job name.


Under Storage, the storage array and one corresponding storage pool is selected. XtremIO does not use the concept of pools. In order to provide consistency across all array types, a default pool
is created for each array named DefaultPool. Each XtremIO array will have this default pool. Storage Groups can be automatically assigned, or when the drop down is changed to Select from
list, then the XtremIO initiator groups are presented as storage groups in VPLEX. Only the initiator groups containing VPLEX ports are shown.


The next step lets you select a storage view. Through the Storage View the provisioned storage
is exposed to hosts. The provisioned storage does not have to be assigned to a storage view, in which case no host is
able to access the virtual volume.


The Review screen allows you to look at the job details before the job is started. The Provision Storage button starts the job.


The results can be viewed by using the provided link for the job status. This opens the Provision
Jobs Status window. Select the job from the jobs list and open the Provision Job Properties window. The job status indicates if the job is still running or is completed. Creating storage on XtremIO arrays through VIAS is extremely fast and might only take seconds.


Through the VIAS storage provisioning, multiple configuration changes on the XtremIO array occurred:
A VPLEX_VIAS volume folder was created, the folder name format is: VPLEX_VIAS_<TLAID>_<ID>. The VIAS created XtremIO volumes names begin with VPLEX and include the base name entered
in the wizard, making it easy to identify the volumes. Furthermore, the details of the XtremIO volumes show a tag for the VPLEX-VIAS created volumes. The volumes were then mapped to the VPLEX initiator group.


Details in Unisphere for VPLEX under the Array Management Providers show the AMP for the XtremIO array, details on the XtremIO array name, the default pool and the storage group used.


The Storage View in Unisphere for VPLEX shows the storage view that was selected for the job and the added virtual volumes that were created from the XtremIO volumes.


The VPLEX command line can also be used to register the REST AMP for the XtremIO array. All the information needed in the GUI is used here in the command. The password of the XtremIO array needs to be entered twice. The certificate is displayed and needs to be confirmed.


Once the AMP is registered, the command ll storage-groups/ lists the VPLEX storage group
(=initiator group) to be used for the provisioning command. Only initiator groups with VPLEX
initiators are displayed. The command ll storage-pools/ lists the storage pool to be used when provisioning storage
on the XtremIO array. It will always be the pool DefaultPool specifically created to stay consistent with other storage arrays.

Here’s a demo of VIAS and XtremIO



VSI 6.6.3 Is Here!


Hi,

Rolling on with the Q3 updates to XtremIO, I’m very happy to share with you an updated version of our VSI (Virtual Storage Integrator) vCenter plugin.

If you are new to VSI, I highly encourage you to read about it here first: https://itzikr.wordpress.com/2015/07/17/virtual-storage-integrator-6-6-is-almost-here/ . Seriously, it's pretty much the glue between XtremIO and VMware vSphere (and many other things), and it's FREE!

OK, you're back? Cool! This 6.6 SP3 release contains the following improvements:

Take Snapshot

This feature helps the user take a snapshot of a qualified XtremIO datastore.

There is a dedicated menu item named "Take Snapshot", which is shown only for XtremIO-based datastores.

The user can take snapshots by right-clicking an XtremIO datastore. On the right-click menu, choose All EMC VSI Plugin Actions > Take Snapshot.


The popup dialog shows a loading indicator during the back-end process.


With XtremIO 4.0, read-only snapshots are supported.


Take Snapshot by Scheduler (4.0 ONLY)

This feature helps the user take snapshots of a qualified XtremIO datastore on a schedule.

There is a dedicated menu item named "Take Snapshot by Scheduler", which is shown only for XtremIO 4.0 based datastores.

The user can schedule snapshots by right-clicking an XtremIO 4.0 datastore. On the right-click menu, choose All EMC VSI Plugin Actions > Create Snapshot Scheduler.

The popup dialog shows a loading indicator during the back-end process.


After loading is finished, the user can configure the scheduler as desired:


After pressing the 'Submit' button, the task is submitted automatically to the SIS server. The user can press the 'OK' button to close this dialog.

Managing Snapshot Scheduler

This feature enables the end user to manage XtremIO snapshot schedulers directly in the vSphere client. Management includes viewing, modifying, removing, and disabling/enabling. The snapshot schedulers are created on an XtremIO volume that is the backend of a qualified vSphere datastore.

NOTE: This is for XtremIO 4.0 based datastores only. No snapshot scheduler will be shown for an XtremIO 3.0 based datastore.

View Snapshot Scheduler

The user can click a qualified datastore and then view the snapshot schedulers defined on the backend XtremIO volume, on the storage side, under the "Scheduler Management" tab.


Modify/Remove/Disable/Enable Snapshot Scheduler

After viewing the snapshot schedulers, the end user can select one to modify, remove, suspend, or resume.

By navigating to Datastores -> Manage -> XtremIO Management -> Scheduler Management, you can manage the schedulers defined on an XtremIO volume.


Choose one of the listed schedulers:



The user can choose "Suspend" to suspend or disable a scheduler, "Modify" to update a scheduler, and "Remove" to remove a scheduler. The label of the "Suspend" button varies based on the status of the selected scheduler: if it's an ongoing scheduler, the label will be "Suspend"; otherwise, it will be "Resume".

If you click the "Suspend" or "Resume" button, a dialog is shown to confirm the action. After pressing "OK" on this dialog, the process of suspending or resuming the scheduler is triggered; on completion, the selected scheduler will be suspended or resumed.

If you click the "Modify" button, a dialog pops up enabling the user to edit the scheduler:


Pressing the "Submit" button triggers the scheduler update process. After this process completes, the scheduler is updated:


View Snapshot for VM

This feature lists the details of XtremIO snapshots (that is, restorable points in time for the VM) in the table under the "XtremIO Management" tab after the user selects a qualified VM. Those snapshots are on the XtremIO volume that is the backend LUN of the datastore hosting the selected VM.


Restore VM from one PiT

This feature enables the user to select a Point in Time (PiT) and restore the VM to the corresponding state.

The user first selects a PiT and then presses the 'Restore' button. After pressing 'OK' on the confirmation dialog, a task is triggered underneath to restore the VM automatically.


The whole back-end process consists of several sub-tasks, as follows:

  1. Take a writable snapshot (snap-B) based on the selected snapshot (snap-A).
  2. Mount snap-B to the same host as the one hosting the target virtual machine (vm-A); a new temporary datastore (datastore-B) is now available.
  3. Search for the .vmx file on datastore-B based on the one of vm-A.
  4. Register a new VM (vm-B) with the found .vmx file.
  5. Check whether the available datastore capacity is sufficient:
    1. Calculate the VM size (size-1) on datastore-B.
    2. Calculate the available capacity (size-2) of the datastore (datastore-A) on which vm-A resides.
    3. Check whether size-2 > size-1; if true, continue; if false, go to step 12.
  6. Shut down vm-A.
  7. Take a snapshot (snap-C) of the volume mounted as datastore-A; snap-C can be considered a backup.
  8. Unregister vm-A.
  9. Clone vm-B to datastore-A, getting a new VM (vm-C).
  10. Power on vm-C.
  11. Destroy vm-A.
  12. Unmount datastore-B.
  13. Clean up snap-B.
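
The capacity guard in sub-task 5 can be sketched in a few lines of shell (a hypothetical illustration with made-up sizes, not VSI's actual code):

```shell
# Succeeds (exit 0) only when the target datastore's free space exceeds the VM size.
can_clone_back() {
  vm_size_gb=$1
  free_gb=$2
  [ "$free_gb" -gt "$vm_size_gb" ]
}

# The VM on datastore-B measures 60 GB; datastore-A has 250 GB free:
if can_clone_back 60 250; then echo "clone vm-B to datastore-A"; fi
# Not enough room: VSI skips ahead to unmounting datastore-B (step 12)
if ! can_clone_back 60 40; then echo "skip to step 12"; fi
```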

View Snapshot for Datastore

This feature lists the details of XtremIO snapshots in the table under the "XtremIO Management" tab after the user selects a qualified datastore. Those snapshots are on the XtremIO volume that is the backend LUN of the selected datastore.

NOTE:
'Qualified' here means the datastore is based on one LUN, and that LUN maps to a single XtremIO volume. If an unqualified datastore is selected, the XtremIO Management tab will not be visible.


Restore Datastore from Snapshot (XIOS 4.0 ONLY)

This feature enables the user to restore a datastore from a selected XtremIO snapshot.

NOTE: This is for XtremIO 4.0 based datastores only. If a snapshot is from XtremIO 3.0, the 'Restore' button will be disabled.

The user first selects a snapshot and then presses the 'Restore' button. After pressing 'OK' on the confirmation dialog, a task is triggered underneath to restore the datastore automatically.


Assume the target snapshot (snap-A) used to restore the datastore (datastore-A) has been specified in the front end. The underlying back-end flow is:

1. Shut down/power off all VMs based on datastore-A.

2. Take a snapshot (snap-B) of the volume (vol-A) mounted as datastore-A; snap-B can be considered a backup.

3. Restore vol-A from snap-A, then remove the automatic backup snapshot.

4. Handle the virtual machines on the restored datastore:

5. Unregister virtual machines that no longer exist.

6. Power on the existing VMs.

Mount Datastore from Snapshot

This feature is fully migrated from the existing VSI 6.6 feature set using HTML-based technology; the previous implementation has been removed from VSI.

When a snapshot is selected in the table, the Mount button is enabled, allowing the user to mount the snapshot as a datastore.


When the Mount button is clicked, a configuration window pops up. At the beginning, the window tries to load all available hosts in the vCenter.

After loading is finished, all connected hosts are displayed in the dropdown list. The user needs to choose one host on which to mount the selected snapshot.

During the mount configuration, the user can cancel the mount operation by clicking the Cancel button or the "x" at the top left of the window. But once the OK button is clicked, the task is submitted to the SIS server and the operation can no longer be canceled.

When the OK button is clicked, the whole web page is locked for processing. After the progress bar disappears, the user can monitor the task status via My Tasks in the Recent Tasks panel.

After all tasks have finished, you can find the newly mounted XtremIO datastore by going to Home > vCenter > Hosts and selecting the host you chose for the mount in the previous steps. On the panel to the right, you will find the datastore by clicking Related Objects > Datastores.

You can see all the "old" and "new" features of the VSI plugin here:



EMC AppSync 2.2 SP2 Is Out!


It's no secret that copy management plays a huge role for our customers. They have been looking for ways to utilize array-based snapshots without impacting the performance of the parent volumes for ages. The best proof is the fact that clones are still widely used by so many customers out there, and it's not because of "DR"; it's simply because customers are afraid of impacting their "crown jewels", e.g. the primary DB.

The bigger issue than performance impact is the data growth (or should I say, explosion) due to copies; IDC predicts that more than 60% of the data we store is derived from copies, as opposed to the primary data itself.

We have been shipping our unique snapshot mechanism, which we now call "XVC" (XtremIO Virtual Copies), since April 2014, and it has been highly successful with our customers. Almost overnight, we started to see adoption of XtremIO for this practical use case.

Talking to our customers (and I do that a lot), one piece of feedback about snapshots keeps coming up: “we LOVE it, but we want a single engine that can automate, orchestrate and provide self-service for our application owners (mainly DBAs) to do it all from one place”.

So back in December 2014, we announced the first version of AppSync that integrates with XtremIO. I highly encourage you to first read about it here: https://itzikr.wordpress.com/2014/12/19/protecting-your-vms-with-emc-appsync-xtremio-snapshots/

EMC AppSync automates Oracle / MS SQL database copy management to deliver a “self-service” experience: creating, protecting and recovering databases in minutes, versus traditional database lifecycle management with manual or script-driven provisioning, protection and recovery that takes hours to days and requires both a DBA and a storage administrator.

AppSync handles copies in three modes: On-demand, Scheduled and Expiration. The first two create copies, while the third removes database copies created earlier. During the copy process, the replication operation itself is handled by RecoverPoint or by the VMAX or VNX replication tools; the storage array or RecoverPoint appliance must be registered in AppSync to take advantage of it.

With the first option we can take a copy of the database whenever convenient, while with the second we can schedule the operation without any DBA intervention.

The second option also ensures that the performance of the production database is not impacted, since database copies can be created during business off-hours.

The third option purges database copies created earlier and reclaims the capacity they used. Here we can also configure a rotation policy so that the purging happens automatically.
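The rotation (expiration) part of the three modes above can be thought of as simple policy logic: keep only the newest N copies and purge the rest. The sketch below is illustrative, not AppSync’s actual implementation:

```python
from datetime import datetime, timedelta

def copies_to_expire(copies, keep_last):
    """Given a list of (name, created_at) copies, return the ones a
    rotation policy would purge, keeping only the newest `keep_last`."""
    ordered = sorted(copies, key=lambda c: c[1], reverse=True)
    return ordered[keep_last:]

# Seven nightly copies of a production DB, oldest first (names are hypothetical).
base = datetime(2015, 6, 1)
copies = [("prod-copy-%d" % i, base + timedelta(days=i)) for i in range(7)]

# A "keep 5" rotation policy purges the two oldest copies.
expired = copies_to_expire(copies, keep_last=5)
print([name for name, _ in expired])  # ['prod-copy-1', 'prod-copy-0']
```

The same shape of logic covers the scheduled mode too: run the purge on a timer after each new copy is taken.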

..it wasn’t perfect: it lacked the ability to do an XtremIO XVC refresh / restore. Refresh is an extremely popular use case. Imagine the following scenario: you have a production DB running on XtremIO. You take a point-in-time copy of its volume(s) and present them to your test/dev environment, to run analytics or to exercise your newly launched code. You then repeat this process every night, week or month, refreshing test/dev with the latest copy of your production DB volumes. This is what AppSync now supports: XtremIO XVC refresh / restore, plus support for RecoverPoint with XtremIO. It supports the following applications: Oracle, MS SQL, MS Exchange and vSphere.

Now let’s take a more detailed look at the features AppSync 2.2 SP2 supports with XtremIO.

RESTORE SUPPORT FOR XTREMIO 4.0 COPIES

restore

In AppSync 2.2.2, users can restore copies created on XtremIO 4.0. Copies created on XtremIO 3.x or earlier are still not restorable; users need to follow the manual steps in order to restore those copies. If the user upgrades the array from XtremIO 3.x or earlier to XtremIO 4.0, the copies created on 3.x or earlier become restorable. As part of a restore, the XtremIO-created snapshots are kept under a new tag ( /Volumes/AppSyncSnapshots/RestoredSnapShots ) so that the admin can take appropriate action on them.


REFRESH SUPPORT FOR XTREMIO 4.0 COPIES

This is the crown jewel of the AppSync product, which now includes support for XtremIO. The “refresh” operation has many use cases; let’s cover some of them.

refresh

The most common one is refreshing your test/dev DBs with the latest copy of your production environment at a set interval: once a night, once a week, and so on. It can also refresh only some of your test/dev copies. The point is that now, for the first time, you can work on your test/dev environment without touching your production instance, all with the click of a button.
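As a rough illustration of automating such a refresh, the sketch below builds the request a script might send. The endpoint path and field names are my own assumptions for illustration, not the documented AppSync REST API:

```python
import json

def build_refresh_request(copy_id, target_hosts):
    """Build the body for a (hypothetical) REST call that refreshes a
    test/dev copy from the latest production point in time. The path
    and field names here are illustrative assumptions only."""
    return {
        "method": "PUT",
        "path": "/appsync/instances/copy::%s/action/refresh" % copy_id,
        "body": {"mountHosts": target_hosts},
    }

# A nightly cron job could build and send one of these per test/dev copy.
req = build_refresh_request("prod-erp-db", ["testdev-host-01"])
print(json.dumps(req, indent=2))
```

A scheduler (cron, Task Scheduler) firing this once a night is all it takes to keep test/dev current.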


XtremIO makes the DBA’s production maintenance easier:

  • Run DBCC against a snapshot of your database and move the CPU usage and locking off your production servers.
  • Verify your backup chain by restoring full, differential and log backups without using up extra space.
  • Make full backups from a snapshot of the database.
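For example, the DBCC run in the first bullet can be pointed at a mounted snapshot copy instead of production. The sketch below just generates the T-SQL a DBA might execute; the database name is hypothetical:

```python
def dbcc_on_snapshot(snapshot_db):
    """Return the T-SQL a DBA might run against a mounted XtremIO
    snapshot copy of a database, moving DBCC's CPU and locking cost
    off the production server."""
    return "DBCC CHECKDB ([%s]) WITH NO_INFOMSGS, ALL_ERRORMSGS;" % snapshot_db

# Run this against the SQL instance where the snapshot copy is mounted.
stmt = dbcc_on_snapshot("SalesDB_Snap")
print(stmt)
```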

reporting

Quoting some customers of ours from EMC World:

“Our busiest season is in August, when students come back to school in waves and we learn which of the features we deployed over the summer don’t actually scale. This past August, we had the XtremIO in place, so our I/O speeds were great – so great that CPU on the production SQL cluster became our bottleneck.

We sat in the high 90s for weeks while developers scrambled to refactor queries.

To alleviate some of the CPU pressure, we leveraged XtremIO snapshots and offloaded some heavy real-time reports to another SQL box.

Another way to scale out is to deploy SQL Server AlwaysOn, and enjoy a transactionally consistent copy (or three) of your database without any space penalty thanks to inline deduplication. We’re deploying AlwaysOn this month with a copy of the database for each of our intensive workloads. Before XtremIO, my IT team would just have laughed at me if I’d asked for that many spindles. Now I can do it without buying anything.

Don’t have SQL Server Enterprise? Snap a copy of the database to another production server and point read-only workloads that can use point-in-time data at it. Refresh them every hour, every night or whatever makes sense for your application.”


Upgrading SQL Server is much safer with XtremIO’s snapshots.

  • Protect the database by snapping it prior to a SQL Server upgrade; if the upgrade is a disaster, you can point SQL Server at the snapshot and recover.
  • Test the upgrade on a snap of the database until you get all the scripting right.
  • Run through the upgrade on a snap and get a real sense for how long the upgrade will take.

Snapshots are also great for providing safety and predictability around application updates and deployments.

VOLUME NAME LENGTH IS INCREASED

XtremIO 3.x and earlier had a volume name length limit of 64 characters, so AppSync used to recommend volume names of no more than 40 characters (to leave room for the repurposing use case and snapshots).

XtremIO 4.0 increases the volume name limit to 128 characters, so AppSync now recommends volume names of no more than 104 characters (again, to leave room for the repurposing use case and snapshots).

MULTIPLE XTREMIO MANAGED BY SINGLE XMS


AppSync 2.2.2 supports configurations in which multiple XtremIO arrays are managed under a single XMS.



RecoverPoint SUPPORT FOR XTREMIO 4.0

From AppSync 2.2.2 onwards, users can create RecoverPoint bookmarks with XtremIO 4.0 in the backend. Mixed storage (VNX to XtremIO and vice versa) is also supported, with both static and dynamic mount. Please follow the RecoverPoint guide to configure RecoverPoint consistency groups with XtremIO in the backend. RecoverPoint repurposing is also supported for XtremIO 4.0. The minimum RecoverPoint version for using XtremIO 4.0 in the backend is 4.1.2.

Parallel bookmarks are not supported if any volume in the group set is on an XtremIO array, meaning users can’t create simultaneous bookmarks on two RecoverPoint consistency groups. AppSync therefore does not protect two consistency groups together in the same service plan run. The recommendation is to protect one consistency group per AppSync service plan run when XtremIO 4.0 is in the backend.

For RecoverPoint XtremIO bookmarks, the supported image access mode when mounting RecoverPoint copies is “Physical”.

You can download AppSync from http://support.emc.com

You will have to install AppSync 2.1 and then upgrade to 2.2 SP1 followed by SP2. No external DB is needed; AppSync uses its own DB, so the installation is literally a “next -> next, done” type of experience. The software is free for 90 days, so I highly encourage you to take it for a spin!

Below you can see a demo of AppSync with Oracle DB,

and you can also see a demo of AppSync refreshing your dev/test environments with the latest production copy, running against MS SQL on Microsoft Cluster Services (MSCS)


EMC XtremIO Sessions At VMworld 2015 Europe



 

Wow! This year we at XtremIO have some really good sessions lined up at VMworld. I’m so happy that 3 out of the 4 sessions actually come from the solutions team, which I lead as part of the CTO role. The sessions will go really deep into the weeds, so if you are expecting the usual vendor marketing, look elsewhere.

EUC4879 – Horizon View Storage – Let’s Dive Deep!

Some people say that if you get the storage right in your VDI environment, everything else is easy! In this fun-filled technical workshop, attendees will receive a wealth of knowledge on one of the trickiest technical elements of a Horizon deployment. This session will cover storage architecture, planning, and operations in Horizon. The session will also profile a new, innovative reference architecture from the team at EMC XtremIO.

Wednesday, Oct 14, 5:00 PM – 6:00 PM – Hall 8.0, Room 21

My take on this session:
Michael is in it for the long run; he has invested a lot of time writing the VDI white paper, and with the rest of the team at this session, you know it’s a must if you are planning to do VDI this year. You can see some of the work that led to this session here: https://itzikr.wordpress.com/2015/06/26/the-evolution-of-vdi-part-1/

VAPP4916 – Virtualized Oracle On All-Flash: A Customer’s Perspective on Database Performance and Operations

In the virtualized infrastructure, the new technology wave is all-flash arrays. But today all administrators (virtual, storage, DBA) need to know how changing an essential part of the virtual infrastructure impacts critical applications like Oracle databases. This joint customer and XtremIO presentation acts as a practical guide to using all-flash storage in a virtualized infrastructure. The emphasis will be on the value realized by a customer using all-flash, together with findings from third-party test reports by Principled Technologies. You will learn how all-flash storage is changing performance-intensive applications like virtualized databases.

Thursday, Oct 15, 9:00 AM – 10:00 AM – Hall 8.0, Room 38

My take on this session: Vinay and Sam are the go-to people when it comes to DBs. There is so much going on in the universe of XtremIO and DBs that you’ll be amazed how much you can learn just by attending this session!

VAPP5598 – Advanced SQL Server on vSphere

Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how to build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.

  • Scott Salyer – Director, Enterprise Application Architecture, VMware
  • Thursday, Oct 15, 1:30 PM – 2:30 PM – Hall 8.0, Room 24

    My take on this session: Wanda is a SQL guru; she could spend an entire day just explaining how to properly virtualize MS SQL with XtremIO and vSphere, and the foundation for this session is a white paper she’s working on as well. I also know Scott from VMware, and I even had the pleasure of co-presenting with him on the same topic some years ago. Again, DBs and XtremIO are like peanut butter and chocolate; do not miss it if you care about compression, performance, snapshots, etc.!

VAPP6646-SPO – Best Practices for Running Virtualized Workloads on All-Flash Array

All-flash arrays are taking the storage industry by storm, and many customers are leveraging them to virtualize their data centers. Using EMC XtremIO, we will start by examining why this is happening and what the similarities and differences between AFA architectures are. We’ll then go deep into specific best practices for the following virtualized use cases: 1. Databases: can they benefit from being virtualized on an AFA? 2. EUC: how VDI 1.0 started, what VDI 3.0 means and how it applies to an AFA. 3. Generic workloads being migrated to AFAs.

My take on this session: what can I say, this one is mine. But seriously, this session has 2 parts. The first goes really deep into the different types of AFA architectures, so if you are considering evaluating or buying one, I highly encourage you to attend. The 2nd part goes into the dirty tricks that come with AFAs and vSphere and, more importantly, how to overcome them; even if you are not an XtremIO customer, you will benefit from it as well.

I hope to see you ALL at VMworld! Itzik


AppSync 2.2 SP2 Fix1 Is Out, A Must For EMC XtremIO Customers


Hi,

We have just released the first hotfix for EMC AppSync 2.2 SP2. This release is really a must for EMC XtremIO customers; it contains many fixes based on the feedback we received, and I’m very happy about that feedback.

As always, go to https://support.emc.com to download the patch; just search for “appsync”.

 

 

 


EMC ProtectPoint, Now Support XtremIO


Wow, this has been high on my radar; one of the most anticipated integration points for XtremIO is almost available!

You wouldn’t drive a racecar without a helmet… and you shouldn’t run XtremIO without data protection. But not just any data protection solution will do: XtremIO requires flash-optimized data protection that can protect workloads at the speed of flash. ProtectPoint is the only solution in the industry that delivers this, and now it supports XtremIO.

EMC ProtectPoint, an industry-first data protection solution that integrates primary storage and industry-leading protection storage, now supports XtremIO, the world’s #1 all-flash array. It:

  • Enables up to 20x faster backup to meet application protection SLAs
  • Eliminates the backup impact on application servers
  • Reduces cost and complexity by eliminating excess infrastructure, including a traditional backup application

When buying an all flash array, performance is nonnegotiable and applications on your XtremIO contain some of your business’ most valuable data. So why leave things to chance when it comes to protecting that data? Ensure data protection doesn’t become a bottleneck by leveraging EMC’s flash optimized data protection solution – ProtectPoint for XtremIO.

The secret sauce of ProtectPoint is the combination of best of breed technology – that only EMC could bring together. ProtectPoint combines three components: application integration to ensure an application consistent backup, a change block tracking/data mover engine to ensure minimal infrastructure required and minimal impact on the network, storage and servers and protection storage to ensure cost effective retention and reliable recovery. The exact components may vary with different primary storage arrays, but ProtectPoint still brings the same value no matter what.

By leveraging best of breed technology from across the EMC family, we can:

  • Allow you to leverage common data protection infrastructure for multiple use cases (i.e. backup and replication)
  • Provide seamless integration between different parts of the EMC data center stack and scale easily with your environment
  • Drive faster innovation by developing ProtectPoint across multiple EMC teams (resulting in a broader ecosystem more quickly than engineering for each storage array and application separately); this broad investment across EMC exemplifies the strategic importance of ProtectPoint

Now, here’s a quick peek at the technology behind ProtectPoint with XtremIO.

  1. First is the agent installed on the application server, which could be the application agent (for Oracle, SAP or DB2) or the generic file system agent (for other applications). These agents provide an application-consistent backup without impacting the host, are designed to empower application owners to control their own backup and recovery, and eliminate the requirement for a traditional backup application.
  2. Next, the engine is powered by XtremIO and RecoverPoint technology (via a RecoverPoint appliance). This ensures that there is no impact to the production workloads on the XtremIO and that only unique blocks are sent to Data Domain– minimizing time and bandwidth required to backup an application.
  3. Finally, the Data Domain system has the capability to ingest blocks directly from the XtremIO, deduplicate and store them. But most importantly, even though the system only ingests unique blocks, it stores each backup as a FULL native image copy every time.

At a high level, this is how ProtectPoint works to backup directly from XtremIO to Data Domain. After an initial configuration by the storage administrator, which makes a point in time copy of the data to be protected and seeds the initial data on the Data Domain system, the environment is ready for its first full backup via ProtectPoint.

First… to trigger a ProtectPoint backup, an application owner, like an Oracle DBA, triggers a backup at an application consistent checkpoint. This pauses the application momentarily simply to mark the point in time for that backup.

Then… only unique blocks are sent from XtremIO directly to Data Domain. This is enabled by leveraging industry leading XtremIO virtual copy and RecoverPoint technology as the change block tracking engine, which efficiently tracks changes since the last backup.

Finally… the Data Domain system will ingest the unique data and use it to create an independent full backup in native format, which enables greatly simplified recovery.

With ProtectPoint, you do a full backup every time, but only send unique blocks, so the full comes at the cost of an incremental.
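The “full at the cost of an incremental” idea can be sketched in a few lines: only changed blocks travel over the wire, yet the target can still synthesize a complete image. This is an illustration of the concept, not ProtectPoint’s actual implementation:

```python
def changed_blocks(prev_snapshot, curr_snapshot):
    """Return only the blocks that changed between two point-in-time
    copies, i.e. the data a change-block-tracking backup actually
    ships. Snapshots are modeled as {block_address: data} maps."""
    return {addr: data for addr, data in curr_snapshot.items()
            if prev_snapshot.get(addr) != data}

def synthesize_full(prev_full, delta):
    """The target ingests only the delta but stores an independent
    full image: the previous full overlaid with the changed blocks."""
    full = dict(prev_full)
    full.update(delta)
    return full

prev = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
curr = {0: b"AAAA", 1: b"BB22", 2: b"CCCC", 3: b"DDDD"}

delta = changed_blocks(prev, curr)           # only blocks 1 and 3 move
print(sorted(delta))                         # [1, 3]
print(synthesize_full(prev, delta) == curr)  # True
```

Two of four blocks travel, but the stored backup is a complete, independently restorable image.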

In addition, by leveraging RecoverPoint technology, ProtectPoint for XtremIO also offers a new type of backup, triggered by the infrastructure (RPA) to back up data without an application owner triggering it. Let’s take a look at how that works:

First… the infrastructure (specifically the RecoverPoint appliance) triggers a backup based on a previously defined policy.

Then… only unique blocks are sent from XtremIO directly to Data Domain.

Finally… the Data Domain system will ingest and store the unique data and use it to create a full backup.

[Note: even though she did not trigger it, the app owner can see these backups]

Let’s take a look at how a recovery works with ProtectPoint for XtremIO – first we’ll review a traditional full recovery, which would be recovering an entire LUN.

First, the app owner will trigger the recovery …

Then, the app agent reads the backup image from the Data Domain system and pulls the appropriate data back over the network.

At which point, the RecoverPoint will replace the appropriate production LUN with the recovered copy.

Next let’s take a look at how a full recovery works if it’s done via the storage admin leveraging RecoverPoint, which enables change block tracking on recovery.

First, the storage admin will trigger the recovery …

Then, the RecoverPoint appliance reads the backup image from the Data Domain system and then pulls only the required data back over the network.

At which point, the RecoverPoint will replace the appropriate production LUN with the recovered copy.

This enables a full recovery at the speed of a granular recovery; it is very similar to the Avamar capability with VMware, which supports change block tracking on recovery.

In addition, you can also do a granular recovery with ProtectPoint via instant access. For an Oracle database this might be recovering a specific database, table or record – as opposed to the entire LUN.

First, the app owner or DBA triggers the recovery

Then, the application server connects to the backup image on the Data Domain – but the image doesn’t move off the system. This gives the DBA instant access to their protected data, although it is still on the Data Domain.

At this point, the app owner can recover the specific object she desires to the production database.

With ProtectPoint XtremIO integration, we get a few additional benefits by using RecoverPoint technology as the engine:

  • By leveraging a shared RecoverPoint appliance, ProtectPoint is a simple extension of our best-in-class replication offering, and you can leverage your existing RecoverPoint appliances for ProtectPoint backups.
  • All replication and backup can be centrally managed via the RecoverPoint GUI for centralized, storage-consistent data protection. Alternatively, to empower app owners, day-to-day backup and recovery can be controlled by the DBA via their native utilities (i.e. RMAN).
  • By integrating with XtremIO virtual copies and leveraging a RecoverPoint appliance to do the processing, ProtectPoint ensures that the backup does not impact production workload performance on the XtremIO.

ProtectPoint offers integration at two layers: either native integration at the application layer, or integration at the “file system” layer for applications that aren’t supported with a native application agent. ProtectPoint currently natively integrates with three major applications and databases: Oracle (via RMAN), SAP (via BR*Tools) and IBM DB2 (via Advanced Copy Services).

Here’s a good whiteboard discussion:

And another overview, with a demo at the end:

