Hi,
It took us a while, but here it is: our most comprehensive white paper, detailing the integration and best practices for XtremIO and Microsoft Hyper-V 2016.
https://www.emc.com/collateral/white-papers/h15933-microsoft-hyperv-xtremio-wp.pdf

Hi,
VMware has just released a new MS SQL best practices white paper.
The paper details the results of running multiple SQL Servers on XtremIO.
http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf
The full XtremIO-focused paper can be downloaded from here:
http://www.emc.com/collateral/white-paper/h14583-wp-best-practice-sql-server-xtremio.pdf
Hi,
We have just released a minor update to AppSync 3.1. Many bugs have been fixed in this version, so I highly encourage you to upgrade; the AppSync upgrade is as easy as downloading the updated files and clicking Next -> Next.
Table 1: Fixed Issues for 3.1.0.2

CQ | Fix Description | Version Fixed |
108119 | Resolved an issue related to the license | 3.1.0.2 |
109173 | Provides a server setting | 3.1.0.2 |
110004 | Resolved an issue where Copy BCT file failed | 3.1.0.2 |
110318 | Resolved an issue where mount failed for all … you selected Native Array Volume as the … | 3.1.0.2 |
110441 | AppSync now supports RHEL 7 cluster. | 3.1.0.2 |
110446 | Resolved an issue where the license status was | 3.1.0.2 |
110500 | Supports VCS 7 Cluster. | 3.1.0.2 |
110547 | Improves handling of XtremIO connections. | 3.1.0.2 |
110570 | Provides a server setting to enable the | 3.1.0.2 |
110582 | Resolved an issue where Enable/Disable of a | 3.1.0.2 |
110603 | You can now select the Mount on Standalone | 3.1.0.2 |
110605 | Resolved an issue where a virtual machine | 3.1.0.2 |
110668 | Resolved an issue related to Oracle database | 3.1.0.2 |
110701 | You can now perform datastore mount without | 3.1.0.2 |
110708 | Resolved an issue related to mount and | 3.1.0.2 |
110797 | Resolved an issue where protection failed for | 3.1.0.2 |
110799 | Resolved an issue where mount failed in the | 3.1.0.2 |
110987 | Resolved an issue related to datastore restore | 3.1.0.2 |
111022 | Resolved an issue related to the mounted | 3.1.0.2 |
111004 | Resolved an issue related to Oracle database | 3.1.0.2 |
111072 | Resolved an issue related to unmounting a | 3.1.0.2 |
111074 | Resolved an issue related to the patch version | 3.1.0.2 |
To download the update, just click the link below.
Yes, it’s that time of the year again: we are getting very close to DellEMC World 2017, and this year will be special in so many ways that I can’t really wait!
Below you can see the XtremIO sessions, their abstracts and timing, so you can plan accordingly. Do NOT miss them. As always, the timing and content may change, so check the URL below for the most up-to-date information.
Session Name | Abstract | Scheduling Options |
XtremIO: Next Generation XtremIO Deep Dive – Architecture, Features & Innovations | | Monday, May 8, 4:30 PM – 5:30 PM |
Top 10 Tips & Tricks To Rock Your XtremIO World | Learn the top 10 things to get the most out of your XtremIO system. Our panel of technical experts will share best practices of configuration, operation and management, and will provide insights into your specific questions during a live ‘Ask-The-Expert’ session | Tuesday, May 9, 1:30 PM – 2:30 PM |
XtremIO: Virtualization Best Practices With XtremIO | XtremIO provides extreme performance and efficiency for virtualized environments and pioneered integrated Copy Data Management (iCDM). Join us as we detail what the new capabilities of next generation XtremIO will mean for virtualized enterprise solutions hosted on this platform. Various deployment models will be examined and best practices for both VMware vSphere and Microsoft Hyper-V environments will be discussed. In addition, integration options for enhanced monitoring, orchestration & automation when using management platforms such as VMware vRealize and Microsoft System Center with XtremIO will be highlighted. This is a complete overview for those looking to better understand what XtremIO can do for their business, or alternatively for those interested in learning how customers are leveraging XtremIO today to efficiently run their virtualized workloads with stringent SLAs | Tuesday, May 9, 12:00 PM – 1:00 PM Wednesday, May 10, 3:00 PM – 4:00 PM |
XtremIO: Empowering MS SQL & Oracle DBAs With XtremIO | Microsoft SQL and Oracle DBAs can benefit greatly from the unique Integrated Copy Data Management (iCDM) capabilities of XtremIO. Use cases will be explored in a series of demos, highlighting XtremIO’s application integrations and workflows. Learn how to enable your DBAs to deploy multiple terabyte AlwaysOn databases replicas in <5 minutes without massive network traffic. Enable self-service for rapid and nearly limitless copies of databases. Watch how a PowerShell script can take XtremIO Virtual Copies, remove sensitive data and make the database available for repurposing in a test/dev environment. Empower your DBAs today with an updated look at best practices for SQL and Oracle on XtremIO. | Tuesday, May 9, 12:00 PM – 1:00 PM Thursday, May 11, 11:30 AM – 12:30 PM |
XtremIO: A Deep Dive In Integrated Copy Data Management | XtremIO’s unique integrated copy data management offering combines performance & scalability with reduced operational risk through automation and self-services. Maximize the value of your XtremIO deployment by simplifying application workflows and reducing development timelines. In this session, we will discuss how application admins and DBAs can repurpose different copies of Oracle, SAP and SQL Server DBs instantaneously and become more agile. | Tuesday, May 9, 3:00 PM – 4:00 PM Wednesday, May 10, 8:30 AM – 9:30 AM |
Introduction To Next Generation XtremIO All Flash Array (Hands-on Lab) | Welcome to the next generation XtremIO platform! In this lab, you can explore the new HTML5-based GUI. You will also be able to explore the workflows of the XtremIO data services being released. | |
XtremIO: Advanced Data Replication With Next Generation XtremIO Platform | Come and learn about Dell EMC’s replication innovations for XtremIO! In this session, we will first provide an overview of replication options currently available with XtremIO. Then we will introduce the industry unique metadata-aware, highly efficient native replication for the next generation XtremIO platform. Details of the replication architecture — built on inline data reduction and XtremIO in-memory snapshots to reduce data transfers between primary and secondary sites — will be presented. We will also discuss different key features required to manage DR requirements for business critical applications in a customer-centric model. | Wednesday, May 10, 12:00 PM – 1:00 PM Thursday, May 11, 8:30 AM – 9:30 AM |
XtremIO: Simplicity & Extensibility Of The XtremIO Management System – A Guided Tour | A simplistic management model is one of the corners stones of XtremIO’s differentiation. The next generation XtremIO redefines storage management and the user experience. In this session, we will go over the 1-2-3 provisioning model built with new HTML5 web UI. In addition, we will discuss the simple user guided wizards, new auto-balanced architecture, single click real-time reports, new data savings tracking, data protection, and scalability. | Tuesday, May 9, 12:00 PM – 1:00 PM Wednesday, May 10, 1:30 PM – 2:30 PM |
XtremIO: End-User Computing – Updates & Best Practices | Hundreds of customers are running millions of virtual desktops in production today with XtremIO. The XtremIO platform offers opportunities to start even smaller than ever before, scale more granularly. In this session, we will present a holistic overview of an XtremIO enabled VMware End-User Computing (EUC) environment. You will also learn about XtremIO sizing and best practices for VMware EUC. An end-to-end EUC datacenter solution leveraging Dell EMC servers, networking, and storage will be presented and examined in detail. | Wednesday, May 10, 1:30 PM – 2:30 PM Thursday, May 11, 11:30 AM – 12:30 PM |
XtremIO: Introducing The Next Generation XtremIO Platform | Come and learn about the next generation XtremIO All-Flash array! This session will review our learnings from the largest All-Flash customer base and provide an update on the latest XtremIO capabilities and how they will help transform many workloads to deliver even greater agility, tremendous TCO, and business benefits. We will discuss the use cases and benefits for workloads focusing on VDI and business apps leveraging integrated Copy Data Management. | Monday, May 8, 12:00 PM – 1:00 PM Wednesday, May 10, 3:00 PM – 4:00 PM |
Transforming Healthcare IT With Dell EMC All-Flash Storage | In this session, learn about practical IT solutions from Dell EMC that are accelerating insights and transforming patient care shaping the future of health. We will discuss consolidation and simplification of your Electronic Health Records (EHR) infrastructure with integrated Copy Data Management (iCDM), transformative VDI solutions, highest level of data protection and availability, and performance at cloud scale leveraging VMAX All-Flash and XtremIO. Customer perspectives of their healthcare IT journey with All-Flash solutions will be another cornerstone of this session. | Monday, May 8, 1:30 PM – 2:30 PM |
World Wide Technology: The Evolution Of Flash | Key decision points between the various flash arrays, based on our flash lab findings and customer use cases for Unity, XtremIO, All-Flash VMAX | Monday, May 8, 1:30 PM – 2:30 PM |
Enterprise Copy Data Management: Primary & Protection Copy Management Best Practices | See how eCDM discovers data copies on VMAX, XtremIO, and Data Domain. Learn how to create Protection Plans within eCDM to monitor service levels with storage groups, consistency groups, snapshots, and ProtectPoint copies. This session demonstrates how eCDM helps manage and monitor the creation and deletion of copies to ensure compliance of SLOs. A customer will speak to their experiences. | Monday, May 8, 1:30 PM – 2:30 PM |
Data Migration: Cheaper, Faster, Easier…& Now At Scale! | It is your most precious possession: your data. And usually one of the last and most frightening steps in modernizing your business critical systems is migrating the data. It doesn’t have to be so scary. It doesn’t have to be so hard. This session demonstrates the migration tools we’ve built into VMAX, VNX and XtremIO, and share the software, resources, and services that will make your data migrations dreamlike instead of scream like. | Tuesday, May 9, 8:30 AM – 9:30 AM Wednesday, May 10, 3:00 PM – 4:00 PM |
A Bonus Session!
RecoverPoint & RecoverPoint For Virtual Machines: What’s New For 2017 | So much has happened in the past year with RecoverPoint and RecoverPoint for Virtual Machines. See how fast deployment can be done and how seamless operations are. Understand how to achieve the most efficient data protection for your VMware environment. A customer will speak to their experiences. | Monday, May 8, 4:30 PM – 5:30 PM |
My session is the virtualization best practices one; can’t wait to see you all there!
Hi,
I’m super excited to write this series of blog posts about the new XtremIO generation we have just announced, XtremIO X2. I won’t cover every aspect of the new product but rather try to give you my CTO’ish overview. If there is one word to describe it, it would be “Maturity”. This project was a very ambitious one: we already had a first hit record (XtremIO X1) and a huge install base, the largest in the world ( https://www.emc.com/collateral/analyst-reports/emc-xtremio-ranks-1-research-proves-it.pdf ), and so, while we had to take care of our beloved install base, we also started to develop what would eventually be called “X2”.
Below, you can see a part of our huge lab in Durham. Walking through the lab gives you a “Matrix” kind of feeling. The site in Durham is only one lab out of many we have, and that is something we could only achieve with the budget of EMC and, later on, the DellEMC R&D budget.
When we started building XtremIO in 2009 (god, how time goes by!), we could choose between the easy way and the hard way.
Easy meant developing a Scale-Up architecture: a typical dual-controller architecture that can only scale with capacity, which means that if you need more performance than the array’s storage controllers can provide, you need to buy a new storage frame that won’t share the capacity or the performance of the array you already have. The benefits of this type of architecture are in favor of the R&D team only, since it is a much shorter development effort; however, for the customer who will need the performance one day? Not so much.
Hard meant developing a Scale-Out architecture, one where you have more than a dual-controller architecture and can squeeze every bit of performance from all the available storage controllers in the system. We of course chose the hard way. I wrote many posts on the topic, including XtremIO 3.0 – Why Scale Out matters?,
A Tale of Two Architectures — Engineering for the 99.999% versus the 0.001%
and if it wasn’t obvious, we chose the hard way for us, which meant the RIGHT way for you, the customer. No one dares to ask “why do you need 1M IOPS” or “why do you need sub-ms latency” anymore.
Product development is a never-ending cycle
While XtremIO boomed in the marketplace, we knew there were areas to improve, mainly around density, additional data services and even better stability, as the array hosts the most mission-critical applications out there.
Enter X2
Years of R&D were invested in this platform, and we will cover the highlights in the upcoming posts. Below you can see my solutions pod, hosting single X1 arrays, single X2 arrays and Dell FX2 blades.
The People or, the MOST important thing your company has
A huge task like X2 is an astounding project that can only be achieved with dedicated engineers, countless architectural meetings and lots of lines of code; it’s like a family with many parents. I can’t tell you how many times I emailed people in the middle of the night after finding some bugs during my testing, and the fix was waiting for me in the morning. Guys and girls, I salute you!
Here’s the index for the following X2 Posts:
DellEMC XtremIO X2 – Part 2, The Hardware.
DellEMC XtremIO X2, Part 3, Performance galore or, “you built a time machine out of a Delorean??”
DellEMC XtremIO Integrated Copy Data Management (iCDM) Or, Not All Snapshots Are Born (or die) Equal
DellEMC XtremIO X2/X1 Management, Part 1, The CAS Architecture = Simplicity?
DellEMC XtremIO X2/X1 Management, Part 2, Troubleshooting.
DellEMC XtremIO X2/X1 Management, Part 3, Simplicity in the provisioning / repurposing workflows.
DellEMC XtremIO X2/X1 Management, Part 4, APIs and the Eco system integration.
DellEMC XtremIO X2 Tech Preview #1 – Quality of Service (QoS)
XtremIO Content Addressable Storage (CAS)
How does that simplify Storage Management?
You might be asking yourself, what does the XtremIO Content Addressable Storage (CAS) architecture have to do with Storage Management Simplicity?
Well, let’s start with a quick recap of the XtremIO Content Addressable Storage (CAS) architecture. Then I will explain why this architecture, besides providing consistent performance, inline data services, etc., also simplifies the daily tasks of the Storage Admin.
Back to the basics: Content Addressable Storage (CAS) architecture: What is it?
The classic metadata structure of traditional storage systems directly links the Logical Address of a block to its Physical Location. In this metadata structure, every logical block written has a physical block linked directly to it. In addition, as most traditional storage systems were architected for a spinning-disk storage medium optimized for sequential writes, the logical address affects the physical location where the data is stored. This can lead to an unbalanced storage array that suffers from hot spots, as specific address space ranges may experience more IOPS than others.
XtremIO took a different approach to managing its metadata structures. XtremIO was designed from day one for a flash/random-access medium. Deduplication and Copy Data Management operations were designed into an always-on, inline architecture. The result is a metadata structure that completely decouples the Logical address space from the Physical one. This is done by leveraging a dual metadata structure:
Stage 1: Logical address to Hash fingerprint
Stage 2: Hash fingerprint to Physical address
If an identical data block is written twice, resulting in deduplication, then only the logical-address-to-hash mapping needs to be registered. The physical location of the content hash value remains the same; only the reference count of this hash value is increased by one, to indicate that more than one logical address now references the physical location of that content hash value.
Stage 1: Incoming data is hashed. Logical address is mapped to Content block hash.
Picture #1: All data blocks are hashed/ fingerprinted
Stage 2: The hash address space is distributed evenly to all participating Storage Controller resources. This results in an auto-balanced system. So if, for example, a specific database table/volume is hammered, this will not result in any hot spots, as the data of ALL volumes is evenly distributed to ALL SSDs.
Note: XtremIO supports a Scale-out architecture. Therefore the total hash range will always be evenly distributed between all of the Storage Controllers of the XtremIO cluster.
Picture #2: Hash range (color-coded below) distributed evenly between all Storage Controllers
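To make the dual metadata structure concrete, here is a minimal Python sketch of the two-stage mapping (logical address to fingerprint, fingerprint to physical location) with deduplication via reference counting. The structure and names are illustrative assumptions for this post, not the actual XtremIO implementation.

```python
import hashlib

# Stage 1: per-volume mapping of logical address -> content fingerprint
a2h = {}   # {(volume, lba): fingerprint}
# Stage 2: global mapping of fingerprint -> [physical_offset, reference_count]
h2p = {}
next_free_offset = 0

def write_block(volume, lba, data):
    """Write a block; identical content is deduplicated via the fingerprint table."""
    global next_free_offset
    fingerprint = hashlib.sha1(data).hexdigest()
    if fingerprint in h2p:
        # Content already stored: only the reference count is bumped.
        h2p[fingerprint][1] += 1
    else:
        # New content: place it anywhere in the evenly distributed hash space.
        h2p[fingerprint] = [next_free_offset, 1]
        next_free_offset += len(data)
    # The logical address records only the fingerprint, never a physical location.
    a2h[(volume, lba)] = fingerprint

def read_block(volume, lba):
    fingerprint = a2h[(volume, lba)]
    physical_offset, _ = h2p[fingerprint]
    return physical_offset   # a real array would read the SSDs at this location

# Two volumes writing identical content share a single physical block.
write_block("vol-A", 0, b"hello world")
write_block("vol-B", 7, b"hello world")
assert h2p[a2h[("vol-A", 0)]][1] == 2
```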
Okay, so now the answer to the expected question: How does the unique content addressable storage architecture simplify management?
In XtremIO, Pool Management is not required. All volumes are serviced by all Storage Controller resources, leveraging a single Storage Pool.
So how did we simplify the life of the Storage Admin? No Pool Management required.
XtremIO was designed to be auto-balanced. Therefore there is no need to optimize the system to support hot spots/ data pounding of specific logical address ranges.
So how did we simplify the life of the Storage Admin? No need to monitor and troubleshoot hot spots in the storage system.
The metadata structures of the XtremIO architecture are VERY EFFICIENT. Copies are super-efficient metadata constructs. Metadata between the Source and Copy is shared. There is no copying of the Meta data structures. Therefore, there is no need to allocate and increase memory resources when additional Copies are made.
So how did we simplify the life of the Storage Admin? There is no need to tune the system and allocate additional memory resources when Copies are made. There is no need to increase Memory resources as the number of Copies is increased.
All Copy operations are always performed in memory. So if an X-COPY operation, a Protection Copy (backup) or a Repurpose Copy (clone) is created, there is no performance impact, as none of these operations results in actual data movement; each is a super-efficient metadata operation.
So how did we simplify the life of the Storage Admin? The Storage Admin can create as many Copies as demanded by the organization. The number of Copies is not unlimited, but the XtremIO system supports thousands of mounted and working Copies that can be leveraged for Test, Dev, Reporting and other use cases.
Bottom line: The XtremIO inherent Content Addressable Storage (CAS) architecture offloads many of the ongoing and sometimes troublesome monitoring and troubleshooting operations required of Storage Admins of traditional storage architectures to optimize an All Flash Array.
And this is before we talk about the awesome management features offered by the XtremIO Management System and the new WebUI… (see Part #2 of the XtremIO Management Simplicity blog post series).
Innovation in Storage Troubleshooting with the new WebUI application
The XtremIO Management Java application was released together with the first release of the XtremIO All Flash Array. Over the years, we have received much positive feedback on the XtremIO Java management application. However, the number one complaint always centered on the fact that it was a Java application that required a local Java environment, and not a browser-based, “nothing to install” application.
As our customers really liked the XtremIO Management Java application, we could have taken the approach of simply re-developing/copying our Java application using web technologies. However, this was against the DNA of the XtremIO Engineering team – a team that always seeks to innovate.
So the question arose, how do you innovate Storage Monitoring & Provisioning flows?
This blog post will focus on some of the innovative Storage Monitoring & Troubleshooting capabilities with the new WebUI:
XtremIO collects real-time and historical data (up to 2 years) for a rich set of statistics. These statistics are collected both at the Cluster/Array level and at the object level (Volumes, Initiator Groups, Targets, etc.). This data collection has been available from day one. Creating reports to view these statistics over time is a basic reporting capability offered by many storage management applications.
However, the XtremIO WebUI delivers additional intelligent reports that are just one click away.
Here are some examples:
Block Distribution Graph
This graph displays both the current and historical block distribution. This information can be used for many use cases. For example, an Admin sees that the average latency of the cluster has increased. One of the reasons may be that a new application with a relatively large block size has just been deployed. This new application is the cause of the latency average increase. This can be easily explained with the Block distribution graph.
Once the “culprit” block size is identified, again just one click away, the over-time performance data of that specific block size can be displayed. In the graph below you can see the latency histogram, and the BW, IOPS and latency of that block size over time.
Weekly Peaks widget
You are probably asking: what is the Weekly Peaks widget?
This heat map widget leverages a color-coding scheme to show the admin the peak hours: on which day, and at which hour of the day, does the array experience the heaviest loads?
This graph presents the average IOPS for the past 8 weeks, per hour of the week. As can be seen from the darker blue areas, peak hours are experienced during working-day morning hours (08:00 – 11:00) and during weekend nights (12:00 – 06:00).
The Weekly Patterns widget also has a “Change” option. When clicking this option, the widget displays the up/down trend based on previously collected data.
We can see from the above that there is a slowdown in traffic on Friday afternoons.
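Conceptually, the Weekly Peaks widget is an aggregation of IOPS samples into a day-of-week by hour-of-day matrix, averaged over the last 8 weeks. The sketch below shows one way such a heat map could be computed from raw samples; it illustrates the idea only and is not the WebUI's actual code.

```python
from collections import defaultdict
from datetime import datetime

def weekly_peaks(samples):
    """samples: iterable of (timestamp, iops) pairs from the last 8 weeks.
    Returns {(weekday, hour): average IOPS} - the data behind the heat map."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ts, iops in samples:
        bucket = (ts.weekday(), ts.hour)   # 0=Monday .. 6=Sunday, hour 0..23
        sums[bucket] += iops
        counts[bucket] += 1
    return {bucket: sums[bucket] / counts[bucket] for bucket in sums}

# Example: two Monday-09:00 samples average to 150,000 IOPS in that cell.
heat = weekly_peaks([
    (datetime(2017, 5, 1, 9, 0), 100_000),
    (datetime(2017, 5, 8, 9, 0), 200_000),
])
print(heat[(0, 9)])   # 150000.0
```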
Latency report at cluster and object level
Here is the Cluster latency report. As you can see it reports on Latency in multiple dimensions: distribution to latency buckets, in relation to IOPS, and in relation to Block Size and read/write transactions.
Viewing this report, the Admin can easily see:
We also display Latency reports of leading objects (Volumes, Initiators, Targets), again just a click away.
The WebUI offers innovative navigation techniques not found in any other application. These techniques were introduced in order to better streamline troubleshooting procedures. These capabilities make all relevant data needed to troubleshoot issues – just a click-away.
Every HTML5 browser application leverages hyper-linking to support drill-down capabilities. This is inherent with most web applications. XtremIO supports unique drill down capabilities.
The XtremIO WebUI Dashboard shows the Volumes and Initiator Groups with the highest IOPS/BW or Latency.
Clicking on the View Details button of the Top performers will directly link you to the Overview details of all of the highest performing Volumes.
Clicking on the name of the Volume will directly link you to the Overview details of the Volume:
So you are probably asking yourself: what is a Context Navigator, and how does it relate to troubleshooting? The idea of the Context Selector is to allow the Admin to easily view ALL of the information of a selected object or group of objects that the Admin needs to focus on and troubleshoot. This can be advanced analysis of the top-performing Volumes / Initiator Groups (hosts), or focused monitoring of a specific application and all of its related resources.
How does that work?
There are 2 graphical elements that need to be discussed:
The Context Selector
The Context Selector allows the Admin to easily filter and select an object that requires in depth troubleshooting. Filtering can be done using text strings, tag values, or key object attributes (e.g. Mapped/Unmapped).
Context Navigator
Once object/s are selected, the admin can use the Context Navigator to navigate to ALL of the Reports and Configuration data stored on the object/s. All this information is one click away.
The navigation is performed by clicking on the Context Navigator Icons marked in the side menu bar of the application:
So if a user selects, for example, an Initiator Group (Host) and now wants to see all its Overview data, deep-dive Performance data, Latency histograms and Configuration data, all they need to do is click the Context Navigator buttons in the right-side menu bar to easily navigate between the views.
A lot of thought was invested in modernizing the new XtremIO Management application and leapfrogging both its predecessor Java application and the leading Storage Management applications found in the market today.
I hope that in this blog, I was able to relay the innovative ideas that were driven into the new WebUI. For more insights please see:
Below, you can see a demo of the new XtremIO Web UI
Management interfaces & ecosystem support
In the previous three blogs on the XtremIO Management System we focused on the unique XtremIO architecture that simplifies Storage Management, and the new XtremIO WebUI that brings UX innovation to the Storage Management arena.
In this blog we will focus on the additional interfaces and ecosystem support by XtremIO.
Out-of-the-box interfaces
There are multiple use cases in which the Storage Admin will need to run management queries and operations:
For each one of the above Use Cases, a different Management interface may be needed. To support these Use Cases, XtremIO offers 3 internal out-of-the-box interfaces.
XtremIO REST API
GET, POST, PUT, DELETE – these are the commands used by any REST API interface:
The XtremIO REST API interface supports ALL admin level commands. The REST API is the recommended scripting interface. XtremIO maintains Backward Compatibility (BC) between the different XtremIO versions. This commitment to Backward Compatibility allows a customer to perform an XtremIO version upgrade, without “breaking” any management scripts or integrated applications. XtremIO is committed to Backward Compatibility for 2 consecutive REST API versions. At a certain point, as a customer adopts newer versions of the XtremIO REST API, we may ask the customer to update his REST API scripts.
All the ecosystem products developed by EMC, as well as 3rd-party integrating products, leverage the REST API interface.
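As a hedged illustration of scripting against a REST API, the Python sketch below lists volumes and creates a new one with the requests library. The XMS address, credentials, endpoint paths, payload field names and response shape here are assumptions for illustration only; always verify them against the XtremIO RESTful API guide for your version before using them.

```python
import requests

XMS = "https://xms.example.com"        # XMS address - placeholder
AUTH = ("admin", "password")           # use real credentials or a token in practice
BASE = XMS + "/api/json/v2/types"      # assumed REST base path - check your API guide

session = requests.Session()
session.auth = AUTH
session.verify = False                 # only if the XMS uses a self-signed certificate

# GET - list all volumes (response shape is assumed for this example)
volumes = session.get(BASE + "/volumes").json()
for vol in volumes.get("volumes", []):
    print(vol.get("name"), vol.get("href"))

# POST - create a 100GB volume (field names are illustrative assumptions)
payload = {"vol-name": "demo-vol-01", "vol-size": "100G"}
resp = session.post(BASE + "/volumes", json=payload)
resp.raise_for_status()
print("created:", resp.json())
```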
XtremIO CLI
XMCLI is the CLI interface of XtremIO. This interface has been supported from day one. Storage admins who want to run ad-hoc queries often use the CLI interface. All XtremIO commands are supported by the CLI, including advanced commands used by XtremIO technicians.
With time, we have added advanced features to the XMCLI interface. These include Output Filtering options, Output Properties list definitions, Improved HELP/Usage etc.
XtremIO WebUI
As we have posted dedicated blog posts on the XtremIO WebUI, we won’t expand here more on the WebUI. However, we will emphasize that the WebUI offers the Admin optimal flows to both Monitor & Provision the XtremIO storage array.
For more details, please refer to the following blog posts:
Innovation in Storage Provisioning with the new WebUI application
Innovation in Storage Troubleshooting with the new WebUI application
DellEMC XtremIO X2/X1 Management, Part 3, Simplicity in the provisioning / repurposing workflows
XtremIO Rich Ecosystem Frameworks support
XtremIO offers a rich set of integrated management interfaces. These include:
Being able to integrate with the ecosystems of the thousands of deployed XtremIO arrays is strategic for XtremIO. Incorporating XtremIO management capabilities into existing management platforms streamlines XtremIO monitoring and provisioning into customers’ existing processes.
Summary
XtremIO is a leading AFA in the market. Customers love the XtremIO array for various reasons. One of the reasons is the Simplicity and Extensibility of the XtremIO array. The Simplicity is derived from the underlying XtremIO architecture that automates and makes obsolete a wide range of daily tasks that need to be performed by the storage admin with other arrays.
Part of the XtremIO Engineering DNA is to innovate in all areas. This behavior resulted in a modern and innovative WebUI with differentiated features. We believe that these unique capabilities will add a lot of value to the XtremIO storage admins.
Many integrating products and ecosystems integrate with XtremIO leveraging the XtremIO REST API. XtremIO supports all administrative operations in multiple interfaces: CLI, REST, and GUI. In addition, XtremIO itself develops and supports integrations with a wide range of 3rd-party platforms.
Below, you can see a demo of the new XtremIO Web UI
Innovation in Storage Provisioning with the new WebUI application
In the previous blog, Innovation in Storage Troubleshooting with the new WebUI application, we reviewed how the innovations of the new XtremIO WebUI add simplicity and intelligent capabilities to the Storage Admin’s troubleshooting processes.
In this blog, we will focus on the simplicity and flexibility of the WebUI in supporting the leading Provisioning Flows.
There are multiple provisioning flows that need to be supported. Here is the list of the most popular flows identified by our customer base:
Deploying a new Application:
Note: This is optional as not all Applications require Consistency Groups
Note: Sometimes, for convenience, the Admin would like to first create the Initiator Groups and then the Volumes.
Increasing Volume Capacity:
Increasing Host Compute resources:
Create Repurposing Copy
As can be seen, there is a varied set of provisioning flows that needs to be supported.
The WebUI Next Step Suggestions feature is a provisioning engine that offers the user the most popular next steps.
If we zoom in to the Deploying a new Application provisioning flow, the provisioning engine offers the following next step logics:
Each next step selected by the user defines the next set of suggested operations.
In the case where the user wants to perform: (1) Create Volumes; (2) Add to Consistency Group; (3) Create New Initiator Group; (4) Map – the provisioning engine will navigate the user through the following steps:
How is the provisioning engine implemented in the Web Application’s user interface?
At each decision junction, the application offers the list of the most popular next step suggestions. So for example, if the user created a volume, he will receive the following Next Suggestion screens:
The Next Step Suggestions feature allows a user to navigate in a simple manner between multiple provisioning flows.
Pretty cool, what do you think?
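One simple way to picture the Next Step Suggestions engine is as a mapping from the operation the user just completed to the most popular follow-up operations. The table and names below are a hypothetical, conceptual sketch and not the WebUI's actual implementation.

```python
# Hypothetical popularity-ordered suggestion table: last action -> suggested next steps.
NEXT_STEPS = {
    "create_volume":            ["add_to_consistency_group", "map_to_initiator_group",
                                 "create_initiator_group"],
    "add_to_consistency_group": ["create_initiator_group", "map_to_initiator_group"],
    "create_initiator_group":   ["map_to_initiator_group", "create_volume"],
    "map_to_initiator_group":   ["create_repurpose_copy", "create_volume"],
}

def suggest(last_action):
    """Return the suggested next steps for the action the user just completed."""
    return NEXT_STEPS.get(last_action, [])

# Walking the "deploy a new application" flow described above:
flow = ["create_volume", "add_to_consistency_group",
        "create_initiator_group", "map_to_initiator_group"]
for step in flow:
    print("after", step, "-> suggest", suggest(step))
```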
Sometimes a storage admin needs to create multiple Volumes and multiple Initiator Groups and map them together.
The WebUI offers a very flexible mapping screen.
Once the Volume(s) and Initiator Group(s) are selected by the user, the screen automatically generates the LUN mappings with auto-suggested LUN IDs. The user can then review the mappings, change the LUN IDs if needed, and create the LUN mappings.
The mapping screen is very flexible. It can map Volume(s) or a Consistency Group to one or more Initiator Groups, and the user can also add or remove mappings when needed.
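The auto-suggested LUN IDs can be thought of as "the lowest free ID per Initiator Group". Below is a minimal sketch of that idea with invented helper names; the real mapping screen logic is of course richer than this.

```python
def next_free_lun(used_lun_ids):
    """Lowest LUN ID (starting at 1) not already used on this Initiator Group."""
    lun = 1
    while lun in used_lun_ids:
        lun += 1
    return lun

def suggest_mappings(volumes, initiator_groups, current_mappings):
    """current_mappings: {initiator_group: set of LUN IDs already in use}.
    Returns (volume, initiator_group, suggested LUN ID) tuples the admin can review."""
    suggestions = []
    for ig in initiator_groups:
        used = set(current_mappings.get(ig, set()))
        for vol in volumes:
            lun = next_free_lun(used)
            used.add(lun)
            suggestions.append((vol, ig, lun))
    return suggestions

print(suggest_mappings(["db-data", "db-log"], ["esx-cluster-1"],
                       {"esx-cluster-1": {1, 2}}))
# [('db-data', 'esx-cluster-1', 3), ('db-log', 'esx-cluster-1', 4)]
```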
As we have read from analysts over the past year, 60% of enterprise data is Copy Data. Copies refer to Development, Test, Reporting, Analytics copies of the Production or Golden copy image.
XtremIO has streamlined the repurposing flow by offering 2 dedicated flows:
As creating a Repurpose Copy in XtremIO is a very efficient meta data memory operation, the form above completes immediately.
In order to refresh the Copy, the user selects the Copy and selects the Refresh operation.
The flow visually displays the Refresh operation that is going to be performed. Once the user approves the Refresh action and applies the change, the Copy data is automatically refreshed. Here again, the Copy is an efficient in-memory metadata transaction and is therefore executed immediately.
Which other Storage Management application has the above flexibility and features?
Below, you can see a demo of the new XtremIO Web UI (start at minute 6:13 if you are interested in the provisioning flow).
This post will cover the hardware side of this innovation, so let’s start. There are two models in X2:
Configuration | X-Brick Minimum Raw | X-Brick Maximum Raw | Cluster Size in X-Bricks |
X2-S | 7.2 TB | 28.8 TB | Up to 4 |
X2-R | 34.5 TB | 138.2 TB | Up to 8 |
X2-S utilizes 400GB SSD drives and can scale to 4 X-Bricks, while X2-R utilizes 1.92TB drives (larger drives will be supported later on) and can scale to 8 X-Bricks. Question #1: why are we still utilizing 400GB drives in the X2-S model, and why are we only scaling this model to 4 X-Bricks? The answer is simple: there are customers out there who do NOT need a lot of physical capacity but rather need a lot of LOGICAL capacity that can be gained by high deduplication (think more than 5:1), for example full-clone VDI customers. There is no need for you, the customer, to pay for extra capacity that you will not use, and we want you to have the best TCO out there. Why 4 X-Bricks? Because these customers tend to build VDI PODs, where each “pod” is a fault domain consisting of server, network and storage, and based on our past experience these customers will not use a fully populated 8 X-Brick cluster (aka “The Beast”).
X2-R utilizes 1.92TB drives and can scale to a fully populated 8 X-Brick cluster; this is the model that will be most utilized out there.
Regardless of the model, each DAE can now scale to 72 drives, as opposed to the 25 drives X1 accommodated. This is a diagram of the new DAE:
This is a real-world image of how you pull out the DAE (super easy), and to the right you can see a DAE fully populated with 72 drives.
It’s important to note that you do NOT need to fully populate the DAE on day 1 (or day 2): you start with 18 drives and then scale with packs of 6 drives. For example, say you bought a single X2 X-Brick with 36 drives and you now want to scale UP (a new feature in X2; in X1 you could only scale out, and now you can scale up, out, or both): you simply add another 6 drives, and the procedure is fully non-disruptive.
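In other words, a valid DAE population starts at 18 drives and grows in packs of 6, up to 72. A quick sanity check of that rule, based only on the numbers quoted above:

```python
def valid_dae_populations(minimum=18, pack=6, maximum=72):
    """Drive counts a single X2 DAE can hold: start at 18, grow in 6-drive packs, up to 72."""
    return list(range(minimum, maximum + 1, pack))

print(valid_dae_populations())
# [18, 24, 30, 36, 42, 48, 54, 60, 66, 72]

def is_valid_upgrade(current, target):
    counts = valid_dae_populations()
    return current in counts and target in counts and target > current

print(is_valid_upgrade(36, 42))   # True  - adding one 6-drive pack
print(is_valid_upgrade(36, 40))   # False - not a whole number of 6-drive packs
```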
Below you can see a demo of how the new scaling-up feature will work
PCIe NV-RAM
We have removed the battery backup units (BBUs) and are now using NVRAM PCIe cards residing in each of the storage controllers. These are used, in case of a power outage, to destage the data from the storage controllers’ RAM to the drives. And again, the question is: why?
X1
Due to the amazing inter-node performance, we had to increase the InfiniBand switch performance (the switches that connect all the X-Bricks to act and behave as one entity, i.e., our Scale-Out value proposition) to 2 x FDR 56Gbps, from the 2 x 40Gbps we used in X1. The switches are Mellanox-based and, as in the past, are unmanaged, so it’s not something you, the customer or the partner, need to manage.
X2 Storage Controller
As noted before, the only differences between the X2-S and the X2-R storage controller are the size of the drives it accommodates and the amount of RAM; both now have 2 x 16Gb FC ports (up from the 2 x 8Gb FC in X1) and 2 x 10GbE iSCSI ports.
You can guess why we need a port for replication, right?
Hardware is a commodity, and since Intel no longer keeps up with Moore’s Law, we had to innovate really hard around the software stack, which is a great segue to part 2 of the series.
Finally, we want to be greener and will support ASHRAE A3, which allows the data center to run at up to 40°C and so save energy and money.
Density & Raw Capacity
Going greener with ASHRAE A3 was only a starting point; we continue to be more efficient. Density has improved dramatically!
Compared to our existing X1 arrays, which had 40TB raw in 6U, we will now offer 138TB and later 276TB raw in 4U. That is up to 2.2PB in one standard rack!
Overall Data Center Efficiency
When we convert the density savings into data center efficiency, the improvement per RU (rack unit) starts at 3.8x and will increase to a factor of 7.6x in usable and effective capacities.
A fully populated system will support around 2PB of usable flash capacity, and more than 11PB effective when we take into consideration deduplication, compression and copy savings.
This density addresses the high demand from our customers when we must meet their consolidation requirements.
Price Drop for Single X-Brick
We understand that being efficient is only a first step; you expect us to improve on price as well. So we did it: more than a 50% price reduction.
Two main factors helped us to achieve the goal:
Metadata-aware Architecture matters!
Everyone can claim to use cheaper drives, but not everyone can prove the decision is the right one.
We continuously monitor and analyze the data from our huge install base (more than 7000 systems)!
When we analyze the SSD endurance across our install base, we see that most of our systems don’t even exceed 0.5 writes per day.
Extrapolating the endurance data on the existing X1 SSDs shows that the existing SSDs won’t wear out for the next 19 years.
Agility
When we ask customers what the most important features of an enterprise array are, right after availability and price they mention agility.
Today’s customer requirements are changing so fast that being flexible is more important than ever before.
We’ve listened, we’ve promised it, and we did it!
No more adding in pairs: Granular Scale-Out (1, 2, 3, 4, 5, 6, 7, 8) allows any number of bricks between 1 and 8.
But wait, we didn’t stop there; we’ve finally added Capacity Scale-Up.
No more “add compute even if you need only capacity”: now you can add SSD drives to the same brick, growing in 6-drive increments.
Being agile doesn’t mean you compromise on standards! As you expect from us, scaling is online and without performance impact!
You can also see below a quick video I took with Ronny Vatelmacher, our hardware product manager.
The other big improvement in X2 is data reduction; from what we have seen so far, the improvement is around 25%!
One of the questions that we have asked ourselves when we started developing XtremIO X2 was “How do you keep improving the performance if the storage controller’s CPUs can’t cope with the load?”
For those who aren’t sure what I’m talking about, let’s recap the recent history of Intel CPU development; I suggest you have a quick read here:
Now, this comment is of course a little bit extreme, but like many of the examples I’m using, it gets the message across. So how is this related to storage? SSD drives are getting faster and faster every year; they are not limited by the HDD’s 15K RPM speed anymore. And just like any other performance load balancing, once you resolve a bottleneck in one aspect of your environment, it moves to the next component, which in the case of an AFA is the CPUs inside the storage controllers, nodes, engines, or whatever you want to call them.
XtremIO already uses a scale-out architecture that leverages ALL the available storage controllers in the cluster (up to 8 X-Bricks, 16 Active/Active controllers), but we wanted to take it to the next level. Why? Because you see so many vendors out there starting to preach the NVMe media, but what they don’t tell you is that they don’t really take advantage of the performance that media can offer, because they are bound by their dual-controller Active/Passive architecture. When you release a new storage product praising how fast you now are, but then write comments like “Up to 50% lower latency than XYZ for similar workloads in ‘real-world scenarios’” without really breaking these down by block size, you know there is an issue and the marketing team is trying really hard to cover it.
We took an alternative route; our performance design criteria had to meet these goals:
OK, so where DO you start? You start by analyzing your customers’ workloads. Below you can see a slide we are sharing for the first time, which shows that the majority of our customer install base runs block sizes smaller than 16KB. This is a histogram chart, and it is really the only way to break down a performance workload (or workloads, in our case).
These numbers weren’t really a surprise to me. I began my career as a VMware “guy” and I know that many generic workloads out there run small block sizes, but it was encouraging to see this backed up with real-life data. Add this to the fact that XtremIO is heavily used for
OK, so we knew we wanted to improve our already great performance and make it even better as we scale into the coming years, where performance is something you simply want “more” of. We also knew which block sizes we wanted to put emphasis on.
OK, so we know which block sizes are dominating; what next?
Implementing a Flux Capacitor, or as we call it, the “Write Boost”. For those who haven’t seen “Back To The Future” (really??), the Flux Capacitor is the magic ingredient that makes the commodity DeLorean go back and forth in time.
To better understand the genius behind the application performance boost, let’s first look at our current X1 generation. Note that the full architecture is much more complex, but I did want to provide a very high-level view, because saying we added “magic” doesn’t really cut it.
The IO arrives at the Routing layer, where, after the SCSI stack, the hash is calculated.
In the next step, the Control Layer handles the table and identifies where (at which logical address) the content (hash) is located.
And finally, the Data Layer handles the translation to the physical layer. Before writing to the drives, the data lands in the UDC, from which we return the commit to the host.
The entire operation, from SCSI until the data lands in the UDC, is synchronous, while the destage from the UDC to the drives (Physical Layer) is an asynchronous operation.
Now to the changes in the new X2 architecture.
The process looks the same, with one major change in the Control Layer: we added the Write Boost. The commit to the host is returned right after the data lands in the Boost, which eliminates several hops from the synchronous operation. The result is amazing: the improvement in latency is in several cases around 400%, and it allows XtremIO to address applications with 0.5 ms latency requirements!
The latter step, from the Boost to the Data Layer, is now performed asynchronously. At this stage, we also have another new, bandwidth-oriented optimization: we can now aggregate all small writes going to the same 16K page.
This differentiates it from other industry cache implementations and ensures that we will never run out of Boost space!
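A very rough way to picture the Write Boost is an acknowledgment point that moves earlier in the write path, plus aggregation of small writes into a 16K page before the asynchronous destage. The sketch below is a conceptual illustration only; the real X2 data path is far more involved.

```python
PAGE_SIZE = 16 * 1024          # small writes are aggregated into 16K pages

class WriteBoost:
    """Conceptual model: acknowledge the host once data lands in the boost,
    then destage asynchronously, aggregating small writes per 16K page."""

    def __init__(self):
        self.pages = {}        # page-aligned offset -> bytearray being aggregated

    def write(self, offset, data):
        page_offset = (offset // PAGE_SIZE) * PAGE_SIZE
        page = self.pages.setdefault(page_offset, bytearray(PAGE_SIZE))
        start = offset - page_offset
        page[start:start + len(data)] = data
        return "ack"           # the host gets its commit here, before any destage

    def destage(self, data_layer):
        # Asynchronous, bandwidth-oriented: one destage per aggregated 16K page,
        # no matter how many small host writes landed in it.
        for page_offset, page in self.pages.items():
            data_layer[page_offset] = bytes(page)
        self.pages.clear()

boost, backend = WriteBoost(), {}
for i in range(4):                    # four 4K writes into the same 16K page
    boost.write(i * 4096, b"x" * 4096)
boost.destage(backend)
print(len(backend))                   # 1 -> one aggregated destage, not four
```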
The Results
Below you can see a VDI workload of 3,500 VDI VMs running on a single X2 array at 0.2 ms latency!
On X1, the same load ran at 1.5 ms, and even above that in some cases.
But does VDI resemble generic VSI workloads? Absolutely!
Generic VMs tend to use small block sizes, as can be seen above.
The same rule applies to OLTP workloads. See the difference between X1 and X2 for the SAME OLTP workload: even though OLTP is 8KB-based, the small blocks, used in a very SMALL proportion, consumed a very LARGE chunk of the CPU (Little’s Law).
Note that in the example below I really took X1 outside of its comfort zone; it is more than capable of providing sub-ms latency, but I wanted to compare a very intense workload.
Little’s Law ??
Yes, Little’s Law is a good way to calculate the average wait time for items in a queued system. Everything works as expected when there is no queuing involved, but there is ALWAYS queuing involved. Here’s a real-life example using the ultimate source of truth, YouTube.
As you can see in the above video, the “10 items or less” checkout lane works really well UNTIL there is a queue in the lane. What if the cashier simply told the person who couldn’t find his credit card to step aside with a ticket to pay later, so the rest of the people waiting in line could proceed to checkout? This is what we are doing in X2: every small block is acknowledged in at least 2 storage controllers and eventually finds its way to the drives. The performance gains of doing so are amazing, as can be seen from the real-world example above!
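For reference, Little's Law says L = arrival rate x average time in system. The toy FIFO-queue calculation below (all numbers made up for illustration) shows how a single slow request inflates the average wait of everything behind it, which is exactly the effect the Write Boost avoids for small blocks.

```python
def average_wait(service_times):
    """Single FIFO queue, all items arriving at t=0: each waits for everyone ahead
    of it plus its own service time. Returns the average time in the system."""
    clock, total = 0.0, 0.0
    for s in service_times:
        clock += s          # finish time of this item
        total += clock      # time this item spent in the system
    return total / len(service_times)

fast_only = [1.0] * 10                 # ten small, uniform requests
straggler_last = [1.0] * 9 + [20.0]    # same lane, one "lost credit card" at the end
straggler_first = [20.0] + [1.0] * 9   # the straggler blocks everyone behind it

print(average_wait(fast_only))         # 5.5
print(average_wait(straggler_last))    # 7.4
print(average_wait(straggler_first))   # 24.5 -> everyone waits for the slow one
```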
Below you can see a video I recorded with Zvi Schneider, our chief architect, discussing the performance improvements.
Copy Data Management (CDM) is a basic feature of every modern storage array today… It’s just impossible to deal with exponential data growth and IT industry demands for data protection, immediate restore, parallel access, reporting and more without CDM capabilities… The storage vendors’ solutions in this area look like Siamese twins with different labels: Snapshots, Clones, ShadowImage, FlashCopy, etc. The basic functionality of most of these technologies is similar; the important differences are:
Let’s discuss the conventional block array metadata structure a bit. The general idea is always based on block-level pointers between the Volume Address Space and the Physical Data level, similar to the table below where an LBA offset is linked to a RAID physical address. The table below represents the simplest array metadata structure prior to copy creation.
Implementing Copy Data Management based on this metadata structure requires sharing the physical layer between the Original and Copy volumes. The typical way storage vendors achieve this goal is by duplicating the entire volume metadata structure to allow access to the same physical data for both the original and the copy volume. Your cost: array memory, CPU, power and internal bandwidth consumption.
The XtremIO internal architecture is based on content-addressed data access. Every data block is represented by a unique Fingerprint (aka Hash). The metadata is separated into two main metadata structures:
The A2H (Address to Hash) table contains entries for every written data block per volume. Unwritten data blocks don’t consume any metadata resources and automatically return “zero” content.
The HMD table contains entries linking the Fingerprint data to the physical location in the XtremIO Data Protection layer, along with a logical address reference count. The HMD data is global (an entry is shared between all the array volumes) and dedup-aware.
The above content-address metadata structure allows complete abstraction between the User Volume Address Space, managed in the A2H table, and the physical metadata, managed in the HMD table.
Now, let’s see the benefits of this metadata architecture for the XtremIO iCDM solution.
The XtremIO Virtual Copies (XVC) logic is implemented based on the A2H table ONLY, with no relationship at all to the HMD table or the physical layer.
Let’s play a bit with array metadata just to show how it really works. Every new step here is based on previous metadata content.
Following XVC creation, the original metadata content stays intact and becomes an internal resource serving both the Source (Vol-A) and the XVC (Vol-B). At this point, and without any dependency on the Vol-A size, we spent less than 50KB of array memory to enable the XVC logic. The “red” frame below represents the real metadata, while the “green” frames represent just the XVC internal access algorithm, with no actual metadata.
When a Source or XVC volume address is overwritten by the host, the “Fingerprint” of the new data is recorded as a new A2H entry. Only at this point do we actually consume a new metadata entry for the updated LBA.
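Here is a minimal sketch of how an A2H-only, in-memory copy behaves: creating the copy shares the existing entries through an internal resource, and only a host overwrite consumes a new entry for the updated address. The classes and names are illustrative, not the actual XtremIO internals.

```python
import hashlib

def fingerprint(data):
    return hashlib.sha1(data).hexdigest()

class Volume:
    """A2H view of a volume: its own entries overlay a shared (internal) parent."""
    def __init__(self, parent=None):
        self.own = {}                  # only user-updated addresses consume metadata
        self.parent = parent

    def write(self, addr, data):
        self.own[addr] = fingerprint(data)   # a new A2H entry for this address only

    def lookup(self, addr):
        if addr in self.own:
            return self.own[addr]
        return self.parent.lookup(addr) if self.parent else None

    def create_xvc(self):
        """Copy creation: the current entries become a shared internal resource.
        No HMD or physical-layer work, and no dependency on the volume size."""
        internal = Volume(self.parent)
        internal.own, self.own = self.own, {}
        self.parent = internal
        return Volume(internal)        # the new XVC shares the same internal parent

vol_a = Volume()
vol_a.write(1, b"prod data")
vol_b = vol_a.create_xvc()                     # instant, in-memory only
assert vol_b.lookup(1) == vol_a.lookup(1)      # shared until someone overwrites
vol_b.write(1, b"test data")                   # only now is a new entry consumed
assert vol_b.lookup(1) != vol_a.lookup(1)
```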
Creating an additional level of copy follows literally the same rules of in-memory operation efficiency as described above.
The “Copy-of-Copy” levels have the same data services and volume access options as any other array volume. In the example below, the “Vol-C” volume Addr-2 is updated with new content by the user. Exactly as in the previous example, only user-updated entries consume memory space; all the rest is “free of charge” in terms of memory and CPU allocation.
Every XVC volume can be refreshed from ANY other volume belonging to the same tree (VSG = Volume Snapshot Group), without limitations. In the example below, the “Vol-B” content is refreshed from “Vol-A”:
Like the Copy creation operation, the Refresh operation is managed in the A2H metadata table as an in-memory operation. The physical data structure layer (XtremIO Data Protection / RAID managing the data location on SSD) is not involved. The A2H table optimization (aka merge) algorithm was developed to simplify the XVC tree following multiple operations.
The iCDM metadata structures support volume or XVC deletion at any level. Let’s see the impact of deleting “Vol-C”:
The deleted volume’s “Address to Fingerprint” relationships are instantly destroyed, as is the volume’s representation on the SCSI target. The metadata structure is optimized as a batch process, when the “Internal3” metadata content is merged with “Internal1”, which is no longer relevant. The HMD table reference count for the related hash entry is discarded as well, and the physical layer offset is marked as free.
All the metadata management processes triggered as part of XVC deletion are background tasks running alongside user IO. However, these background processes are managed with a DIFFERENT priority and resource allocation to prevent performance impact. The basic XtremIO iCDM design assumption is to serve user activity first, and only then allocate the available resources to internal operations.
One of the new cool features in the XtremIO X2 generation is the “Management Lock” flag. Once applied at the volume level, this property blocks iCDM Refresh, Restore or Volume deletion on the specific volume, protecting mission-critical data from accidental operations.
And here are some bonus questions for those who will say “so what? Every storage array has a copy management solution today”:
If you are looking for a shorter/easier way to understand this:
Unplug your storage pipeline
Construction noises from the street, your baby sister is crying, your neighbor is practicing for his American idol audition, and all you want to do is watch your favorite series on TV. You decide to put on your headphones and turn up the volume so these noises will stop bothering you.
Now you have solved the noise problem, but the volume from your headphones is too high. Without a volume limit, you can cause serious damage to your ears.
Just as people can damage their ears by using headphones without a volume limit, so can virtual machines in a storage system without resource control. A single virtual machine can use more resources than expected and cause the array to misbehave by raising service times for all other applications on the array.
Without resource limitation, one application or a “noisy neighbor” can easily consume an unfair share of resources, leaving little available for others.
XtremIO QoS technology focuses on preventing application starvation caused by those noisy neighbors, while also enabling service providers to ensure that their customers won’t use more resources than they paid for. This is done by limiting the maximum bandwidth or IOPS for any volume/host/application in the system.
Potential noisy neighbors include different applications running different workloads on the same array, or test/dev environments sharing the same resources as production volumes.
XtremIO’s unique integrated Copy Data Management (iCDM) technology lets you create multiple copies of each volume/application and present them to various development teams without any performance impact on production. Yet, in some heavily loaded environments, there is the potential to max out the array, so limiting the test/dev copies ensures the production volumes won’t be affected.
XtremIO Implementation
XtremIO QoS is policy driven and can be assigned to any Volume, Consistency group (application) or Host/s (Initiator Group/s) in the system.
QoS policy should only be set once, which contributes to the simplicity of XtremIO management.
The policy can then be used for different storage objects.
QoS policies can be set in IOPS, BW, or IO-Density units. IO-Density is IOPS per unit of capacity. For example, you can define a 5,000 IOPS per 1TB policy. Assigning this to a Consistency Group with 2 volumes totaling 6TB of provisioned size will limit the Consistency Group to a maximum of 30,000 IOPS.
The IO limit changes dynamically when volumes are added to or removed from the Consistency Group.
Burst IOPS can also be set to extend the MAX IOPS limit for a short period of time. When a volume runs below the MAX IOPS limitation, burst credits are accumulated and the volume can then use these credits up to the burst limit.
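A back-of-the-envelope sketch of how an IO-Density policy and burst credits can be reasoned about is shown below. The credit mechanics are a generic token-bucket style illustration under stated assumptions, not XtremIO's internal algorithm.

```python
def iops_limit_from_density(iops_per_tb, provisioned_tb):
    """IO-Density policy: the effective IOPS cap scales with provisioned capacity."""
    return iops_per_tb * provisioned_tb

# Example from the post: 5,000 IOPS per TB on a CG with 6TB provisioned in total.
print(iops_limit_from_density(5_000, 6))      # 30000

class BurstBucket:
    """Generic token-bucket illustration: running below MAX IOPS accumulates credits
    that can later be spent, up to the burst limit."""
    def __init__(self, max_iops, burst_iops, max_credits):
        self.max_iops, self.burst_iops = max_iops, burst_iops
        self.credits, self.max_credits = 0, max_credits

    def allowed(self, requested_iops):
        if requested_iops <= self.max_iops:
            # Under the cap: bank the unused headroom as burst credit.
            self.credits = min(self.max_credits,
                               self.credits + self.max_iops - requested_iops)
            return requested_iops
        extra = min(requested_iops - self.max_iops,
                    self.burst_iops - self.max_iops, self.credits)
        self.credits -= extra
        return self.max_iops + extra

bucket = BurstBucket(max_iops=30_000, burst_iops=40_000, max_credits=50_000)
print(bucket.allowed(10_000))   # 10000 -> banks 20,000 credits
print(bucket.allowed(45_000))   # 40000 -> capped at the burst limit, credits spent
```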
One of the key benefits of the XtremIO web interface is the availability of performance data on the storage array, such as the weekly pattern, performance over time and the block size histogram. This makes the process of assigning a QoS policy fast and reliable: you can easily view the workload patterns of each application and match the best QoS policy to it.
For example, the image below shows the weekly pattern of a volume in the system. Hovering over a specific day and time shows the IOPS and BW during that time. From this image we can learn the IOPS during normal working hours and during peaks, and assign the MAX IOPS limit and the Burst IOPS accordingly.
Here is a short demo – how to use QoS with XtremIO Storage array:
How to achieve the desired RPO with replication with minimum cost
Replicating data has always been a challenge in terms of achieving the required protection for the application. This challenge becomes even more severe when using an AFA with high change-rate workloads.
The issue is that when the application change rate is extremely high, the data that needs to be sent over the wire is in many cases above the infrastructure capabilities. It then becomes difficult to achieve the desired protection (RPO) without upgrading the bandwidth and increasing the infrastructure cost.
XtremIO Native Replication is designed for critical applications with high utilization and bandwidth requirements.
By leveraging XtremIO’s inline, always-on data services, XtremIO Native Replication is the most efficient replication product in the market today.
The inline, always-on global deduplication of XtremIO ensures that data blocks are always replicated just once. Unlike XtremIO, other storage arrays’ deduplication implementations are based on a deduplication process that consumes the source and target arrays’ resources and therefore has a high impact on application and replication performance. In many cases, this impact causes the administrator to compromise on the deduplication data service. In some products the deduplication process stops under high utilization, which causes non-unique data to be replicated. This is tremendously inefficient and highly costly.
XtremIO’s unique content-aware storage architecture deduplicates the data by design, as every non-unique block automatically points to the block already stored on the SSDs. Thus there is no need for an additional process, and therefore no additional penalty for deduplication.
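The replicate-once behavior can be pictured as a fingerprint check against the target before any data is sent: only blocks whose fingerprint is unknown at the target cross the wire. A hedged sketch of that idea, with illustrative names:

```python
import hashlib

def replicate(changed_blocks, target_fingerprints):
    """changed_blocks: list of data blocks to protect.
    target_fingerprints: set of fingerprints already present on the target array.
    Returns only the blocks that actually need to cross the wire."""
    to_send = []
    for block in changed_blocks:
        fp = hashlib.sha1(block).hexdigest()
        if fp not in target_fingerprints:
            to_send.append((fp, block))
            target_fingerprints.add(fp)
        # else: only a metadata reference to the fingerprint is sent, not the data
    return to_send

blocks = [b"os image block"] * 100 + [b"unique db page"]
wire = replicate(blocks, set())
print(len(blocks), "changed blocks ->", len(wire), "blocks on the wire")   # 101 -> 2
```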
With XtremIO Native Replication:
If the typical data reduction ratio is 4:1, which reduces the replicated data by 75%, global deduplication improves the efficiency even further.
The efficiency of XtremIO Native Replication reduces the utilization on the source and target arrays and on the wire. By reducing the utilization and improving the copy efficiency, with XtremIO Native Replication there is no need to compromise on the desired RPO nor on the cost.
By reducing significantly the capacity that is transferred on the wire, the replication process becomes much faster, which enables XtremIO Native Replication to support the AFA high workloads.
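To make the efficiency argument concrete, here is a back-of-the-envelope sketch; the change rate and reduction ratios are illustrative numbers, not XtremIO sizing guidance:

    # Back-of-the-envelope sketch: WAN bandwidth needed to keep up with the
    # application change rate, with and without data reduction. Numbers are
    # illustrative only.

    def required_bandwidth_mb_s(change_rate_mb_s, reduction_ratio):
        """Bandwidth (MB/s) needed so replication keeps pace with the change rate,
        after dedup/compression reduces what actually crosses the wire."""
        return change_rate_mb_s / reduction_ratio

    change_rate = 400  # MB/s of new/changed data (hypothetical workload)
    print(required_bandwidth_mb_s(change_rate, 1.0))  # no reduction  -> 400 MB/s
    print(required_bandwidth_mb_s(change_rate, 4.0))  # 4:1 reduction -> 100 MB/s (75% less)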
XtremIO Implementation
XtremIO Native Replication is an asynchronous replication that supports all disaster-recovery operations, including "test a copy" at the target host and "failover" to the target host.
It supports an RPO of 30 seconds or more and 1,000s of recovery points per volume, with the ability to test and fail over to any recovery point that exists at the target site, without the need to copy metadata or roll backwards; it is done instantaneously!
The retention of the native replication is policy driven. A policy needs to be set just once and can then be reused for multiple protection sessions.
The Retention Policy defines the protection window and the number of recovery points that will be kept within it.
Different failure scenarios require different recovery-point granularity. For disaster-recovery or logical-corruption scenarios, the granularity should be as small as possible but is only needed for a short period of time, such as 30 to 60 minutes. For other failure scenarios we might want a longer protection window with a coarser granularity.
The Retention Policy in XtremIO supports these different scenarios and allows you to define up to three time periods with different granularities.
The retention is managed automatically by XtremIO according to the Retention Policy definition.
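As a rough illustration of how a tiered retention policy trades granularity for window length, consider the following sketch; the three tiers and their values are hypothetical, not XtremIO defaults:

    # Hypothetical 3-tier retention policy: fine granularity for the recent past,
    # coarser granularity for older recovery points. Values are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class RetentionTier:
        window_minutes: int     # how far back this tier reaches
        interval_minutes: int   # spacing between kept recovery points

    policy = [
        RetentionTier(window_minutes=60, interval_minutes=1),              # last hour, every minute
        RetentionTier(window_minutes=24 * 60, interval_minutes=60),        # last day, hourly
        RetentionTier(window_minutes=7 * 24 * 60, interval_minutes=24 * 60),  # last week, daily
    ]

    total_points = sum(t.window_minutes // t.interval_minutes for t in policy)
    print(total_points)   # 60 + 24 + 7 = 91 recovery points kept automatically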
Thus, by assigning the retention policy to the protection session, it is easy to get the required protection:
When needed, any of the recovery points created by the protection session can be used for Test and Failover operations with a single command.
All in one place
One of the key benefits of XtremIO is the simplicity of its management. Using the protection wizard, it is easy to create both local and remote protection.
A high-level summary is displayed at the top of the protection screen, with the ability to drill down to more detailed information, including a visual view that clearly presents the configuration and status of the protection session.
The information is comprehensive and includes aggregated data that gives a high-level view of the protection status of a selected volume or Consistency Group. This information includes:
In addition, detailed information per protection session, both current and over time, is presented:
Below you can see a tech-preview demo showing what our Content Addressable Storage (CAS) based Native Replication will look like:
Summary:
XtremIO is bringing to the market a unique and efficient Native Replication solution. The Native Replication feature was designed with the same inherent architectural characteristics that define the XtremIO storage array:
One of the key factors in the success of XtremIO was that customers understood the unique architectural, performance, and functional benefits of the XtremIO array. They simply got it. In the same manner, XtremIO customers should understand the unique technological advantages of XtremIO's native replication solution, making this yet another winning feature in the XtremIO feature set.
If you attended the OpenStack Summit in Boston a couple of weeks ago, you must have seen the enthusiasm all around. More and more organizations are adopting the open-source Infrastructure-as-a-Service software as a cloud solution for their data centers, using it to make the most of their resources and manage their IT projects.
Customers are going the OpenStack way, using its flexibility and modularity to build compute environments that scale easily and deploy quickly, but they still need a storage solution that supports those needs and allows quick access to the data.
Dell EMC's XtremIO All Flash Array complements that need perfectly. Its unique scale-out architecture goes hand in hand with OpenStack's scaling strength; its data reduction capabilities cut the physical storage needed for the logical data saved; and its integrated Copy Data Management (iCDM), together with OpenStack's services, provides a way to quickly deploy environments, all while granting high bandwidth and very low latency for data access as a 100% flash-based technology.
Attached to this post is a comprehensive white paper discussing the integration between a Mirantis OpenStack environment and an XtremIO storage array, detailing the benefits of such integration, together with guidelines and recommendations on how to do so.
EXECUTIVE SUMMARY
This white paper reviews the integration of Mirantis™ OpenStack® cloud environments with the Dell EMC™ XtremIO® all-flash array as their storage backend. It describes the process of deploying such an environment and how to work with it. The paper also discusses the benefits of using XtremIO for such an environment and how its unique features can be leveraged to create a cloud environment that is easy to operate, quick on resource deployment, provides excellent I/O performance, and saves on organizational resources, including physical footprint, storage capacity requirements, and operations budget.
BUSINESS CASE
OpenStack has been widely adopted around the world by many users and organizations as a private and public cloud solution to control and utilize large pools of compute, storage, and network resources in their datacenters. The open-source Infrastructure-as-a-Service (IaaS) software is being used by a growing number of companies to allow quick deployment of virtual machines, networks, storage, and other services, meeting the industry's requirements for an easy, inexpensive, and fast solution for dynamic and scalable compute environments.
While the amount of data being saved everywhere grows larger, Information Technology (IT) departments are also being asked to cut costs and deliver quickly, both in the time to deploy new services and in those services' response time. Adopting OpenStack as a cloud solution gives IT departments a software-based management layer with which to oversee and orchestrate their datacenter resources. OpenStack provides fast deployment of services, but the need to store the massive amount of data produced in the cloud, where it can be saved efficiently and accessed quickly, remains.
Dell EMC's XtremIO All Flash Array complements that need perfectly. Its unique scale-out architecture enables maintaining a dynamic amount of data at any scale, and its Inline Data Reduction capabilities (such as thin provisioning, deduplication and compression) cut the physical storage needed for the logical data saved by a few multiples at least, allowing customers to save on both space and cost. Being a 100% flash-based technology, XtremIO was built specifically to utilize its flash disks at an optimal level, which allows it to deliver ultra-high performance to its storage clients at very low latency, over both FC and iSCSI connections. XtremIO's Copy Data Services also present a great benefit for cloud environments, as entire projects and tenants in the cloud can be copied to new test, development and analytics environments for almost no extra space in the storage system and with no performance degradation to either environment. XtremIO comes with an easy-to-use user interface that provides storage administrators a quick and convenient way of setting up enterprise-class storage environments, provisioning storage to client hosts and applications, and monitoring performance.
This paper discusses:
The XtremIO features and added value for OpenStack environments.
OpenStack’s cloud architecture and storage implementations.
Guidance, considerations, and best practices for deploying, configuring, and operating OpenStack with XtremIO.
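For orientation only, the sketch below generates the kind of Cinder backend stanza such a deployment typically uses. The driver path, option names, and values shown are assumptions for illustration (they have changed between Cinder releases); verify them against the white paper and the OpenStack release in use:

    # Hedged sketch: writing a Cinder backend stanza for XtremIO with configparser.
    # The driver path and option names are assumptions for illustration only;
    # confirm them against the white paper and the Cinder release in use.

    import configparser

    cfg = configparser.ConfigParser()
    cfg["xtremio-iscsi"] = {
        "volume_driver": "cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver",
        "san_ip": "xms.example.local",         # XMS address (hypothetical)
        "san_login": "cinder_user",            # hypothetical credentials
        "san_password": "********",
        "xtremio_cluster_name": "xbrick-cluster-1",
        "volume_backend_name": "XtremIO",
    }

    # Write the stanza out so it can be merged into cinder.conf by the operator.
    with open("cinder-xtremio-backend.conf", "w") as f:
        cfg.write(f)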
(This was written for the older X1 generation of XtremIO. X2-related documents will be published in the future.)
You can watch a recorded demo on how it all works here
and you can download the reference architecture from here
ESG (the Enterprise Strategy Group) has just released a new validation report for Dell EMC XtremIO and ProtectPoint.
This report documents hands-on testing and validation of Dell EMC ProtectPoint technology for XtremIO with Dell EMC Data Domain for protection storage. The report highlights data protection for Oracle databases, with a focus on the performance and efficiency of backups and restores at the host, network, and storage layers.
If you are new to ProtectPoint, I highly encourage you to first have a read here
https://xtremio.me/2015/11/12/emc-protectpoint-now-support-xtremio/
and watch this demo below
and to the WP itself..
Background
For several years, IT executives and professionals have consistently cited improving data backup and recovery, and managing data growth, among their top IT priorities in ESG's annual IT spending intentions survey (see Figure 1). When infrastructure failure or data corruption interrupts access to production data, organizations must be able to restore both the information and application/user access to it quickly. Extended outages mean lost revenue and productivity, and possibly the inability to meet regulatory requirements.
ESG research also indicates that IT leaders are increasingly focused on improved recoverability. In ESG's survey exploring trends in data protection modernization, respondents cited increased reliability of backups or recoveries, increased speed and agility of recoveries, and increased speed or frequency of backups among the top data protection mandates for IT leadership.
The white paper can be downloaded from here http://research.esg-global.com/reportaction/dellemcprotectingoracledatabases/Toc
Hi,
I had the pleasure of being interviewed about XtremIO X2 on The Source Podcast, hosted by Sam Marraccini (@SamMarraccini).
http://thesource.libsyn.com/89-introducing-xtremio-x2?tdest_id=300181
We have just GA'd AppSync 3.5, which is a big and important release across the board (and across our portfolio). AppSync 3.5 will also be the minimum release required to support XtremIO X2 once it's out. AppSync can be downloaded from https://support.emc.com/search/?text=appsync%203.5
Key Features in this release:
Major new features in AppSync 3.5
Enhanced AppSync support for VMAX All Flash and VMAX 3 to create simultaneous copies of SRDF R1 and R2 devices via the gold service plan.
Enhanced AppSync support for VPLEX to include support for VMAX All Flash and VMAX 3 arrays.
Support for customizing file-system paths when mounting copies of Microsoft SQL, Microsoft Exchange, and file-based standalone Oracle databases.
Oracle enhancements
Supports mixed layout of an Oracle database which has data files, control files, and redo logs on ASM disk groups and archive logs and/or FRA on file systems (such as ext4 on Linux).
Supports ASM RAC or standalone databases created using ASM Filter Driver (ASMFD).
Update AppSync configuration if Oracle databases are deleted on the Oracle database server.
AppSync now provides an option to create a SPFile when mounting the copy of an Oracle database.
TCE enhancements
AppSync now provides a way to save Service Plan events in a separate file for better serviceability.
AppSync provides an option to optimize restore time for applications residing on VMAX devices by initiating the application restore process without waiting for restore track synchronization to complete on the VMAX array.
During the UNIX host plug-in software installation, you can now specify the SSH port.
AppSync provides an option to set the maximum number of databases that can be grouped together for protection. This is only applicable for Microsoft SQL Server service plans.
You can now receive an email notification on the successful completion of a service plan.
AppSync provides an option to disable call out scripts for file system service plans.
AppSync now supports non-ESX cluster mount on XtremIO, VMAX, and Unity arrays.
AppSync now supports refresh of mounted applications on VPLEX with XtremIO.
Unity Repurposing
Leveraging new snap and “thin clone” technologies that boost performance to “iCDM” levels
• Repurposing Unity requires Unity 4.2 (Merlin) – GA expected sometime after September 10th
– Includes using thin clones as 2nd gen
• 1st and 2nd gen snaps are supported for all repurposing workflows – SQL, Oracle, and file systems
• 1st gen copies must be snaps only
– Thin clones are not supported as 1st gen
• 2nd gen copies can be either snap or clone (Unity thin clone)
– 2nd gen thin clone copies cannot be created/refreshed if 1st Gen copy is mounted
• RecoverPoint + Unity Repurposing is supported
– Requires RecoverPoint 5.1 and Unity 4.2
• VPLEX+Unity Repurposing is not supported with this release
• Unity Block level only is supported – Unity File Snapshot is not supported with this release
“Path Mapping” for SQL, Exchange, & Oracle
Path Mapping Limitations and Restrictions
Mounting to Windows nested file systems is not supported
– Example:
› E:\ mounting to C:\mapped_E
› F:\ mounting to C:\mapped_E\mapped_F\
• Oracle:
– Mounting file-system copies from multiple production hosts to a common mount host is not supported
– Supported with file systems only – no support for ASM DGs
TCE Improvements
VPLEX + XtremIO Consistency Group Refresh
Mounted Applications
Identify and use XtremIO Consistency Group information during repurpose copy creation and refresh
– Uses XtremIO CG APIs for applications contained within a single CG
– Is an extension of the AppSync 3.1 enhancement supporting XtremIO CGs
• Ability to create XtremIO CG based Snapshot sets for applications residing in XtremIO consistency groups
– Refresh occurs at an XtremIO CG level during repurposing
• If a copy is created using XtremIO CG APIs, the mounted refresh will not un-map the storage from the host
Oracle Improvements
Expanded event logging to incorporate more relevant information for analysis
• Databases removed from an environment are removed from the UI
• Ability to Create SPFile with Mount on standalone server and recover database mount option
• Ability to manage environments that reside on both ASM disk groups and native file systems
– split archive logs and/or FRA from data files/redo logs/control files
– Example: Customer chooses to put data files, redo logs, and control files on ASM disk groups, but archive logs on a native file system – AppSync 3.5 will now support "mixed" ASM and file system environments
• ASM rebalancing checks are now conducted based on the DGs being replicated rather than at the instance level
Easy access to customer training: https://community.emc.com/community/products/appsync#learn
Hi
We have just released RecoverPoint 5.1 (not to be mistaken for 5.0 SP1, which was released some time ago). There are many enhancements in this release; I will focus on the XtremIO ones. In a nutshell, we AUTOMATE everything!
The XtremIO array is zoned to the RPAs according to best practices. No volumes have yet been provisioned. In Unisphere for RecoverPoint 5.1, select RPA Clusters > Storage > Register Storage.
The Register Storage dialog shows this XtremIO cluster even with no exposed volumes. Highlight the XtremIO array and use the rp_user account on the XtremIO array to register it.
Any connected XtremIO array can still be registered by providing its serial number via the Register any storage of type option.
The XtremIO array was successfully registered and now shows up in the list of registered storage.
When you register the XtremIO array with the RecoverPoint cluster, it automatically creates the initiator group on the XtremIO array. The naming convention for the created initiator group is ARRAY_XIO-<xxxxxxx>_0x<RPclusterID>.
All initiators belonging to the RecoverPoint cluster are automatically registered as well.
During the configuration of protection for a production volume, a journal needs to be defined for the source copy.
By default, when working with the XtremIO array, the journal volumes are auto-provisioned with a size of 2GB.
The same is true when defining the journal for the copy volume.
The automatic creation and sizing of journal volumes is helpful, and reduces the necessary steps to set up volume protection. The size of the journal volumes is valid for any size of source or copy volume in the XtremIO arrays.
You can also use the option Select Provisioned Journal Volumes if you have already created the journal volumes manually.
In an XtremIO environment, there is no need to create journal volumes larger than the 2GB auto-provisioned ones.
Once you have confirmed all the settings and started the volume protection, you can see the automatically created journal volumes.
When a copy is removed from the replication set, the corresponding journal volume is automatically deleted.
User volumes – production volumes, copy volumes – must exist and be part of RP initiator groups on the XtremIO arrays.
In this example you select a 30 GB production volume.
After the source journal volume has been created, the next step is to select the copy volume. Click the Select volume link.
XtremIO automatically finds copy user volumes whose size matches the production volume. Auto-matching only works when both the source and the target are XtremIO arrays.
The Select Volume dialog contains a list of candidates to which the production volume could replicate. In our example, only volumes of size 30 GB and larger are shown.
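As a toy illustration of the size-matching filter described above (the real matching logic is internal to the RecoverPoint/XtremIO integration), candidates are simply volumes at least as large as the production volume:

    # Toy sketch of the copy-volume size filter; volume names and sizes are
    # hypothetical and for illustration only.

    def candidate_copy_volumes(production_gb, volumes):
        """Return volumes that could serve as the replica, smallest first."""
        return sorted((v for v in volumes if v["size_gb"] >= production_gb),
                      key=lambda v: v["size_gb"])

    volumes = [  # hypothetical volumes visible at the target site
        {"name": "vol-a", "size_gb": 20},
        {"name": "vol-b", "size_gb": 30},
        {"name": "vol-c", "size_gb": 50},
    ]
    print(candidate_copy_volumes(30, volumes))   # vol-b and vol-c qualify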
As a result, you have matched the production volume to the copy volume and defined the journal volumes.
The Group Summary shows the production and copy volumes used for this replication. Click the Finish button to start the replication.
The list of volumes shows the journal volumes, production volume, copy volume, and snap volumes in the RecoverPoint XtremIO environment.
What if the target volume doesn't exist? No worries, RP will also create it automatically for you!
Lastly, if you are letting RP create the replica (copy) volume for you, you might as well tell it which initiator group(s) to map it to at the remote site, right? Right! We've got you covered there as well.
Here’s a demo on how it all works
As always, the release notes, downloads, etc. can be found in our portal at https://support.emc.com/search/?text=recoverpoint
Along with the release of (the Classic) RP 5.1 release which you can read about here https://xtremio.me/2017/07/06/recoverpoint-5-1-is-here-a-must-upgrade/
We have also just released RP4VMs 5.1. To me, this is where scale and simplicity made their way into the product. Here are some of the new features and changes:
Deployer Improvements
DEPLOYER – INSTALL A vRPA CLUSTER
You now get many more automated validations done for you, so you know in advance, before the actual deployment, which errors you might have.
You also get a much simpler network settings screen, with a clear divide between the LAN and WAN settings.
If the previous installation has failed, the repository volume will be deleted as well and won’t prevent you from trying again.
It will also show you some basic instructions for troubleshooting.
Replicating Resource Reservations
Enhancement of Replicate hardware changes whereby the source VM reservations (Memory or CPU) can be reflected on the replica VM.
Encompassed within the Hardware changes option during the VM Protection Flow (shown below) or in the RP4VMs plugin, under Protection -> VM -> Hardware Settings.
Resource reservation replication is enabled when the replica VM is powered off.
• Any CPU/Memory reservation changes performed on the source VM will be reflected on the replica VM only when in image access
Shadowless SMURF
SMURF = Starting Machine Up at Remote Facility
• Shadow – a minimal-hardware machine used for SMURF in order to reduce consumption of resources at the remote site
• Having a VM powered up at the remote site in order to access the VMDKs
Looks and sounds like a "contradiction in terms", but in reality it means there is no longer a shadow VM .vmx file – the shadow and the replica use the same .vmx file.
• The look and feel remains the same
Why are we doing this?
Minimizing ReconfigVM API calls – multiple operations in the system each independently made API reload or reconfig calls (the MoRef ID and VC UID are now maintained)
• NSX with network segregation feature: NSX configuration was reset when transitioning between shadow and replica due to VC UID momentarily being wrong
• Some cloud metering and monitoring systems were sensitive to the previous configuration
• Better Storage vMotion support in both modes of the replica VM – when SvMotion was used on the shadow, the replica VM's .vmx was not moved (or vice versa if SvMotion was used on the target)
Silent Image Access
This feature allows PP4VMs and RP4VMs users to perform Image Access without powering on the Replica VM
• PP4VMs does not require the VM to be powered on to recover, and RP4VMs customers can choose to perform their own power-up sequence outside the scope of RP.
• Supports both Data Domain backup local replica copies and RP4VMs local and remote replica copies
• The user can initiate Recover Production or Failover without powering on the replica VM.
If the user initiates a Failover, the plugin will display the warning message above.
• Note that when failing over, the replica VM will remain in a powered-down state and the CG will move to a "Paused" state
RE-IP OR START-UP SEQUENCE ENABLED
A validation warning will be displayed to the user requesting confirmation before finishing the “Test a Copy” Wizard
• The system will block both production recovery and failover operations and return an error message
Failover Networks
Allows users to better view and change the networks of a replica VM that will be applied after failover. This feature arose from customer complaints about the network changing from the "Test Network" chosen when entering Image Access to an arbitrary "Failover Network" after failing over.
MODIFY FAILOVER NETWORKS IN THE PLUGIN
MODIFY FAILOVER NETWORKS IN THE FAILOVER FLOW
Scale & Performance
The 5.1 release improves RP4VMs scalability by protecting maximal scale with a minimal number of vRPA clusters.
• The goal is to increase the RP4VMs scale limit: replicating up to 256 consistency groups on a single 'Bronze+' vRPA (2 vCPUs, 8 GB RAM) and above
NEW (and awesome!) COMPRESSION LIBRARY
One compression level RP4VMs
RecoverPoint for VMs provides enhanced scale-out ability for a cluster of Bronze+ (2 vCPU/8GB RAM) vRPAs:
RecoverPoint for VMs achieves 100 percent across-the-board improvements in performance
CONSISTENCY GROUP STATS IN PLUGIN
• New option to choose the statistics time span
Below you can see some demos recorded by Idan Kentor (Corporate SE, @idankentor).
Here’s a demo showing the VM protection
and one showing more advanced protection options
and lastly, the orchestration and failover options