Under The Dom(e)0, Tweaking XenServer For EMC XtremIO


Hi,

Over the last couple of days we have been very busy fine-tuning XenServer. We noticed that it's much harder to push IOPS using XenServer than, say, VMware vSphere (not really surprising, because VMware's I/O stack is much more advanced), but since I don't want to make this a who's-better post, here goes. I also want to stress that, as the post title suggests, these are recommendations for the EMC XtremIO array; with other arrays, your mileage may vary.

When testing XenServer, and especially when testing it in conjunction with an All Flash Array (AFA), you need a good overview of how to test an All Flash Array. IDC has covered it here: http://idcdocserv.com/241856. Some highlights:

[Image: highlights from the IDC paper on testing All Flash Arrays]

Keeping Up To Date with XenServer 6.1

As is often the case, you want to make sure you are running the latest version / hotfixes. Listed below are some really important hotfixes for XenServer; it's important to understand that NOT all of them are cumulative:

http://support.citrix.com/product/xens/v6.1.0/hotfix/general/

http://support.citrix.com/article/CTX135410

•“After installing XenServer 6.1.0, multipathing will only use one path for Software iSCSI on arrays which contain the same IQN on multiple target portals. As a result, customers will see only one active path and will not successfully failover in the event of an infrastructure issue. This hotfix resolves this issue.”

These hotfixes were recommended by Citrix for XenServer 6.1.

We found a great improvement in Dom0 performance after applying them:

http://support.citrix.com/article/CTX137645

http://support.citrix.com/article/CTX137843

HBA Qdepth Settings

•Edit/create /etc/modprobe.d/qla2xxx

•Add – options qla2xxx ql2xmaxqdepth=128 (for 6.x use 256)

•Save & close

•Reboot the host.
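A minimal sketch of the steps above, assuming the QLogic qla2xxx driver and the file name used in this post; the sysfs read-back at the end is an assumption about how the driver exposes its module parameters, so verify it on your host:

# create (or overwrite) the modprobe options file with the desired queue depth
echo "options qla2xxx ql2xmaxqdepth=128" > /etc/modprobe.d/qla2xxx
# reboot so the HBA driver reloads with the new setting
reboot
# after the reboot, the active value can usually be read back from sysfs
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth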

Configure The Multipath Driver

Like in vSphere, if you are not using PowerPath/VE, you want to use Round Robin and send one I/O per path, so:

•Edit /etc/multipath.conf

•Add the following to the devices section:

device {
        vendor                  XtremIO
        product                 XtremApp
        features                "0"
        hardware_handler        "0"
        rr_weight               priorities
        rr_min_io               1
        path_selector           "round-robin 0"
        path_checker            tur
        path_grouping_policy    multibus
}
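After saving the file, the multipath maps can typically be reloaded without a reboot; a quick sketch using the standard multipath-tools CLI (verify the behavior on your XenServer version):

# re-read /etc/multipath.conf and reload the multipath maps
multipath -r
# confirm the XtremIO devices are listed with the expected path policy
multipath -ll | grep -A 5 "XtremIO"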

XenServer Control Domain (Dom0)


http://wiki.xen.org/wiki/Dom0

•"..Dom0, or domain zero to expand the abbreviation, is the initial domain started by the Xen hypervisor on boot. The Xen hypervisor is not usable without Domain-0 ("dom0").

•The dom0 is essentially the "host" operating system (or a "service console", if you prefer). As a result, it runs the Xen management toolstack, and has special privileges, like being able to access the hardware directly.

•It also has drivers for hardware, and it provides Xen virtual disks and network access for guests, each referred to as a domU (unprivileged domains). For hardware that is made available to other domains, like network interfaces and disks, it will run the BackendDriver, which multiplexes and forwards to the hardware requests from the FrontendDriver in each DomU.

•Unless DriverDomains are being used or the hardware is passed through to the domU, the dom0 is responsible for running all of the device drivers for the hardware."

Dom 0 Recommendations

RAM:

[Image: Dom0 memory sizing recommendations]

DISK:

http://support.citrix.com/article/CTX134738

•“When running I/O intensive tasks, customers may not achieve expected bandwidth and performance, even when they are not limited physically by their infrastructure. In these situations customers should check if their XenServer host dom0 vCPUs are overloaded. If the dom0 vCPUs are found to be overloaded, customers should follow the steps in this article to assign more processing power to dom0. XenServer 6.1.0 introduces a new command line tool at /etc/init.d/tune-vcpus which allows you to simply adjust the vCPU settings”

•You also want to consider pinning these vCPUs to DOM0 and making sure that the VMs are not using them.

Bingo!

Can I dedicate a cpu core (or cores) only for dom0?

http://wiki.xen.org/wiki/Tuning_Xen_for_Performance

•Yes, you can. It might be a good idea, especially for systems running I/O-intensive guests. Dedicating a CPU core only for dom0 makes sure dom0 always has free CPU time to process the I/O requests for the domUs. Also, when dom0 has a dedicated core there are fewer CPU context switches to do, giving better performance.

•Specify “dom0_max_vcpus=X dom0_vcpus_pin” options for Xen hypervisor (xen.gz) in grub.conf and reboot
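For a generic Xen host managed through GRUB, an illustrative grub.conf stanza might look like the following; the kernel and initrd names and paths are examples only, and on XenServer itself you would use the xen-cmdline tool shown in the next section:

title Xen
        root (hd0,0)
        # give Dom0 eight vCPUs and pin each one to the matching physical CPU
        kernel /boot/xen.gz dom0_max_vcpus=8 dom0_vcpus_pin
        module /boot/vmlinuz-2.6-xen ro root=/dev/sda1
        module /boot/initrd-2.6-xen.img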

Pinning Down vCPUs For DOM0

•According to the guides, the best practice is to allocate 8 vCPUs to DOM0.

•The way to do it is by running these commands:

•/opt/xensource/libexec/xen-cmdline --set-xen dom0_max_vcpus=1-8

•/opt/xensource/libexec/xen-cmdline --set-xen dom0_vcpus_pin

•Reboot the server
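After the reboot, a quick way to sanity-check the result from within Dom0 (a sketch; these are standard Linux views of the Dom0 kernel, not XenServer-specific tools):

# vCPUs Dom0 currently has online
grep -c ^processor /proc/cpuinfo
# vCPUs allocated to Dom0, whether online or not
ls -d /sys/devices/system/cpu/cpu[0-9]*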

Pinning Down DOM0 vCPUs

By just running these commands and rebooting the server, you will not get any improvement, because XenServer allocates the vCPUs to the DOM0 system but doesn't enable them:

[Screenshot: additional Dom0 vCPUs shown as offline]

Pinning Down 8 DOM0 vCPUs

•There is a way to overcome this.

•To figure out which CPUs are allocated to DOM0, simply run this:

•ls -d /sys/devices/system/cpu/cpu*

And now, bring all of the disabled CPUs online. A shell glob cannot be used as a redirect target, so either loop over the files:

•for c in /sys/devices/system/cpu/cpu[4-7]/online; do echo 1 > $c; done

or do it one CPU at a time:

•echo 1 > /sys/devices/system/cpu/cpu4/online

•echo 1 > /sys/devices/system/cpu/cpu5/online

•echo 1 > /sys/devices/system/cpu/cpu6/online

•echo 1 > /sys/devices/system/cpu/cpu7/online
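To confirm the change took effect, the same sysfs files can be read back (a 1 means the vCPU is online; cpu0 usually has no online file because it cannot be offlined):

for c in /sys/devices/system/cpu/cpu[0-9]*/online; do echo "$c: $(cat $c)"; done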

Pinning Down DOM0 vCPUs

•After this, here is the expected output (the reported state keeps changing):

[Screenshot: Dom0 vCPU state after bringing the additional vCPUs online]

XenServer IO Schedulers

•CFQ

•Anticipatory scheduler

•Deadline scheduler

In our testing, we found that the default CFQ scheduler doesn't yield the best performance, so we changed it to NOOP.

NOOP Scheduler

http://en.wikipedia.org/wiki/Noop_scheduler

•The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. This scheduler is useful when it has been determined that the host should not attempt to re-order requests based on the sector numbers contained therein. In other words, the scheduler assumes that the host is definitionally unaware of how to productively re-order requests.

"There are (generally) three basic situations where this situation is desirable"; the relevant one for an all-flash array is:

"..Because movement of the read/write head has been determined to not impact application performance in a way that justifies the additional CPU time being spent re-ordering requests. This is usually the case with non-rotational media such as flash drives or solid-state drives."

Setting Up NOOP

Setting the I/O scheduler can be done via the xe command line, using this command:

xe sr-param-set uuid=<sr_uuid> other-config:scheduler=noop
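If you don't have the SR UUID handy, the standard xe CLI can list it (a sketch; filter on your SR's name-label or type as appropriate):

# list all SRs with their UUIDs, names and types
xe sr-list params=uuid,name-label,type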

A reboot will be required to enable it (no, I didn't find a way to check it other than by running the workload).

•There is a way to enable it without rebooting the XenServer host, by using this command:

echo noop > /sys/block/dm-89/queue/scheduler

To make sure that it's working, simply cat the file.
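For reference, the kernel marks the active scheduler with square brackets, so after the change the file should read something like this (the exact set of schedulers listed depends on the Dom0 kernel build):

cat /sys/block/dm-89/queue/scheduler
[noop] anticipatory deadline cfq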

For your convenience, here are one-liners to set and then verify the scheduler on all XtremIO multipath devices:

•for i in `multipath -ll | grep "XtremIO,XtremApp" | awk '{print $2}'`; do echo noop > /sys/block/$i/queue/scheduler; done

•for i in `multipath -ll | grep "XtremIO,XtremApp" | awk '{print $2}'`; do cat /sys/block/$i/queue/scheduler; done

So, these were the initial tweaks. We then ran a test with one XenServer hypervisor and 4 VMs.

Using 8 vCPUs For DOM0 With 4 VMs

[Screenshot: Dom0 CPU utilization with 8 vCPUs and 4 VMs]

As you can see, it's very easy to saturate Dom0 even while already using 8 vCPUs!

The next thing we tried was to use 16 vCPUs for DOM0.

Pinning Down 16 DOM0 vCPUs

Remember, we already pinned vCPUs 0-3, right? So:

•echo 1 > /sys/devices/system/cpu/cpu4/online

•echo 1 > /sys/devices/system/cpu/cpu5/online

•echo 1 > /sys/devices/system/cpu/cpu6/online

•echo 1 > /sys/devices/system/cpu/cpu7/online

•echo 1 > /sys/devices/system/cpu/cpu8/online

•echo 1 > /sys/devices/system/cpu/cpu9/online

•echo 1 > /sys/devices/system/cpu/cpu10/online

•echo 1 > /sys/devices/system/cpu/cpu11/online

•echo 1 > /sys/devices/system/cpu/cpu12/online

•echo 1 > /sys/devices/system/cpu/cpu13/online

•echo 1 > /sys/devices/system/cpu/cpu14/online

•echo 1 > /sys/devices/system/cpu/cpu15/online
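A compact alternative with the same effect here (a sketch: it simply writes 1 to every Dom0 vCPU's online switch, which is harmless for vCPUs that are already online):

for c in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > $c; done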

Using 16 vCPUs For DOM0 With 4 VMs

[Screenshot: Dom0 CPU utilization with 16 vCPUs and 4 VMs]

As you can see, we solved the DOM0 utilization problem, but at what cost?

Summary:

The architecture of the control domain (Dom0) was always about avoiding the need to install drivers and such in the hypervisor itself; all the I/O travels through DOM0, but I guess it was never designed with maximum IOPS / lowest latency in mind. In the next part I will explain the changes in XenServer 6.2.

Credits:

I want to thank my colleagues Dima and Dadee for the big effort they put into testing all of this!


