Univention KVM I/O Delays

Hi everyone,

we have two hypervisors, kvm01 and kvm02, both running UCS 4.2-1 errata 52.
kvm01 is the DC of an internal KVM domain and kvm02 is a slave.

We use LVM logical volumes for the virtual hard disks and attach them ‘raw’ to the VMs.

Our main problem is that we see I/O delays above 5 even when there is no significant activity on the server.

For comparison with a Proxmox host:

Univention KVM: I/O of about 10 MB/s read/write -> I/O delay above 6
Proxmox KVM: I/O of about 50 to 80 MB/s read/write -> I/O delay doesn’t even hit 1, mostly around 0.1 to 0.2.

The hardware (Proxmox & UCS) is nearly identical:
2x Intel Xeon E5-2643 v4 @ 3.4 GHz (HT enabled)
192 GB RAM
Storage: Eternus DX80 with SAS hard drives and two 10 GBit fibre connections (multipath & iSCSI)

For now we have activated the writeback cache mode for all VMs; the I/O delays got slightly better, but they are still too high.

Normal I/O delays should be around 1/n, where n is the number of CPU cores, right?

I have already spent hours trying to find the bottleneck.
The only thing that could be a problem is the number of VMs, and even more so the number of CPU cores assigned to them:
there are 10 VMs with 26 and 28 assigned CPU cores on the two KVM hosts.

Any ideas?

Hello,

you seem to use some benchmark tool to measure the I/O bandwidth and the I/O delay; which one do you use?
If not, you can use something like
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
to test the write performance using the named test file.
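To get a rough read number as well, a possible counterpart (assuming the test file created above is still present; iflag=direct bypasses the page cache) is:
dd if=/root/testfile of=/dev/null bs=1G count=1 iflag=direct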

First check whether the problem is related to LVM or to QEMU. For this:

  1. Create a logical volume on your host system by hand using lvcreate --name test --size 1G vg_ucs and run the benchmark on the host system.
  2. Add a second disk to one of your VMs and perform the test inside that VM using that additional disk (see the sketch below).
  3. As you use iSCSI, please also set up a separate iSCSI target and perform the benchmark directly on that device, with no LVM at all.

If the 1st is way faster than the 2nd, it’s a QEMU problem.
If the 2nd is way faster than the 3rd, it’s an LVM issue.
Otherwise it is probably an iSCSI or network problem.
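For the first two steps, something along these lines should work; vg_ucs is the volume group named above, while the target name vdb and the virsh attach-disk invocation are only one possible way to attach the extra disk:

# step 1: benchmark a fresh LV directly on the host
lvcreate --name test --size 1G vg_ucs
dd if=/dev/zero of=/dev/vg_ucs/test bs=1M count=1024 oflag=direct

# step 2: attach the same LV as a second disk to a VM, then repeat the dd inside the guest
virsh attach-disk $VM /dev/vg_ucs/test vdb --sourcetype block --subdriver raw --cache none --persistent
# inside the guest:
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct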

What type of I/O controller have you configured in QEMU: ide, sata, virtio, scsi?
What cache setting have you configured in QEMU: none, writeback, writethrough, unsafe, default?
(You can use virsh dumpxml $VM to get that data; just post it here.)
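For reference, the interesting part of the dumpxml output is the <disk> section; an illustrative excerpt (the device path and LV name are made up here) might look like this, where the bus attribute answers the controller question and the cache attribute the caching one:

virsh dumpxml $VM | grep -A4 '<disk '
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/vg_ucs/vm01-disk0'/>
      <target dev='vda' bus='virtio'/>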

Can you also please check which I/O scheduler is used:
grep ' ' /sys/class/block/*/queue/scheduler

Hi Philipp,

nice to see you again here; my colleague and I attended the technical training at the summit with you :wink:

I will do the tests later, but here are some answers:

What type of I/O controller have you configured in QEMU: ide, sata, virtio, scsi?
As I/O controller we only use virtio.

What cache setting have you configured in QEMU: none, writeback, writethrough, unsafe, default?
I recently changed all 20 VMs to writeback; the performance became a little better, but it’s still too slow for this hardware.

Can you also please check which I/O scheduler is used:
grep ' ' /sys/class/block/*/queue/scheduler

root@kvm01:~# grep ' ' /sys/class/block/*/queue/scheduler
/sys/class/block/dm-35/queue/scheduler:noop deadline [cfq] 
/sys/class/block/dm-36/queue/scheduler:noop deadline [cfq] 
/sys/class/block/dm-37/queue/scheduler:noop deadline [cfq] 
/sys/class/block/dm-38/queue/scheduler:noop deadline [cfq] 
/sys/class/block/dm-39/queue/scheduler:noop deadline [cfq] 
/sys/class/block/dm-40/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sda/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdb/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdc/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdd/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sde/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdf/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdg/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdh/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdi/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdj/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdk/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdl/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sdm/queue/scheduler:noop deadline [cfq] 
/sys/class/block/sr0/queue/scheduler:noop deadline [cfq]

I see nothing obvious in your additional data:

  • cfq is used by default in UCS, which can delay I/O operations. You can run echo noop > /sys/class/block/$DEV/queue/scheduler to change it just for testing (see the sketch below this list). sdX are probably your iSCSI devices (use lsblk); dm-XX are the device-mapper nodes created by LVM (use ls -l /dev/mapper/vg_ucs-*).
  • As noted in my previous post, testing the different layers should narrow down where the issue is located.
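If you want to switch all of the sdX paths to noop in one go for a quick test (the change is not persistent across a reboot; adjust the glob if your device names differ):

for dev in /sys/class/block/sd[a-m]; do echo noop > "$dev/queue/scheduler"; done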

[quote=“pmhahn, post:2, topic:6004, full:true”]
Hello,

you seem to use some benchmark tool to measure the I/O bandwidth and the I/O delay; which one do you use?[/quote]
Basic dd jobs, and to check the system load and the delays I use top.

I also used fio; there the IOPS seemed to be okay, but I only tested directly on the KVM host. I will run the tests again to compare the performance between a VM and the hypervisor.
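For that comparison I will probably run something like the following fio job on the hypervisor and inside a VM (the file path and sizes are just examples):

fio --name=randwrite --filename=/root/fio-test --size=1G --bs=4k --rw=randwrite --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based --group_reporting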

This could probably be a reason.
Unfortunately we have a VM with three hard disks (three LVs on the hypervisor), which are bundled again with LVM inside the VM into one volume.
The really strange thing is that I can see this volume on the hypervisor; this only affects one VM.
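If the nested LVM turns out to be the problem, one thing I could try is keeping the host’s LVM from scanning the guest’s volumes via a global_filter in /etc/lvm/lvm.conf, followed by pvscan --cache to refresh the metadata. A sketch, assuming our physical volumes appear as multipath devices under /dev/mapper:

devices {
    # accept only the multipath PVs on the host, reject everything else (e.g. the guest's LVs)
    global_filter = [ "a|^/dev/mapper/mpath|", "r|.*|" ]
}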

I’ll try that asap.

I’ll try this with the test iSCSI target.
