Discussion:
[Qemu-discuss] virtio-console downgrades virtio-blk-pci performance
Feng Li
2018-09-30 05:25:58 UTC
Hi,
I found an obvious performance downgrade when virtio-console is combined
with virtio-blk-pci.

This phenomenon exists in nearly all QEMU versions and on every Linux
distro I tried (CentOS 7, Fedora 28, Ubuntu 18.04).

This is the disk command line:
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on

If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.

In the VM, if I rmmod virtio-console, the performance goes back to normal.

Any idea about this issue?

I don't know whether this is a QEMU issue or a kernel issue.


Thanks in advance.
--
Thanks and Best Regards,
Alex
Dr. David Alan Gilbert
2018-10-01 11:41:47 UTC
Post by Feng Li
Hi,
I found an obvious performance downgrade when virtio-console combined
with virtio-pci-blk.
This phenomenon exists in nearly all Qemu versions and all Linux
(CentOS7, Fedora 28, Ubuntu 18.04) distros.
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
In VM, if I rmmod virtio-console, the performance will back to normal.
Any idea about this issue?
I don't know this is a qemu issue or kernel issue.
It sounds odd; can you provide more details on:
a) The benchmark you're using.
b) The host and guest config (number of CPUs, etc.)
c) Why are you running it with iSCSI back to the same host - why not
just simplify the test back to a simple file?

Dave
Post by Feng Li
Thanks in advance.
--
Thanks and Best Regards,
Alex
--
Dr. David Alan Gilbert / ***@redhat.com / Manchester, UK
Feng Li
2018-10-01 14:58:24 UTC
Hi Dave,
My comments are in-line.
Post by Dr. David Alan Gilbert
Post by Feng Li
Hi,
I found an obvious performance downgrade when virtio-console combined
with virtio-pci-blk.
This phenomenon exists in nearly all Qemu versions and all Linux
(CentOS7, Fedora 28, Ubuntu 18.04) distros.
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
In VM, if I rmmod virtio-console, the performance will back to normal.
Any idea about this issue?
I don't know this is a qemu issue or kernel issue.
a) The benchmark you're using.
I'm using fio; the config is:
[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1

[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread
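
The randwrite figure quoted earlier presumably comes from a matching job
section; a minimal sketch, assuming the same global parameters with only rw
changed:

[randwrite]
stonewall
bs=4k
filename=/dev/vdb
rw=randwrite
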
Post by Dr. David Alan Gilbert
b) the host and the guest config (number of cpus etc)
The QEMU command is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
--enable-kvm -cpu host -smp 8
or: qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
host -smp 8

The result is the same with both.
Post by Dr. David Alan Gilbert
c) Why are you running it with iscsi back to the same host - why not
just simplify the test back to a simple file?
Because my iSCSI target can supply high IOPS.
With a slow disk, the performance downgrade would not be so obvious.
It's easy to see; you could try it.
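
In the spirit of Dave's point (c), one way to keep the IOPS high while taking
iSCSI and host storage out of the picture would be QEMU's built-in null block
driver. A rough, untested sketch:

-drive file=null-co://,format=raw,if=none,id=drive-virtio-disk0
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,write-cache=on

The null-co backend completes requests without touching any storage, so any
remaining IOPS drop would point at the virtio/interrupt path rather than the
backend.
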
Post by Dr. David Alan Gilbert
Dave
Post by Feng Li
Thanks in advance.
--
Thanks and Best Regards,
Alex
--
--
Thanks and Best Regards,
Feng Li(Alex)
Feng Li
2018-10-11 10:15:41 UTC
Adding Amit Shah.

After some tests, we found:
- The number of virtio-serial ports is inversely related to the iSCSI
virtio-blk-pci performance: the more ports, the lower the IOPS.
If we set the virtio-serial ports to 2 ("<controller
type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
is minimal (a rough QEMU command-line equivalent is sketched below).

- Using a local disk or a RAM disk as the virtio-blk-pci backend, the
performance downgrade is still obvious.
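
A rough QEMU command-line equivalent of the libvirt ports='2' setting,
assuming libvirt maps the ports attribute to the virtio-serial max_ports
property (default 31), would be something like:

-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,max_ports=2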


Could anyone give some help with this issue?
Post by Feng Li
Hi Dave,
My comments are in-line.
Post by Dr. David Alan Gilbert
Post by Feng Li
Hi,
I found an obvious performance downgrade when virtio-console combined
with virtio-pci-blk.
This phenomenon exists in nearly all Qemu versions and all Linux
(CentOS7, Fedora 28, Ubuntu 18.04) distros.
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
In VM, if I rmmod virtio-console, the performance will back to normal.
Any idea about this issue?
I don't know this is a qemu issue or kernel issue.
a) The benchmark you're using.
[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1
[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread
Post by Dr. David Alan Gilbert
b) the host and the guest config (number of cpus etc)
The qemu cmd is : /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
--enable-kvm -cpu host -smp 8
or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
host -smp 8
The result is the same.
Post by Dr. David Alan Gilbert
c) Why are you running it with iscsi back to the same host - why not
just simplify the test back to a simple file?
Because my ISCSI target could supply a high IOPS performance.
If using a slow disk, the performance downgrade would be not so obvious.
It's easy to be seen, you could try it.
Post by Dr. David Alan Gilbert
Dave
Post by Feng Li
Thanks in advance.
--
Thanks and Best Regards,
Alex
--
--
Thanks and Best Regards,
Feng Li(Alex)
--
Thanks and Best Regards,
Feng Li(Alex)
Amit Shah
2018-10-15 18:51:15 UTC
Post by Feng Li
Add Amit Shah.
- the virtio serial port number is inversely proportional to the iSCSI
virtio-blk-pci performance.
If we set the virio-serial ports to 2("<controller
type='virtio-serial' index='0' ports='2'/>), the performance downgrade
is minimal.
If you use multiple virtio-net (or blk) devices -- just register them, not
necessarily use them -- does that also bring the performance down? I
suspect it's the number of interrupts that get allocated for the
ports. Also, could you check whether MSI is enabled? Can you try with and
without? Can you also reproduce it if you have multiple virtio-serial
controllers with 2 ports each (totalling up to whatever number
reproduces the issue)?
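
One possible way to act on the two suggestions above - a sketch, not a
verified recipe - is to check the MSI-X state from inside the guest and then
re-run the test with MSI-X disabled on the virtio-serial device via the
virtio-pci "vectors" property:

# inside the guest: look for "MSI-X: Enable+" on the virtio devices
lspci -vv | grep -E 'Virtio|MSI-X'

# on the host: vectors=0 should make the device fall back to legacy INTx
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,vectors=0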

Amit
Post by Feng Li
- use local disk/ram disk as virtio-blk-pci disk, the performance
downgrade is still obvious.
Could anyone give some help about this issue?
Post by Feng Li
Hi Dave,
My comments are in-line.
Post by Dr. David Alan Gilbert
Post by Feng Li
Hi,
I found an obvious performance downgrade when virtio-console combined
with virtio-pci-blk.
This phenomenon exists in nearly all Qemu versions and all Linux
(CentOS7, Fedora 28, Ubuntu 18.04) distros.
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
In VM, if I rmmod virtio-console, the performance will back to normal.
Any idea about this issue?
I don't know this is a qemu issue or kernel issue.
a) The benchmark you're using.
[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1
[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread
Post by Dr. David Alan Gilbert
b) the host and the guest config (number of cpus etc)
The qemu cmd is : /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
--enable-kvm -cpu host -smp 8
or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
host -smp 8
The result is the same.
Post by Dr. David Alan Gilbert
c) Why are you running it with iscsi back to the same host - why not
just simplify the test back to a simple file?
Because my ISCSI target could supply a high IOPS performance.
If using a slow disk, the performance downgrade would be not so obvious.
It's easy to be seen, you could try it.
Post by Dr. David Alan Gilbert
Dave
Post by Feng Li
Thanks in advance.
--
Thanks and Best Regards,
Alex
--
--
Thanks and Best Regards,
Feng Li(Alex)
--
Thanks and Best Regards,
Feng Li(Alex)
Amit
--
http://amitshah.net/
Feng Li
2018-10-16 02:26:08 UTC
Hi Amit,

Thanks for your response.

See inline comments.
Post by Amit Shah
Post by Feng Li
Add Amit Shah.
- the virtio serial port number is inversely proportional to the iSCSI
virtio-blk-pci performance.
If we set the virio-serial ports to 2("<controller
type='virtio-serial' index='0' ports='2'/>), the performance downgrade
is minimal.
If you use multiple virtio-net (or blk) devices -- just register, not
necessarily use -- does that also bring the performance down? I
Yes. We just register the virtio-serial device and do not use it, and it
still brings the virtio-blk performance down.
Post by Amit Shah
suspect it's the number of interrupts that get allocated for the
ports. Also, could you check if MSI is enabled? Can you try with and
without? Can you also reproduce if you have multiple virtio-serial
controllers with 2 ports each (totalling up to whatever number that
reproduces the issue).
This is the full command:
/usr/libexec/qemu-kvm -name
guest=6a798fde-c5d0-405a-b495-f2726f9d12d5,debug-threads=on -machine
pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m
size=2097152k,slots=255,maxmem=4194304000k -uuid
702bb5bc-2aa3-4ded-86eb-7b9cf5c1e2d9 -drive
file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=74,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-drive file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=182,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=2,write-cache=on
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -vnc
0.0.0.0:100 -netdev user,id=fl.1,hostfwd=tcp::5555-:22 -device
e1000,netdev=fl.1 -msg timestamp=on -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5

qemu version: qemu-kvm-2.10.0-21

I guess MSI is enabled; I can see logs like:
[ 2.230194] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 3.556376] virtio-pci 0000:00:05.0: irq 24 for MSI/MSI-X

The issue is easy to reproduce using one virtio-serial controller with 31
ports, which is the default port number.
I think it's not necessary to reproduce it with multiple controllers.
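
To check the interrupt-count theory directly, it may also be worth counting,
inside the guest, how many interrupt vectors each virtio device actually got:

# inside the guest: one line per allocated virtio interrupt vector
grep virtio /proc/interrupts
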
Post by Amit Shah
Amit
Post by Feng Li
- use local disk/ram disk as virtio-blk-pci disk, the performance
downgrade is still obvious.
Could anyone give some help about this issue?
Post by Feng Li
Hi Dave,
My comments are in-line.
Post by Dr. David Alan Gilbert
Post by Feng Li
Hi,
I found an obvious performance downgrade when virtio-console combined
with virtio-pci-blk.
This phenomenon exists in nearly all Qemu versions and all Linux
(CentOS7, Fedora 28, Ubuntu 18.04) distros.
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 ", the virtio
disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
In VM, if I rmmod virtio-console, the performance will back to normal.
Any idea about this issue?
I don't know this is a qemu issue or kernel issue.
a) The benchmark you're using.
[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1
[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread
Post by Dr. David Alan Gilbert
b) the host and the guest config (number of cpus etc)
The qemu cmd is : /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
--enable-kvm -cpu host -smp 8
or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
host -smp 8
The result is the same.
Post by Dr. David Alan Gilbert
c) Why are you running it with iscsi back to the same host - why not
just simplify the test back to a simple file?
Because my ISCSI target could supply a high IOPS performance.
If using a slow disk, the performance downgrade would be not so obvious.
It's easy to be seen, you could try it.
Post by Dr. David Alan Gilbert
Dave
Post by Feng Li
Thanks in advance.
--
Thanks and Best Regards,
Alex
--
--
Thanks and Best Regards,
Feng Li(Alex)
--
Thanks and Best Regards,
Feng Li(Alex)
Amit
--
http://amitshah.net/
--
Thanks and Best Regards,
Feng Li(Alex)