
2017-03-06 Problem FOUND and RESOLVED!

After the problem became reproducible and we had figured out that it was related to I/O, we tested several different options to lower the I/O load. Finally we discovered that XenServer uses LVM (Linux Logical Volumes) for its running instances instead of qcow files. So we tried LVM for the beroNet hypervisor too, and voilà, the problem disappeared!

It took some time to rewrite all the code from qcow files to LVM, as it involved nearly every function. Now it is done and it has proven to be stable. To update from version 1.XX to the newest version 2.XX, it is necessary to follow these steps:

  1. Back up all VMs to a sufficiently large USB drive
  2. Update to 2.XX via the recovery stick: 1.4 Factory Reset and Recovery (Appliance)
  3. Convert the backups to the new backup format in the Backup GUI of the 2.XX hypervisor
  4. Restore the backups
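To illustrate what the qcow-to-LVM switch means at the storage level, here is a dry-run sketch of moving a VM disk from a qcow file onto an LVM logical volume. The volume group `vg0`, the VM name and the size are assumptions for illustration, not the actual beroNet layout, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch: each step is echoed so the plan can be reviewed first.
PLAN=""
plan() { PLAN="$PLAN$* ; "; printf '+ %s\n' "$*"; }

VG=vg0            # assumed volume group name
VM=alpine-test    # assumed VM name
SIZE=8G           # assumed virtual disk size

plan lvcreate -L "$SIZE" -n "$VM" "$VG"                      # create the logical volume
plan qemu-img convert -p -O raw "$VM.qcow2" "/dev/$VG/$VM"   # copy the disk contents onto it
```

The VM would then boot from `/dev/vg0/alpine-test` directly, so guest I/O no longer goes through the qcow file layer.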

NOTE: LVM is much more performant, so the update will not only bring more stability, it will also boost the speed of the VMs.

2017-02-05 Problem finally reproducible!

The main problem in identifying the cause of the hypervisor issue was that it was not reproducible at all. Most of the time the crash happened after several weeks of use. Now we have found a setup that reproduces the freeze within one hour. The idea was that a so-called race condition occurred somewhere in the hypervisor kernel. Race conditions cause seemingly random crashes, as in our case on a timescale of a few weeks. The key is to create enough load that the race condition is triggered faster. We had tried several things to provoke it, but without success until now. By using fio, an I/O load-generating tool, we were finally able to reproduce the crash within one hour, which helped us test different kernels and settings.

The setup for identifying the race condition

Two VMs were set up:

  1. Alpine Linux 
  2. Windows 10

On the hypervisor itself we started the fio tool (https://github.com/axboe/fio) with the command line: 


fio --name=test --rw=randrw --size=800M --numjobs=5 --time_based --runtime=8h --direct=1 --alloc-size=4096

In the Alpine VM we started another fio instance:

fio --name=test --rw=randrw --size=300M --numjobs=5 --time_based --runtime=8h --direct=1 --alloc-size=512

This means five threads generate random read/write accesses to the SSD, both inside a VM and outside in dom0, i.e. on the hypervisor itself. The Windows VM is just a test VM to check whether a second VM keeps running beside the main VM.

Within at most one hour, this setup tended to produce a kernel panic, freeze the hypervisor, or both.

What is the problem now?

After trying out several things, we are still not sure what the exact cause is. We can say that the problem appears to be related to the use of swap and memory overcommitting, and probably the CFQ ("completely fair queueing") I/O scheduler in the Linux kernel. After disabling swap and memory overcommitting and enabling the noop I/O scheduler, fewer crashes and lockups occurred.

How to optimize I/O?

You can manually make these changes to your existing hypervisor installation until 1.03 is released, which will have these optimizations built in. First you need to set a root password (Management → Hypervisor Settings, change root pw). Then you can log in to the hypervisor as root via the console or via SSH.

Disabling Swap

To disable swap permanently, edit the file /etc/fstab and put a # before the swap line, so that it finally looks like:

#/dev/sda2 swap swap defaults 0 0
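The edit above can be done with a single sed command. The sketch below demonstrates it on a temporary copy of an fstab (the sample content is an assumption); on the appliance you would run the same sed against /etc/fstab itself, and `swapoff -a` would then disable swap for the running system without a reboot:

```shell
#!/bin/sh
# Demonstrate commenting out the swap line on a temporary fstab copy.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/sda1 / ext4 defaults 0 0
/dev/sda2 swap swap defaults 0 0
EOF

# Prefix the swap line with '#' ('&' re-inserts the matched text).
sed -i 's|^/dev/sda2 swap|#&|' "$FSTAB"
cat "$FSTAB"

# On the appliance (as root, not run here):
#   sed -i 's|^/dev/sda2 swap|#&|' /etc/fstab
#   swapoff -a    # disable swap immediately
```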

Enable NOOP I/O Scheduler

You need to edit the file /boot/extlinux.cfg and modify the kernel parameters: the option "elevator=noop" should be added. The line should look like:

APPEND xen.gz dom0_mem=1024M --- vmlinuz-grsec root=UUID=f75fac6a-cf4d-48bd-89c5-5a9fffd0e23f modules=sd-mod,usb-storage,ext4 elevator=noop quiet --- initramfs-grsec
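This edit can also be scripted. The sketch below applies it to a temporary copy of the APPEND line shown above (on the appliance you would point sed at /boot/extlinux.cfg); the commented sysfs write is the standard way to switch the scheduler at runtime without a reboot, assuming the SSD is /dev/sda:

```shell
#!/bin/sh
# Demonstrate inserting elevator=noop into the kernel arguments on a copy.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
APPEND xen.gz dom0_mem=1024M --- vmlinuz-grsec root=UUID=f75fac6a-cf4d-48bd-89c5-5a9fffd0e23f modules=sd-mod,usb-storage,ext4 quiet --- initramfs-grsec
EOF

# Insert "elevator=noop" before "quiet" in the dom0 kernel arguments.
sed -i 's/ quiet / elevator=noop quiet /' "$CFG"
grep -o 'elevator=noop quiet' "$CFG"

# Runtime switch without reboot (as root; sda is an assumed device name):
#   cat /sys/block/sda/queue/scheduler    # shows e.g. "[cfq] deadline noop"
#   echo noop > /sys/block/sda/queue/scheduler
```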

Disable Memory Overcommitting

To disable memory overcommitting, a sysctl file should be created. We call it /etc/sysctl.d/01-overcommit.conf and fill it with the line:

vm.overcommit_memory=2
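As a sketch, the file can be created and applied as follows. The demonstration writes to a temporary path; the commented lines show the equivalent steps on the appliance, where `sysctl -p FILE` loads the setting without a reboot (vm.overcommit_memory=2 means the kernel refuses allocations beyond its commit limit instead of overcommitting):

```shell
#!/bin/sh
# Demonstrate the sysctl drop-in on a temporary file.
CONF=$(mktemp)
printf 'vm.overcommit_memory=2\n' > "$CONF"
cat "$CONF"

# On the appliance (as root, not run here):
#   printf 'vm.overcommit_memory=2\n' > /etc/sysctl.d/01-overcommit.conf
#   sysctl -p /etc/sysctl.d/01-overcommit.conf   # apply without reboot
```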


2017-01-10 Current Hypervisor State is BETA

We are experiencing a major problem with the beroNet Hypervisor operating system. The problem results in a frozen appliance after several days, sometimes weeks, of production use. The freeze is the result of a Linux kernel panic that taints the kernel; after a while the kernel locks up. Unfortunately we did not detect this issue during testing, as it does not happen immediately. In fact, the issue was not present in version 0.9.7 and below. We know this because all customers who use version 0.9.7 and the beroNet Cloud have uptimes of more than 100 days, sometimes even more than 200 days.

We wondered what big differences between 0.9.7 and 1.0 could result in such a major problem, and we have the following suspects:

  • Linux Kernel of dom0
  • XEN Kernel 
  • QEMU toolstack
  • Alpine Linux version
  • VM Format (especially Disk Format)

For roughly two months we have been investigating these possibilities step by step by downgrading each component back to its 0.9.7 state. The new version 1.01 is built completely on the base system of 0.9.7, which is why the upgrade to 1.01 requires an installation via the recovery stick. The whole system needs to be reinstalled, but there is an option to keep the VMs.

Even after downgrading the whole Alpine OS, the problem still persists. We currently believe that the new VM format introduced with version 0.9.8 triggers the problem. This means that VMs created on 0.9.8 or later already have the new format; even if the hypervisor is downgraded, the format persists.

There are currently two options to go back to the 0.9.7 VM format: either reinstall the VM under 0.9.7 or 1.02, or use the 1.02 disk-conversion function to convert the VM's disk to qcow.
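Conceptually, such a disk conversion amounts to a qemu-img run like the dry-run sketch below. This is only an illustration of the idea, not the actual 1.02 conversion function; the file names are placeholders, qemu-img autodetects the source format, and the command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: echo the conversion command instead of running it.
PLAN=""
plan() { PLAN="$PLAN$* ; "; printf '+ %s\n' "$*"; }

# Convert a VM disk (placeholder name) back to the qcow2 format,
# showing progress; the source format is detected automatically.
plan qemu-img convert -p -O qcow2 vm-disk.img vm-disk.qcow2
```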

As we are not yet completely sure whether these steps finally resolve the appliance-freezing issue, we have decided to set the hypervisor's state to beta and not recommend it for production use.

NOTE: Instead of using the hypervisor, any operating system can be installed on the appliance, even other hypervisors such as VMware or Microsoft's Hyper-V.

We are searching intensively for a final solution so that we can continue enhancing the hypervisor. Anyone who wants to contribute can still use the hypervisor and provide debug files that we can use for analysis.
