About

The following is a tutorial on how to set everything up. The most difficult part for me was partitioning the disk storage, as there was no Virtualizor tutorial for that. At the bottom of this tutorial are important notes about "CPU units", "CPU %", etc.; without following those instructions, KVM may not work, so I advise reading it in full. Also note the 18 GB RAM maximum per KVM VPS.

Virtualizor in this setup allows managing OpenVZ and KVM VPSs on one physical server. It can back up OpenVZ VPSs and restore them using vzrestore. It can easily import an OpenVZ VPS ***** restored with vzrestore into Virtualizor (Import menu item, very quick).

CentOS 6.x 64bit + disk partitioning

At http://www.virtualizor.com/wiki/Multi_Virtualization it is mentioned that for KVM+OpenVZ virtualization the requirement is a "CentOS 6 Node to run KVM + OpenVZ". So I installed CentOS 6 64bit, and in partitioning I selected 50 GB for / and 32 GB for swap (the server has 64 GB RAM; I do not know if this is a good idea). KVM VPSs are said to be backed up inside the volume group, not in the / mount point. After OS installation the result was:

df -h
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              50G  *G   *G  *% /
tmpfs                  32G     0   32G   0% /dev/shm
fdisk -l
Code:
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        6528    52428800   83  Linux
/dev/sda2            6528        9661    25165824   82  Linux swap / Solaris
/dev/sda3            9661        9661        1024   83  Linux
/dev/sda4            9661       72582   505410560    5  Extended
/dev/sda5            9661       72582   505409536   8e  Linux LVM
So there is some LVM. I listed it using the command "vgs;lvs;vgdisplay":
Code:
  VG   #PV #LV #SN Attr   VSize   VFree
  vg     1   2   0 wz--n- 481.97g 160.00m
So I wanted to distribute the 481 GB of disk space somehow. I used thin LVM because it minimizes used disk space and allows overprovisioning. If you need it, here is a comparison of storage types, covering the ones Virtualizor supports (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_storage).

So here I created new thin pools inside the volume group "vg":

lvcreate --size 50G --type thin-pool --thinpool thin_pool vg
lvcreate --size 270.00G --type thin-pool --thinpool thin_pool_large vg
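To verify that both pools were created (in the "lvs" output, the Attr column of a thin pool starts with "t"):
Code:
lvs vg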
Create the file systems:
mkfs.ext4 /dev/vg/thin_pool
mkfs.ext4 /dev/vg/thin_pool_large
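Note: some LVM versions refuse to run mkfs on the pool device itself. If that happens to you, create a thin volume inside the pool and format that instead; a minimal sketch (the name "vz_data" and the 300G virtual size are my own examples, not from the Virtualizor docs):
Code:
# thin volume inside the pool; its virtual size may exceed the pool size
lvcreate --virtualsize 300G --thin --name vz_data vg/thin_pool_large
mkfs.ext4 /dev/vg/vz_data
In that case, use /dev/vg/vz_data instead of /dev/vg/thin_pool_large in the fstab line below.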

The 50G pool was aimed at KVM and the 270G one at OpenVZ. I was told that "In-order to make backups work for Kvm, you will need free space in your VG of around 26Gb", so that is why I have not assigned all the free space (481 GB) of the volume group (VG). It is probably better to keep "some" VG disk space unassigned to the LVs anyway, because a thin pool currently cannot be shrunk (error: "Thin pool volumes cannot be reduced in size yet."). So it is better to create much smaller volumes and then carefully add space (lvresize --size 123G /dev/vg/thin_pool; resize2fs /dev/vg/thin_pool # resize2fs makes df see the new size) than to be unable to decrease the size later and have to back up the VPSs, delete the thin pool, create a new one and restore the VPSs. A sketch of the grow workflow follows below.
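A minimal sketch of that grow-only workflow (the +10G is just an example; run resize2fs on whatever device carries the ext4 file system):
Code:
vgs vg                                    # the VFree column = space still unassigned in the VG
lvextend --size +10G /dev/vg/thin_pool_large
resize2fs /dev/vg/thin_pool_large         # grow ext4 so df sees the new size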

I wanted both the OpenVZ and the KVM storage to use thin LVM (so it allows overprovisioning of the disk space and snapshots/backups without downtime). The lvcreate commands above allow overprovisioning/overselling of the disk space on both virtualization volumes; a manual illustration follows below.
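Virtualizor creates the per-VPS thin volumes itself; purely to illustrate the overprovisioning, thin volumes with a combined virtual size larger than the pool could be created manually (the names "test101"/"test102" are hypothetical):
Code:
lvcreate --virtualsize 40G --thin --name test101 vg/thin_pool
lvcreate --virtualsize 40G --thin --name test102 vg/thin_pool   # 80G virtual on a 50G pool
lvs vg   # the Data% column shows how much of each pool is really used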

Then I wanted to mount /dev/vg/thin_pool_large as the /vz file system, because OpenVZ requires its data to be at /vz.

vi /etc/fstab

added line:
/dev/vg/thin_pool_large /vz ext4 defaults 1 1

and ran: mkdir /vz; mount -a

then "df -h" shown new mount point /vz

I was told that, besides the OpenVZ logical volume, I do not need to mount the KVM one.

Virtualizor installation

Following http://www.virtualizor.com/wiki/Install_KVM, I executed the Virtualizor installation on CentOS:

wget -N http://files.virtualizor.com/install.sh
chmod 0755 install.sh
./install.sh [email protected] kernel=kvm interface=em1

em1 is my main network interface, seen at the top of the "ifconfig" command output. Yours may be different, for example eth0.
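If you are unsure about the interface name, list the interfaces first (both commands exist on CentOS 6):
Code:
ip -o link show   # one line per interface
ifconfig -a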

The installation seemed frozen at one point, but it did finish after many minutes.

Multi virtualization (OpenVZ+KVM in this case)

I logged into Virtualizor and followed http://www.virtualizor.com/wiki/Multi_Virtualization

Enabling it failed at first, but after a reboot and another attempt it worked: https://internetlifeforum.com/virtualisation/8335-virtualizor-fixed-issue-enabling-openvz-kvm-multi-virtualization/

I created new Storage in Virtualizor / Storage / Add Storage
Name: OpenVZ
Server: All servers
Storage Path: /vz
Storage Type: OpenVZ
File Format: QCOW2
Submit, then another one:
Name: KVM
Server: All servers
Storage Path: /dev/vg/thin_pool
Storage Type: Thin LVM
File Format: RAW
Primary Storage: tick

Networking - Adding two IPv4 subnets

It worked. I used an IP subnet calculator, http://www.calculator.net/ip-subnet-calculator.html or http://www.subnet-calculator.com (if you have, for example, a /27 subnet, set "Mask Bits" to 27), and it provided me with the correct IPs that I needed to add in Virtualizor / IP Pool / Create IP Pool (see also the ipcalc sketch after this list). Example:
Name: 1.2.3.192/27
Gateway: 1.2.3.193 (the gateway is one IP above the network address)
Netmask: 255.255.255.224 (the last octet depends on the subnet mask bits; see the calculators above)
Nameserver 1: 8.8.8.8 (Google's open DNS)
Nameserver 2: 8.8.4.4 (Google's open DNS)
Server: click All servers or Localhost
Use Routed network: I ticked this. I am unsure about it, but in the case of OpenVZ 7, failing to tick it may cause KVM VPSs to fail to start!
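As an alternative to the web calculators, the "ipcalc" utility shipped with CentOS can derive the same values; shown here for the example subnet above:
Code:
ipcalc -bmn 1.2.3.192/27   # prints NETWORK=, NETMASK= and BROADCAST= lines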

Adding new OS templates

Virtualizor / Media / Template browser (templates for KVM, Xen, OpenVZ and Virtuozzo are there, free to download!)

Adding new VPS plan:

Virtualizor / Plans (more details: http://www.virtualizor.com/wiki/Adding_A_Plan)
IMPORTANT: in VPS plans and when creating new KVM VPSs, leave the option "CPU %" set to "0" and "CPU units" set to "1000", otherwise the KVM VPS will probably not work (it will be 99% CPU overloaded) or will not start at all.
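To double-check which CPU scheduling values a KVM VPS actually received, "virsh schedinfo" can report them (a standard libvirt command; I assume the field names are the same on this Virtuozzo/OpenVZ build):
Code:
virsh schedinfo v1001   # cpu_shares should correspond to "CPU units"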

Debugging

- KVM VPS logs are in /var/log/libvirt/qemu/ (for example, to watch the first VPS's log file: tail -f /var/log/libvirt/qemu/v1001.log)
- See what Virtualizor reports (errors): Virtualizor / Tasks
- KVM VPS configuration files are in /etc/libvirt/qemu

Command line tools

"virsh" can be used

list KVM VPSs: virsh list
show KVM VPS details: virsh dominfo v1001
show/edit a KVM VPS configuration file: virsh edit v1001
apply configuration changes: virsh define /etc/libvirt/qemu/v1001.xml
virsh reboot/shutdown/start v1001
"freeze" VPS keep in RAM, will not survive reboot: virsh suspend/resume v1001
save VPS state into disk and remove from RAM, survive reboot: virsh save/restore v1001 /path/to/dumpfile

Also "prlctl", example:
prlctl list -n v1009 -i
shows various detals of the KVM VPS

BUGS! READ THIS

Multi virtualization is in BETA (and I guess it will stay like that). There are serious bugs in KVM on the OpenVZ 2.6.32 kernel (the setup described on this page) - functional bugs, not security ones, as far as I am aware:

- A KVM VPS will start only if you set "CPU %" to "0" and "CPU units" to "1000"!!! These variables are defined inside Virtualizor, on the "VPS Plan" page and on the VPS configuration editing page. If misconfigured, the VPS will be 99% CPU overloaded or will not start at all.
- Be careful about "Enable RDP" inside the VPS configuration and VPS Plan settings; the VPS may not boot.
- Unable to set more than 18000 MB of RAM for a KVM VPS, else it shuts down about 1 minute after boot. Here is relevant info about that:
The OpenVZ kernel kills a KVM VM process if it finds that system memory is too low and the VM has a high OOM (Out Of Memory) score, which can be checked for a process using the following command:
cat /proc/PID/oom_score

If you don't want the kernel to kill processes with a high score, you can set
vm.overcommit_memory to 2, both in /etc/sysctl.conf and in /proc/sys/vm/overcommit_memory (a concrete sketch follows at the end of this section).

Note that this can cause the whole node to halt (or reboot) if there is high memory usage.

You can search more about OOM_KILLER and configure your server accordingly.
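A concrete sketch of the checks and the fix described above (the VPS name v1001 is from the earlier examples, and I assume it appears on the qemu process command line; note the real sysctl name is vm.overcommit_memory):
Code:
# find the qemu process of the VPS and print its OOM score
PID=$(pgrep -f v1001 | head -n 1)
cat /proc/$PID/oom_score
# mode 2 = no overcommit: allocations beyond the limit fail
# instead of triggering the OOM killer (persist + apply now)
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
sysctl -p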
----
Do you have personal experience / fixes / ideas? Please kindly share.