Running KVM in a Zone on illumos (OpenIndiana)

Preamble

This blog post documents how to run a Linux VM inside QEMU inside a KVM-branded zone. While illumos can run almost any application these days, some more exotic runtimes like Mono have dropped Solaris and thus illumos support. To run applications based on these frameworks you will need either an lx-branded zone or a Linux VM. As lx-branded zones still have not made it upstream into the illumos sources (hint @illumos-developers), I will use a VM to host some pre-made Docker containers.

Preparations

We will need an OpenIndiana host with the following packages installed:

pkg install system/qemu/kvm system/zones/brand/kvm
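To make sure the install went through and that hardware virtualization is actually usable, a quick sanity check can help; this is just a sketch of what I would look for, not an official procedure:

# Both packages should show up as installed
pkg list system/qemu/kvm system/zones/brand/kvm
# The kvm driver should be loaded and expose its device node
ls -l /dev/kvm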

Additionally you will need the ISO of your Linux distribution of choice; mine is a custom Arch Linux ISO with ZFS pre-installed. Once I have that image finished to my liking I will blog about it too. I will use a zvol as the disk backend to leverage all ZFS features. File-based images are also possible, although they add no additional features compared to zvols.
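The main reason I prefer a zvol is that the usual ZFS workflow applies to the VM disk as well. As a small illustration (the volume itself is only created in the next section, and the dataset names are the ones used there), you could snapshot the disk before an experiment inside the VM and roll back if it goes wrong:

# Snapshot the VM disk before doing something risky inside the guest
zfs snapshot rpool/zones/arch/disk0@before-upgrade
# Roll back to that state later (with the zone/VM shut down)
zfs rollback rpool/zones/arch/disk0@before-upgrade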

Zone creation

First let's create a zone for a VM that will install from the ISO. In this step of the set-up we will create:

* a virtual NIC connected to the host's network interface (rge0)
* a ZFS block device (volume) to host the VM's disk
* the zone that will host our VM

### Variables
ZONE_NAME="arch" # Limited to 65 Characters Alphanumeric
VNIC_NAME="${ZONE_NAME}0" # Limited to 16 Characters Alhpanumeric. Must end in a Number
ZFS_ZONES_PATH="rpool/zones" # ZFS Dataset under which all zones will be created
ZONES_PATH="$(zfs get -Hp -o value mountpoint ${ZFS_ZONES_PATH})" # Path in the VFS where all child Datasets will be located
PARENT_NIC="rge0"
VM_DISK_SIZE="30G"
VM_DISK_NAME="disk0" # Limited to Alphanumerc Characters
VM_IP_CIDR="192.168.10.24/24"
ISO_PATH="/rpool/iso/archlinux-2019.12.iso"
#### END Variables

dladm create-vnic -l ${PARENT_NIC} ${VNIC_NAME}
zfs create "${ZFS_ZONES_PATH}/${ZONE_NAME}"
zfs create -V "${VM_DISK_SIZE}" "${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}"
zonecfg -z ${ZONE_NAME} <<EOF
create -b
set brand=kvm
set zonepath=${ZONES_PATH}/${ZONE_NAME}
set ip-type=exclusive
add net
    set allowed-address=${VM_IP_CIDR}
    set physical=${VNIC_NAME}
end
add device
    set match=/dev/zvol/rdsk/${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}
end
add attr
    set name=bootdisk
    set type=string
    set value=${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}
end
add attr
    set name=vnc
    set type=string
    set value=on
end
add fs
    set dir=${ISO_PATH}
    set special=${ISO_PATH}
    set type=lofs
    add options ro
    add options nodevices
end
add attr
    set name=cdrom
    set type=string
    set value=${ISO_PATH}
end
EOF
zoneadm -z ${ZONE_NAME} install
zoneadm -z ${ZONE_NAME} boot
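If you want to double-check what was just created before going further, the usual tools work as expected; a minimal check, assuming the variable names from above:

# The zone should be listed as installed/running
zoneadm list -cv
# The VNIC should hang off the physical interface
dladm show-vnic
# The volume backing the VM disk should exist
zfs list -t volume -r "${ZFS_ZONES_PATH}/${ZONE_NAME}"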

Now you have a VM booting from the ISO in a zone. To connect to the VM's serial console, use

zlogin -C ${ZONE_NAME}
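To detach from the console again without shutting the VM down, press Enter followed by the zlogin escape sequence:

~.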

VNC access

To access the VNC console you will need to forward the socket located under /tmp/vm.vnc inside the zone to wherever you want to access it from. I am using the host's network interface so that I can access it via my local network/VPN. The socat utility is not provided by any package in OpenIndiana, but there is a simplified version bundled with the zone brand.

/usr/lib/brand/kvm/socat "${ZONES_PATH}/${ZONE_NAME}/root/tmp/vm.vnc"
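This exposes the UNIX socket on the host so a regular VNC client can reach it. If you would rather not expose VNC on your network at all, you can also tunnel it over SSH instead; the port number below (5905) is purely a placeholder, use whatever port the socat wrapper actually reports, and replace user/host with your own:

# 5905 is only an example port; use the one the socat wrapper reports
ssh -L 5905:localhost:5905 user@openindiana-host
# Then point any VNC client on your workstation at localhost:5905
vncviewer localhost:5905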

And there you go. The VM will boot from the hard disk first and, if nothing is installed there yet, from the CD-ROM. You can also modify other settings like memory allocation etc. The documentation is thanks to the people behind the OmniOS distribution. Full attribute docs here
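Once the installation inside the VM is finished, you may want to detach the ISO again so the zone no longer depends on it. A sketch of how that could look with zonecfg, assuming the configuration and variable names from above:

# Remove the cdrom attribute and the lofs mount of the ISO
zonecfg -z ${ZONE_NAME} "remove attr name=cdrom; remove fs dir=${ISO_PATH}"
# Restart the zone so the VM comes up without the ISO attached
zoneadm -z ${ZONE_NAME} reboot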