These instructions have been tested on Ubuntu 9.10 (Karmic) 64-bit. Skip right to the instructions if you're short on time.

After several years as a happy Xen user, I've recently had to switch to an alternative virtualization solution. My colleague Arun (@iamclovin) struggled for a week with Xen VMs that locked up on Hardy; we'd had much success with Hardy and Xen before, so we attributed the lockups to a hardware problem, since these were our first blade servers.

Out of ideas, we tried Karmic (Ubuntu 9.10), only to discover that Xen support via the apt package system is gone. I went down the path of compiling a paravirt_ops Dom0 kernel (this article was very useful), and although I eventually got it working, the process took far too long to be practical.

With KVM gaining official support from Ubuntu as its virtualization solution, I ended up ditching Xen and switching to KVM for these new servers on Karmic. The rest of this entry is a step-by-step guide to setting up KVM VMs on an Ubuntu server; I'm writing it down because, like all wikis, the Ubuntu KVM wiki has grown a little too organically to be useful.

Preparing a host server for KVM

  1. Update and upgrade apt packages (use your own discretion on whether this is necessary):

    aptitude update && aptitude dist-upgrade
  2. Check whether CPU supports hardware virtualization:

    egrep '(vmx|svm)' --color=always /proc/cpuinfo

    You should see lines with either "vmx" or "svm" highlighted.
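    If you'd rather get a count than eyeball highlighted output, this variant prints the number of logical CPUs advertising the vmx (Intel VT) or svm (AMD-V) flag; 0 means hardware virtualization is unavailable:

```shell
# Count cpuinfo lines carrying a hardware-virtualization flag;
# there's one flags line per logical CPU, so 0 means no VT/AMD-V.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```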

  3. Install these packages:

    aptitude install kvm libvirt-bin ubuntu-vm-builder bridge-utils

    If you see a FATAL: Error inserting kvm_intel message during installation, it means that virtualization is not enabled in your machine's BIOS. You'll need to reboot your machine, enter the BIOS setup and enable virtualization (you'll have to hunt for the option).

    After enabling virtualization in the BIOS and rebooting, run:

    modprobe kvm-intel

    There should be no error shown (in fact, no console response).
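    As an extra sanity check, you can confirm the module actually loaded (on AMD hardware the module is kvm_amd instead of kvm_intel):

```shell
# List loaded kernel modules and keep the kvm-related ones;
# expect to see both kvm and kvm_intel (or kvm_amd on AMD CPUs).
lsmod | grep '^kvm'
```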

  4. Optionally, install virt-top, a top-like tool for your VMs:

    aptitude install virt-top
  5. Verify that you can connect to the hypervisor:

    virsh -c qemu:///system list

    You should see something like this:

    Connecting to uri: qemu:///system
     Id Name                 State
     ----------------------------------
  6. Set up a network bridge on the server for VMs. Edit /etc/network/interfaces so it looks like this (use your own IPs):

    auto lo
    iface lo inet loopback
    
    auto eth0
    iface eth0 inet manual
    
    auto br0
    iface br0 inet static
     address 192.168.1.222
     netmask 255.255.255.0
     network 192.168.1.0
     broadcast 192.168.1.255
     gateway 192.168.1.167
     bridge_ports eth0
     bridge_stp off
     bridge_fd 9
     bridge_hello 2
     bridge_maxage 12
     bridge_maxwait 0
  7. Make sure that you have a direct console to the server because you're going to restart networking:

    /etc/init.d/networking restart
  8. Verify that your changes took effect with ifconfig. You should see two entries like these:

    br0       Link encap:Ethernet  HWaddr 00:11:22:33:44:55
              inet addr:192.168.1.222  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::223:aeff:fefe:1f14/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1099 errors:0 dropped:0 overruns:0 frame:0
              TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:74665 (74.6 KB)  TX bytes:6223 (6.2 KB)
    
    eth0      Link encap:Ethernet  HWaddr 00:66:77:88:99:00
              inet6 addr: fe80::223:aeff:fefe:1f14/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:4939 errors:0 dropped:0 overruns:0 frame:0
              TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:532798 (532.7 KB)  TX bytes:5585 (5.5 KB)
              Interrupt:36 Memory:da000000-da012800
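    You can also check the bridge with brctl (part of the bridge-utils package you installed earlier); eth0 should show up as a member interface of br0:

```shell
# Confirm eth0 is attached to the bridge; grep exits non-zero if not.
brctl show br0 | grep -w eth0
```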

Setting up a VM

  1. Set up a VM with vmbuilder:

    vmbuilder kvm ubuntu \
        -v \
        --suite=karmic \
        --libvirt=qemu:///system \
        --arch=amd64 \
        --cpus=2 \
        --mem=2048 \
        --swapsize=4096 \
        --rootsize=20480 \
        --flavour=server \
        --hostname=billiejean \
        --ip=192.168.1.240 \
        --mask=255.255.255.0 \
        --net=192.168.1.0 \
        --bcast=192.168.1.255 \
        --gw=192.168.1.167 \
        --dns='202.157.163.157 202.157.131.118' \
        --bridge=br0 \
        --mirror=http://archive.ubuntu.com/ubuntu \
        --components='main,universe' \
        --addpkg=openssh-server \
        --user=administrator \
        --pass=icanhaspasswd \
        --dest=/root/vm-billiejean \
        --tmpfs=-

    The options you need to care about are:

    1. suite: Version of Ubuntu to install (e.g. karmic, hardy).
    2. cpus: Number of CPUs to assign to VM.
    3. mem: Amount of RAM in MB to assign to VM.
    4. swapsize: Size of swap in MB of VM.
    5. rootsize: Size of root filesystem in MB of VM.
    6. flavour: The "flavour" of kernel to use in the VM. Either "virtual" or "server".
    7. hostname: Hostname of VM.
    8. ip: IP address of VM.
    9. mask: Netmask of VM.
    10. net: Network of VM.
    11. bcast: Broadcast address of VM.
    12. gw: Gateway of VM.
    13. dns: DNS server(s) for VM.
    14. addpkg: APT packages to install in the VM. openssh-server is needed so that we can log in to the VM to set up the virsh console.
    15. user and pass: User account that's created for you to access the VM.
    16. dest: Destination directory on server where VM disk image will reside.
  2. If your VM is created successfully, there'll be a config file for the VM in /etc/libvirt/qemu/ (e.g. /etc/libvirt/qemu/billiejean.xml), and a disk image in the directory specified in the --dest option (e.g. /root/vm-billiejean/disk0.qcow2).
  3. You can verify that it works by starting the VM and SSHing into it (virsh console will not work yet).

    virsh start billiejean

Converting Disk Images to LVM Logical Volumes

Now we have the VM set up, but it's running off a disk image. For better performance, run it off an LVM logical volume instead; this improves disk I/O.

vmbuilder is supposed to support the --raw option for writing the VM to a block device (such as an LVM logical volume), but I've had no success with it (and neither has Mark Imbriaco, sysadmin of 37signals: http://twitter.com/markimbriaco/status/7437688341 and http://twitter.com/markimbriaco/status/7437699338). Instead, we're going to convert the disk image using qemu-img and write the bits into an LVM logical volume.

  1. Stop the VM if it's running:
    virsh shutdown billiejean
  2. Convert the VM's qcow2 (QEMU image format) disk image to a raw disk image:
    qemu-img convert disk0.qcow2 -O raw disk0.raw
  3. Create a logical volume to house the VM, making sure it's big enough for the VM's rootsize and swapsize options:
    lvcreate -L 24G -n <logical_volume_name> <volume_group_name>
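    The 24 GB figure isn't arbitrary: it's the sum of the --rootsize and --swapsize values passed to vmbuilder earlier. A quick sketch of the arithmetic, using the values from the example invocation:

```shell
ROOTSIZE_MB=20480   # --rootsize from the vmbuilder invocation
SWAPSIZE_MB=4096    # --swapsize from the vmbuilder invocation
TOTAL_MB=$((ROOTSIZE_MB + SWAPSIZE_MB))
echo "${TOTAL_MB} MB"   # 24576 MB, i.e. exactly 24 GB
```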
  4. Copy raw image into the logical volume:
    dd if=disk0.raw of=/dev/<volume_group_name>/<logical_volume_name> bs=1M

    This will take a while (the bigger your image, the longer it takes).

  5. Edit the VM's config so that it uses your new logical volume:
    virsh edit billiejean

    Change <disk> to point to the logical volume:

    <disk type='block' device='disk'>
      <source dev='/dev/<volume_group_name>/<logical_volume_name>'/>
      <target dev='hda' bus='ide'/>
    </disk>
  6. Start up the VM. You might want to rename the original disk0.qcow2 image first, just to make sure your VM isn't still using it.
  7. Once you're sure your VM is running off your LVM logical volume, you can delete or backup the original qcow2 disk image.
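Incidentally, if disk space for the intermediate raw file is tight, steps 2 and 4 can be collapsed into one: qemu-img can write its raw output straight to a block device, skipping the temporary file and the dd. A sketch, assuming the same volume names as above and that the logical volume already exists and is large enough:

```shell
# Convert the qcow2 image directly onto the logical volume; the target
# device must be at least as large as the image's virtual size.
qemu-img convert disk0.qcow2 -O raw /dev/<volume_group_name>/<logical_volume_name>
```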

Getting a console to your VM from the Host Server

Now we have to set up the VM so that virsh console works. This gives you a console to the VM from the host server that works even when networking in the VM does not.

  1. Edit the VM's settings:
    virsh edit billiejean

    In the <devices> block, add:

    <serial type='pty'>
      <target port='0'/>
    </serial>
  2. Start up your VM:
    virsh start billiejean
  3. SSH into VM and create a file /etc/init/ttyS0.conf:
    start on stopped rc RUNLEVEL=[2345]
    stop on runlevel [!2345]
    
    respawn
    exec /sbin/getty -8 38400 ttyS0 vt102

    Start the tty with:

    start ttyS0
  4. Still in the VM, install acpid so that the VM will respond to shutdown commands from the server:
    aptitude install acpid
  5. Reboot the VM.
  6. Verify the console works by opening a console to the VM from the server:
    virsh console billiejean

    You may have to hit "Enter" before you see any console output.

Miscellaneous VM Setup

That's it, your VM is ready! You'll probably want to do these:

  • Set a root password, and possibly delete the user you set up with vmbuilder.
  • Set the timezone with dpkg-reconfigure tzdata.
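Inside the VM, those last chores look something like this (the administrator account name here comes from the vmbuilder --user option; adjust to taste):

```shell
passwd root                          # set a root password (interactive)
deluser --remove-home administrator  # optionally drop the vmbuilder user
dpkg-reconfigure tzdata              # pick the correct timezone (interactive)
```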