LXC on Ubuntu 11.04 Server

For a while I've been interested in Linux Containers (LXC), a new way of providing Linux virtual machines on Linux hosts. Unlike machine virtualization systems like Xen, VMware, KVM and VirtualBox, LXC is an OS-level virtualization system. Instead of booting a complete disk image, Linux containers share the same kernel as the host and typically use a filesystem that is a sub-tree of the host's. I'd previously tried to get this running on a machine that was connected by WiFi, but that basically doesn't work: bridged network interfaces don't play nicely with wireless network interfaces. I just set up a machine on wired ethernet, found a great introductory article, and I'm up and running.

Stéphane Graber's article is great, but it hides a bunch of the details away in a script he wrote. Here I'm going to explain how I got LXC up and running on my Ubuntu 11.04 system as simply as possible.

Install the required packages

Everything you need to set up and run LXC is part of Ubuntu these days, but it's not all installed by default. Specifically you need the LXC tools, debootstrap (which can create new Ubuntu / Debian instances) and the Linux bridge utilities.

apt-get install lxc debootstrap bridge-utils

Set up the network

To put the containers on the network we use a bridge. This is a virtual ethernet network in the kernel that passes ethernet frames back and forth between the containers and the physical network. Once we've set this up, our current primary network interface (eth0) becomes just a conduit for the bridge (br0), so the bridge should be used as the new primary interface.

Add the bridge in /etc/network/interfaces:

# LXC bridge
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
And change eth0 to be manually configured, ie: it doesn't DHCP:
auto eth0
iface eth0 inet manual
Now bring up the interface:
ifup br0
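If you want to check that the bridge came up and that eth0 has been attached to it, brctl from the bridge utilities will show you. The bridge id and interface list below are only illustrative of what you should see:

brctl show
# bridge name     bridge id               STP enabled     interfaces
# br0             8000.0011223344aa       no              eth0
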
Set up Control Groups

Control Groups are a fairly new Linux mechanism for isolating groups of processes as well as managing the resources allocated to them. The feature is exposed to userland via a filesystem that must be mounted, so put this in your /etc/fstab:
cgroup          /cgroup         cgroup          defaults        0       0
Create the mount point, and mount it:
mkdir /cgroup
mount /cgroup
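To confirm that the control group filesystem is actually mounted, a quick check with mount should show something along these lines:

mount | grep cgroup
# cgroup on /cgroup type cgroup (rw)
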
Now we're ready to create containers

An LXC container is a directory under /var/lib/lxc. That directory contains a configuration file named config, a filesystem table called fstab and a root filesystem called rootfs. The easiest way to create one is to use the lxc-create script. First we create a configuration file that describes the networking configuration, let's call it network.conf:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
and then run lxc-create to build the container:
lxc-create -n mycontainer -t natty -f network.conf
Note: -n lets you specify the container name (ie: the directory under /var/lib/lxc), -f lets you specify a base configuration file and -t lets you indicate which template to use.
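If you're curious what lxc-create produced, have a look inside the container's directory. The listing below is illustrative; the exact contents can vary a little between LXC versions:

ls /var/lib/lxc/mycontainer
# config  fstab  rootfs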

Container templates are implemented as scripts under /usr/lib/lxc/templates. Given a destination path they will install a base Linux system customized to run in LXC. There are template scripts to install Fedora, Debian and recent Ubuntus as well as minimal containers with just busybox or an sshd. Template scripts cache completed root filesystems under /var/cache/lxc. I've found the container template scripts to be interesting and quite readable.
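To see which templates your lxc package ships, just list the templates directory. The names below are indicative of what was around at the time rather than an exhaustive list:

ls /usr/lib/lxc/templates
# lxc-busybox  lxc-debian  lxc-fedora  lxc-natty  lxc-sshd  ...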

Starting a container

Starting the container is as simple as:

lxc-start --name mycontainer --daemon
If you leave off the --daemon then you'll be presented with the container's console. For a daemonized container, you can get a console with:
lxc-console --name mycontainer
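(To detach from that console without stopping the container, the escape sequence is Ctrl-a q.) Once a container is running, lxc-info will tell you its state; the output below is roughly what to expect:

lxc-info --name mycontainer
# state:   RUNNING
# pid:      2042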

Keeping containers straight

Each time a container is started it will be allocated a random MAC (ethernet) address. This means that it'll be seen by your network as a different machine each time it's started, and it'll be DHCPed a new address each time. That's probably not what you want. When it requests an address from the DHCP server your container will pass along its hostname (eg: mycontainer). If your DHCP server can be configured to allocate addresses based on hostname then you can use that. Mine doesn't, so I assign static ethernet addresses to my containers.

There's a class of ethernet addresses that are "locally administered addresses". These have a most-significant byte of the binary form xxxxxx10, ie: x2, x6, xA or xE in hex. These addresses will never be programmed into a network card by a manufacturer, so they won't clash with real hardware. I chose an arbitrary MAC address block and started allocating addresses to containers. They're allocated by adding the following line after the network configuration in the /var/lib/lxc/mycontainer/config file:

lxc.network.hwaddr = 66:5c:a1:ab:1e:01
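For reference, once that line is added the network section of /var/lib/lxc/mycontainer/config ends up looking something like this (using the same bridge and example address as above):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 66:5c:a1:ab:1e:01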

Managing your containers

There are a few handy tools for container management, beyond the lxc-start and lxc-console mentioned before.

lxc-stop --name mycontainer does what you'd probably expect.

lxc-ls lists available containers on one line and running containers on the next. It's a shell script so you can read it and work out how to find which containers are running in your own scripts.

lxc-ps --lxc lists all processes across all containers.
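To make that concrete, here's roughly what lxc-ls prints when two containers exist but only mycontainer is running (othercontainer is just a hypothetical second container for this example):

lxc-ls
# mycontainer  othercontainer
# mycontainer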

There are more, like lxc-checkpoint, lxc-freeze and lxc-unfreeze, that look pretty exciting but I haven't had a chance to play with them. There's also a set of tools for restricting a container's access to resources so that you can prioritize some above others. That's a story for another day.