Philipp's Computing Blog

Success is about speed and efficiency

Container Virtualization using LXC on Ubuntu

LXC (Linux Containers) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host. LXC does not provide a virtual machine, but rather a virtual environment with its own process and network space. It is similar to a chroot, but offers much more isolation. LXC relies on the cgroups and namespace isolation functionality available in the Linux kernel since version 2.6.29.

In this blog post I'm going to describe, step by step, how to set up an LXC container on Ubuntu 10.10 Maverick.

WARNING: Even though I explain all of the necessary steps one by one, this is still an advanced procedure. If you do not know how to get a bridged network connection to work with your personal setup, you will not be able to reach the network from your LXC container!


System containers

Ready to use script

To automate all of the steps below I wrote a script that does everything for you on Ubuntu 10.10 Maverick:

https://gist.github.com/794335 (edit 2012-04-28: an updated version can be found at https://gist.github.com/2519584).

Otherwise, have a look at the templates installed in /usr/lib/lxc/templates (they come with the lxc package).

Installation of the Prerequisites

First, we want to mount the virtual control group file system on /cgroup. So we create this directory first using sudo mkdir -p /cgroup. Then we add the line none /cgroup cgroup defaults 0 0 to /etc/fstab and mount the new entry using

sudo mount /cgroup
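
For reference, the whole cgroup step can be done from a shell like this (appending to /etc/fstab with tee -a is just one way to do it):

sudo mkdir -p /cgroup
echo "none /cgroup cgroup defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /cgroup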

Then install some required packages:

sudo aptitude install lxc debootstrap bridge-utils libcap2-bin
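
The lxc package ships a small helper that checks whether the running kernel provides the required cgroup and namespace support; it is worth running it once before you continue:

lxc-checkconfig   # every listed item should be reported as enabled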

Now you need a bridge in order to enable networking for the system inside the LXC container. To do this, add the following lines to /etc/network/interfaces:

# LXC-Config
auto br0
iface br0 inet dhcp
  # ethX is the interface with which the bridge should be bridged
  bridge_ports eth1
  bridge_stp off
  bridge_maxwait 5
  post-up /usr/sbin/brctl setfd br0 0

This setup connects the new bridge to the physical network interface eth1 and is the right choice if that interface is attached to a network with a DHCP server assigning IP addresses.
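
If there is no DHCP server on that network, a static variant of the bridge configuration could look like this (the addresses are placeholders for your own network):

auto br0
iface br0 inet static
  address 192.168.1.10
  netmask 255.255.255.0
  gateway 192.168.1.1
  bridge_ports eth1
  bridge_stp off
  bridge_maxwait 5
  post-up /usr/sbin/brctl setfd br0 0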

Bring the bridge up using sudo /etc/init.d/networking restart. In my setup the physical network interface eth1 was not up after the network restart, so watch out for problems here.
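
You can verify that the bridge came up correctly with brctl and ip, for example:

sudo brctl show        # br0 should list eth1 as an attached interface
ip addr show br0       # br0 should have received an IP address
ip link show eth1      # if eth1 is down, bring it up with: sudo ip link set eth1 up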

Creation of the LXC itself

Create a directory where you want to install the LXC container and a subdirectory called rootfs.guest, for example /home/user/lxc/rootfs.guest. So run

mkdir -p /home/user/lxc/rootfs.guest

Now create the mount file /home/user/lxc/fstab.guest for the guest:

none /home/user/lxc/rootfs.guest/dev/pts devpts defaults 0 0
#none /home/user/lxc/rootfs.guest/dev/run tmpfs defaults 0 0
none /home/user/lxc/rootfs.guest/dev/shm tmpfs defaults 0 0

Install a minimal system in the container (32bit systems can be set up by using i386 instead of amd64):

sudo debootstrap --arch amd64 maverick /home/user/lxc/rootfs.guest http://archive.ubuntu.com/ubuntu

You need to modify the fstab file of the fresh installation: comment out the lines for /proc, /dev and /dev/pts in /home/user/lxc/rootfs.guest/lib/init/fstab. In addition, set the hostname guest for the fresh install in /home/user/lxc/rootfs.guest/etc/hostname and /home/user/lxc/rootfs.guest/etc/hosts:

127.0.0.1 localhost guest
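
If you prefer to make these changes from the host shell instead of an editor, a sketch like the following works; the sed expression assumes the usual Ubuntu /lib/init/fstab layout, where the mount point is the second field:

R=/home/user/lxc/rootfs.guest
# comment out the /proc, /dev and /dev/pts entries in the container's init fstab
sudo sed -i -r 's@^([^#]\S*\s+(/proc|/dev|/dev/pts)\s)@#\1@' $R/lib/init/fstab
# set the container's hostname and a matching hosts entry
echo guest | sudo tee $R/etc/hostname
echo "127.0.0.1 localhost guest" | sudo tee $R/etc/hosts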

We need a user with admin rights on the new system, so start a shell in the new system using sudo chroot /home/user/lxc/rootfs.guest /bin/bash. We also need to install openssh-server in order to be able to connect to the machine via SSH:

u=philipp; g=admin
adduser $u; addgroup $g; adduser $u $g
apt-get update
apt-get install openssh-server

If you want the new user (here philipp) to be able to run commands with super user permissions, edit /etc/sudoers in the container and replace the group name sudo with admin.
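
The relevant entry in /etc/sudoers (edit it with visudo to avoid syntax errors) then reads:

# members of the admin group may gain root privileges
%admin ALL=(ALL) ALL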

You might want to extend the range of available packages by replacing the minimal content of /etc/apt/sources.list with the official Ubuntu 10.10 Maverick sources (here via a German mirror):

deb http://de.archive.ubuntu.com/ubuntu maverick main restricted universe multiverse
deb http://de.archive.ubuntu.com/ubuntu maverick-updates main restricted universe multiverse
deb http://de.archive.ubuntu.com/ubuntu maverick-security main restricted universe multiverse
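
After changing the sources, refresh the package lists inside the chroot:

apt-get update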

Now leave the chroot environment using [Ctrl]+[d] or exit.

Create the container configuration file /home/user/lxc/conf.guest. For more options see the man page on lxc.conf.

lxc.utsname = guest
lxc.tty = 4
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 08:00:12:34:56:78
#lxc.network.ipv4 = 0.0.0.0
lxc.network.ipv4 = 192.168.1.69
lxc.network.name = eth0
lxc.mount = /home/user/lxc/fstab.guest
lxc.rootfs = /home/user/lxc/rootfs.guest
lxc.pts = 1024
#
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm

Now you have to prepare the LXC configuration using

sudo lxc-create -n guest -f /home/user/lxc/conf.guest

And start the system using (-d starts the container in the background as a daemon)

sudo lxc-start -n guest -d
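
Once the container is running you can attach to one of its ttys or log in over SSH (philipp and 192.168.1.69 are the user and address chosen above):

sudo lxc-console -n guest    # attach to a tty of the container; detach with Ctrl-a q
ssh philipp@192.168.1.69     # or log in via SSH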

Stop the system using

sudo lxc-stop -n guest

After any change to conf.guest you have to recreate the container with the following commands:

sudo lxc-destroy -n guest
sudo lxc-create -n guest -f /home/user/lxc/conf.guest


Application containers

The easiest way to use LXC is to create an application container: you take any application and run it in a container. Here is an example that runs bash in an LXC application container:

sudo lxc-execute -n bash-test1 /bin/bash
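
While that shell is running, you can verify from a second terminal on the host that it really lives in a container (bash-test1 is the name chosen above):

sudo lxc-info -n bash-test1   # should report the container as RUNNING
sudo lxc-ps -n bash-test1     # lists the processes running inside bash-test1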

Or if you want more isolation (like virtual networking) specify a settings file:

sudo lxc-execute -n bash-test2 -f lxc-macvlan.conf /bin/bash

where lxc-macvlan.conf contains:

# example as found on /usr/share/doc/lxc/examples/lxc-macvlan.conf
# Container with network virtualized using the macvlan device driver
lxc.utsname = alpha
lxc.network.type = macvlan
lxc.network.flags = up
# attach to the NIC of your choice, e.g. eth0 or eth2
lxc.network.link = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 0.0.0.0/24
#lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596

More LXC commands

  • lxc-ls Lists all containers you have set up.
  • lxc-info -n guest Prints information about the container guest.
  • lxc-ps Lists the processes of all containers; lxc-ps -n guest restricts the output to the container guest.
  • lxc-netstat -n guest Prints the netstat information for the container.
  • lxc-monitor -n guest Tracks the state changes of the container guest.
  • lxc-cgroup -n guest cpuset.cpus Prints the CPUs the container guest is allowed to use.

The commands lxc-ps, lxc-ls and lxc-netstat generally accept the same parameters as their non-LXC counterparts ps, ls and netstat.

Some examples:

Get information on every container on the system:

for i in $(lxc-ls -1); do
    lxc-info -n $i
done

List the processes of the container guest, with the process hierarchy shown as ASCII art:

lxc-ps -n guest --forest
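
And, since the options are passed through (see the remark above), the listening TCP sockets of the container can be listed with the usual netstat switches:

sudo lxc-netstat -n guest -tlnp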

Resources