Category Archives: Ubuntu

Landscape Tags to Puppet Facter Facts

I’ve been playing around this week with using Landscape to interact with puppet. I really like Landscape as an administration tool, but I also really like using puppet to manage packages and configuration. So the question is how to get puppet to do something based on Landscape. Landscape tags seemed like an obvious choice for this, tag a system as “compute-node” and it becomes a nova compute node, but how do you make this happen?

Standing on Shoulders

After running through a couple of ideas, I was inspired by a great article from Mike Milner. Mike’s blog post uses Landscape tags to do node classification; I wanted to use tags to set facter facts instead, so I needed a few changes.

Make Some Tags

First I went to Landscape and added some basic tags to my box:

Just a couple tags

Get Some Tags

Next, I sat down to write a script that would run on each system and get the tags for the system it was running on. I did this first, before looking into the glue with Facter. It turned out to be pretty simple, since the Landscape API is easy to use, well documented, and comes with a CLI implementation that makes testing easy.


Facter Time

Once that worked, it was time to look at Facter. A colleague told me about executable facter facts. The tl;dr for this is: drop a script into /etc/facter/facts.d, make it executable, and facter will take its output and turn it into facter facts. This was super cool. I had planned on some complex hooks being required, but all I had to do was print the tags to stdout. However, Facter wants key=value pairs and Landscape tags are more like bare values, so I decided to generate keys of the form landscape_tagN, where N is an incrementing number. With that change in, I ran facter:

ubuntu@mfisch:~$ facter | grep landscape
landscape_tag0 => nova-compute
landscape_tag1 => vm
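The guts of such a facts.d script are tiny. Here is a minimal sketch with the Landscape query stubbed out; the real version fetches the tag list over the Landscape API, and the hard-coded tags below just mirror the ones on my box:

```shell
#!/bin/sh
# Sketch of an executable fact script for /etc/facter/facts.d.
# "tags" is a stand-in for the output of a real Landscape API query.
tags="nova-compute vm"

# Facter wants key=value pairs, so number the tags as landscape_tagN.
n=0
for tag in $tags; do
    fact="landscape_tag${n}=${tag}"
    echo "$fact"
    n=$((n + 1))
done
```

Facter turns each printed line into a fact, which is where the output above comes from.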

Values or Keys?

The puppet master most likely will not know what to do with “landscape_tag0”, so we’ll need a convention to make this more useful. One idea my colleague had was to set a tag with an = sign in it, like “type=compute”. Alas, Landscape won’t let us do this, so instead we’ll probably just adopt the convention that the last _ is the key/value boundary. That would map like this:

  • datacenter_USEast -> datacenter=USEast
  • node_type_compute -> node_type=compute
  • foo_bar -> foo=bar
  • Note: the current version of my script that’s in github doesn’t implement this convention yet; you’ll probably want to choose your own anyway.
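That convention is a one-liner with POSIX parameter expansion. A sketch (split_tag is a hypothetical helper, not part of my script):

```shell
# Split a tag on its LAST underscore into key=value.
split_tag() {
    key="${1%_*}"       # shortest "_suffix" stripped from the end -> key
    value="${1##*_}"    # longest "prefix_" stripped from the front -> value
    echo "${key}=${value}"
}

split_tag datacenter_USEast    # -> datacenter=USEast
split_tag node_type_compute    # -> node_type=compute
split_tag foo_bar              # -> foo=bar
```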


My script is pretty simple: it retrieves a list of computers, filtered on hostname, and proceeds as long as it finds exactly one match. This matching is the weak point of the current iteration of the script. It’s possible to have more than one system registered with the same hostname, so I’m thinking of adding a better filter here when I get time. Note that the original script I based mine on solved this by making you pass in the hostname or id as an argument. My current plan is to look at the system title in /etc/landscape/client.conf and do a second-level filter on that, but even that I don’t think is guaranteed to be unique.
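The uniqueness check itself is straightforward; here is a sketch with the Landscape call mocked out (get_computers_by_hostname is a stand-in for the real API query, which returns one line per matching machine):

```shell
# Mocked query: one output line per registered computer matching our hostname.
get_computers_by_hostname() {
    printf '%s\n' "mfisch.example.com"
}

matches=$(get_computers_by_hostname | wc -l)
if [ "$matches" -eq 1 ]; then
    echo "exactly one match, proceeding"
else
    echo "found $matches matches, refusing to guess" >&2
fi
```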

    What is This Good For?

You probably use an automated tool to install and boot a node, and it registers with Landscape (via puppet, of course). Now what? We have a box running Ubuntu but not doing much else. Say I want a new compute node in an OpenStack cluster: all I’d have to do is tag that box with a predetermined tag, say “compute-node”, let puppet agent run, and wait. The puppet master will see the facter facts that tell it the box should be a compute node and act accordingly.
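On the puppet side, the manifest might key off the fact like this (the class name is illustrative, not from my actual setup):

```puppet
# Illustrative only: react to the tag-derived fact set by the facts.d script.
if $::landscape_tag0 == 'nova-compute' {
  include nova::compute
}
```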


Here’s the code; patches are always welcome.

    pbuilder via pbuilder-scripts: A Short Howto

    There are a myriad of ways to do cross-compiles and a smaller myriad that can do chrooted debian package builds. One of my favorite tools for this is pbuilder and I’d like to explain how (and why) I use it.

    A pbuilder environment is a chrooted environment which can have a different distroseries or architecture than your host system. This is very useful, for example, when your laptop is running raring x64 and you need to build binaries for saucy armhf to run on Ubuntu Touch. Typically pbuilders are used to build debian packages, but they can also provide you a shell in which you can do non-package compilations. When you exit a pbuilder (typically) any packages you’ve installed or changes you’ve made are dropped. This makes it the perfect testing ground when building packages to ensure that you’ve defined all your dependencies correctly. pbuilder is also smart enough to install deps for you for package builds, which makes your life easier and also avoids polluting your development system with lots of random -dev packages. So if you’re curious, I recommend that you follow along below and try a pbuilder out, it’s pretty simple to get started.

    Getting Setup

    First install pbuilder and pbuilder-scripts. The scripts add-on really simplifies setup and usage and I highly recommend it. This guide makes heavy use of these scripts, although you can use pbuilder without them.

    sudo apt-get install pbuilder pbuilder-scripts

Second, you need to set up your ~/.pbuilderrc file. This file defines a few things, mainly a set of extra default packages that your pbuilder will install and the directories that are bind-mounted into your pbuilder. By default pbuilder-scripts looks in ~/Projects, so make that directory at this point as well and set it in the .pbuilderrc file.

    Add the following to .pbuilderrc, substitute your username for user:

    BINDMOUNTS="${BINDMOUNTS} /home/user/Projects"
    EXTRAPACKAGES="${EXTRAPACKAGES} pbuilder devscripts gnupg patchutils vim-tiny openssh-client"

I like having openssh-client in my pbuilder so I can more easily copy files out to target boxes, but it’s not strictly necessary. A full manpage for ~/.pbuilderrc is also available if you want to read about more advanced settings.

    Don’t forget to make the folder:
    mkdir ~/Projects

    Making your First Pbuilder

    Now that you’re setup, it’s time to make your first pbuilder. You need to select a distroseries (saucy, raring, etc) and an architecture. I’m going to make one for the raring i386. To do this we use pcreate. I use a naming scheme here so that when I see the 10 builders I have, I can keep some sanity, I recommend you do the same, but if you want to call your pbuilder “bob” that’s fine too.

    cd ~/Projects
    pcreate -a i386 -d raring raring-i386

Running this will drop you into an editor. Here you can add extra sources, for example, if you need packages from a PPA. Any sources you add here will be present every time you use this pbuilder. If you have no idea what I mean by PPA, just exit your editor here.

    At this point pcreate will be downloading packages and setting up the chroot. This may take 10-30 minutes depending on your connection speed.

This is a good time to make coffee or play video games

    Using your pbuilder

    pbuilders have two main use cases that I will cover here:

    Package Builds

pbuilder for package builds is dead simple. If you place the package code inside ~/Projects/raring-i386, pbuilder will automagically guess the right pbuilder to use. Anywhere else and you’ll need to specify which one.

Aside: To avoid polluting the root folder, I generally lay the folders out like this:

~/Projects/<pbuilder name>/<project>/<project>-<version>

Then I just do this:

cd ~/Projects/raring-i386/project/project-0.52
pbuild

    This will unpack the pbuilder, install all the deps for “project” and then attempt to build it. It will exit the pbuilder (and repack it) whether it succeeds or fails. Any debs built will be up one level.

    Other – via a Shell

The above method works great for building a package, but if you are building over and over to iterate on changes, it’s inefficient, because every build needs to unpack the chroot and install dependencies (it is at least smart enough to cache the debs). In this case, it’s faster to drop into a shell and stay there between builds.

cd ~/Projects/raring-i386
ptest

    This drops you into a shell inside the chroot, so you’ll need to manually install build-deps.

    apt-get build-dep project

    ptest also works great when you need to do non-package builds, for example, I build all my armhf test code in a pbuilder shell that I’ll leave open for weeks at a time.

    Updating your pbuilder

    Over time the packages in your pbuilder may get out of date. You can update it simply by running:

    pupdate -p raring-i386

    This is the equivalent of running apt-get upgrade on your system.


A few caveats when starting with pbuilder:

• Ownership – files built by pbuilder will end up owned by root; if you want to manipulate them later, you’ll need to chown them back or deal with using sudo
    • Signing – unless you bind mount your key into your pbuilder you cannot sign packages in the pbuilder. I think the wiki page may cover other solutions.
    • Segfaults – I use pbuilders on top of qemu a lot so that I can build for ARM devices, however, it seems that the more complex the compile (perhaps the more memory intensive?) the more likely it is to segfault qemu, thereby killing the pbuilder. This happened to a colleague this week when trying to pbuild Unity8 for armhf. It’s happened to me in the past. The only solution I know for this issue is to build on real hardware.
    • Speed – For emulated builds, like armhf on top of x86_64 hardware (which I do all the time), pbuilds can be slow. Even for non-emulated builds, the pbuilder needs to uncompress itself and install deps every time. For this reason if you plan on doing multiple builds, I’d start with ptest.
    • Cleanup – When you tire of your pbuilder, you need to remove it from /var/cache/pbuilder. It also caches debs in here and some other goodies. You may need to clean those up manually depending on disk space constraints.


    I’ve really only scratched the surface here on what you can do with pbuilder. Hopefully you can use it for package builds or non-native builds. The Ubuntu wiki page for pbuilder has lots more details, tips, and info. If you have any favorite tips, please leave them as a comment.


    Hacking the initrd in Ubuntu Touch

    This week I’ve been hacking some of the initrd scripts in Ubuntu Touch and I thought that I’d share some of the things I learned. All of this work is based on using Image Update images, which are flashable by doing phablet-flash ubuntu-system. First, why would you want to do this? Well, the initrd includes a script called “touch” which sets up all of the partitions and does some first boot migration. I wanted to modify how this process works for some experiments on customizing the images.

    Before getting started, you need the following packages installed on your dev box: abootimg, android-tools-adb, android-tools-fastboot

    Note: I was told after posting this that it won’t work on some devices, including Samsung devices, because they use a non-standard boot.img format.

    Getting the initrd

    The initrd is inside the boot.img file. I pulled mine from here, but you can also get it by dding it off of the phone. You can find the boot partition on your device with the following scriptlet, taken from flash-touch-initrd:

for i in $BOOT; do
    path=$(find /dev -name "*$i*" | grep disk | head -1)
    [ -n "$path" ] && break
done
echo $path

    Once you have the boot.img file by whatever means you used, you need to unpack it. abootimg is the tool to use here, so simply run abootimg -x [boot.img]. This will unpack the initrd, kernel and boot config file.

    Unpacking and Hacking the initrd

Now that you have the initrd, you need to unpack it so you can make changes. You can do this with some cpio magic, but unless you have a UNIX-sized beard, just run abootimg-unpack-initrd. This will dump everything into a folder named ramdisk. (UNIX beard guys: mkdir ramdisk; cp initrd ramdisk; cd ramdisk; cat initrd | gzip -d | cpio -i)

To make changes, simply cd into ramdisk and hack away. For this example, I’m going to add a simple line to ramdisk/scripts/touch. My line is:

    echo "mfisch: it worked!" > /dev/kmsg || true

    This will log a message to /var/log/kern.log which can assist us to make sure it worked. Your change will probably be less trivial.


Repacking the initrd is simple: just run abootimg-pack-initrd [initrd.img.NEW]. Once you do this you’ll notice that the initrd size is quite different, even if you didn’t make any changes. After discussing this with some people, the best I can figure is that the newly packed cpio file has owners and non-zero datestamps, which make it slightly larger. One clue: when compared to mkinitramfs, abootimg-pack does not use the -R 0:0 argument, and there are other differences. If you want to do this the hard way, you can also repack by doing: cd ramdisk; find . | cpio -o -H newc | gzip -9 > ../initrd.img.NEW

    Rebuilding the boot image

    The size change we discussed above can be an issue that you need to fix. In the file bootimg.cfg, which you extracted with abootimg -x, there is a line called bootsize. This line needs to be >= the size of the boot.img (not initrd). If the initrd file jumped by 4k or so, like mine did, be sure to bump this as well. I bumped mine from 0x837000 to 0x839000 and it worked. If you don’t do this step, you will wind up with a non-booting image. Once you correct this, rebuild the image with abootimg:

    abootimg --create saucy-new.img -f bootimg.cfg -k zImage -r initrd.img.NEW

    I’ve found that if your size is off, it will sometimes complain during this step, but not always. It’s best to check the size of saucy-new.img with the line you changed in bootimg.cfg at this point.
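Shell arithmetic understands the hex values from bootimg.cfg directly, which makes sanity-checking a size bump easy; my 0x837000 to 0x839000 change works out to 8 KiB:

```shell
# bootsize values are hex; POSIX $(( )) arithmetic accepts 0x literals.
old=0x837000
new=0x839000
echo $(( new - old ))    # 8192 bytes, i.e. an 8 KiB bump
```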

    Flashing and testing

    To flash the new boot image, reboot the device and use fastboot.

    adb reboot bootloader
    fastboot flash boot saucy-new.img

    Use the power button to boot the device now.

    Once booted you can go check out the kern.log and see if your change worked.

    Aug 13 16:11:04 ubuntu-phablet kernel: [    3.798412] mfisch: it worked!

    Looks good to me!

Thanks to Stéphane Graber and Oliver Grawert for helping me discover this process.


    Being a MOTU

    Back in October, I wrote a post about my process of becoming a MOTU. I’ve been pretty busy since October. First of all, I had this 9 month build finally finish:

Successfully signed dsc and changes files

Once things sort of settled down from that, I jumped back into updating and syncing packages. This time I was mainly focusing on desktop packages, because that’s the group my mentor worked on. However, I wanted to get some different experiences, so I also worked on some new debian packages (one of which landed).

So after all this, I talked to a few people and it was suggested that I apply for MOTU. So I cleaned up my wiki page and applied. The DMB had a lot of questions in the meeting, but I guess I was persuasive enough, because I was approved on June 6!

So what’s next? Personally, I want to keep doing updates, complete an SRU, land my other debian package, sponsor some packages, and help other people achieve their goal of becoming a MOTU too.

    I feel that mentoring is probably one of the most important parts of being a MOTU, so even though I’m new, I’d love to help where I can. I can help by answering questions or helping with ideas of things to work on. Finding the work can sometimes be the hardest part, and the only path forward to becoming a MOTU is doing updates and syncs, so it’s critical to keep up the momentum. So if you’re working on this goal, find me on #ubuntu-motu as mfisch and we can chat.


    powerd in Ubuntu Touch

    The past few weeks I’ve been on loan to work on Ubuntu Touch, specifically the power daemon, powerd. Seth Forshee and I have been working to enhance the power daemon so that system services can interact with it to request that the device stay active, that is, that the device not suspend. The initial round of this work is complete and is landing today. (Note: There is a lot of low-level kernel interaction stuff landing in the code today too, that is not covered here)

    What’s Landing

What’s landing today allows a system service, talking on the system bus, to request the Active system power state. We currently have only two states, Active and Suspend. When there are no Active state requests, powerd will drop the state to Suspend and suspend the device. This is best illustrated by how we use the states internally. For example, the user activity timer holds an Active state request until it expires, at which point the request is dropped. The system then scans the list of outstanding state requests and, if none are left, it drops the system to Suspend and suspends it. Pressing the power button works the same way, except as a toggle: when the screen is on, pressing the power button drops an active request; when off, it makes one.
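The bookkeeping behind this is simple to model: one cookie per outstanding Active request, and the system may suspend only when none remain. A toy sketch of the idea (purely illustrative, not powerd’s actual implementation):

```shell
# Toy model of powerd's state requests: one file per Active-request cookie.
reqdir=$(mktemp -d)
request_active() { touch "$reqdir/$1"; }   # service asks to keep system Active
clear_request()  { rm -f "$reqdir/$1"; }   # service drops its request
power_state()    { [ -n "$(ls -A "$reqdir")" ] && echo Active || echo Suspend; }

request_active cookie1    # e.g. the user-activity timer
request_active cookie2    # e.g. the power-button toggle
clear_request cookie1
power_state               # Active: one request still outstanding
clear_request cookie2
power_state               # Suspend: nothing left, device may sleep
```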

    For now, this ties screen state to system power state, although we plan to change that later. There is no way currently to request a display state independently of a system state, however that is planned for the future as well. For example, a request may be made to keep the screen at a specified brightness.

The API is subject to change and has a few trouble spots, but it is all available to look at in the code here. Taking a look at the testclient C code will best illustrate the usage, but remember this is not for apps; it is for other system services. An app will instead ask a system service via an API for something like “playVideoWithScreenOn()”, and the system service will translate that into a system state request.

    Trying it Out

    If you want to play with it on your phone, use gdbus to take an active state request and you can block system suspend. You will need to install libglib2.0-bin on your phone if not already installed.

# request active state from PID 99 (a made-up PID). This returns a cookie, which you need later to drop the request. The cookie here is “1”

    phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path\
       /com/canonical/powerd --method com.canonical.powerd.requestSysState 1 99
    [sudo] password for phablet: 
    (uint32 1,)

Show the outstanding requests:

    phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.listSysRequests
    ([(':1.29', 99), ('internal', 36)],)

Now we pass in the cookie we received earlier and clear our request:

    phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.clearSysState 1

Recheck the list:

    phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.listSysRequests
    ([('internal', 36)],)


If you want to see everything that is going on, check the powerd log file: sudo tail -f /var/log/upstart/powerd.log. For now we have it logging in debug mode, so it will tell you everything.

    But My Device Isn’t Suspending

Even though we request suspend, the device may not actually suspend, because it appears that at least on some devices (Nexus 4 and maybe others) Android’s sensor service is holding a wakelock, as in the log below. We are also working on this issue.

    <6>[ 1249.183061] lm3530_backlight_off, on: 0
    <6>[ 1249.185105] request_suspend_state: sleep (0->3) at 1249179158043 (2013-05-21 16:38:57.769127486 UTC)
    <4>[ 1249.185441] [Touch D]touch disable
    <4>[ 1250.217488] stop_drawing_early_suspend: timeout waiting for userspace to stop drawing
    <3>[ 1250.244132] dtv_pipe is not configured yet
    --> <6>[ 1250.248679] active wake lock sns_periodic_wakelock
    <6>[ 1250.248710] PM: Syncing filesystems...
    <6>[ 1250.329741] sync done.

    Next Steps

We have a bunch of stuff left to do here. The first obvious item is that using a monotonically increasing int for the cookie is not a great plan, so we will switch to something like a UUID. We also need to send out dbus signals when the system goes into suspend so that services can react, and we need to clean up some of the dbus code while we’re doing that. Finally, we plan on implementing display state requests using a similar model to the power state requests. Throughout all of this we need to start integrating with the rest of the system.


    Team Workflow with bzr and Launchpad

I was trying to explain our team’s workflow to a former colleague last week, and so I started thinking about all the different workflows I’ve dealt with in my career. This one is by far my favorite. I know it’s not git, which everyone loves, but I’m curious what workflows other groups use with Launchpad. Take a look at this one and let me know: can our team do anything better? Can yours?

First, a brief note about our team at Canonical. We work on “premium” customer-facing projects, typically on ARM-based hardware. We are downstream from Ubuntu for the most part, and although we do send fixes upstream when it makes sense, often we make customizations to packages that cannot go upstream. I’ll use a real-world example for this workflow explanation: we have a platform where we want to remove the user list, the help menu entry, and the logout menu entry from the session indicator, so we needed to modify indicator-session to do so.

    The tl;dr version of our workflow is Decentralized with shared mainline, with parts of Decentralized with automatic gatekeeper added.

    Setup a Shared Master (mainline)

    Grab the source for indicator-session for the distroseries we’re based on, precise in this case. We usually grab it from launchpad or apt-get source if launchpad’s precise copy is out of date. This code gets pushed to lp:~project-team/project/indicator-session. This is now the master/mainline version. Everyone on the team has write access to this, provided they follow team rules.

    Setting Up My Local Branch

I usually already have a pbuilder set up for our project, so my first step is to set up my local tree. I like to use a two-level hierarchy here so that builds don’t “pollute” my main project area, where I have dozens of different branches checked out. So I set up a subdirectory and check out a copy to master.

    cd ~/Projects/project-precise-amd64
    mkdir indicator-session
    cd indicator-session
    bzr branch lp:~project-team/project/indicator-session master

    Now I branch master, if this wasn’t a fresh checkout, I would bzr pull in master first.

    bzr branch master remove-buttons

    Make Changes

    At this point we make fixes or whatever changes are needed. The package is built, changes are tested, and lintian is run (this one gets forgotten many times).

    We have a few goals to meet for changes, we don’t always succeed, but here they are:

    1. No new lintian errors, if it’s a new package that we made, 0 is better.
    2. If the package has unit tests, add a new test case to cover what we just fixed/changed.
    3. Patches should have minimal DEP3 headers.
    4. Coding style should follow upstream.
    5. No new compiler warnings without explanation.
    6. Good changelog entries with bug numbers if applicable. Entries should list what files were modified. Distroseries set to UNRELEASED still (more on why later).

A note on lintian: Jenkins is capable of rejecting packages with lintian errors. We have this disabled because we first need to fix the errors that crept in while we weren’t following this rule.

    Push to a Remote Branch for Review

    We code review everything we do, so the next step is to make the branch public for a review.

    bzr commit -m "good message, usually we just use the changelog entry" --fixes lp:BUGNUM
    bzr push lp:~project-team/project/indicator-session-remove-buttons

    Setup a Code Review

    Everything is reviewed and all reviews are sent to the team, though the onus is on the submitter to ping appropriate people if they don’t get a timely review. For code reviews, everyone is expected to provide a good explanation of what they’re doing and what testing was done.

    We also have one of the “enhancements” here as we have a Jenkins instance (similar to this one) setup for some projects and Jenkins gets to “vote” on the review. Packages that fail to build or fail unit tests are marked as “Rejected” in the review by Jenkins.

    Merge Back to Master

After the review is approved, the code submitter merges the code and commits it up to the mainline. I’m paranoid about master changing, although the push will fail if it has, so I always update it first.

We also have to set the distroseries back. We do this on our team because it reduces the chance that someone will dput a package built from a local or non-master branch. If someone were to try to dput the changes file built from the remove-buttons branch, it would fail. We really want the archive to only have packages built from master; it’s more repeatable and easier to track changes.

    cd ~/Projects/project-precise-amd64/indicator-session
    cd master
    bzr pull
    bzr merge ../remove-buttons
    dch -e (modify distroseries from UNRELEASED to precise)
    debcommit -r
    bzr push :parent

    Jenkins Does dput

    Our team is slowly moving into the world of Jenkins and build/test automation, so we have Jenkins watching the master branch for interesting projects and it will manage the dput for us. This also provides a final round of build testing before we dput.

    Some teams have autolanding setup, that is when the review is approved, the Jenkins instance will do the merge. For now, we’ve kept a human in the loop.

    Update the Bug

    It is annoying to look at a bug 3 months after you fixed it and wonder what version it’s fixed in. Although the debian/changelog tracks this, we generally always add a bug comment saying when a bug was fixed. For the most part people usually just paste the relevant changelog entry into the bug and make sure it’s marked as Fix Committed.


    dconf Settings: defaults and locks

Last year I worked on a project where I was playing around with system-wide default settings and locks, and I thought I’d share a post based on some of my notes. Almost all of what I mention here is covered in depth by the dconf SysAdmin guide, so if you plan on using this, please read that guide as well. UPDATE: Gnome has moved all the dconf material into the Gnome SysAdmin guide; it’s a bit more scattered now, but it’s there.

For most people, there is just one dconf database per user. It is a binary blob stored in ~/.config/dconf/user. Anytime you change a setting, this file gets updated. For system administrators who may want to set a company-wide default value, a new dconf database must be created.

    Create a Profile

The first step in setting up other databases is to create a dconf profile file. By default you don’t need one, since the system uses the default user database, but to set up other databases you will. So create a file called /etc/dconf/profile/user and add the list of databases that you want. Note that this list is a hierarchy and the user database should always be on top.

For this example, I will create a company database and a division database. The hierarchy implies that we will have company-wide settings, perhaps a wallpaper; division-specific settings on top of those, perhaps the IP of a proxy server that’s geographically specific; and each user’s customized settings on top of that.

    To create a profile, we’ll do the following:

sudo mkdir -p /etc/dconf/profile

and edit /etc/dconf/profile/user, then add:

user-db:user
system-db:division
system-db:company
    (Note: I am doing this on a relatively clean precise install using a user that has not changed their wallpaper setting, that is important later)

    Once you have created the profile hierarchy, you need to create keyfiles that set the values for each database. For this example, we will just set specific wallpaper files for each hierarchy. This is done with key files:

sudo mkdir -p /etc/dconf/db/division.d/

and edit /etc/dconf/db/division.d/division.key, add the following:

[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/Flocking_by_noombox.jpg'
    Next we’ll create the company key file:

    sudo mkdir -p /etc/dconf/db/company.d/

and edit /etc/dconf/db/company.d/company.key, add the following:

[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/Murales_by_Jan_Bencini.jpg'
    Finally, you need to run sudo dconf update so that dconf sees these changes.

    After running dconf update, you will see two changes. The first and most obvious change is that the background is now a bunch of Flocking birds, not the Precise default. The second change is that you will see two new binary dconf database files in /etc/dconf/db, one called company and one called division. If you don’t see these changes then you did something wrong, go back and check the steps.


Since I have no user default set, the division’s default takes precedence

    The current user and any new users will inherit the Division default wallpaper, Flocking. However, the user still may change the wallpaper to anything they want, and if they change it, that change will be set in the user database, which takes precedence. So this method gives us a soft-default, a default until otherwise modified. If you are trying this test on a user who has already modified the wallpaper, you will notice that it didn’t change due to this precedence.

    If we want to force all users, new and existing, to get a specific wallpaper, we need to use a lock.


Locks

For this example, let’s assume that the IS department for our division really, really likes the Flocking picture and doesn’t want anyone to change it. In order to force this, we need to set a lock. A lock is simple to make: it just specifies the name of the key that is locked. A locked key takes precedence over all other set keys.

Before doing this, I will use the wallpaper picker and select a new wallpaper; it will take precedence until the lock is created. I picked Bloom for my test.

I like flowers more than birds.

Now it’s time to make the lock, because the IS department really doesn’t like flowers. We create the lock as follows:

    sudo mkdir -p /etc/dconf/db/division.d/locks/

and then edit /etc/dconf/db/division.d/locks/division.lock (note: the file name doesn’t really matter) and add the following line:

/org/gnome/desktop/background/picture-uri
    After saving the file, run sudo dconf update. Once doing so, I’m again looking at birds, even though I modified it in my user database to point to Bloom.

Lock file forces me to use the Division settings

One interesting thing to note: any changes the user makes are still being written to their dconf user db, but the lock overrides what is seen from outside dconf. So if I change the wallpaper to London Eye in the wallpaper picker and then remove the lock by simply doing sudo rm division.lock && sudo dconf update, I immediately get the London Eye. It’s important to keep this in mind: the user db is still being written into, but the lock is in effect masking the user db value when the setting is read back.

London Eye wallpaper is shown after I remove the lock

    Lock Hierarchy

    Lock hierarchy is interesting, in that the lowermost lock takes precedence. What this means is that if we lock both the company and division wallpapers, we will see the company one. In the example below I set locks on the wallpaper key for both databases, and I end up seeing Murales, the company default.

    Company setting takes precedence with both locked


    Locks Without Keys

    It is also possible to set a lock on a hierarchy without a corresponding default key. In this instance the system default is used and the user is unable to change the setting. For this example, I set a company lock but removed the company key. The resulting wallpaper is the system default.

    System default wallpaper for Precise is seen

    What Value is Seen – A Quiz

    If you’d like to test your knowledge of which value takes precedence when read from dconf, try the quiz below; answers are at the bottom. For each scenario, see if you can figure out which wallpaper the user will see, assuming the same database hierarchy as used in the examples.

    1. User Wallpaper: unset, Division Wallpaper: Flock, Company Wallpaper: Murales
    2. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales
    3. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales, Lock file for Company Wallpaper setting
    4. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales, Lock file for Division and Company Wallpaper setting
    5. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: unset, Lock file for Division and Company Wallpaper setting

    Answers: Flock, London Eye, Murales, Murales, Default for Precise
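    To make the quiz concrete, the lookup rules can be sketched as a tiny resolver (a toy model of the behavior described above, not dconf’s actual implementation; the names and the "Precise default" string are just illustrative):

```python
SYSTEM_DEFAULT = "Precise default"  # stand-in for the distro default

def resolve(key, databases, locks):
    """databases is a list of (name, dict) pairs ordered user-first
    (top) to system-wide (bottom); locks is a set of (db_name, key)
    pairs.  The lowest locked database wins; otherwise the first
    database, scanning from the user down, that sets the key wins."""
    start = 0
    for i, (name, _) in enumerate(databases):
        if (name, key) in locks:
            start = i  # keep going: the lowest lock takes precedence
    for _, values in databases[start:]:
        if key in values:
            return values[key]
    return SYSTEM_DEFAULT

# Scenario 5 from the quiz: both locks set, company key unset
dbs = [("user", {"wallpaper": "London Eye"}),
       ("division", {"wallpaper": "Flock"}),
       ("company", {})]
locks = {("division", "wallpaper"), ("company", "wallpaper")}
print(resolve("wallpaper", dbs, locks))  # falls through to the system default
```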


    Some notes about testing this if you are trying it:

      • Creating new users and logging in as them is a good way to see what settings are shown, the wallpaper is a great visual test as it’s easy to verify.
      • Do not do this on your development box. I screwed up my settings right before I was going to give a demo. I’d recommend a VM. If you do screw something up, check .xsession-errors; that’s where my problem was apparent.


    If you’re a system administrator, or you really like pictures of birds, dconf keyfiles and locks are the right mechanism for setting defaults, soft or hard. Hopefully this post has illustrated how they work. I’d recommend playing with them in a VM; once you understand the hierarchies and locking, they should be pretty easy to use.

    Tagged , ,

    Limiting LXC Memory Usage

    I’ve been playing around with LXC over the past few weeks and one of the things I tried out was limiting the memory that the container is allowed to use. I don’t plan on explaining all the ins and outs of LXC here, but in short, LXC provides a virtualized-ish environment: more than a chroot gives you, but less than a full-blown virtual machine. If you want more details, please check out stgraber’s blog post about LXC in 12.04.

    Kernel Configuration

    The first thing you need to do in order to limit memory usage for LXC is to make sure your kernel is properly configured; you need the following flag enabled:
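    For a 3.5-era kernel like mine the option is named as follows (it was renamed to CONFIG_MEMCG in 3.6, so check your kernel version):

```
CONFIG_CGROUP_MEM_RES_CTLR=y
```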


    If you plan on also limiting swap space usage, you’ll also need:
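    Again for a 3.5-era kernel (renamed to CONFIG_MEMCG_SWAP in 3.6 and later):

```
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
```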


    These flags are enabled for me in my 12.10 kernel (3.5.0-22) and so presumably you’ll have them in 12.04.

    Setting the Cap

    First, I’m going to create my container. Following the instructions from stgraber’s blog post, and calling the container “memlimit”:

    sudo lxc-create -t ubuntu -n memlimit

    Once the container is built, we need to modify the config files. Look at /var/lib/lxc/memlimit/config. We need to add a few lines to that file. I’m going to limit memory to 512M and total usage of memory + swap to 1G. Note the second setting is for overall memory + swap, not just swap usage.

    lxc.cgroup.memory.limit_in_bytes = 512M
    lxc.cgroup.memory.memsw.limit_in_bytes = 1G

    Now let’s start the container and get some debug info out of it to make sure these were set:

    sudo lxc-start -n memlimit -l debug -o debug.out

    The debug.out file will show up wherever you ran lxc-start from, so let’s see if it picked up our limits:

    lxc-start 1359136997.617 DEBUG lxc_conf - cgroup 'memory.limit_in_bytes' set to '512M'
    lxc-start 1359136997.617 DEBUG lxc_conf - cgroup 'memory.memsw.limit_in_bytes' set to '1G'

    Looks good to me!

    Note, I tried setting this once to 1.5G and it seems that the fields are only happy with whole numbers; it complained about 1.5G. That error message appeared in the same debug log I used above.

    A list of more of the values you can set in here is available here.

    Measuring Memory Usage

    The view of /proc/meminfo inside the container and outside the container are the same. This means that you cannot rely on tools like top to show how much memory the container is using. In other words, when run inside the container, top will correctly show only processes inside the container, but it will show overall memory usage for the entire host. To get valid information, we instead need to examine some files in /sys:

    These files live in the container’s memory cgroup directory, /sys/fs/cgroup/memory/lxc/memlimit/ on my system:

    Current memory usage: memory.usage_in_bytes

    Current memory + swap usage: memory.memsw.usage_in_bytes

    Maximum memory usage: memory.max_usage_in_bytes

    Maximum memory + swap usage: memory.memsw.max_usage_in_bytes

    You can use expr to show it as KB or MB, which is easier for me to read:

    expr `cat memory.max_usage_in_bytes` / 1024

    What Happens When the Limit is Reached?

    When the cap is reached, the container simply behaves as if the system ran out of memory. Calls to malloc will start failing (returning NULL), leading to strange and bad things happening. Dialog boxes may not open, you may not be able to save files, and wherever people didn’t bother to check the return value from malloc (aka, everywhere), you’ll get segfaults. You can alleviate the pressure like normal systems do, by enabling swap inside the container, but once that limit is reached, you’ll have the same problem. At that point the host system’s kernel will start firing up the OOM killer and killing stuff inside the container.

    Here is my extremely simple test program to drive up memory usage, install gcc in your container and you can try it too:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int i;
        for (i = 0; i < 65536; i++) {
            char *q = malloc(65536);
            printf("Malloced: %ld\n", 65536L * i);
        }
        return 0;
    }

    You can compile it simply with: gcc -o foo foo.c

    I used the following simple shell construct to watch the memory usage. This needs to be run outside the container and I ran it from the /sys directory mentioned above:

    while true; do echo -n "Mem Usage (mb): " && expr `cat memory.usage_in_bytes` / 1024 / 1024; echo -n "Mem+swap Usage (mb): " && expr `cat memory.memsw.usage_in_bytes` / 1024 / 1024; sleep 1; done

    With the above shell script running, I fired up a bunch of copies of foo one by one. Here’s the memory usage from that script:

    Running a few copies:

    Mem+swap Usage (mb): 825
    Mem Usage (mb): 511
    Mem+swap Usage (mb): 859
    Mem Usage (mb): 511

    A new copy of foo is starting:

    Mem+swap Usage (mb): 899
    Mem Usage (mb): 511
    Mem+swap Usage (mb): 932
    Mem Usage (mb): 511
    Mem+swap Usage (mb): 1010
    Mem Usage (mb): 511

    The OOM killer just said “Nope!”

    Mem+swap Usage (mb): 814
    Mem Usage (mb): 511
    Mem+swap Usage (mb): 825
    Mem Usage (mb): 511

    At the point where the OOM killer fired up, I see this in my container:
    [1] Killed ./foo

    So the limits are set, and they’re working.


    If you are using LXC or considering using LXC, you can use a memory limit to protect the host from a container run amok. You could also use it to test your code in an artificially restricted environment. In either case, try the tools above and let me know how it works for you.

    Tagged , ,

    Cairo Perf Testing on the Nexus 7

    Last week I was running some cairo perf traces on the Nexus7. Cairo-perf traces are a great way to measure 2d graphics performance and to use those numbers to measure the effects of code, hardware, or driver changes. One other cool thing is that with this tool you can do a benchmark on something like Chromium or Firefox without even needing the application installed.

    The purpose of this post is to briefly explain how to build the traces, how to run the tools on Ubuntu, and finally a quick look at some results on the Nexus7.

    Before running the tools you need to get set up and build the traces. A full clone and build will use several gigs of disk space. Since the current N7 image only builds a 6G or so filesystem, you may want to build the traces in a pbuilder. The disk I/O on the N7 is also pretty slow, so I found that building in the pbuilder, even though it runs inside a qemu, is much faster on my Core i5 + SSD.

    In the steps below I’ve tried to call out the things you can do to reduce the disk space.

    Building the traces

    1. Setup the build environment

    sudo apt-get install libcairo2-dev lzma git

    2. Grab the traces from git

    git clone git://

    3. (Optional) Remove unused files to save on disk space. Don’t do this if you plan on submitting changes back upstream.

    cd cairo-traces
    rm -rf .git

    4. Build the benchmarks. I used -j4 on my laptop and -j2 on the Nexus7; I didn’t really investigate the optimal value.

    make -j4 benchmarks

    5. The benchmark directory is now ready to use for traces. If you built it on a different system, you only need to copy over this directory. You can delete the lzma files if you want.

    The traces are pixman-version specific, so if you have a Raring-based system like the Nexus7, you can’t re-use them on a Precise-based box.

    Running cairo-perf-trace

    1. Before you start, delete the ocitysmap trace from the benchmarks folder. It uses too much RAM and ended up locking up my N7.

    2. If you are at the command line (connected via ssh, for example), you need to set the display or the tool will segfault; simply run export DISPLAY=:0

    3. Run the tool, I’d start first with a simple trace to make sure that everything is working.

    CAIRO_TEST_TARGET=image cairo-perf-trace -i3 -r ./benchmark/gvim.trace > ~/result_image.txt

    In that command above we did a few things. First we set the cairo backend; image is a software renderer, and you probably want to use xlib or xcb to test hardware. If you don’t set CAIRO_TEST_TARGET it will try all the back-ends; this will take a long, long time and I don’t recommend doing it. A simple way to get the tool to list them all is to set it to a bad value, for example

    mfisch@caprica:~$ CAIRO_TEST_TARGET=mfisch cairo-perf-trace
    Cannot find target 'mfisch'.
    Known targets: script, xcb, xcb-window, xcb-window&, xcb-render-0.0, xcb-fallback, xlib, xlib-window, xlib-render-0_0, xlib-fallback, image, image16, recording

    The next argument, -i3, tells it to run 3 iterations, which gives us a good set of data to work with. -r asks for raw output, which is literally just the amount of time the trace took to run. Finally, ./benchmark/gvim.trace is the trace to run. You can pass in a directory here and it will run them all, but I’d recommend trying just one until you know everything is working. When you’re running a long set of traces, doing a tail -f on the result file can help assure you that it’s working without placing too heavy a load on the system. The hardware backend runs took almost all day to finish, so you should always be plugged into a power source when doing this.

    The output should look something like this:
    [ # ] backend.content test-size ticks-per-ms time(ticks) ...
    [*] xlib.rgba chromium-tabs.0 1e+06 1962036000 1948712000 1938894000
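    Since -r output is just raw tick counts, a short script can reduce a results file to average milliseconds per trace (a sketch that assumes the output format shown above; summarize is an illustrative name, not part of the cairo tools):

```python
def summarize(path):
    """Parse cairo-perf-trace raw (-r) output: data lines look like
    '[*] backend.content test-name ticks-per-ms tick tick tick ...'.
    Returns {test-name: mean time in milliseconds}."""
    results = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0] != "[*]":
                continue  # skip the header and any stray output
            test, ticks_per_ms = parts[2], float(parts[3])
            ticks = [float(t) for t in parts[4:]]
            results[test] = (sum(ticks) / len(ticks)) / ticks_per_ms
    return results
```

    Run against the xlib sample above, this reports roughly 1950 ms for the chromium-tabs.0 trace.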

    Making Pretty Graphs

    Once you have some traces you can make charts with cairo-perf-chart. This undocumented tool has several options which I determined by reading the code. I did send a patch to add a usage() statement to this tool, but nobody has accepted it yet. First, the basic usage, then the options:

    cairo-perf-chart nexus7_fbdev_xlib.txt nexus7_tegra3_xlib.txt

    cairo-perf-chart will build two charts from that command: an absolute chart, where larger bars indicate worse performance, and a relative chart, which takes the first file as the baseline and compares the rest of the results files against it. On the relative chart, a bar below the zero line indicates a result slower than the baseline (the first argument to cairo-perf-chart).

    Now a quick note about the useful arguments. cairo-perf-chart can take as many results files as you want when building graphs, if you’d like to compare more than two files. If you want to resize the chart, just pass --width= and --height=; the defaults are 640×480. Another useful option is --html, which generates an HTML comparison chart from the data. The only issue with this option is that you need to manually make a table header and stick it into a basic HTML document.

    Some Interesting Results

    Now some results from the Nexus7 and they are actually pretty interesting. I compared the system with and without the tegra3 drivers enabled. Actually I just plain uninstalled the tegra3 drivers to get some numbers with fbdev. My first run used the image backend, pure software rendering. As expected the numbers are almost identical, since the software rendering is just using the same CPU+NEON.

    Absolute Results - Tegra3 vs fbdev drivers, image (software) backend

    Relative Results - Tegra3 vs fbdev drivers, image (software) backend

    The second set of results are more interesting. I switched to the xlib backend so we would get hardware rendering. With the tegra3 driver enabled we should expect a massive performance gain, right?

    Absolute Results - Tegra3 vs fbdev drivers, xlib backend

    Relative Results - Tegra3 vs fbdev drivers, xlib backend

    So as it turns out the tegra3 is actually way slower than fbdev and I don’t know why. I think this could be for a variety of reasons, such as unoptimized 2d driver code or hardware differences (CPU+NEON vs Tegra3 GPU).

    Now that we have a method for gathering data, perhaps we can solve that mystery?

    If you want to know more about the benchmarks or see some more analysis, you should read this great post which is where I found out most of the info on running the tools. If you want to know more background about the cairo-perf trace tools you might want to read this excellent blog post.

    Tagged , ,

    Announcing Fitbit Accomplishments for the Ubuntu Accomplishments System!

    For my birthday in October, I received a Fitbit One. The reason that I wanted it is that I thought with better data tracking I could push myself to be more active during the day. The Fitbit One is a “fitness tracker”, essentially a technologically enhanced pedometer, that can also measure elevation gain (steps climbed, they call it) and even track your sleep patterns. The device, which is slightly larger than a large paperclip, syncs wirelessly to iPhones or to a computer. It uploads all your statistics to, which provides a cool dashboard that you can use to track your steps, floors climbed, calories burned, etc. Here’s my dashboard from yesterday:

    My Fitbit dashboard from yesterday

    Like most geeks, I love data, and nice charts and graphs too, so I’ve really enjoyed the dashboard. I’ve also found that the maxim, “What gets measured gets done” really applies here. Two nights ago at 11:30PM, I noticed I was 300 steps short of 10000 steps, so I made sure to walk around while brushing my teeth, took the trash out, and generally wandered until I got past 10000 steps. That was only 300 steps, but I’ve also found myself walking the dog more, walking to the library more, etc.

    So what does this have to do with Ubuntu? Well you can see at the bottom of that dashboard that Fitbit gives “badges”, which Chris Wayne thought would be a perfect fit for the Ubuntu Accomplishments system.  So Chris hacked all weekend and created an online account plugin for Fitbit. On Monday we hooked the oauth account created by Chris’s plugin into Fitbit’s web API and now we had Fitbit accomplishments!

    Badges I can earn

    My Trophies

    You need a Fitbit to use it, and if you buy one, use this link so that Chris and I can support our daily beer and daily steps habits. The same link is also in the collection itself.


    Note: this requires Quantal or Raring because it uses Online Accounts. The Raring build broke for some reason earlier, but it should be ready an hour from the time this posts.

    Installing is easy, although if you don’t already have Ubuntu Accomplishments installed it’s a two-step process.

    First, install Ubuntu Accomplishments if you’ve not already done so:

    sudo add-apt-repository ppa:ubuntu-accomplishments/releases
    sudo apt-get update
    sudo apt-get install accomplishments-daemon accomplishments-viewer ubuntu-community-accomplishments ubuntu-desktop-accomplishments accomplishments-lens

    Then install the Fitbit Accomplishments collection:

    sudo add-apt-repository ppa:fitbit-accomplishment-maintainers/daily
    sudo apt-get update
    sudo apt-get install account-plugin-fitbit ubuntu-fitbit-accomplishments

    If you’re already running Ubuntu Accomplishments, you’ll need to close the viewer and restart the Accomplishments Daemon to get the new collection to show up. You can restart the daemon by doing accomplishments-daemon --restart. A simple logout/login will also work.

    The first accomplishment you need to get is connecting to your Fitbit account. Chris also wrote a post with some screenshots if you get stuck.

    You need to set up your Fitbit Online Account before you can get any Fitbit badges; follow the steps in the accomplishment to do so.

    Once you do that, the other Fitbit accomplishments will unlock in a logical progression as you achieve things (for example, the 10000-steps-in-a-day accomplishment requires you to complete the 5000-steps-in-a-day accomplishment first).

    Note that Fitbit admits that the Badge API is still new and there are some quirks; for example, Fitbit provides badges for 50 and 250 lifetime kilometers, but for lifetime miles they offer 50, 250, 1000, and 5000. Also some badges are transparent and some are not, which I know we could fix, but I haven’t had time yet. As this API improves and is expanded, we’ll add more accomplishments, or better yet, you can add more by sending us a merge proposal (the code is here).


    Fitbit accomplishments, like walking 10000 steps in a day, obviously have nothing to do with Ubuntu, but this collection highlights the flexibility of the Ubuntu Accomplishments system. Anything that can be tested via script can be an accomplishment. I’m sure there are lots of other websites that people use that could be added as collections like this one. If you’re interested and you need help setting one up, you can find me (mfisch) in #ubuntu-accomplishments on Freenode.

    About the Accomplishments Code

    The code for checking these accomplishments in the accomplishments scripts is very very simple:

        import sys
        from fitbit_helper import FitBit  # provided by in the collection

        badgeid = "10000 DAILY_STEPS"
        me = FitBit.fetch(None)
        sys.exit(0 if badgeid in me.badges else 1)  # accomplishment scripts signal success via their exit status

    This is because all the hard logic is in, which provides the FitBit class and handles caching for us. Since each accomplishment has a script associated with it, we want to cache the info so that we don’t hammer the Fitbit web API once per script every 15 minutes (all unlocked accomplishments are checked every 15 minutes). The caching solution in was copied from the model used by AskUbuntu and Launchpad in the Ubuntu Community Accomplishments package. is also how we interact with the Online Accounts plugin and Fitbit’s web API, so if you want to see the “interesting code”, look there.
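    The caching pattern is roughly this (a simplified sketch of the idea rather than the actual code; get_badges and the file layout are illustrative):

```python
import json, os, time

def get_badges(fetch_from_api, cache_path, ttl=15 * 60):
    """Return badge data, calling fetch_from_api at most once per ttl
    seconds; every other caller inside the window reads the cache file."""
    fresh = (os.path.exists(cache_path)
             and time.time() - os.path.getmtime(cache_path) < ttl)
    if fresh:
        with open(cache_path) as f:
            return json.load(f)
    badges = fetch_from_api()  # hit the web API only on a cold or stale cache
    with open(cache_path, "w") as f:
        json.dump(badges, f)
    return badges
```

    With a 15-minute ttl, a dozen accomplishment scripts waking up together still cost only one web API call.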

    Note: Expect a follow-up blog post from Chris Wayne on how to write an online accounts plugin in the next couple of weeks.

    Help Needed

    If you live outside of the US, you have a Fitbit, and you’re willing to help, I need some assistance to see what happens if the Fitbit API returns localized badge info. I also need to see what it looks like when you get a badge marked in kilometers. I don’t think I get these because of where I live (the US). Drop me an email at matt@<this_domain>.com if you can assist, or find me in #ubuntu-accomplishments on Freenode; I’m mfisch. I think I’ll only need a few minutes of your time.

    Tagged , ,