Monthly Archives: February 2014

Landscape Tags to Puppet Facter Facts

I’ve been playing around this week with using Landscape to interact with puppet. I really like Landscape as an administration tool, but I also really like using puppet to manage packages and configuration. So the question is how to get puppet to do something based on Landscape. Landscape tags seemed like an obvious choice for this: tag a system as “compute-node” and it becomes a Nova compute node. But how do you make that happen?

Standing on Shoulders

After running through a couple of ideas, I was inspired by a great article from Mike Milner. Mike’s blog post uses Landscape tags to do node classification; I wanted to use tags to set Facter facts, so I needed a few changes.

Make Some Tags

First I went to Landscape and added some basic tags to my box:

Just a couple tags

Get Some Tags

Next, I sat down to write a script that would run on each system and get the tags for the system it was running on. I did this first, before looking into the glue with Facter. It turned out to be pretty simple, since the Landscape API is easy to use, well documented, and comes with a CLI implementation that makes testing easy.

ubuntu@mfisch:~$ get_tags.py
nova-compute
vm
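
For reference, the heart of such a script boils down to something like this. It’s just a sketch: it assumes the landscape-api Python bindings (the API class), API credentials in the environment, and that the returned computer records carry a “tags” field, so adjust the details for your setup.

#!/usr/bin/env python
# Sketch of get_tags.py: print this machine's Landscape tags, one per line.
# Assumes the landscape-api Python bindings and LANDSCAPE_API_URI/KEY/SECRET
# set in the environment.
import os
import socket
import sys

from landscape_api.base import API

api = API(os.environ["LANDSCAPE_API_URI"],
          os.environ["LANDSCAPE_API_KEY"],
          os.environ["LANDSCAPE_API_SECRET"])

# Ask Landscape for computers whose hostname matches this box.
computers = api.get_computers(query="hostname:%s" % socket.gethostname())
if len(computers) != 1:
    sys.exit("expected exactly one match, got %d" % len(computers))

for tag in computers[0].get("tags", []):
    print(tag)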

Facter Time

Once that worked, it was time to look at Facter. A colleague told me about executable facter facts. The tl;dr is: drop a script into /etc/facter/facts.d, make it executable, and Facter will take its output and turn it into facts. This was super cool. I had planned on some complex hooks being required, but all I had to do was print the tags to stdout. However, Facter wants a key=value pair and Landscape tags are really just values, so I decided to generate keys of the form landscape_tagN, where N is an incrementing number. With that change in, I ran facter:

ubuntu@mfisch:~$ facter | grep landscape
landscape_tag0 => nova-compute
landscape_tag1 => vm
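
The executable fact itself can be as simple as the sketch below; the tag list is hard-coded here where the real script calls the Landscape API as above.

#!/usr/bin/env python
# Dropped into /etc/facter/facts.d/ and marked executable, Facter runs this
# and turns each key=value line on stdout into a fact.
tags = ["nova-compute", "vm"]  # placeholder for the Landscape API lookup

for i, tag in enumerate(tags):
    print("landscape_tag%d=%s" % (i, tag))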

Values or Keys?

The puppet master most likely won’t know what to do with “landscape_tag0”, so we’ll need a convention to make this more useful. One idea from my colleague was to actually set a tag with an = sign in it, like “type=compute”. Alas, Landscape won’t let us do that, so instead we’ll probably just adopt a convention that the last _ is the key/value boundary. That would map like this:

  • datacenter_USEast -> datacenter=USEast
  • node_type_compute -> node_type=compute
  • foo_bar -> foo=bar

Note that the current version of my script on GitHub doesn’t implement this convention yet; you’ll probably want to choose your own anyway.

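In the meantime, here’s a quick sketch of what that split would look like; tags without an underscore are simply skipped.

#!/usr/bin/env python
# Turn "last underscore" tags into key=value pairs,
# e.g. node_type_compute -> node_type=compute.
tags = ["datacenter_USEast", "node_type_compute", "foo_bar"]

for tag in tags:
    if "_" not in tag:
        continue  # no key/value boundary; ignore (or handle as you see fit)
    key, value = tag.rsplit("_", 1)
    print("%s=%s" % (key, value))
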
Issues

My script is pretty simple: it retrieves a list of computers, filtering on the hostname, and as long as it finds exactly one match, it proceeds. This matching is the weak point of the current iteration of the script. It’s possible to have more than one system registered with the same hostname, so I’m thinking of adding a better filter here when I get time. Note that the original script I based mine on solved this by making you pass in the hostname or id as an argument. My current plan is to look at the system title in /etc/landscape/client.conf and do a second-level filter on that, but even that isn’t guaranteed to be unique.

What is This Good For?

You probably use an automated tool to install and boot a node, and it registers with Landscape (via puppet of course). Now what? We have a box running Ubuntu but not doing much else. Let’s say I want a new compute node in an OpenStack cluster: all I’d have to do is tag that box with a pre-determined tag, say “compute-node”, let the puppet agent run, and wait. The puppet master will see the facter facts that tell it the box should be a compute node and act accordingly.

Code

Here’s the code; patches are always welcome.

Keystone: User Enabled Emulation (follow-up)

Last week I wrote about Keystone using LDAP for Identity. Thanks to a helpful comment from Yuriy Taraday and a quick email exchange, I have solved the issue of the empty “Enabled” column. Here’s how it works.

Emulated user enabled checking is useful if your LDAP system doesn’t have a simple “enabled” attribute. It works by using a separate group of users or tenants that you must be a member of in order to be enabled. Yuriy has a simple example of this, which shows how we can mark user2 as enabled and user1 as disabled.

ou=Users
+--cn=enabled_users
|     member=cn=user2,ou=Users
+--cn=user1
+--cn=user2
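
Enabling or disabling a user is then just a matter of adding or removing them from that group. For example, an LDIF along these lines (the base DN is made up for illustration) would enable user1 when applied with ldapmodify -x -D <bind dn> -W -f enable_user1.ldif:

# enable_user1.ldif: add user1 to the enabled_users group
dn: cn=enabled_users,ou=Users,dc=example,dc=com
changetype: modify
add: member
member: cn=user1,ou=Users,dc=example,dc=com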

To make this setup work, add these to your keystone.conf file:

user_enabled_emulation = True
user_enabled_emulation_dn = cn=enabled_users,cn=groups,cn=accounts,dc=example,dc=com

The default value for foo_enabled_emulation_dn is cn=enabled_foos,$tree_dn; in other words, user_enabled_emulation_dn defaults to cn=enabled_users,$tree_dn (note the s).

Keep in mind that even when you remove a user from the enabled_users group, they are still a valid user to LDAP/AD. This means that your service account, which you’re using for ldaps authentication, does not need to be a member of this group.

The user_enabled_emulation_* options appear to be undocumented, so I’ll work on that this week to make the official docs more helpful.
