Using Keystone’s LDAP Connection Pools to Speed Up OpenStack

If you use LDAP with Keystone in Juno, you can give your implementation a turbo-boost by using LDAP connection pools. Connection pooling is a simple idea: instead of bringing up and tearing down a connection every time you talk to LDAP, you reuse an existing one. This technique is widely used in OpenStack when talking to MySQL, and adding it here really makes sense.

After enabling this feature with the default settings, I got a 3x-5x speed-up when getting tokens as an LDAP-authenticated user.

Using the LDAP Connection Pools

One of the good things about this feature is that it's well documented (here). Setting it up is easy. The tl;dr: enable two options, use the defaults, and they seem to work pretty well.

First, turn the feature on; nothing else works without this master switch:

# Enable LDAP connection pooling. (boolean value)
use_pool=true

Then if you want to use pools for user authentication, add this one:

# Enable LDAP connection pooling for end user authentication. If use_pool
# is disabled, then this setting is meaningless and is not used at all.
# (boolean value)
use_auth_pool=true
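The defaults were fine for me, but if you do want to turn the knobs, these are the related [ldap] options as I understand them from the Juno sample keystone.conf (values shown are the defaults; double-check them against your release):

```ini
[ldap]
# Connection pool size. (integer value)
pool_size = 10
# Maximum count of reconnect trials. (integer value)
pool_retry_max = 3
# Connection lifetime in seconds. (integer value)
pool_connection_lifetime = 600
# End user auth connection pool size. (integer value)
auth_pool_size = 100
# End user auth connection lifetime in seconds. (integer value)
auth_pool_connection_lifetime = 60
```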

Experimental Setup

For my experiment I used a virtual Keystone node that we run on top of our cloud, pointing at a corporate AD box over ldaps. Using an LDAP user, I requested 500 UUID tokens in a row. We use a special hybrid driver that binds against LDAP with the user's credentials to ensure the user/pass combo is valid. I also changed my OS_AUTH_URL to point directly at localhost to avoid hitting the load balancer. Finally, I'm running Keystone under eventlet (keystone-all) rather than apache2. According to the Keystone PTL, Morgan Fainberg, "under apache I'd expect less benefit." If you're not using eventlet, ldaps, or our hybrid driver, you might get different results, but I'd still expect a speed-up.

Here’s my basic test script:

# Takes the token count as $1; run under time(1) to get the numbers below.
export OS_TENANT_NAME=admin
export OS_USERNAME=LDAPUSER
export OS_PASSWORD=password
export OS_REGION_NAME='dev02'
export OS_AUTH_STRATEGY=keystone
export OS_AUTH_URL=http://localhost:5000/v2.0/
echo "getting $1 tokens"
for i in $(seq 1 "$1")
do
    curl -s -X POST http://localhost:5000/v2.0/tokens \
        -H "Content-Type: application/json" \
        -d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' > /dev/null
done

Results

Using the default config, it took 7 minutes, 25 seconds to get the tokens.

getting 500 tokens
real 7m25.527s
user 0m2.312s
sys 0m1.557s

I then enabled use_pool and use_auth_pool and restarted Keystone. The results were quite a bit faster: a 5x speed-up. Wow.

getting 500 tokens
real 1m25.774s
user 0m2.302s
sys 0m1.539s

I ran this several times and the results were all within a few seconds of each other.

I also tried this test using the keystone CLI, and the results were closer to a 3.5x speed-up, still a respectable number.

Watching the Connections

I have a simple command so I can see how many connections are being used:

watch -n1 "netstat -an -p tcp | grep :3269"

With this I can watch the connection count bounce between 0 and 1 when connection pools are disabled.

With pooling enabled and the defaults, the connection count held steady at 4. A few minutes after the test finished, the connections expired and the count dropped back to 0.

At first I wasn't sure why I never got more than 4, and raising the pool counts didn't change the number. It turns out it's because I have 4 workers on this node.

Tokens Are Fundamental

The coolest part is that this change speeds everything up. Since you need a token to do anything, I re-ran the test, this time running nova list, cinder list, and glance image-list 50 times using the clients. Without pooling it took 316 seconds; with pooling, 231 seconds.

Plans

There are lots of ways to improve the performance of OpenStack, but this one is simple and easy to set up. The puppet code to configure it is in progress now. Once it lands, I plan to roll this out to dev, then staging and prod in our environments. If I learn anything else interesting there, I'll update this post.

Keystone: User Enabled Emulation Can Lead to Bad Performance

An update on my previous post about User Enabled Emulation. tl;dr: don't use it, it's slow. Here's what I found:

I just spent parts of today debugging why keystone user-list was so slow: it was taking between 8 and 10 seconds to list 20 users. I spent a few hours tweaking the caching settings, but the database was so small that the cache never filled up, so I realized caching was not the main issue. A colleague asked me whether a basic ldapsearch was slow, and no, it was fine.

Then I dug into what Keystone does with the enabled-emulation code. Unlike a user_filter, Keystone appears to query the user list and then re-query LDAP for each user to check whether they're in the enabled_emulation group. All those extra queries slow things down. When I disabled this setting, query performance improved dramatically, to between 2 and 2.5 seconds, about a 4x speed-up. In a real environment with more than 20 users, the gain will be even better in terms of real seconds.
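A hypothetical back-of-the-envelope on why this hurts, assuming one extra group-membership lookup per user (my reading of the observed behavior, not a statement about Keystone internals):

```shell
# With enabled_emulation: 1 query to list users, plus 1 group-membership
# check per user. With a plain user_filter: 1 server-side query total.
users=20
with_emulation=$((1 + users))
with_filter=1
echo "emulation: $with_emulation queries, filter: $with_filter query"
# -> emulation: 21 queries, filter: 1 query
```

Every one of those extra round-trips pays the full network latency to the LDAP server, which is why the slowdown scales with the user count.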

Disabling the enabled_emulation leaves us with no user-enabled information. To get the Enabled field back, I'm going to add a field to the schema that emulates what AD does for enabled users. Since this portion of Keystone was designed for AD, this blog post may help clear up exactly what Keystone expects from an AD point of view. Read that page to the end and you get a special treat: how to do a logical OR in LDAP. See if it makes less sense than the bf language does.
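For reference, when Keystone reads the enabled state straight from AD, it works off the userAccountControl bitmask (bit 2 set means the account is disabled; 512 is a plain normal account). The [ldap] settings for that look something like this; a sketch based on the Juno options, so verify against your deployment:

```ini
[ldap]
user_enabled_attribute = userAccountControl
# Bitmask checked against the attribute; in AD, bit 2 means "disabled".
user_enabled_mask = 2
# Value for a plain enabled account (AD's NORMAL_ACCOUNT).
user_enabled_default = 512
```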

Also, to reduce my user count I enabled a user_filter, which, since it's just part of the initial query, does NOT appear to slow things down. You could skip the new field and just use the filter if you want; however, it's not clear what impact a blank Enabled field has, other than perhaps some confusion. If it has a real impact, PLEASE comment here!
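For the curious, a user_filter is just an extra LDAP filter ANDed into that initial user query, so it's evaluated server-side in the one search. Mine looks roughly like this (the group DN here is made up for illustration):

```ini
[ldap]
# Only return users in a particular group; part of the single search
# query, so it adds no extra round-trips.
user_filter = (memberOf=CN=openstack-users,OU=Groups,DC=example,DC=com)
```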
