Using Keystone’s LDAP Connection Pools to Speed Up OpenStack

If you use LDAP with Keystone in Juno you can give your implementation a turbo-boost by using LDAP connection pools. Connection pooling is a simple idea: instead of bringing up and tearing down a connection every time you talk to LDAP, you reuse an existing one. This technique is already widely used in OpenStack when talking to MySQL, and adding it here really makes sense.

After enabling this feature, using the default settings, I got a 3x-5x speed-up when getting tokens as an LDAP-authenticated user.

Using the LDAP Connection Pools

One of the good things about this feature is that it’s well documented (here). Setting it up is easy. The tl;dr is that you enable two options, use the defaults for everything else, and they seem to work pretty well.

First, turn the feature on; nothing else works without this master switch:

# Enable LDAP connection pooling. (boolean value)
use_pool=true

Then if you want to use pools for user authentication, add this one:

# Enable LDAP connection pooling for end user authentication. If use_pool
# is disabled, then this setting is meaningless and is not used at all.
# (boolean value)
use_auth_pool=true
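Beyond those two master switches there are a handful of pool-tuning knobs, all in the [ldap] section of keystone.conf. The option names below are the documented ones from this era of Keystone, and the values are what I understand the defaults to be; double-check them against your release before relying on them:

```ini
[ldap]
# Master switches (covered above)
use_pool = true
use_auth_pool = true

# Size of the shared pool used for LDAP queries
pool_size = 10

# Retry behavior when a pooled connection fails
pool_retry_max = 3
pool_retry_delay = 0.1

# Connection timeout (-1 means no timeout) and max lifetime, in seconds
pool_connection_timeout = -1
pool_connection_lifetime = 600

# Separate pool used for end-user authentication binds
auth_pool_size = 100
auth_pool_connection_lifetime = 60
```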

Experimental Setup

For my experiment I used a virtual Keystone node that we run on top of our cloud, pointing at a corporate AD box over ldaps. Using an LDAP user, I requested 500 UUID tokens in a row. We use a special hybrid driver that binds against LDAP with the user’s credentials to verify that the user/password combo is valid. I also changed my OS_AUTH_URL to point directly at localhost to avoid hitting the load balancer. Finally, I’m running Keystone under eventlet (keystone-all) rather than apache2. According to the Keystone PTL, Morgan Fainberg, “under apache I’d expect less benefit.” If you’re not using eventlet, ldaps, or my hybrid driver you might get different results, but I’d still expect pooling to be faster.

Here’s my basic test script:

export OS_TENANT_NAME=admin
export OS_USERNAME=LDAPUSER
export OS_PASSWORD=password
export OS_REGION_NAME='dev02'
export OS_AUTH_STRATEGY=keystone
export OS_AUTH_URL=http://localhost:5000/v2.0/
echo "getting $1 tokens"
for i in $(eval echo "{1..$1}")
do
curl -s -X POST http://localhost:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' > /dev/null
done
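One non-obvious bit in the script is the `$(eval echo "{1..$1}")` idiom. In bash, brace expansion happens before variable expansion, so `{1..$1}` alone would never expand; the `eval` forces a second parse after `$1` has been substituted. A quick illustration, using a hypothetical count of 5:

```shell
n=5

# Without eval, brace expansion runs first and sees "$n" literally
echo {1..$n}          # prints: {1..5}

# eval re-parses the line after $n has been expanded
eval echo "{1..$n}"   # prints: 1 2 3 4 5

# seq is a simpler alternative for a numeric loop
seq 1 "$n"
```

`seq 1 "$1"` would have worked just as well in the test script; the eval trick is only needed if you specifically want brace expansion.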

Results

Using the default config, it took 7 minutes 25 seconds to get the 500 tokens.

getting 500 tokens
real 7m25.527s
user 0m2.312s
sys 0m1.557s

I then enabled use_pool and use_auth_pool and restarted Keystone. The results were quite a bit faster: a 5x speed-up. Wow.

getting 500 tokens
real 1m25.774s
user 0m2.302s
sys 0m1.539s

I ran this several times and the results were all within a few seconds of each other.

I also tried this test using the keystone CLI and the results were closer to 3.5x faster, still a respectable number.

Watching the Connections

I have a simple command so I can see how many connections are being used:

watch -n1 "netstat -an -p tcp | grep :3269"

With this running, I can watch the connection count bounce between 0 and 1 when connection pools are disabled.

Using the defaults but with connection pools enabled, the number of connections held steady at 4. A few minutes after the test finished, the connections timed out and the count dropped back to 0.

At first I wasn’t sure why I never got more than 4, since raising the pool counts did not change this value. It turns out this is because I have 4 workers on this node, and each worker process maintains its own pool.
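If you want a number rather than eyeballing the netstat output, you can count the established connections directly. A minimal sketch (the sample input is fabricated, and it assumes the usual netstat column layout where the remote address is field 5 and the state is field 6; in practice you would pipe the real netstat output in):

```shell
# Fabricated netstat-style lines standing in for real output
sample='tcp        0      0 10.0.0.5:44321   10.0.0.9:3269    ESTABLISHED
tcp        0      0 10.0.0.5:44322   10.0.0.9:3269    ESTABLISHED
tcp        0      0 10.0.0.5:44323   10.0.0.9:636     ESTABLISHED'

# Count ESTABLISHED connections to the LDAPS global-catalog port (3269)
echo "$sample" | awk '$5 ~ /:3269$/ && $6 == "ESTABLISHED" {n++} END {print n+0}'
# prints: 2
```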

Tokens Are Fundamental

The coolest part is that this change speeds everything up. Since you need a token to do anything, I re-ran the test, this time running nova list, cinder list, and glance image-list 50 times using the clients. Without the pooling it took 316 seconds; with the pooling it took 231 seconds.
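Back-of-the-envelope, that works out to roughly a 1.37x speed-up (about 27% less wall-clock time) for ordinary client work:

```shell
# Speed-up from the measured wall-clock times above (seconds)
awk 'BEGIN { printf "%.2fx faster\n", 316 / 231 }'
# prints: 1.37x faster
```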

Plans

There are lots of ways to improve the performance of OpenStack, but this one is simple and easy to set up. The puppet code to configure this is in progress now. Once it lands, I plan to move this to dev and then to staging and prod in our environments. If I learn any other interesting things there, I’ll update the post.


4 thoughts on “Using Keystone’s LDAP Connection Pools to Speed Up OpenStack”

  1. T Lam says:

    Hello, based on your research, would you say it is correct for me to say 1 connection in the pool per worker up to the pool size? Currently, I have 2 keystone processes each with 40 workers in production, and someone fears that setting a pool size of 100 can use up 100*40*2 connections with LDAP.

  2. Antonio Messinalo says:

    Hello,

    We had connection issues with the centralized LDAP server of our institution, and we enabled LDAP connection pools on our deployment. All problems are gone now!

    I think it doesn’t really make sense to create one connection per request: pools should be enabled *by default*

    Thank you for this post

  3. Imtiaz Chowdhury says:

    We had a number of issues when we were using Keystone with eventlet. Very often eventlet would just hang, causing the entire system to come to a standstill. As soon as we put Keystone behind apache, all these issues went away.

    P.S. We are using Keystone with LDAP with the Hybrid Identity driver.

  4. Rafael Urena says:

    I liked this blog post and i know it’s an older one. I found it quite useful. Thank you. I modified the script for keystone v3.

    #!/bin/bash
    echo "getting $1 tokens"
    time for i in $(eval echo "{1..$1}")
    do
    curl -s -X POST \
    -H "Content-Type: application/json" \
    -d '{
      "auth": {
        "identity": {
          "methods": ["password"],
          "password": {
            "user": {
              "domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},
              "name": "'"$OS_USERNAME"'",
              "password": "'"$OS_PASSWORD"'"
            }
          }
        }
      }
    }' \
    $OS_AUTH_URL/auth/tokens > /dev/null
    done

    Works much the same. I did notice something peculiar: a local account managed by Keystone seems to take much longer to create the 500 tokens, about 2.5x longer. Not sure why.
