# Setting Up S4u2proxy

## Motivation:

Kerberos authentication provides a good experience for allowing users to connect to a service. However, this authentication does not allow the service to take the received ticket and use it to communicate with another service on the user's behalf.

The canonical example of this is a web service that, when a user authenticates to it, wants to use the same user credentials to authenticate with an LDAP service, rather than requiring credentials of its own.

In my specific case if I have a kerberized keystone then when the user talks to Horizon I want to forward the user’s ticket to authenticate with keystone.

The mechanism that allows us to forward these Kerberos tickets is called Service-for-User-to-Proxy, or S4U2Proxy. To mitigate some of the security issues with delegating user tickets, there are strict controls, which have to be configured, over which services are allowed to forward tickets and to whom.

For a more in-depth explanation check out the further reading section at the end of this post.

## Scenario:

I intend this guide to be a step-by-step tutorial on setting up a basic S4U2 proxying service that we can verify, and to give you enough information to go about setting up more complex delegations. If you are just looking for the raw commands you can jump down to Setting up the Delegation.

I created 3 CentOS 7 virtual machines on a private network:

• An IPA server at ipa.s4u2.jamielennox.net
• A service provider at service.s4u2.jamielennox.net that will provide the target service.
• An S4U2 proxy service at proxy.s4u2.jamielennox.net that will accept a Kerberos ticket and forward it to service.s4u2.jamielennox.net

For this setup I am creating a testing realm called S4U2.JAMIELENNOX.NET. I will post the setup that works for my environment and leave it up to you to recognize where you should use your own service names.

## Setting up IPA

I pick the option to enable DNS as I think it's easier. You can skip that, but then you'll need to make /etc/hosts entries for each of the hosts.
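A minimal install on the IPA machine might look like the following sketch; the realm, domain and password values are placeholders for your own environment:

```shell
# Install the FreeIPA server (and DNS support) and run the installer.
yum install -y ipa-server bind-dyndb-ldap
ipa-server-install --setup-dns \
    -r S4U2.JAMIELENNOX.NET \
    -n s4u2.jamielennox.net \
    -p DM_PASSWORD -a ADMIN_PASSWORD
```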

## Setting up the Service

We start by doing the basic configuration of the machine and setting it up as an IPA client machine.
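The enrolment is the standard IPA client setup, roughly (hostnames from my test realm, password a placeholder):

```shell
yum install -y ipa-client
ipa-client-install --domain s4u2.jamielennox.net \
    --server ipa.s4u2.jamielennox.net \
    --realm S4U2.JAMIELENNOX.NET \
    -p admin -w ADMIN_PASSWORD --mkhomedir
```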

Register that we will be exposing a HTTP service on the machine:
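From anywhere you hold an admin ticket, something like:

```shell
kinit admin
ipa service-add HTTP/service.s4u2.jamielennox.net
```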

Fetch the Kerberos keytab from IPA and make it accessible to Apache:
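Roughly as follows; the keytab path is simply where the Apache config later expects to find it:

```shell
ipa-getkeytab -s ipa.s4u2.jamielennox.net \
    -p HTTP/service.s4u2.jamielennox.net \
    -k /etc/httpd/conf/httpd.keytab
chown apache:apache /etc/httpd/conf/httpd.keytab
chmod 600 /etc/httpd/conf/httpd.keytab
```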

Create a simple site that will display the environment variables the server has received. I share most people's opinion of PHP; however, for a simple diagnostic site it's hard to beat phpinfo():
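Something as small as this will do, dropped in as /var/www/html/index.php:

```php
<?php
// Dump the request environment, including the REMOTE_USER and
// AUTH_TYPE variables that mod_auth_kerb will set.
phpinfo();
?>
```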

Configure Apache to serve our simple PHP site behind Kerberos authentication.
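A sketch of the relevant vhost, using mod_auth_kerb; the paths and realm match the earlier steps:

```apache
<VirtualHost *:80>
  ServerName service.s4u2.jamielennox.net
  DocumentRoot /var/www/html

  <Directory /var/www/html>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate on
    KrbMethodK5Passwd off
    KrbServiceName HTTP
    KrbAuthRealms S4U2.JAMIELENNOX.NET
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    Require valid-user
  </Directory>
</VirtualHost>
```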

Finally restart Apache to bring up the service site:
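On CentOS 7:

```shell
systemctl restart httpd
```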

## Setting up my local machine

You could easily test all this using curl; however, particularly as we are setting up HTTP-to-HTTP delegation, the obvious use is going to be via the browser, so at this point I like to configure Firefox to allow Kerberos negotiation.

I don't want my development machine to be an IPA client, so I just configure Kerberos to know about the KDC so that I can get a ticket on my machine with kinit.

Edit /etc/krb5.conf to add:
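Something along these lines, matching my test realm:

```ini
[realms]
  S4U2.JAMIELENNOX.NET = {
    kdc = ipa.s4u2.jamielennox.net:88
    admin_server = ipa.s4u2.jamielennox.net:749
  }

[domain_realm]
  .s4u2.jamielennox.net = S4U2.JAMIELENNOX.NET
  s4u2.jamielennox.net = S4U2.JAMIELENNOX.NET
```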

And because I don’t want to rely on the DNS provided by this IPA server I’ll need to add the service IPs to /etc/hosts:
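The addresses are whatever your private network assigned; mine are shown purely as examples:

```
192.168.122.10  ipa.s4u2.jamielennox.net
192.168.122.11  service.s4u2.jamielennox.net
192.168.122.12  proxy.s4u2.jamielennox.net
```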

In Firefox open the config page (type about:config into the URL bar) and set:
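The key setting is:

```
network.negotiate-auth.trusted-uris = .s4u2.jamielennox.net
```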

These are comma-separated values, so you can configure this in addition to any existing realms you might have configured.

To test get a ticket:
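For the admin user of the test realm:

```shell
kinit admin@S4U2.JAMIELENNOX.NET
klist
```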

I can now point Firefox at http://service.s4u2.jamielennox.net and see the phpinfo() dump of environment variables. This means we have successfully set up our service host.

Interesting environment variables to check to ensure this is correct are:

• REMOTE_USER admin shows that the ticket belonged to the admin user.
• AUTH_TYPE Negotiate indicates that the user was authenticated via the Kerberos mechanism.

## Create Proxy Service

When you register the service you have to mark it as allowed to delegate credentials. You can do this anywhere you have an admin ticket, or via the web UI; however, there are fewer options to provide if you use one of the IPA client machines.
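With an admin ticket this looks something like:

```shell
ipa service-add HTTP/proxy.s4u2.jamielennox.net --ok-as-delegate=true
```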

or to modify an existing service:
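```shell
ipa service-mod HTTP/proxy.s4u2.jamielennox.net --ok-as-delegate=true
```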

## Setting up the Delegation

Unfortunately FreeIPA has no way to manage S4U2 delegations via the command line or GUI yet, so we must resort to editing LDAP directly. The S4U2 access permissions are defined from a group of services (groupOfPrincipals) onto a group of services.

You can see existing delegations via:
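Authenticating with your admin Kerberos ticket, something like:

```shell
ldapsearch -Y GSSAPI -H ldap://ipa.s4u2.jamielennox.net \
    -b "cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net"
```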

This delegation is how the FreeIPA web service is able to use the user’s credentials to read and write from the LDAP server so there is at least 1 existing rule that you can copy from.

A delegation consists of two parts:

• A target group with a list of services (memberPrincipal) that are allowed to receive delegated credentials.
• A group (type objectclass=ipaKrb5DelegationACL) with a list of services (memberPrincipal) that are allowed to delegate credentials AND the target groups (ipaAllowedTarget) that they can delegate to.
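Sketched as LDIF, that gives us two entries; the cn values here are names I picked, while cn=s4u2proxy,cn=etc is the standard FreeIPA container for these rules:

```ldif
dn: cn=service-http-targets,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net
objectclass: groupOfPrincipals
objectclass: top
cn: service-http-targets
memberPrincipal: HTTP/service.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET

dn: cn=proxy-http-delegation,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net
objectclass: ipaKrb5DelegationACL
objectclass: groupOfPrincipals
objectclass: top
cn: proxy-http-delegation
memberPrincipal: HTTP/proxy.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET
ipaAllowedTarget: cn=service-http-targets,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net
```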

Write it to LDAP:
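Assuming the two entries are saved in a file called delegation.ldif, something like:

```shell
ldapadd -Y GSSAPI -H ldap://ipa.s4u2.jamielennox.net -f delegation.ldif
```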

And that's the hard work done: the HTTP/proxy.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET service now has permission to delegate a received ticket to HTTP/service.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET.

## Proxy

Registering the proxy machine is very similar.

Because the easiest way I know to test a Kerberos endpoint is with curl I am also going to write the proxy service directly in bash:
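A minimal CGI sketch, dropped in as (for example) /var/www/cgi-bin/proxy.sh; curl's --negotiate flag performs the SPNEGO exchange using the delegated credential cache:

```shell
#!/bin/bash
# mod_auth_kerb (with KrbSaveCredentials on) exports KRB5CCNAME into
# the CGI environment, so curl picks up the delegated ticket.
echo "Content-Type: text/html"
echo ""
curl -s --negotiate -u : http://service.s4u2.jamielennox.net
```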

This works because cgi-bin sets the request environment into the shell environment, so $KRB5CCNAME is set. If you are using mod_wsgi or similar then you would have to set that into your shell environment before executing any Kerberos commands.

I’m going to skip the IPA client setup and fetching the keytab - this is required and done exactly the same as for the service.

The Apache configuration for the proxy is very similar to the configuration of the service except we add:
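With mod_auth_kerb the relevant directives are the following; the second requires a mod_auth_kerb build new enough to support S4U2Proxy:

```apache
KrbSaveCredentials on
KrbConstrainedDelegation on
```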

Within the Apache vhost config file to enable it to delegate a Kerberos credential.

The final config file looks like:
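Putting it together, a sketch of the proxy vhost (again assuming mod_auth_kerb and the keytab fetched earlier):

```apache
<VirtualHost *:80>
  ServerName proxy.s4u2.jamielennox.net

  ScriptAlias / /var/www/cgi-bin/proxy.sh

  <Location />
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate on
    KrbMethodK5Passwd off
    KrbServiceName HTTP
    KrbAuthRealms S4U2.JAMIELENNOX.NET
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    KrbSaveCredentials on
    KrbConstrainedDelegation on
    Require valid-user
  </Location>
</VirtualHost>
```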

Restart Apache to have your changes take effect:

## Voila

After all that, aiming Firefox at http://proxy.s4u2.jamielennox.net gives me the same phpinfo page I got when I talked to the service host directly. You can verify from this site also that SERVER_NAME is service.s4u2.jamielennox.net and that REMOTE_USER is admin.

There are a couple of sites that this guide is based on:

• Adam Young - who initially prototyped a lot of the work for horizon which we hope to have ready soon.
• Alexander Bokovoy - who is the actual authority that Adam and I are relying upon.
• Simo Sorce - explaining the rationale and uses for the S4U2 delegation mechanisms.

# V3 Authentication With Auth_token Middleware

Auth_token is the middleware piece in OpenStack responsible for validating tokens and passing authentication and authorization information down to the services. It has been a long-time complaint of those wishing to move to the V3 identity API that auth_token only supported the v2 API for authentication.

Then auth_token middleware adopted authentication plugins and the people rejoiced!

Or it went by almost completely unnoticed. Auth is not an area people like to mess with once it’s working and people are still coming to terms with configuring via plugins.

The benefit of authentication plugins is that they allow you to use any plugin you like for authentication - including the v3 plugins. A downside is that being able to load any plugin means that there isn't the same set of default options present in the sample config files that would indicate the new options available for setting, particularly as we have to keep the old options around for compatibility.

The most common configuration I expect for v3 authentication with auth_token middleware is:
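Something like the following in the service's config file; the service credentials are placeholders:

```ini
[keystone_authtoken]
auth_uri = http://keystone.example.com:5000/
auth_plugin = password
auth_url = http://keystone.example.com:35357/
username = glance
password = SERVICE_PASSWORD
project_name = service
user_domain_id = default
project_domain_id = default
```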

The password plugin will query the auth_url for supported API versions and then use either v2 or v3 auth depending on what parameters you’ve specified. If you want to save a round trip (once on startup) you can use the v3password plugin which takes the same parameters but requires a V3 URL to be specified in auth_url.

An unfortunate thing we've noticed from this is that there is going to be some confusion, as most plugins present an auth_url parameter (used by the plugin to know where to authenticate the service user) along with the existing auth_uri parameter (reported in the headers of 401 responses to tell users where to authenticate). This is a known issue we need to address, and it will likely result in changing the name of the auth_uri parameter, as the concept of an auth_url is used by all existing clients and plugins.

For further proof that this works as expected, check out devstack, which has been operating this way for a couple of weeks.

NOTE: Support for authentication plugins was released in keystonemiddleware 1.3.0 released 2014-12-18.

I’ve been pushing a lot on the authentication plugins aspect of keystoneclient recently. They allow us to generalize the process of getting a token from OpenStack such that we can enable new mechanisms like Kerberos or client certificate authentication - without having to modify all the clients.

For most people hardcoding credentials into scripts is not an option, both for security and for reusability reasons. By having a standard loading mechanism for this selection of new plugins we can ensure that applications we write can be used with future plugins. I am currently working on getting this method into the existing services to allow for more extensible service authentication, so this pattern should become more common in future.

We can define a plugin from CONF like:
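For example; the group name and credential values are placeholders:

```ini
[somegroup]
auth_plugin = v3password
auth_url = http://keystone.example.com:5000/v3
username = user
user_domain_name = Default
password = pass
project_name = demo
project_domain_name = Default
```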

The initially required field here is auth_plugin, which specifies the name of the plugin to load. All other parameters in that section are dependent on the information that plugin (in this case v3password) requires.

To load that plugin from an application we do:
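A sketch using the keystoneclient loading helpers of the time; exact module paths may differ in later releases:

```python
from keystoneclient import auth
from keystoneclient import session
from oslo.config import cfg

CONF = cfg.CONF

# Make the plugin's options known to CONF, then build the plugin
# and a session that uses it.
auth.register_conf_options(CONF, 'somegroup')
auth_plugin = auth.load_from_conf_options(CONF, 'somegroup')
sess = session.Session(auth=auth_plugin)
```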

Then create novaclient, cinderclient or whichever client you wish to talk to with that session as normal.

You can also use an auth_section parameter to specify a different group in which the authentication credentials are stored. This allows you to reuse the same credentials in multiple places throughout your configuration file without copying and pasting.
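For example:

```ini
[somegroup]
auth_section = credentials

[othergroup]
auth_section = credentials

[credentials]
auth_plugin = v3password
auth_url = http://keystone.example.com:5000/v3
username = user
password = pass
```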

The above loading code for [somegroup] or [othergroup] will load separate instances of the same authentication plugin.

The options present on the command line are very similar to that presented via the config file, and follow a pattern familiar to the existing openstack CLI applications. The equivalent options as specified in the config above would be presented as:
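For the v3password example above it would look something like this; `myapp` is a hypothetical application and the values are placeholders:

```shell
myapp --os-auth-plugin v3password \
      --os-auth-url http://keystone.example.com:5000/v3 \
      --os-username user \
      --os-user-domain-name Default \
      --os-password pass \
      --os-project-name demo \
      --os-project-domain-name Default
```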

Or
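as environment variables:

```shell
export OS_AUTH_PLUGIN=v3password
export OS_AUTH_URL=http://keystone.example.com:5000/v3
export OS_USERNAME=user
export OS_USER_DOMAIN_NAME=Default
export OS_PASSWORD=pass
export OS_PROJECT_NAME=demo
export OS_PROJECT_DOMAIN_NAME=Default
```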

This is loaded from python via:
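A sketch of the loading code; note the asymmetry between module-level functions for auth and classmethods for session, which is the wonkiness the note below refers to:

```python
import argparse
import sys

from keystoneclient import auth
from keystoneclient import session

parser = argparse.ArgumentParser()
session.Session.register_cli_options(parser)
auth.register_argparse_arguments(parser, sys.argv[1:])
args = parser.parse_args()

auth_plugin = auth.load_from_argparse_arguments(args)
sess = session.Session.load_from_cli_options(args, auth=auth_plugin)
```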

NOTE: I am aware that the syntax is wonky with the command for session loading and auth plugin loading different. This was one of those things that was ‘optimized’ between reviews and managed to slip through. There is a review out to standardize this.

This will also set --help appropriately, so if you are unsure of the arguments that this particular authentication plugin takes you can do:

To prevent polluting your CLI's help, only the 'Authentication Options' for the plugin you specified by --os-auth-plugin are added to the help.

Having explained all this, one of the primary applications currently embracing authentication plugins, openstackclient, handles its options slightly differently and you will need to use --os-auth-type instead of --os-auth-plugin.

## Available plugins

The documentation for plugins provides basic features and parameters; however, it's not always going to be up to date with all options, especially for plugins not handled within keystoneclient. The following is a fairly simple script that lists all the plugins that are installed on the system and their options.
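A sketch of such a script, walking the entry point namespace that keystoneclient auth plugins register under; it requires keystoneclient (and any extra plugin packages) to be installed:

```python
import pkg_resources

# Every auth plugin advertises itself under this entry point namespace.
for ep in pkg_resources.iter_entry_points('keystoneclient.auth.plugin'):
    plugin_class = ep.load()
    print(ep.name)
    for opt in plugin_class.get_options():
        print('    %s' % opt.name)
```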

Which for the v3password plugin we’ve been using returns:

From that it’s pretty simple to determine the correct format for parameters.

• When using the CLI you should prefix with --os-, e.g. auth-url becomes --os-auth-url.
• Environment variables are upper-cased, prefixed with OS_, and - is replaced with _, e.g. auth-url becomes OS_AUTH_URL.
• Conf file variables replace - with _, e.g. auth-url becomes auth_url.

# Step-by-Step: Kerberized Keystone

Authentication plugins in keystoneclient have gotten to the point where they are sufficiently well deployed that we can start to do interesting additional forms of authentication. As Kerberos is a commonly requested authentication mechanism, here is a simple, single-domain keystone setup using Kerberos authentication. These steps are not necessarily how you would set up a production deployment, but they should give you the information you need to configure that yourself.

The steps create:

• A FreeIPA server machine called ipa.test.jamielennox.net
• A Packstack all in one deployment of OpenStack called openstack.test.jamielennox.net

# PKI Tokens Don’t Give Better Security

This will be real quick.

Every now and then I come across something that mentions how you should use PKI tokens in keystone as the cryptography gives it better security. It happened today and so I thought I should clarify:

There is no added security benefit to using keystone with PKI tokens over UUID tokens.

There are advantages to PKI tokens:

• Token validation without a request to keystone means less impact on keystone.

And disadvantages:

• Larger token size.
• Additional complexity to set up.

However the fundamental model, that this opaque chunk of data in the 'X-Auth-Token' header indicates that this request is authenticated, does not change between PKI and UUID tokens. If someone steals your PKI token you are just as screwed as if they stole your UUID token.

# How to Use Keystoneclient Sessions

In the last post I did on keystoneclient sessions there was a lot of hand waving about how they should work, as the code was not yet merged. Standardizing clients has received some more attention again recently - and now that the sessions are more mature and ready, it seems like a good opportunity to explain them and how to use them again.

For those of you new to this area, the clients have grown very organically, generally forking off some existing client and adding and removing features in ways that worked for that project. Whilst this is in general a problem for user experience (try to get one token and use it with multiple clients without re-authenticating), it is a nightmare for security fixes and new features, as they need to be applied individually across each client.

Sessions are an attempt to extract a common authentication and communication layer from the existing clients so that we can handle transport security once, and keystone and deployments can add new authentication mechanisms without having to do it for every client.

## The Basics

Sessions and authentication plugins are user-facing objects that you create and pass to a client; they are public objects, not a framework for the existing clients. They require a change in how you instantiate clients.

The first step is to create an authentication plugin, currently the available plugins are:

• keystoneclient.auth.identity.v2.Password
• keystoneclient.auth.identity.v2.Token
• keystoneclient.auth.identity.v3.Password
• keystoneclient.auth.identity.v3.Token
• keystoneclient.auth.token_endpoint.Token

These cover the primary user/password and token authentication mechanisms that keystone supports in v2 and v3, plus the test case where you know the endpoint and token in advance. The parameters will vary depending upon what is required to authenticate with each.

Plugins don't need to live in keystoneclient; we are currently in the process of setting up a new repository for Kerberos authentication so that it will be an optional dependency. There are also some plugins living in the contrib section of keystoneclient for federation that will likely be moved to a new repository soon.

You can then create a session with that plugin.
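A sketch with the v3 password plugin; the URL and credentials are placeholders, and client constructor details varied slightly between client releases of the era:

```python
from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client as ks_client
from novaclient import client as nova_client

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='user', password='password',
                   project_name='demo')
sess = session.Session(auth=auth)

# Both clients share the session, and therefore the token.
keystone = ks_client.Client(session=sess)
nova = nova_client.Client('2', session=sess)
```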

Keystone and nova clients will now share an authentication token fetched with keystone’s v3 authentication. The clients will authenticate on the first request and will re-authenticate automatically when the token expires.

This is a fundamental shift from the existing clients that would authenticate internally to the client and on creation so by opting to use sessions you are acknowledging that some methods won’t work like they used to. For example keystoneclient had an authenticate() function that would save the details of the authentication (user_id etc) on the client object. This process is no longer controlled by keystoneclient and so this function should not be used, however it also cannot be removed because we need to remain backwards compatible with existing client code.

In converting the existing clients we consider that passing a Session means that you are acknowledging that you are using new code and are opting-in to the new behaviour. This will not affect 90% of users who just make calls to the APIs, however if you have got hacks in place to share tokens between the existing clients or you overwrite variables on the clients to force different behaviours then these will probably be broken.

## Per-Client Authentication

The above flow is useful for users who want their one token shared between one or more clients. If you are an application that uses many authentication plugins (e.g. heat or horizon) you may want to take advantage of a single session's connection pooling or caching whilst juggling multiple authentications. You can therefore create a session without an authentication plugin and specify the plugin that will be used with that client instance, for example:
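A sketch; auth_plugin_a and auth_plugin_b stand in for plugins built as in the earlier examples:

```python
from keystoneclient import session
from keystoneclient.v3 import client as ks_client

# No auth plugin on the session itself; each client carries its own.
sess = session.Session()

keystone_a = ks_client.Client(session=sess, auth=auth_plugin_a)
keystone_b = ks_client.Client(session=sess, auth=auth_plugin_b)
```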

Auth plugins set on the client will override any auth plugin set on the session - but I’d recommend you pick one method based on your application’s needs and stick with it.

There is support for loading sessions and authentication plugins from an oslo.config CONF object. The documentation on exactly what options are supported is lacking right now and you will probably need to look at the code to figure out everything that is supported. I promise to improve this, but to get you started you need to register the options globally:
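For example, with a group name of your own choosing:

```python
from keystoneclient import session
from oslo.config import cfg

CONF = cfg.CONF
session.Session.register_conf_options(CONF, 'communication')
```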

And then load the objects where you need them:
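Continuing the sketch:

```python
sess = session.Session.load_from_conf_options(CONF, 'communication')
```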

Will load options that look like:
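Along the lines of the following; these option names come from the session's transport options:

```ini
[communication]
cafile = /etc/ssl/certs/ca.pem
insecure = false
timeout = 30
```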

There is also support for transitioning existing code bases to new option names if they are not the same as what your application uses.

A very similar process is used to load sessions and plugins from an argparse parser.
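A sketch of the argparse path, mirroring the CONF one:

```python
import argparse
import sys

from keystoneclient import auth
from keystoneclient import session

parser = argparse.ArgumentParser()
session.Session.register_cli_options(parser)
auth.register_argparse_arguments(parser, sys.argv[1:])

args = parser.parse_args()
auth_plugin = auth.load_from_argparse_arguments(args)
sess = session.Session.load_from_cli_options(args, auth=auth_plugin)
```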

This produces an application with the following options:

There is an ongoing effort to create a standardized CLI plugin that can be used by new clients rather than have people provide an --os-auth-plugin every time. It is not yet ready; however, clients can create and specify their own default plugins if --os-auth-plugin is not provided.

## For Client Authors

To make use of the session in your client there is keystoneclient.adapter.Adapter, which provides you with a set of standard variables that your client should take and use with the session. The adapter will handle the per-client authentication plugins, and handle region_name, interface, user_agent and similar client parameters that are not part of the more global (across many clients) state that sessions hold.

The basic client should look like:
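A sketch of a client built on the adapter; the service type and resource path are illustrative, not a real service:

```python
from keystoneclient import adapter


class MyClient(object):

    def __init__(self, session, **kwargs):
        kwargs.setdefault('service_type', 'my-service')
        kwargs.setdefault('interface', 'public')
        self.http = adapter.Adapter(session, **kwargs)

    def get_widget(self, widget_id):
        # The adapter resolves the endpoint from the catalog
        # and appends this path to it.
        return self.http.get('/widgets/%s' % widget_id).json()
```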

The adapter then has .get() and .post() and other http methods that the clients expect.

## Conclusion

It’s great to have renewed interest in standardizing client behaviour, and I’m thrilled to see better session adoption. The code has matured to the point it is usable and simplifies use for both users and client authors.

In writing this I kept wanting to link out to official documentation and realized just how lacking it really is. Some explanation is available on the official python-keystoneclient docs pages, and there is also module documentation; however, this is definitely an area in which we (read: I) are a long way behind.

# Requests-mock

Having just released v0.5 of requests-mock, and having it used by both keystoneclient and novaclient with others in the works, I thought I'd finally do a post explaining what it is and how to use it.

## Motivation

I was the person who brought HTTPretty into the OpenStack requirements.

The initial reason for this was that keystoneclient was transitioning from the httplib library to requests, and I needed to prove that there were no changes to the HTTP requests during the transition. HTTPretty is a way to mock HTTP responses at the socket level, so it is not dependent on the HTTP library you use, and for this it was fairly successful.

As part of that transition I converted all the unit tests so that they actually traversed through to the requesting layer and found a number of edge case bugs because the responses were being mocked out above this point. I have therefore advocated that the clients convert to mocking at the request layer rather than stubbing out returned values. I’m pretty sure that this doesn’t adhere strictly to the unit testing philosophy of testing small isolated changes, but our client libraries aren’t that deep and I’d honestly prefer to just test the whole way through and find those edge cases.

Having done this made it remarkably easier to transition to using sessions in the clients as well, because we are testing the whole path down to making HTTP requests for all the resource tests, so again we have assurances that the HTTP requests being sent are equivalent.

At the same time we’ve had a number of problems with HTTPretty:

• It was the lingering last requirement for getting Python 3 support. Thanks to Cyril Roelandt for finally getting that fixed.
• For various reasons it is difficult for the distributions to package.
• It has a bad habit of doing backwards incompatible, or simply broken releases. The current requirements string is: httpretty>=0.8.0,!=0.8.1,!=0.8.2,!=0.8.3
• Because it acts at the socket layer it doesn’t always play nicely with other things using the socket. For example it has to be disabled for live memcache tests.
• It pins its requirements on pypi.

Now I feel like I’m just ranting. There are additional oddities I found in trying to fix these upstream but this is not about bashing HTTPretty.

## requests-mock

requests-mock follows the same concepts, allowing users to stub out responses to HTTP requests; however, it specifically targets the requests library rather than stubbing the socket. All the OpenStack clients have been converted to requests at this point, and for the general audience: if you are writing HTTP code in Python you should be using requests.

Note: a lot of what is used in these examples is only available since the 0.5 release. The current OpenStack requirements still have 0.4 so you’ll need to wait for some of the new syntax.

The intention of requests-mock is to work in as similar a way to requests itself as possible. Hence all the variable names and conventions should be as close to a requests.Response as possible. For example:

So text in the mock equates to text in the response and similarly for status_code. Some more advanced usage of the requests library:

You can also use callbacks to create responses dynamically:

Note that because the callback was passed as the json parameter the return type is expected to be the same as if you had passed it as a predefined json=blob value. If you wanted to return text the callback would be on the text parameter.

## Cool tricks

So rather than give a lot of examples I'll just highlight some of the interesting things you can do with the library and how to do it.

• Queue multiple responses for a URL; each element of the list is interpreted as if it were **kwargs for a response. In this case every request other than the first will get a 401 error:
• See the history of requests:
• Match on only the URL path:
• Match on any method:
• Or match on any URL:
• Match on headers that are part of the request (useful for distinguishing between multiple requests to the same URL):
• Be used as a function decorator

## Try it!

There is a lot more it can do and if you want to know more you can check out:

As a final selling point because it was built particularly around OpenStack needs it is:

• Easily integrated with the fixtures library.
• Hosted on stackforge and reviewed via Gerrit.
• Continuously tested against at least keystoneclient and novaclient to prevent backwards incompatible changes.
• Accepted as part of OpenStack requirements.

Patches and bug reports are welcome.

# Git Commands for Messy People

I am terrible at keeping my git branches in order. Particularly since I work across multiple machines and forget where things are I will often have multiple branches with different names being different versions of the same review.

On a project I work on frequently I currently have 71 local branches: a mix of my code, some code reviews, and some branches that were for trialling ideas. git review at least prefixes branches it downloads with review/, but that doesn't help to figure out what was happening with local branches labelled auth through auth-4.

However, this post isn't about me fixing my terrible habit; it's about two git commands which help me work with the mess.

The first is an alias which I called branch-date:
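A .gitconfig entry along these lines; the exact format string is a matter of taste:

```
[alias]
    branch-date = "for-each-ref --sort=committerdate refs/heads/ --format='%(committerdate:relative)%09%(refname:short)'"
```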

This gives a nicely formatted list of branches in the project sorted by the last time they were committed to and how long ago it was. So if I know I’m looking for a branch that I last worked on last week I can quickly locate those branches.

The next is a script to figure out which of my branches have made it through review and have been merged upstream, which I called branch-merged.

Using git you can already call git branch --merged master to determine which branches are fully merged into the master branch. However this won’t take into account if a later version of a review was merged, in which case I can probably get rid of that branch.

We can figure this out by using the Change-Id: field of our Gerrit reviews.
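A sketch of such a script: it collects the Change-Id lines from each branch's commit messages and checks that they all appear somewhere in master's history. One caveat of this naive version is that a branch with no Change-Ids at all comes out as "merged".

```shell
#!/bin/bash
# List local branches whose every Change-Id already appears in master.
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
    [ "$branch" = "master" ] && continue
    merged=true
    for cid in $(git log master.."$branch" --format=%B |
                 awk '/^Change-Id:/ {print $2}'); do
        if ! git log master --format=%B | grep -q "$cid"; then
            merged=false
            break
        fi
    done
    $merged && echo "$branch"
done
```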

So print out the branches where all the Change-Ids are also in master. It's not greatly efficient, and if you are working with code bases with long histories you might need to limit the depth, but given that it doesn't run often it completes quickly enough.

There's no guarantee that there wasn't something new in those branches, but most likely it was an earlier review or test code that is no longer relevant. I was considering a tool that could use the Change-Id to figure out from Gerrit whether a branch is an exact match for one that was previously up for review, and so contained no possibly useful experimenting code, but teaching myself to clean up branches as I go is probably a better use of my time.

# Identity_uri in Auth Token Middleware

As part of the 0.8 release of keystoneclient (2014-04-17) we made an update to the way that you configure auth_token middleware in OpenStack.

Previously you specified the path to the keystone server as a number of individual parameters, such as:
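For example:

```ini
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
```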

This made sense in code when using httplib for communication, where you use each of those independent pieces. However, we removed httplib a number of releases ago and now simply reconstruct the full URL in code in the form:

This format is much more intuitive for configuration and so should now be used with the key identity_uri. e.g.
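For the values above:

```ini
identity_uri = http://127.0.0.1:35357
```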

Using the original format will continue to work but you’ll see a deprecation message like:

# Client Session Objects

Keystoneclient has recently introduced a Session object. The concept was discussed and generally accepted at the Hong Kong Summit that keystoneclient as the root of authentication (and arguably security) should be responsible for transport (HTTP) and authentication across all the clients.

The majority of the functionality in this post is written and up for review but has not yet been committed. I write this in an attempt to show the direction of clients as there is currently a lot of talk around projects such as the OpenStack-SDK.

When working with clients you would first create an authentication object, then create a session object with that authentication and then re-use that session object across all the clients you instantiate.
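A sketch of that flow with the v2 password plugin; the URL and credentials are placeholders:

```python
from keystoneclient.auth.identity import v2
from keystoneclient import session
from keystoneclient.v2_0 import client

auth = v2.Password(auth_url='http://localhost:5000/v2.0',
                   username='user', password='password',
                   tenant_name='demo')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
```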

Now whenever you want to make an authenticated request you just indicate it as part of the request call.
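For example, given the session created above; the session fetches (and refreshes) the token and sets the X-Auth-Token header for you:

```python
resp = sess.get('http://localhost:5000/v3/users', authenticated=True)
```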

This was pretty much the extent of the initial proposal, however in working with the plugins I have come to realize that authentication is responsible for much more than simply getting a token.

A large part of the data in a keystone token is the service catalog. This is a listing of the services known to an OpenStack deployment and the URLs that we should use when accessing those services. Because of the disjointed way in which clients have been developed this service catalog is parsed by each client to determine the URL with which to make API calls.

With a session object in control of authentication and the service catalog there is no reason for a client to know its URL, just what it wants to communicate.

The values of service_type and endpoint_type are well known and constant to a client, region_name is generally passed in when instantiating (if required). Requests made via the client object will have these parameters added automatically, so given the client from above the following call is exactly the same:
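Given a session set up as described, the client call reduces to something like this raw form; the endpoint_filter keys are from the session API of the era:

```python
# Explicit form: tell the session which catalog entry to resolve.
users = sess.get('/users',
                 endpoint_filter={'service_type': 'identity',
                                  'interface': 'public'})
```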

Where I feel that this will really begin to help though is in dealing with the transition between API versions.

Currently deployments of OpenStack put a versioned endpoint in the service catalog, e.g. for identity http://localhost:5000/v2.0. This made sense initially; however, now as we try to transition people to the V3 identity API we find that there is no backwards-compatible way to advertise both the v2 and v3 services. The agreed long-term solution is that entries in the service catalog should not be versioned, e.g. http://localhost:5000, as the root path of a service will list the available versions. So how do we handle this transition across the 8+ clients? Easy:
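A sketch: the client simply states which version it speaks, and the session's discovery code works out the rest from whatever the catalog advertises:

```python
users = sess.get('/users',
                 endpoint_filter={'service_type': 'identity',
                                  'interface': 'public',
                                  'version': (3, 0)})
```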

This solution also means that when we have a suitable hack for the transition to unversioned endpoints it needs only be implemented in one place.

Reliant on this is a means to discover the available versions of all the OpenStack services. Turns out that in general the projects are similar enough in structure that it can be done with a few minor hacks. For newer projects there is now a definitive specification on the wiki.

A major advantage of this common approach is we now have a standard way of determining whether a version of a project is available in this cloud. Therefore we get client version discovery pretty much for free:

That’s a little verbose as a client knows that information, so we can extract a wrapper:

or simply:

So the session object has evolved from a pure transport level object and this departure is somewhat concerning as I don’t like mixing layers of responsibility. However in practice we have standardized on the requests library to abstract much of this away and the Session object is providing helpers around this.

So, along with standardizing transport, by using the session object like this we can:

• reduce the basic client down to an object consisting of a few variables indicating the service type and version required.
• finally get a common service discovery mechanism for all the clients.
• shift the problem of API version migration onto someone else - probably me.

## Disclaimers and Notes

• The examples provided above use keystoneclient and the 'identity' service purely because this is what has been implemented so far. In terms of CRUD operations keystoneclient is essentially the same as the other clients in that it retrieves its endpoint from the service catalog and issues requests to it, so the approach will work equally well.

• Currently none of the other clients rely upon the session object, I have been waiting on the inclusion of authentication plugins and service discovery before making this push.

• Region handling is still a little awkward when using the clients. I blame this completely on the fact that region handling is awkward on the servers. In Juno we should have hierarchical regions and then it may make sense to allow region_name to be set on a session rather than per client.