When doing OpenStack development I frequently find myself wanting to query an API directly and observe the response.
This is a fairly common development task, but it’s more complicated in OpenStack because there is an order in which you are supposed to make calls.
The ideal flow is:
authenticate using credentials (username/password or a range of other mechanisms)
use the service catalog returned with authentication to find the endpoint for a service
find the API version URL you want from the service’s endpoint
make a request to the versioned URL
So we generally end up simply using a combination of curl and jq against a known endpoint with an existing token.
This pattern has existed for so long that the --debug output of most clients is actually in curl command form.
There are numerous drawbacks to this approach including:
you have to manually refresh tokens when they expire.
you have to know the endpoints ahead of time.
for security reasons the actual token is no longer displayed, so you can’t simply copy the output curl command.
you have to remember all the curl/other tool commands for showing headers, readable output etc. - YMMV on this but I always forget.
os-http is an easy to use CLI tool for making requests against OpenStack endpoints correctly.
It’s designed to allow developers to debug and inspect the responses of OpenStack REST APIs without having to manage the details of authentication, service catalog and version negotiation.
Its interface is 100% modelled on the excellent httpie.
I have recently added the 0.1 release to PyPI and the source is available on my GitHub, though it will probably migrate to the OpenStack infrastructure if it gains adoption.
It is released under the Apache 2 License.
It is still very raw, but I have been using it for some time and feel it may be useful for others.
It is also in fairly desperate need of documentation - contributions welcome.
Because it’s powered by os-client-config the authentication configuration is what you would expect from using openstackclient.
Documentation for preparing this authentication is available from both of these projects.
There are then a number of choices you can make for service discovery:
--os-service-type <name>       Service type to request from the catalog
--os-service-name <name>       Service name to request from the catalog
--os-interface <name>          API Interface to use [public, internal, admin]
--os-region-name <name>        Region of the cloud to use
--os-endpoint-override <url>   Endpoint to use instead of the endpoint in the catalog
--os-api-version <version>     Which version of the service API to use
As is standard for OpenStack clients, these options can also be set via the corresponding OS_ environment variables (OS_SERVICE_TYPE, OS_REGION_NAME and so on).
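The example request from the original post is not preserved in this copy, but given the httpie-modelled interface an invocation would look something like this (treat the exact flags and path as a sketch):

os-http --os-service-type image GET /v2/images X-My-Header:Value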
X-My-Header:Value is purely for demonstration purposes and is ignored by glance.
As you can see the output is nicely formatted and in a console even includes some pygments magic for coloring.
os-http is at version 0.1 and has many unimplemented or not quite right things.
There is really only support for GET and other body-less requests.
Whilst you can specify PUT/POST or another method, there is currently no means to specify body data, so the request will be empty.
This would be easy to add but I haven’t used it so I haven’t implemented it - contributions welcome.
The output is intended to be easy for a developer to consume, not for a script to parse (though this may be considered in future).
It is not intended to be a replacement for the existing CLIs in scripts.
The default output may change to include any additional information that could be useful to developers.
Because os-http does requests ‘correctly’ you may find that using --os-api-version gives errors - particularly with nova.
This is because for most installations the service catalog for nova points to a protected endpoint.
There is ongoing work upstream to fix the service catalog in general but for now os-http doesn’t contain the hacks that clients do to work around poor setups.
Using this tool may lead you to discover just how many hacks there are.
Please test it out and report any feedback or bugs.
With auth plugins we are trying to ensure that an individual OpenStack service (like Nova or Glance) never has to deal with the details of authentication.
One of the improvements we’ve made that has gone largely unnoticed is the addition of the keystone.token_auth authentication plugin that is passed down in the request’s WSGI environment by auth_token middleware.
This object is a full authentication plugin that uses the token and service catalog of the user that was just validated so that the service does the right thing without having to figure out keystone’s token format.
This means that service to service communication is as simple as:
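The original example code is not reproduced in this copy; a minimal WSGI sketch of the idea looks like this (assuming the app is deployed behind auth_token middleware and glanceclient >= 1.0; names are illustrative):

import json

from glanceclient.v2 import client as glance_client
from keystoneclient import session

# one application-wide session gives us connection pooling and caching
SESSION = session.Session()


def app(environ, start_response):
    # auth_token middleware has already validated the token and left a
    # ready-to-use auth plugin in the request environment
    auth = environ['keystone.token_auth']

    # clients are cheap to create, so make one just for this request
    glance = glance_client.Client(session=SESSION, auth=auth)
    names = [image['name'] for image in glance.images.list()]

    start_response('200 OK', [('Content-Type', 'application/json')])
    return [json.dumps(names)]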
This is a full service that responds to every request with a JSON formatted list of image names in your project, which is not all that useful but proves a point. There are some things to notice:
The session is global.
There are two ways to use a session with authentication.
If you are writing something like a CLI application then you will want to use the same authentication for the lifetime of the program, and it can be easier to just pass an auth plugin to the session constructor and forget about it.
If you are writing a service that wants to use many different authentications over its lifetime you can pass the auth directly to the client that will consume it.
In our case we want to re-use the session for benefits like connection pooling and caching, but we will often change the authentication being used so we pass the plugin to the client directly.
The session is thread safe and is able to be reused across requests like this.
Consider this to be splitting the application context and the request context.
We create the glanceclient just in time.
In such a small application this isn’t obvious; however, because all the caching and authentication logic is handled by the session and plugin, there is no reason to keep a client around.
Clients become very cheap to create and so in most situations you can use a client object within a function and then discard it.
We never entered a URL for Glance.
At no point did we have to provide a URL for glance in the config file.
If any project you encounter requires you to enter a fixed URL to communicate with another service, please file a bug.
Keystone tokens have a service catalog in them so that all requests made on behalf of a user go to the appropriate URL.
In the past this was a relatively ugly affair involving parsing the information from dictionaries, however this is all encapsulated into the auth plugin now.
There is additional information on the plugin.
Whilst not shown in the example, the auth plugin also exposes attributes describing the validated token, such as the user and project information.
If you are storing the auth plugin in a context, using these accessors can be much easier than trying to figure out the variables that auth_token middleware also sets.
You can’t serialize the auth plugin.
In the case of Nova and others the auth_token middleware check is performed on the API service; however, most service communication is done in a backend service.
We currently have no way of serializing the plugin to an oslo.context so it is reconstructed on the backend.
This is something we are working on.
It’s available now
Going back to look at the initial review it is 5 days shy of 1 year old (merged 2014-09-15).
There have been improvements since then however the basic functionality has been out for a while and is available in the current minimum global requirements.
Glanceclient on the other hand has only had session support since the 1.0 release (2015-08-31) so you will need a recent version to test the example.
We are doing all we can to prevent services ever having to deal with the details of authentication in OpenStack.
If your project has still not adopted plugins please come find us in #openstack-keystone on freenode, as the old path is currently making your life more difficult.
Kerberos authentication provides a good experience for allowing users to connect to a service.
However this authentication does not by itself allow the service to take the received ticket and use it to communicate further with another service on the user’s behalf.
The canonical example of this is when authenticating to a web service we want to use the same user credentials to authenticate with an LDAP service, rather than require credentials for the service itself.
In my specific case if I have a kerberized keystone then when the user talks to Horizon I want to forward the user’s ticket to authenticate with keystone.
The mechanism that allows us to forward these Kerberos tickets is called Service-for-User-to-Proxy or S4U2Proxy.
To mitigate some of the security issues with delegating user tickets there are strict controls, which have to be configured, over which services are allowed to forward tickets and to whom.
For a more in-depth explanation check out the further reading section at the end of this post.
I intend this guide to be a step-by-step tutorial on setting up a basic S4U2 proxying service that we can verify, and to give you enough information to go about setting up more complex delegations.
If you are just looking for the raw commands you can jump down to Setting up the Delegation.
I created 3 Centos 7 virtual machines on a private network:
An IPA server at ipa.s4u2.jamielennox.net
A service provider at service.s4u2.jamielennox.net that will provide the target service.
An S4U2 proxy service at proxy.s4u2.jamielennox.net that will accept a Kerberos ticket and forward it to service.s4u2.jamielennox.net
For this setup I am creating a testing realm called S4U2.JAMIELENNOX.NET.
I will post the setup that works for my environment and leave it up to you to recognize where you should use your own service names.
I pick the option to enable DNS as I think it’s easier; you can skip that, but then you’ll need to make /etc/hosts entries for each of the hosts.
Setting up the Service
We start by doing the basic configuration of the machine and setting it up as an IPA client machine.
hostnamectl set-hostname service.s4u2.jamielennox.net
yum install -y ipa-client
vim /etc/resolv.conf  # set DNS server to IPA IP address
ipa-client-install
yum install -y httpd php mod_auth_kerb
rm /etc/httpd/conf.d/welcome.conf # a stub page that gets in the way
Register that we will be exposing a HTTP service on the machine:
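The registration commands are not preserved in this copy; with FreeIPA they look something like this (keytab path illustrative):

kinit admin
ipa service-add HTTP/service.s4u2.jamielennox.net
ipa-getkeytab -s ipa.s4u2.jamielennox.net \
              -p HTTP/service.s4u2.jamielennox.net \
              -k /etc/httpd/conf/httpd.keytab
chown apache: /etc/httpd/conf/httpd.keytab

The site itself is just a phpinfo() page protected by mod_auth_kerb; a sketch of the pieces (paths and directives illustrative, check them against your mod_auth_kerb version):

echo '<?php phpinfo(); ?>' > /var/www/html/index.php

# /etc/httpd/conf.d/service.conf (sketch)
<Location />
  AuthType Kerberos
  AuthName "Kerberos Login"
  KrbServiceName HTTP/service.s4u2.jamielennox.net
  Krb5KeyTab /etc/httpd/conf/httpd.keytab
  KrbMethodNegotiate on
  KrbMethodK5Passwd off
  Require valid-user
</Location>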
Finally restart Apache to bring up the service site:
systemctl restart httpd
Setting up my local machine
You could easily test all this using curl; however, particularly as we are setting up HTTP-to-HTTP delegation, the obvious use is going to be via the browser, so at this point I like to configure Firefox to allow Kerberos negotiation (by adding the domain to the network.negotiate-auth.trusted-uris preference in about:config).
I don’t want my development machine to be an IPA client so I just configure the Kerberos KDC so that I can get a ticket on my machine with kinit.
These are comma-separated values, so you can configure this in addition to any existing realms you might have configured.
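The KDC additions to /etc/krb5.conf are along these lines (the original snippet is not preserved; realm and KDC names match the lab above):

[realms]
  S4U2.JAMIELENNOX.NET = {
    kdc = ipa.s4u2.jamielennox.net
  }

[domain_realm]
  .s4u2.jamielennox.net = S4U2.JAMIELENNOX.NET
  s4u2.jamielennox.net = S4U2.JAMIELENNOX.NET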
To test get a ticket:
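# assuming the admin user created during ipa-server-install
kinit admin@S4U2.JAMIELENNOX.NET
klist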
I can now point firefox to http://service.s4u2.jamielennox.net and we see the phpinfo() dump of environment variables.
This means we have successfully set up our service host.
Interesting environment variables to check for to ensure this is correct are:
REMOTE_USER admin shows that the ticket belonged to the admin user.
AUTH_TYPE Negotiate indicates that the user was authenticated via the Kerberos mechanism.
Create Proxy Service
When you register the service you have to mark it as allowed to delegate credentials.
You can do this anywhere you have an admin ticket or via the web UI, however there are fewer options to provide if you use one of the ipa client machines.
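On one of them, registration looks something like this (the --ok-as-delegate flag is what marks the service as trusted for delegation; treat the exact invocation as a sketch):

kinit admin
ipa service-add HTTP/proxy.s4u2.jamielennox.net --ok-as-delegate=true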
Unfortunately FreeIPA has no way to manage S4U2 delegations via the command line or GUI yet and so we must resort to editing LDAP directly.
The s4u2 access permissions are defined from one group of services (a groupOfPrincipals) onto another group of target services.
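The delegate.ldif used below is not reproduced in this copy; based on FreeIPA’s s4u2proxy schema it would look something like this (entry names are illustrative):

dn: cn=http-delegation,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net
objectClass: ipaKrb5DelegationACL
objectClass: groupOfPrincipals
cn: http-delegation
memberPrincipal: HTTP/proxy.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET
ipaAllowedTarget: cn=http-targets,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net

dn: cn=http-targets,cn=s4u2proxy,cn=etc,dc=s4u2,dc=jamielennox,dc=net
objectClass: groupOfPrincipals
cn: http-targets
memberPrincipal: HTTP/service.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET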
ldapmodify -a -H ldaps://ipa.s4u2.jamielennox.net -Y GSSAPI -f delegate.ldif
And that’s the hard work done, the HTTP/proxy.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET service now has permission to delegate a received ticket to HTTP/service.s4u2.jamielennox.net@S4U2.JAMIELENNOX.NET.
Registering the proxy machine is very similar.
hostnamectl set-hostname proxy.s4u2.jamielennox.net
yum install -y ipa-client
vim /etc/resolv.conf  # set DNS server to IPA IP address
setenforce 0
Because the easiest way I know to test a Kerberos endpoint is with curl I am also going to write the proxy service directly in bash:
#!/bin/sh
echo "Content-Type: text/html; charset=UTF-8"
echo ""
echo ""

# simply dump the information from the service page
curl -s --negotiate -u : http://service.s4u2.jamielennox.net
This works because the cgi-bin sets the request environment into the shell environment, so $KRB5CCNAME is set.
If you are using mod_wsgi or something else then you would have to set that into your shell environment before executing any Kerberos commands.
I’m going to skip the IPA client setup and fetching the keytab - this is required and done exactly the same as for the service.
The apache configuration for the proxy is very similar to the configuration of the service except we add:
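# mod_auth_kerb's switch for S4U2 constrained delegation (directive from
# mod_auth_kerb 5.4+; the original snippet is not preserved, so verify it
# against your version)
KrbConstrainedDelegation on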
within the apache vhost config file, to enable it to delegate a Kerberos credential.
After all that, aiming Firefox at http://proxy.s4u2.jamielennox.net gives me the same phpinfo page I got when I talked to the service host directly.
You can verify from this page also that SERVER_NAME is service.s4u2.jamielennox.net and that REMOTE_USER is admin.
There are a couple of sites that this guide is based on:
Adam Young - who initially prototyped a lot of the work for horizon which we hope to have ready soon.
Auth_token is the middleware piece in OpenStack responsible for validating tokens and passing authentication and authorization information down to the services.
It has been a long-time complaint of those wishing to move to the V3 identity API that auth_token only supported the v2 API for authentication.
Then auth_token middleware adopted authentication plugins and the people rejoiced!
Or it went by almost completely unnoticed.
Auth is not an area people like to mess with once it’s working and people are still coming to terms with configuring via plugins.
The benefit of authentication plugins is that they allow you to use any plugin you like for authentication - including the v3 plugins.
A downside is that being able to load any plugin means there isn’t a fixed set of default options present in the sample config files to indicate the new options available for setting.
Particularly as we have to keep the old options around for compatibility.
The most common configuration I expect for v3 authentication with auth_token middleware is:
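The sample is not preserved in this copy; based on the password plugin’s options it would look roughly like this (service credentials illustrative):

[keystone_authtoken]
auth_plugin = password
auth_url = http://keystone.example.com:35357
username = glance
password = secret
project_name = service
user_domain_id = default
project_domain_id = default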
The password plugin will query the auth_url for supported API versions and then use either v2 or v3 auth depending on what parameters you’ve specified.
If you want to save a round trip (once on startup) you can use the v3password plugin which takes the same parameters but requires a V3 URL to be specified in auth_url.
An unfortunate thing we’ve noticed from this is that there is going to be some confusion, as most plugins present an auth_url parameter (used by the plugin to know where to authenticate the service user) along with the existing auth_uri parameter (reported in the headers of 401 responses to tell users where to authenticate).
This is a known issue we need to address and will likely result in changing the name of the auth_uri parameter as the concept of an auth_url is used by all existing clients and plugins.
For further proof that this works as expected, check out devstack, which has been operating this way for a couple of weeks.
NOTE: Support for authentication plugins was released in keystonemiddleware 1.3.0 (2014-12-18).
I’ve been pushing a lot on the authentication plugins aspect of keystoneclient recently.
They allow us to generalize the process of getting a token from OpenStack such that we can enable new mechanisms like Kerberos or client certificate authentication - without having to modify all the clients.
For most people hardcoding credentials into scripts is not an option, both for security and for reusability reasons.
By having a standard loading mechanism for this selection of new plugins we can ensure that applications we write can be used with future plugins.
I am currently working on getting this method into the existing services to allow for more extensible service authentication, so this pattern should become more common in future.
There are two loading mechanisms for authentication plugins provided by keystoneclient: from a config file and from the command line.
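Loading from a config file
A config section for the v3password plugin would look something like this (group name and credentials illustrative):

[somegroup]
auth_plugin = v3password
auth_url = http://keystone.example.com:5000/v3
username = user
user_domain_name = default
password = password
project_name = demo
project_domain_name = default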
The initially required field here is auth_plugin, which specifies the name of the plugin to load.
All other parameters in that section are dependent on the information that plugin (in this case v3password) requires.
To load that plugin from an application we do:
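A sketch of that loading code, using keystoneclient’s conf helpers (CONF is your parsed oslo.config object):

from keystoneclient import auth
from keystoneclient import session
from oslo.config import cfg

CONF = cfg.CONF  # assumes your config file has already been parsed

auth.register_conf_options(CONF, group='somegroup')
auth_plugin = auth.load_from_conf_options(CONF, group='somegroup')

sess = session.Session(auth=auth_plugin)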
Then create novaclient, cinderclient or whichever client you wish to talk to with that session as normal.
You can also use an auth_section parameter to specify a different group in which the authentication credentials are stored.
This allows you to reuse the same credentials in multiple places throughout your configuration file without copying and pasting.
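For example (a sketch reusing the section above):

[somegroup]
auth_section = credentials

[othergroup]
auth_section = credentials

[credentials]
auth_plugin = v3password
auth_url = http://keystone.example.com:5000/v3
username = user
user_domain_name = default
password = password
project_name = demo
project_domain_name = default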
The above loading code for [somegroup] or [othergroup] will load separate instances of the same authentication plugin.
Loading from the command line
The options present on the command line are very similar to that presented via the config file, and follow a pattern familiar to the existing openstack CLI applications.
The equivalent options as specified in the config above would be presented as:
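Reconstructing from the help output below, that would be something like:

./myapp --os-auth-plugin v3password \
        --os-auth-url http://keystone.example.com:5000/v3 \
        --os-user-name user \
        --os-user-domain-name default \
        --os-password password \
        --os-project-name demo \
        --os-project-domain-name default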
NOTE: I am aware that the syntax is wonky, with the commands for session loading and auth plugin loading being different.
This was one of those things that was ‘optimized’ between reviews and managed to slip through.
There is a review out to standardize this.
This will also set --help appropriately, so if you are unsure of the arguments that this particular authentication plugin takes you can do:
./myapp --os-auth-plugin v3password --help
usage: myapp [-h] [--os-auth-plugin <name>] [--os-auth-url OS_AUTH_URL]
             [--os-domain-id OS_DOMAIN_ID] [--os-domain-name OS_DOMAIN_NAME]
             [--os-project-id OS_PROJECT_ID]
             [--os-project-name OS_PROJECT_NAME]
             [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
             [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
             [--os-trust-id OS_TRUST_ID] [--os-user-id OS_USER_ID]
             [--os-user-name OS_USERNAME]
             [--os-user-domain-id OS_USER_DOMAIN_ID]
             [--os-user-domain-name OS_USER_DOMAIN_NAME]
             [--os-password OS_PASSWORD] [--insecure]
             [--os-cacert <ca-certificate>] [--os-cert <certificate>]
             [--os-key <key>] [--timeout <seconds>]

optional arguments:
  -h, --help            show this help message and exit
  --os-auth-plugin <name>
                        The auth plugin to load
  --insecure            Explicitly allow client to perform "insecure" TLS
                        (https) requests. The server's certificate will not
                        be verified against any certificate authorities. This
                        option should be used with caution.
  --os-cacert <ca-certificate>
                        Specify a CA bundle file to use in verifying a TLS
                        (https) server certificate. Defaults to
                        env[OS_CACERT].
  --os-cert <certificate>
                        Defaults to env[OS_CERT].
  --os-key <key>        Defaults to env[OS_KEY].
  --timeout <seconds>   Set request timeout (in seconds).

Authentication Options:
  Options specific to the v3password plugin.

  --os-auth-url OS_AUTH_URL
                        Authentication URL
  --os-domain-id OS_DOMAIN_ID
                        Domain ID to scope to
  --os-domain-name OS_DOMAIN_NAME
                        Domain name to scope to
  --os-project-id OS_PROJECT_ID
                        Project ID to scope to
  --os-project-name OS_PROJECT_NAME
                        Project name to scope to
  --os-project-domain-id OS_PROJECT_DOMAIN_ID
                        Domain ID containing project
  --os-project-domain-name OS_PROJECT_DOMAIN_NAME
                        Domain name containing project
  --os-trust-id OS_TRUST_ID
                        Trust ID
  --os-user-id OS_USER_ID
                        User ID
  --os-user-name OS_USERNAME, --os-username OS_USERNAME
                        Username
  --os-user-domain-id OS_USER_DOMAIN_ID
                        User's domain id
  --os-user-domain-name OS_USER_DOMAIN_NAME
                        User's domain name
  --os-password OS_PASSWORD
                        User's password
To prevent polluting your CLI’s help, only the ‘Authentication Options’ for the plugin specified by --os-auth-plugin are added to the help.
Having explained all this, one of the primary applications currently embracing authentication plugins, openstackclient, handles its options slightly differently and you will need to use --os-auth-type instead of --os-auth-plugin.
The documentation for plugins provides basic features and parameters however it’s not always going to be up to date with all options, especially for plugins not handled within keystoneclient.
The following is a fairly simple script that lists all the plugins that are installed on the system and their options.
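The original script is not preserved in this copy; a sketch that does the same thing via the stevedore entry points keystoneclient uses for plugin discovery:

from stevedore import extension

# auth plugins are registered under this entry point namespace
mgr = extension.ExtensionManager(namespace='keystoneclient.auth.plugin')

for name in sorted(mgr.names()):
    print(name)
    for opt in mgr[name].plugin.get_options():
        print('    %s: %s' % (opt.name, opt.help))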
Which for the v3password plugin we’ve been using returns:
auth-url: Authentication URL
domain-id: Domain ID to scope to
domain-name: Domain name to scope to
project-id: Project ID to scope to
project-name: Project name to scope to
project-domain-id: Domain ID containing project
project-domain-name: Domain name containing project
trust-id: Trust ID
user-id: User ID
user-domain-id: User's domain id
user-domain-name: User's domain name
password: User's password
From that it’s pretty simple to determine the correct format for parameters.
When using the CLI you should prefix the option with --os-, e.g. auth-url becomes --os-auth-url.
Environment variables are upper-cased, prefixed with OS_ and have - replaced with _, e.g. auth-url becomes OS_AUTH_URL.
Authentication plugins in Keystoneclient have gotten to the point where they are sufficiently well deployed that we can start to do interesting additional forms of authentication.
As Kerberos is a commonly requested authentication mechanism here is a simple, single domain keystone setup using Kerberos authentication.
These instructions are not necessarily how you would set up a production deployment, but they should give you the information you need to configure that yourself.
A FreeIPA server machine called ipa.test.jamielennox.net
A Packstack all in one deployment of OpenStack called openstack.test.jamielennox.net
Every now and then I come across something that mentions how you should use PKI tokens in keystone as the cryptography gives it better security.
It happened today and so I thought I should clarify:
There is no added security benefit to using keystone with PKI tokens over UUID tokens.
There are advantages to PKI tokens:
Token validation without a request to keystone means less impact on keystone.
And there are disadvantages:
Larger token size.
Additional complexity to set up.
However the fundamental model, that this opaque chunk of data in the ‘X-Auth-Token’ header indicates that the request is authenticated, does not change between PKI and UUID tokens.
If someone steals your PKI token you are just as screwed as if they stole your UUID token.
In the last post I did on keystoneclient sessions there was a lot of hand waving about how they should work, because much of the code was not yet merged.
Standardizing clients has received some more attention again recently - and now that the sessions are more mature and ready it seems like a good opportunity to explain them and how to use them again.
For those of you new to this area the clients have grown very organically, generally forking off some existing client and adding and removing features in ways that worked for that project.
Whilst this is in general a problem for user experience (try to get one token and use it with multiple clients without reauthenticating), it is a nightmare for security fixes and new features as they need to be applied individually across each client.
Sessions are an attempt to extract a common authentication and communication layer from the existing clients so that we can handle transport security once, and keystone and deployments can add new authentication mechanisms without having to do it for every client.
Sessions and authentications are user facing objects that you create and pass to a client; they are public objects, not a framework for the existing clients.
They require a change in how you instantiate clients.
The first step is to create an authentication plugin, currently the available plugins are:
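keystoneclient.auth.identity.v2.Password
keystoneclient.auth.identity.v2.Token
keystoneclient.auth.identity.v3.Password
keystoneclient.auth.identity.v3.Token
keystoneclient.auth.token_endpoint.Token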
These cover the primary user/password and token authentication mechanisms that keystone supports in v2 and v3, plus the test case where you know the endpoint and token in advance.
The parameters will vary depending upon what is required to authenticate with each.
Plugins don’t need to live in keystoneclient; we are currently in the process of setting up a new repository for Kerberos authentication so that it will be an optional dependency.
There are also some plugins living in the contrib section of keystoneclient for federation that will also likely be moved to a new repository soon.
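For example, creating a v3 password plugin and sharing one session between two clients looks something like this (endpoint and credentials illustrative):

from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client as keystone_v3
from novaclient import client as nova_client

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='user',
                   password='password',
                   project_name='demo',
                   user_domain_name='default',
                   project_domain_name='default')

# one session, shared by both clients
sess = session.Session(auth=auth)

keystone = keystone_v3.Client(session=sess)
nova = nova_client.Client('2', session=sess)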
Keystone and nova clients will now share an authentication token fetched with keystone’s v3 authentication.
The clients will authenticate on the first request and will re-authenticate automatically when the token expires.
This is a fundamental shift from the existing clients, which would authenticate internally to the client and on creation, so by opting to use sessions you are acknowledging that some methods won’t work like they used to.
For example keystoneclient had an authenticate() function that would save the details of the authentication (user_id etc) on the client object.
This process is no longer controlled by keystoneclient and so this function should not be used, however it also cannot be removed because we need to remain backwards compatible with existing client code.
In converting the existing clients we consider that passing a Session means that you are acknowledging that you are using new code and are opting-in to the new behaviour.
This will not affect 90% of users who just make calls to the APIs, however if you have got hacks in place to share tokens between the existing clients or you overwrite variables on the clients to force different behaviours then these will probably be broken.
The above flow is useful for users who want their one token shared between one or more clients.
If you are an application that uses many different authentications over its lifetime (e.g. heat or horizon) you may want to take advantage of a single session’s connection pooling or caching whilst juggling multiple authentications.
You can therefore create a session without an authentication plugin and specify the plugin that will be used with that client instance, for example:
global SESSION

if not SESSION:
    SESSION = ksc_session.Session()

auth = get_auth_plugin()  # you could deserialize it from a db,
                          # fetch it based on a cookie value...
keystone = keystone_v3.Client(session=SESSION, auth=auth)
Auth plugins set on the client will override any auth plugin set on the session - but I’d recommend you pick one method based on your application’s needs and stick with it.
Loading from a config file
There is support for loading session and authentication plugins from an oslo.config CONF object.
The documentation on exactly what options are supported is lacking right now and you will probably need to look at code to figure out everything that is supported.
I promise to improve this, but to get you started you need to register the options globally:
group = 'keystoneclient'  # the option group

keystoneclient.session.Session.register_conf_options(CONF, group)
keystoneclient.auth.register_conf_options(CONF, group)
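Once the config files have been parsed you load them back out with the matching helpers (a sketch):

auth = keystoneclient.auth.load_from_conf_options(CONF, group)
sess = keystoneclient.session.Session.load_from_conf_options(CONF, group)
# then pass both to a client, e.g. Client(session=sess, auth=auth)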
There is an ongoing effort to create a standardized CLI plugin that can be used by new clients rather than have people provide an --os-auth-plugin every time.
It is not yet ready; however, clients can create and specify their own default plugins if --os-auth-plugin is not provided.
For Client Authors
To make use of the session in your client there is the keystoneclient.adapter.Adapter which provides you with a set of standard variables that your client should take and use with the session.
The adapter will handle the per-client authentication plugin, and the region_name, interface, user_agent and similar client parameters that are not part of the more global (across many clients) state that sessions hold.
The adapter then has .get() and .post() and other http methods that the clients expect.
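A sketch of a client built on the adapter (service type and path are illustrative):

from keystoneclient import adapter


class MyClient(object):
    def __init__(self, session, **kwargs):
        # per-client state lives on the adapter; cross-client state
        # (auth, connection pool) stays on the shared session
        kwargs.setdefault('service_type', 'myservice')
        kwargs.setdefault('user_agent', 'python-myclient')
        self.http = adapter.Adapter(session=session, **kwargs)

    def list_things(self):
        # the endpoint is resolved from the service catalog by the adapter
        return self.http.get('/things').json()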
It’s great to have renewed interest in standardizing client behaviour, and I’m thrilled to see better session adoption.
The code has matured to the point it is usable and simplifies use for both users and client authors.
Having just released v0.5 of requests-mock, and having it used by both keystoneclient and novaclient with others in the works, I thought I’d finally do a post explaining what it is and how to use it.
I was the person who brought HTTPretty into the OpenStack requirements.
The initial reason for this was that keystoneclient was transitioning from the httplib library to requests and I needed to prove that there were no changes to the HTTP requests during the transition.
HTTPretty is a way to mock HTTP responses at the socket level, so it is not dependent on the HTTP library you use, and for this it was fairly successful.
As part of that transition I converted all the unit tests so that they actually traversed through to the requesting layer and found a number of edge case bugs because the responses were being mocked out above this point.
I have therefore advocated that the clients convert to mocking at the request layer rather than stubbing out returned values.
I’m pretty sure that this doesn’t adhere strictly to the unit testing philosophy of testing small isolated changes, but our client libraries aren’t that deep and I’d honestly prefer to just test the whole way through and find those edge cases.
Having done this has made it remarkably easier to transition to using sessions in the clients as well, because we are testing the whole path down to the HTTP request for all the resource tests, and so again have assurances that the HTTP requests being sent are equivalent.
At the same time we’ve had a number of problems with HTTPretty:
It was the lingering last requirement for getting Python 3 support. Thanks to Cyril Roelandt for finally getting that fixed.
For various reasons it is difficult for the distributions to package.
It has a bad habit of doing backwards incompatible, or simply broken releases. The current requirements string is: httpretty>=0.8.0,!=0.8.1,!=0.8.2,!=0.8.3
Because it acts at the socket layer it doesn’t always play nicely with other things using the socket. For example it has to be disabled for live memcache tests.
It pins its requirements on PyPI.
Now I feel like I’m just ranting.
There are additional oddities I found in trying to fix these upstream but this is not about bashing HTTPretty.
requests-mock follows the same concepts allowing users to stub out responses to HTTP requests, however it specifically targets the requests library rather than stubbing the socket.
All the OpenStack clients have been converted to requests at this point, and for the general audience if you are writing HTTP code in Python you should be using requests.
Note: a lot of what is used in these examples is only available since the 0.5 release.
The current OpenStack requirements still have 0.4 so you’ll need to wait for some of the new syntax.
The intention of requests-mock is to work in as similar a way to requests itself as possible.
Hence all the variable names and conventions are as close to a requests.Response as possible.
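The original examples are not reproduced in this copy; basic usage, including a callback passed as the json parameter, looks something like this (URLs illustrative, 0.5-era interface assumed):

import requests
import requests_mock


def token_callback(request, context):
    # build the body dynamically based on the incoming request
    context.status_code = 200
    return {'token': 'abc123'}


with requests_mock.mock() as m:
    m.get('http://example.com/api', json={'hello': 'world'})
    m.post('http://example.com/tokens', json=token_callback)

    print(requests.get('http://example.com/api').json())
    print(requests.post('http://example.com/tokens').json())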
Note that because the callback was passed as the json parameter the return type is expected to be the same as if you had passed it as a predefined json=blob value.
If you wanted to return text the callback would be on the text parameter.
So rather than give a lot of examples I’ll just highlight some of the interesting things you can do with the library and how to do it.
Queue multiple responses for a URL; each element of the list is interpreted as if it were the **kwargs for a response.
In this case every request other than the first will get a 401 error:
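# a sketch, continuing with the mocker (m) from the example above;
# the final entry is repeated for every subsequent request
m.get('http://example.com/api',
      response_list=[{'json': {'ok': True}, 'status_code': 200},
                     {'json': {'error': 'unauthorized'}, 'status_code': 401}])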
I am terrible at keeping my git branches in order.
Particularly since I work across multiple machines and forget where things are, I will often have multiple branches with different names that are different versions of the same review.
On a project I work on frequently I currently have 71 local branches which are a mix of my code, some code reviews, and some branches that were for trialling ideas.
git review at least prefixes branches it downloads with review/, but that doesn’t help to figure out what was happening with local branches labelled auth through auth-4.
However this post isn’t about me fixing my terrible habit; it’s about two git commands which help me work with the mess.
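The first is built on git for-each-ref (reconstructed; the exact snippet from the original is not preserved):

git for-each-ref --sort='-committerdate' \
    --format='%(committerdate:relative)%09%(refname:short)' refs/heads/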
This gives a nicely formatted list of branches in the project sorted by the last time they were committed to and how long ago it was.
So if I know I’m looking for a branch that I last worked on last week I can quickly locate those branches.
The next is a script to figure out which of my branches have made it through review and have been merged upstream which I called branch-merged.
Using git you can already call git branch --merged master to determine which branches are fully merged into the master branch.
However this won’t take into account if a later version of a review was merged, in which case I can probably get rid of that branch.
We can figure this out by using the Change-Id: field of our Gerrit reviews.
So: print out the branches where all the Change-Ids are also in master.
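The script itself is not preserved in this copy; a sketch of the approach (assumes every commit carries a Gerrit Change-Id footer):

#!/bin/bash
# branch-merged: list branches whose every Change-Id already exists in master

for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
    [ "$branch" = "master" ] && continue

    merged=true
    for change in $(git log master.."$branch" --format='%b' |
                    sed -n 's/^Change-Id: //p'); do
        if ! git log master --format='%b' | grep -q "Change-Id: $change"; then
            merged=false
            break
        fi
    done

    $merged && echo "$branch"
done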
It’s not greatly efficient and if you are working with code bases with long histories you might need to limit the depth, but given that it doesn’t run often it completes quickly enough.
There’s no guarantee that there wasn’t something new in those branches, but most likely it was an earlier review or test code that is no longer relevant.
I was considering a tool that could use the Change-Id to figure out from gerrit whether a branch is an exact match of one that was previously up for review and so contains no possibly useful experimenting code, but teaching myself to clean up branches as I go is probably a better use of my time.