Recently, we have been using Google's container engine for deploying our apps, an intro to which can be found here. The code used throughout this post can be found here.
Up to now we have been building, testing and deploying our containers from Circle CI. Recently, however, Google has released Cloud Container Builder, which has piqued our interest for a few reasons.
So, what is Google Cloud Container Builder? Simply put, it is a replacement for other CI processes in which each build step is actually its own docker container, with your code mounted and the working directory set to /workspace.
This means that a build step can do anything that can be done from inside a container, without needing to worry about the environment of the host, which opens up a lot of flexibility.
We recently published an article about setting up a Kubernetes cluster on Google Container Engine here, which this post is based on.
To set up your cloud build you will need to open up the cloud console and create a build trigger.
From now on, every time you commit code matching the build regex a build will be triggered. Alternatively, you can start a new build by clicking 'Run trigger' on the build triggers page, or by kicking one off from the command line.
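For example, assuming your build config is checked in as cloudbuild.yaml at the root of the repository (the file name is our assumption), something like this should submit a build; exact command names may have changed since the beta:

```bash
# Submit the current directory to the cloud container builder, using the
# build config in cloudbuild.yaml.
gcloud container builds submit --config cloudbuild.yaml .
```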
In our build config (cloudbuild.yaml) we have used the following build parameters:
- id - A unique identifier so that the step can be referenced later.
- name - The name of the container to run the command.
- args - The arguments to pass to the container entrypoint.
- waitFor - The ids of previous steps to wait for before executing.
- env - A list of environment variables to set for the step.

For a full description of all available parameters look here. Now let's take a look at these build steps in more detail.
The first thing we need to do is prepare our build environment. We use the base cloud-builder docker image to build our builder image from Dockerfile.builder.
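A sketch of what that step might look like, assuming the image is tagged builder so that later steps can reference it by name (the tag and step id are placeholders of ours):

```yaml
steps:
# Build the builder image from Dockerfile.builder using the stock docker
# cloud-builder, and tag it for use in later steps.
- id: 'build-builder'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'builder', '-f', 'Dockerfile.builder', '.']
```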
This builds our Dockerfile.builder inside the cloud-builder base docker image and stores it in a docker state shared across all steps. Here we chose to build the builder each time so that our requirements are always up to date; however, it could just as easily be pulled from a docker registry, or you could simply use one of the cloud builder base containers if you don't have any special requirements.
In reality, for speed reasons, we would do a combination, where we store a base builder image that installs most of our dependencies, such as python, gcloud, docker and kubectl, and extend this per project, installing our project-specific requirements.
In our example we install flake8 so that we can lint our python code; in practice this will likely include more requirements for inspecting your image, such as docker-compose, and maybe tools like selenium and web drivers.
This is where we actually build our container. We spin up a new instance of our builder image and run scripts/build.sh. Any images built here will also be stored in the docker state for future steps to use.
We specify that this step should wait for the builder image to be built using the waitFor parameter.
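A sketch of what this step might look like, assuming the builder image was tagged builder in the previous step (the step ids are also ours):

```yaml
# Run the project's build script inside the builder image; waitFor ensures
# the builder image exists before this step starts.
- id: 'build'
  name: 'builder'
  args: ['bash', 'scripts/build.sh']
  waitFor: ['build-builder']
```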
Here we check our code for any style errors. We don't really need to run this inside our final container, as that container uses the current directory as its build context, and it would be nice not to have to wait for the image to build before finding out that a line is too long or that you have missed a blank line somewhere. So we make this step wait only for build-builder; it can therefore start as soon as build-builder is done and can fail the build before the main image has finished building.
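A minimal sketch of such a lint step, again assuming the builder image is tagged builder and that flake8 is run against the whole workspace:

```yaml
# Lint the mounted source with flake8; this only needs the builder image,
# not the application image, so it can run in parallel with the build step.
- id: 'lint'
  name: 'builder'
  args: ['flake8', '.']
  waitFor: ['build-builder']
```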
Here we test our container. This will usually involve spinning up database and redis containers, but in this example we just run manage.py test with an internal sqlite db.
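A sketch of the test step under the same assumptions; here we have it wait for the build step, since it exercises what we have just built:

```yaml
# Run the Django test suite; in this simplified example it uses a local
# sqlite database rather than dedicated db/redis containers.
- id: 'run-tests'
  name: 'builder'
  args: ['python', 'manage.py', 'test']
  waitFor: ['build']
```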
Once all of our tests have passed (we wait for both lint and run-tests) we deploy our code, tagged with the commit sha.
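A sketch of the deploy step, assuming a deploy script at scripts/deploy.sh (a placeholder of ours) that pushes the image and updates the cluster:

```yaml
# Deploy only once both linting and testing have succeeded.
- id: 'deploy'
  name: 'builder'
  args: ['bash', 'scripts/deploy.sh']
  waitFor: ['lint', 'run-tests']
```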
It is important to note that variables like $PROJECT_ID and $COMMIT_SHA are not actually environment variables but are substituted into your build config at build time. You can pass them into build steps as build environment variables using the env parameter on a step.
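For example, adding something like the following to the deploy step sketched above would expose both values to the script:

```yaml
  # Substitutions are expanded in the config, then exposed to the step as
  # ordinary environment variables.
  env:
  - 'PROJECT_ID=$PROJECT_ID'
  - 'COMMIT_SHA=$COMMIT_SHA'
```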
A full list of substitutions can be found here.
NOTE: The cloud builder doesn't currently support secrets, which prevents the correct auth scopes from being passed into your builder to interact with kubectl. For this reason we have some additional workaround code that fetches credentials for another service account from a private storage bucket and activates them for use with kubectl.
The code does roughly the following.
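What follows is a hypothetical sketch rather than the exact script; the bucket, key file, cluster, zone and deployment names are all placeholders:

```bash
# Fetch a service account key from a private bucket and activate it so that
# kubectl has the scopes it needs, then roll out the freshly tagged image.
gsutil cp gs://my-private-bucket/deploy-service-account.json /tmp/deploy-sa.json
gcloud auth activate-service-account --key-file /tmp/deploy-sa.json
gcloud container clusters get-credentials my-cluster --zone europe-west1-b
kubectl set image deployment/my-app my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA
```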
Secret handling is currently being developed so hopefully this workaround won't be needed for much longer.
There is a little bit of magic that goes into running your builders locally the same way they run during a build. Firstly, notice we didn't copy any of our source into the builder container; that's because we mount the source through volumes and set the working directory, for which we use -v `realpath .`:/workspace and -w /workspace.
We also mount the docker state from the host by mounting the socket with -v /var/run/docker.sock:/var/run/docker.sock, so that the builder talks to your docker instance, and -v ~/.docker:/root/.docker to load your config.
Putting these together gives us our run command.
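A sketch of that command, assuming the builder image is tagged builder and that it should run the build script:

```bash
# Run the builder locally with the source mounted at /workspace and the
# host's docker socket and config shared, mirroring the cloud build setup.
docker run --rm \
  -v "$(realpath .)":/workspace \
  -w /workspace \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.docker:/root/.docker \
  builder bash scripts/build.sh
```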
You should also add any environment variables specified in your config.
In figuring this stuff out we hit a few gotchas along the way to do with how the docker state is handled.
The first concerns running tests in parallel. It seemed like a great idea to have our unit and selenium tests running side by side; both spin up their own instances of the web server, db and redis using docker-compose, so both should be completely independent. In reality, however, we ended up clashing on names depending on when containers were created and destroyed by other processes. One option would be to duplicate services for the different test types; alternatively we could move away from compose and manually link our containers.
The second gotcha we came across was inspecting our services. During our testing we inspect our containers to make sure the db and redis services are fully up before hooking up our web server instance. Originally we inspected localhost for this; however, since our containers are running on the host's docker engine and not our builder's, we can't actually reach them like that. Instead we create another container that is a copy of our builder (yup, we are running our builder inside our builder to inspect our other containers) and link it to our network. From here we can inspect our db, redis and web server using hostnames.
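One way this can be done is sketched below; the network name, service hostname and port are placeholders, and docker-compose network names vary by project:

```bash
# Attach a throwaway copy of the builder to the compose network and check
# that the db service accepts connections on its hostname.
docker run --rm \
  --network myproject_default \
  builder \
  python -c "import socket; socket.create_connection(('db', 5432), timeout=5)"
```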
The technology here is really interesting, and the ability to run whatever you like without worrying about your environment is very appealing. There are, however, a few things missing that make it less attractive than the alternatives in its current incarnation. Chief among them is secret handling, which we need in order to authenticate kubectl, but which is currently in development. Until these gaps are addressed we will be sticking with Circle CI; however, the service is still in early beta, so hopefully they will be fixed fairly early on.