Docker @ Small Improvements


Whalecome to this blog post 🐳. I want to share with you how we use Docker at Small Improvements, how it helps us get our everyday tasks done, and what we have learned from working with it. For starters, I added an introductory section about what Docker actually is – if you are already familiar with it, feel free to skip that part.

This blog post was inspired by my recent visit to Docker Con EU 17 in Copenhagen. I picked up some of the topics that were discussed at the conference and provided links to the corresponding talks for reference. At the very end you’ll find some general information about Docker Con in case you are interested in attending the next one yourself.

But enough of the words now, let’s dive right into it!

What is Docker anyway?

Docker is a container tool that lets you run applications in encapsulated environments (containers). But unlike a VM, a Docker container is just a thin layer on top of the operating system of the host computer. If you imagine a VM to be a house with all its own plumbing, heating and wiring, then a Docker container is just one unit within a large, multi-tenant building where all the core infrastructure is centralized and shared. (That analogy is borrowed from Mike Coleman’s talk “Docker?!? But I’m a SYSADMIN”.) As you can imagine, this is a lot more efficient: while a full-fledged Linux VM image usually consumes around 1,000 MB of disk space, a Docker image ranges from a couple of MB up to a few dozen MB. And whereas a VM can take over a minute to start up, a Docker container is usually up and running in less than a second.
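
If you want to get a feeling for that speed difference yourself, you can time a throwaway container – the image and command below are just an illustration, any small image will do:

    # pull a small base image once, then time how long it takes to start a
    # container, run a command and shut down again – typically under a second
    docker pull alpine:3.6
    time docker run --rm alpine:3.6 echo "hello from a container"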

In case you are new to Docker and want to get going, here are some starting points for you:

Who is Docker interesting to?

Docker is not just fun to work with, there are numerous incentives to employ it in practice. The main target groups of Docker are:

Developers

As pointed out above, Docker is not just fast and efficient, it also consists of composable pieces that can be joined together in a very pleasing way. If you have a diverse app setup with various services, workers or databases, then it can be quite painful to set up and maintain a local development environment. With Docker however, you can define and spin up whole clusters with a single CLI command.
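
To give a rough idea of what that looks like in practice, here is the kind of interaction a compose-based setup gives you (assuming the services are described in a docker-compose.yml):

    # bring up all services defined in docker-compose.yml in the background
    docker-compose up -d

    # follow the logs of the whole cluster
    docker-compose logs -f

    # tear everything down again, including containers and networks
    docker-compose down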

Ops / DevOps / Sys-Admins

The world of operations has changed vastly in the last decade. It wasn’t too long ago that we SSHed into physical servers and apt-get-installed packages by hand. For a modern web app though, this scenario is barely imaginable anymore. The DevOps movement has broken loose, and Docker goes hand in hand with tools like Terraform and is fully integrated into powerful cloud platforms such as AWS.

Management

Since Docker shares resources so efficiently, it can be attractive for companies from a purely financial perspective. PayPal, for instance, took the effort to migrate most of their infrastructure to Docker. Today, after two years of planning and executing, they run 150,000 containers in production and their cost savings are massive. The interesting bit about their story is that introducing Docker was a business decision in the first place – getting the developers on board was one of the last things they did.

How we use Docker

Continuous Integration and deployment

We use TeamCity for continuous integration and continuous deployment (CI/CD). For each pull request that is opened on GitHub we run all the tests and code style checks to make sure that no regression gets introduced. When the build passes, we are able to push our app into production with just a couple of clicks.

Infrastructure-wise, the main server and the build agents run in Docker containers. Build and deploy pipelines are separate from one another and also differ slightly in how they are set up: for the builds we let TeamCity take care of handling all the artefacts according to the particular commit that is being processed. For the deploys we pack the artefacts into Docker volumes and use specialised containers for the various tasks (such as deploying to certain instances or migrating traffic between them).
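
A stripped-down sketch of that volume-based approach could look like the following – the volume name, the deploy image and its command are hypothetical, and our actual containers do a bit more than shown here:

    # pack the build artefacts of a particular build into a named volume
    docker volume create artefacts-build-1234
    docker run --rm \
        -v artefacts-build-1234:/artefacts \
        -v "$(pwd)/build:/build:ro" \
        alpine:3.6 cp -r /build/. /artefacts/

    # a specialised deploy container mounts the same volume read-only and
    # pushes the artefacts to the target environment
    docker run --rm -v artefacts-build-1234:/artefacts:ro \
        deploy-image deploy --target staging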

That whole setup works basically fine for us: even though the performance could be better, we are able to push out releases multiple times a day without too much effort. However, we still run into some edge cases every now and then, and generally see room for improving our tooling:

  • The duality in our build setup is unnecessarily complicated; we would prefer to produce one well-defined artefact per build that then gets passed around for both testing and deployment. Historically, the deploy pipeline was added later and was somewhat experimental, so our whole CI/CD setup is not very consistent anymore.
  • We are not convinced that our decision to use volumes for packing artefacts was the optimal choice anyway. A better way would be to follow the best practice of multi-stage builds as described in the official documentation (see the sketch after this list). One problem with the volumes is that we currently need to clean them up by hand every few weeks. (Labelling them would probably allow us to do that automatically, but we would need to upgrade our Docker version first.)
  • We can do better at securing our production credentials for the live deployment. If we used cluster management tools like Kubernetes or Swarm, we could use their out-of-the-box features for safely transferring secrets, which is always better than taking care of the security mechanisms by hand. As we might need to revisit our build process anyway in the medium term, this will certainly be one of our agenda points.
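
For reference, a minimal multi-stage Dockerfile along the lines of the official documentation could look like this – the Maven toolchain and the jar name are just assumptions to keep the example close to a Java stack, and multi-stage builds require Docker 17.05 or newer, which is another reason for us to upgrade:

    # stage 1: build the artefact inside a full JDK/Maven image
    FROM maven:3.5-jdk-8 AS build
    WORKDIR /src
    COPY . .
    RUN mvn package -DskipTests

    # stage 2: copy only the finished artefact into a slim runtime image
    FROM openjdk:8-jre-alpine
    COPY --from=build /src/target/app.jar /app/app.jar
    CMD ["java", "-jar", "/app/app.jar"]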

Local development setup

The Small Improvements application is a single page app in the frontend that connects to a REST API. The backend mainly consists of a monolithic Java web server that is hosted on Google App Engine (standard runtime). Other, smaller external services run on SaaS/PaaS platforms as well. Therefore we are not using Docker in production and currently have no strong incentive to change that.

It’s important for us to replicate an environment locally (on developer machines) that comes as close as possible to production. That was quite simple back in the day when our app consisted of just one server: we used the official App Engine plugin for IntelliJ IDEA and the built-in dev server. However, our infrastructure has grown over time and we now have more external services. For instance, we use Elasticsearch for our activity stream feature, a mail server for sending notification mails, and a microservice for handling user avatars. At some point we noticed that our local development setup no longer fully reflected our production infrastructure, and it became too complicated to take care of that manually.

That’s why we are currently working on complementing our local Java dev server with a Docker-based solution: in a docker-compose file we describe all the external services and provide a small tool on top of docker-compose for interacting with them in a well-defined way. That allows us – with just a single command – to fire up all our services, like Elasticsearch or the mail server. The dispatching behaviour of the GCloud load balancer is emulated by an HAProxy instance that serves as the entry point for all incoming requests.
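
A heavily simplified version of such a compose file might look like the following – the image tags, ports and the avatar service are placeholders, and the real setup contains more configuration (for instance the haproxy.cfg that emulates the GCloud dispatch rules):

    version: '3'
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
        ports:
          - "9200:9200"
      mail:
        image: mailhog/mailhog
        ports:
          - "8025:8025"   # web UI for inspecting outgoing mails
      avatars:
        build: ./avatar-service   # hypothetical local microservice
      haproxy:
        image: haproxy:1.7
        volumes:
          - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
        ports:
          - "8080:8080"   # single entry point for all incoming requests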

We haven’t rolled it out yet, but the preliminary experiences are looking very promising: no manual installations of tools anymore, and no hassle with different operating systems either. It’s fascinating to see how one universal platform makes such an impact on developer convenience.

Our Docker roadmap

Frankly, we don’t have a Docker roadmap. When it comes to using Docker in production we don’t have a case that is strong enough to change our current infrastructure. We are still quite far away from losing sleep over the invoices that Google sends us every month. Of course our application performance could be better here and there, but using Docker wouldn’t solve these issues.

So, instead of extending our Docker usage, we would rather improve and refine what we already have. For instance, we are considering rethinking our CI/CD pipeline in order to solve the issues described above. Also, we are about to roll out our Docker-based tool for the local development setup and will enhance it as we move along.

One thing that was interesting to see at the conference is what Docker offers in case you have a large legacy Java or .NET application. They run a special program for what they call “MTA (Modernizing Traditional Applications)”. The idea here is to migrate large (and therefore ponderous) code bases to the cloud, preferably using the Docker Enterprise Edition (EE). Without changing a single line of code, they containerize your existing app and help you set up an automated build and deploy pipeline. I have no experience with Docker EE myself, but the concept sounds interesting for companies that don’t have any cloud or DevOps knowledge yet still want to move in that direction stepwise and with minimal risk.

About Docker Con EU

As I pointed out in the beginning, I was motivated to write this blog post by going to Docker Con EU in the first place. So let me drop a few words about the actual conference:

  • At its core there were two days of talks, plus the opportunity to attend the Moby summit or the Docker EE summit. The summits, however, are probably only interesting if you work with these technologies. In addition to the talks they also offered some workshops (which you need to pay extra for, though).
  • The talks were divided into multiple tracks, such as “Use cases”, “Environment” and “Community”. The topics covered everything from security to best practices to architecture, and addressed beginners and experts alike. Everyone can compose their own schedule according to their interests. All talks are also available online.
  • The venue was the spacious Bella Center in the south of Copenhagen. The event was excellently catered (they served both breakfast and lunch) and there were plenty of tables and seats to rest or work at in between the talks.

All in all I enjoyed the talks and the overall atmosphere. Even though there was no big revelation that I took away from the conference, I learned about a lot of smaller details that will certainly help us consolidate our work with Docker at Small Improvements. In addition, it’s quite interesting to see all the different use cases and experiences that people have. One rarely sees such a high diversity of professional backgrounds as at modern DevOps conferences. In that respect, Docker Con provides an excellent forum for interesting conversations and the exchange of ideas.

Apart from the professional side of things, Copenhagen is a beautiful city and well worth a visit – so make sure to plan some leisure time if you go there.

Reflections on CSSconf EU 2017 (Berlin)


Recently, three of our developers attended CSSconf EU 2017 in Berlin. The talks were inspiring, and once again it became clear to us what a mature language CSS has become. The steady addition of new features continues to amaze, and the enthusiasm of the community is infectious. The conference itself was well organized (the food was awesome!) and we appreciate how much care the organizers took to create a safe and diverse environment for everyone.

In this blog post we reflect on our learnings from the conference and share our experiences with developing efficient CSS at scale.

There are a lot of guidelines on how to write modular JavaScript and tons of books about how to make Java code efficient. But what does good CSS code actually look like? The answer is probably not that different from other programming languages. The first step towards modular, reliable and reusable CSS code is to treat CSS as a regular programming language. At CSSconf, Ivana McConnell raised the question: “What makes a good CSS developer?” She pointed out that CSS still isn’t included as a programming language in the Stack Overflow survey and that in many companies there is even a hierarchy between CSS developers and “real” developers.

“We can always change the hierarchies. CSS is real development – let’s make that a given.”
(Ivana McConnell)

There are still developers and managers who think that CSS is just about putting in fancy colors and setting some margin here and there. However, CSS has become a powerful tool and new features are continuously added, not to mention the various preprocessors that have become a quasi-standard in recent years. Things like grids, animations and filters are first-class citizens by now and already widely supported by browsers.
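
A few lines are enough to illustrate how much of this works without any JavaScript or preprocessor these days – the class names are made up for illustration:

    /* a three-column grid, a filter and a keyframe animation in plain CSS */
    .gallery {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      grid-gap: 1rem;
      animation: fade-in 0.5s ease-in;
    }
    .gallery img {
      filter: grayscale(100%);
      transition: filter 0.3s ease;
    }
    .gallery img:hover {
      filter: none;
    }
    @keyframes fade-in {
      from { opacity: 0; }
      to   { opacity: 1; }
    }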

To showcase the feature richness of CSS, Una Kravets performed a live coding session in which she built a simple yet fun and interactive browser game using nothing but regular CSS. Mathias Bynens did something similar at CSSconf EU 2014, where he presented a mock shooter game that consisted of only a single line of HTML. The point here is not that CSS should replace JavaScript. On the contrary – while CSS and JavaScript are and always will be different things, it’s interesting to see the borders blurring and how both languages influence each other.

Make it scale, but keep it consistent

At Small Improvements we work on a large single page application. In three feature teams we maintain roughly 40k lines of LESS code. Two of our biggest ongoing challenges are to make our styling consistent and our CSS code modular and reusable.

Maintaining a style guide and establishing a design culture

Especially when multiple teams work on the same application, there is a certain risk that each team comes up with a slightly different style of implementation. There are numerous examples of this, and conducting an interface inventory (as suggested by Brad Frost) can yield surprising results. Achieving consistent styling is even more difficult if the frontend implementation is not technically homogeneous. Even though we implement all new frontend features at Small Improvements in React, we still have a lot of Angular code and even some old Wicket pages lingering around. The user doesn’t care about these details, so the question is: how do we keep track of all the various patterns we use across the app and provide a seamless design language?

In her talk “Scaffolding CSS at scale”, Sareh Heidari shared an example of how to discover and extract visual patterns on the BBC News website. We can confirm that we have had good experiences with a similar approach. We recently set out to build a new style guide for our app that allows us to stay aware of all the different patterns we use. This helps us compose new features out of existing components. But the key for us is not the style guide itself – it is the process around it: keeping a close eye on everything that is newly built and coming together frequently to talk about how to integrate these additions into the bigger picture. We perceive the style guide as a starting point for discussion; you could even say that it’s an artless byproduct of our design process.

Project setup and implementation

For us it works best to structure our code base in a domain-driven way. We follow this approach throughout our entire app and can fully recommend it. For the frontend we decided to use CSS Modules (in the form of LESS files) that we put right next to our React components. That way a component always comes with its own, encapsulated styling. There are various approaches in the React community to this kind of project layout. (It has even become popular to go further and use some form of inline styling – see Mark Dalgleish’s talk for an overview.) CSS Modules worked well for us since we had been using LESS before, which allowed for a convenient and gradual migration path.
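
As a rough sketch of that colocation (component and class names are made up, and it assumes a webpack loader with CSS Modules enabled for .less files):

    /* UserAvatar/UserAvatar.less – scoped styles living next to the component */
    .avatar {
      width: 40px;
      height: 40px;
      border-radius: 50%;
    }

    // UserAvatar/UserAvatar.jsx – the component imports "its" styles
    import React from 'react';
    import styles from './UserAvatar.less';

    export const UserAvatar = ({ src, name }) => (
      <img className={styles.avatar} src={src} alt={name} />
    );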

Glenn Maddern – who heroically stepped in last-minute for Max Stoiber – updated us on the most recent changes in the Styled Components project. But no matter whether you prefer CSS Modules or Styled Components, it is crucial to understand the motivation behind these libraries in order to build large-scale applications: Glenn Maddern’s talk at ColdFront16 gives good insight into this way of thinking and why it’s beneficial.

The one thing that makes us glance jealously over at Styled Components is how easily CSS code can be parametrized there. We are therefore looking forward to better browser support for CSS variables, because they are the native solution to this problem. David Khourshid demonstrated the handover between JavaScript and CSS in his talk “Getting Reactive with CSS”. With this approach, the hassle at the JS/CSS intersection largely disappears.
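
The idea is that JavaScript only hands over a value via a custom property and plain CSS takes care of the rest – a minimal sketch (selector and property names are made up):

    /* CSS: the bar derives its width from a custom property */
    .progress-bar {
      width: calc(var(--progress, 0) * 100%);
      transition: width 0.2s ease-out;
    }

    // JavaScript: hand the current value over to CSS
    document.querySelector('.progress-bar')
      .style.setProperty('--progress', '0.75');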

Takeaway

We don’t have a catchy conclusion to draw. There is certainly no right or wrong when it comes to which approach works best or which library helps most. For us it was nice to see a lot of our current assumptions confirmed, and if we were asked to write down three of them, it would be these:

  1. CSS is a fully-fledged programming language – for sure! Stay up to date with all new features and take advantage of them once they are commonly supported.
  2. Keep styling and markup closely together. This promotes reusability best. Leverage component interfaces in order to create clear and predictable boundaries.
  3. Talk, talk, and talk frequently about visual patterns to ensure consistency in the long term. Some sort of process or documentation can help here.

The development team here at Small Improvements has done its fair share of conferences in the past (thanks to the individual learning budgets we are offered). It’s awesome that our design team has now grown to the point where we’re also attending designer-developer conferences such as CSSconf EU. Bring on next year!