Docker @ Small Improvements


Whalecome to this blog post 🐳. I want to share how we use Docker at Small Improvements, how it helps us get our everyday tasks done, and what we learned from working with it. For starters, I added an introductory section about what Docker actually is – if you are already familiar with it, feel free to skip that part.

This blog post was inspired by my recent visit to DockerCon EU 2017 in Copenhagen. I picked up some of the topics that were discussed at the conference and provided links to the corresponding talks for reference. At the very end you’ll find some general information about DockerCon in case you are interested in attending the next one yourself.

But enough of the words now, let’s dive right into it!

What is Docker anyway?

Docker is a container tool that lets you run applications in encapsulated environments (containers). But unlike a VM, a Docker container is just a thin layer on top of the operating system of the host computer. If you imagine a VM as a house with all its own plumbing, heating and wiring, then a Docker container is just one unit within a large, multi-tenant building where all the core infrastructure is centralized and shared. (That analogy is borrowed from Mike Coleman’s talk “Docker?!? But I’m a SYSADMIN”.) As you can imagine, this is a lot more efficient: while a full-fledged Linux VM image usually consumes around 1,000 MB of disk space, a Docker image ranges from a couple of MBs up to a few dozen MBs. And whereas a VM can take over a minute to start up, a Docker container is usually up and running in scarcely a second.

In case you are new to Docker and want to get going, the official documentation and its “Get Started” guide are good starting points.

Who is Docker interesting to?

Docker is not just fun to work with, there are numerous incentives to employ it in practice. The main target groups of Docker are:

Developers

As pointed out above, Docker is not just fast and efficient, it also consists of composable pieces that can be joined together in a very pleasing way. If you have a diverse app setup with various services, workers or databases, it can be quite painful to set up and maintain a local development environment. With Docker, however, you can define and spin up whole clusters with a single CLI command.

Ops / DevOps / Sys-Admins

The world of operations has changed vastly in the last decade. It wasn’t too long ago that we SSHed into physical servers and apt-get-installed packages by hand. For a modern web app, though, this scenario is barely imaginable anymore. The DevOps movement has broken loose, and Docker goes hand in hand with tools like Terraform and is fully integrated into powerful cloud platforms such as AWS.

Management

Since Docker shares resources so efficiently, it can be attractive for companies from a purely financial perspective. PayPal, for instance, took the effort to migrate most of their infrastructure to Docker. Today, after two years of planning and executing, they run 150,000 containers in production and their cost savings are massive. The interesting bit about their story is that introducing Docker was a business decision in the first place – getting the developers on board was one of the last things they did.

How we use Docker

Continuous Integration and deployment

We use TeamCity for continuous integration and continuous deployment (CI/CD). For each pull request that is opened on GitHub we run all the tests and code style checks to make sure that no regression gets introduced. When the build passes, we are able to push our app out to production with just a couple of clicks.

Infrastructure-wise, the main server and the build agents run in Docker containers. Build and deploy pipelines are separate from one another, and they also differ slightly in how they are set up: for the builds we let TeamCity take care of handling all the artefacts according to the particular commit that is being processed. For the deploys we pack the artefacts into Docker volumes and use specialised containers for the various tasks (such as deploying to certain instances or migrating traffic between instances).

That whole setup basically works fine for us: even though the performance could be better, we are able to push out releases multiple times a day without too much effort. However, we still run into some edge cases every now and then, and generally see room for improving our tooling:

  • The duality in our build setup is unnecessarily complicated, and we would prefer to produce one well-defined artefact per build that then gets passed around for both testing and deployment. Historically, the deploy pipeline was added later and was a bit experimental, so our whole CI/CD setup is no longer very consistent.
  • We are not convinced that our decision to use volumes for packing artefacts is the optimal choice anyway. A better way is to follow the best practice of multi-stage builds as described in the official documentation (see the sketch after this list). One problem is the cleanup of these volumes, which we currently need to do by hand every few weeks. (Labelling them would probably allow us to do that automatically, but we need to upgrade our Docker version for that first.)
  • We can do better at securing our production credentials for the live deployment. If we used cluster management tools like Kubernetes or Swarm, we could use their out-of-the-box features for safely transferring secrets, which is always better than taking care of the security mechanisms by hand. As we might need to revisit our build process anyway in the mid-term, this will certainly be one of our agenda points.
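
To illustrate the multi-stage idea: everything needed for compiling lives in a throwaway build stage, and only the finished artefact is copied into the final image. Here is a minimal sketch for a Java app – the image tags and paths are illustrative assumptions, not our actual setup:

```dockerfile
# Build stage: contains the full toolchain and is discarded afterwards.
FROM maven:3.5-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Runtime stage: only the built artefact ends up in the final image,
# so there are no intermediate volumes to clean up by hand.
FROM openjdk:8-jre-alpine
COPY --from=build /app/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```

Multi-stage builds require Docker 17.05 or later, which would be another reason for us to upgrade.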

Local development setup

The Small Improvements application is a single-page app in the frontend that connects to a REST API. The backend mainly consists of a monolithic Java web server that is hosted on Google App Engine (standard runtime). Other, smaller external services run on SaaS/PaaS platforms as well. Therefore we are not using Docker in production and currently have no strong incentive to change that.

It’s important for us to replicate an environment locally (on developer machines) that comes as close as possible to production. That was quite simple back in the day when our app consisted of only one server: we just used the official App Engine plugin for IntelliJ IDEA and the built-in dev server. However, our infrastructure grew over time, so by now we have more external services. For instance, we use Elasticsearch for our activity stream feature, a mail server for sending notification mails, and a microservice for handling user avatars. At some point we noticed that our local development setup didn’t fully reflect our production infrastructure anymore, and it became too complicated to take care of that manually.

That’s why we are currently working on complementing our local Java dev server with a Docker-based solution: in a docker-compose file we describe all the external services, and we provide a small tool on top of Docker Compose for interacting with them in a well-defined way. That allows us – with just a single command – to fire up all our services, like Elasticsearch or the mail server. The dispatching behaviour of the GCloud load balancer is emulated by an HAProxy instance that serves as the entry point for all incoming requests.
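
A minimal sketch of what such a Compose file can look like – the concrete images, versions and ports here are illustrative assumptions, not our actual configuration:

```yaml
version: "3"
services:
  haproxy:
    image: haproxy:1.7
    ports:
      - "8080:8080"   # single entry point, emulating the GCloud load balancer
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  elasticsearch:
    image: elasticsearch:5.6
    environment:
      - discovery.type=single-node
  mail:
    image: mailhog/mailhog   # catches outgoing notification mails locally
```

With a file like this in place, `docker-compose up` brings up the whole environment with a single command.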

We haven’t rolled it out yet, but our preliminary experience looks very promising: no more manual installation of tools, and no hassle with different operating systems either. It’s fascinating to see how one universal platform makes such an impact on developer convenience.

Our Docker roadmap

Frankly, we don’t have a Docker roadmap. When it comes to using Docker in production we don’t have a case that is strong enough to change our current infrastructure. We are still quite far away from losing sleep over the invoices that Google sends us every month. Of course our application performance could be better here and there, but using Docker wouldn’t solve these issues.

So, instead of extending our Docker usage, we would rather improve and refine what we already have. For instance, we are considering rethinking our CI/CD pipeline in order to solve the issues mentioned above. Also, we are about to roll out our Docker-based tool for the local development setup and will enhance it as we move along.

One thing that was still interesting to see at the conference is what Docker offers you in case you have a large legacy Java or .NET application. They offer a special program called “MTA (Modernizing Traditional Applications)”. The idea here is to migrate large (and therefore ponderous) code bases to the cloud, preferably using the Docker Enterprise Edition (EE). Without changing a single line of code, they containerize your existing app and help you set up an automated build and deploy pipeline. I have no experience with Docker EE myself, but the concept sounds interesting for companies that don’t have any cloud or DevOps knowledge yet, but still want to move in that direction step by step with minimal risk.

About Docker Con EU

As I pointed out in the beginning, I was motivated to write this blog post by going to DockerCon EU in the first place. So let me drop a few words about the actual conference:

  • At its core there were two days of talks, plus the opportunity to attend the Moby summit or the Docker EE summit. The summits, however, are probably only interesting if you work with these technologies. In addition to the talks they also offered some workshops (which you need to pay extra for, though).
  • The talks were divided into multiple tracks, such as “Use cases”, “Environment” and “Community”. The topics covered everything from security and best practices to architecture, and addressed beginners and experts alike. Everyone can compose their own schedule according to their interests. All talks are also available online.
  • The venue was the spacious Bella Center in the south of Copenhagen. The event was excellently catered (they served both breakfast and lunch) and there were plenty of tables and seats to rest or work at in between the talks.

All in all I enjoyed the talks and the overall atmosphere. Even though there was no big revelation that I took away from the conference, I learned about a lot of smaller details that will certainly help us consolidate our work with Docker at Small Improvements. In addition, it’s quite interesting to see all the different use cases and experiences that people have. One rarely sees such a high diversity of professional backgrounds as at modern DevOps conferences. In that respect, DockerCon provides an excellent forum for interesting conversations and the exchange of ideas.

Apart from the professional side of things, Copenhagen is a beautiful city and well worth a visit – so make sure to plan some leisure time if you go there.

Looking back at the FullStack Fest

It was a couple of months ago that I came across this conference called FullStack Fest. Skimming through the agenda, I was immediately intrigued and thought “I’ve got to check this out”. The coolest bit? The conference was taking place in the beautiful city of Barcelona.

September finally came around, and just as the Berlin air was getting chilly and bringing signs of the impending winter, I was flying off to the warmth of Spain. I got there a bit early and spent a nice Sunday roaming the streets, admiring the architecture and the history. The next day, the backend part of FullStack Fest began. It was interesting to step into the intricate world of the architecture of buildings one day and, after admiring it, to step into the equally intricate world of software architecture the next.

Sun-baked and refreshed, I went to the conference, all set with a notebook and a bag of goodies from the organisers. One must collect stickers for the laptop, after all.

The backend days of the conference were abstractly divided into “DevOps” and “Architecture” with the topic being “Problems of Today, Wonders from the Future”. To describe the theme of the conference in a single word, I would say “Distributed Systems”.

Day 1: DevOps

The first talk was by Karissa McKelvey (@okdistribute). She talked about a project which allows people to share their scientific work without the consumers having to pay for it. A common problem in research is getting access to the required data, journals, publications etc., because a lot of bureaucracy, censorship and corporate licensing gets in the way of open-sourcing knowledge. Karissa and her team have worked on the Dat Project. It creates a distributed network of many data hosts (mostly universities), through which you can upload your files and download any file via a little identifier. You can access Karissa’s presentation from the conference using this little identifier (dat://353c5107716987682f7b9092e594b567cfd0357f66730603e17b9866f1a892d8) once you install the dat tool on your machine. Though this is still vulnerable to being used as an illegal file hosting service, it’s a good step towards making data and knowledge more reachable and transparent.

Following up on this was an interesting introduction to Ethereum as a way to enter ‘contracts’ without trusting a third party such as a notary; this is done by distributing the idea of trust amongst many participants. As Luca Marchesini (@xbill82) said in his talk:

“The machine is everywhere… The machine is nowhere.”

With the beautiful underlying power of the Nakamoto consensus protocol that powers the blockchain, and the added flexibility of Turing-complete capabilities that let you express the intent of your contract and its fulfilment in terms of an actual computer program, you can have the word of truth floating around in the world, verifiable and undeniable.

With the buzzwords “microservices” and “serverless” applications going around, one would of course expect a talk on these topics. Marcia Villalba (@mavi888uy) gave a great talk on what “serverless” really means… and no, it does not mean there is no server (of course). The idea of a serverless application is to utilise the cloud and write self-contained functions that do simple tasks. Some highlights from the talk worth remembering (a small sketch of such a function follows the list):

  • Functions in the cloud are triggered by events; they do not have state.
  • Pay as you go, scale automatically.
  • Create a proof of concept and then optimise your solution to take advantage of the cloud.
  • Automate your CI pipeline and your testing.
  • Reduce latency by being at the edge of the cloud.
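
To make the “stateless, event-triggered” idea concrete, here is a minimal sketch of a cloud function in the style of AWS Lambda’s Node.js runtime – the event shape and the task itself are hypothetical:

```typescript
// A stateless, event-triggered function: all input arrives in the event,
// and nothing is kept in memory between invocations.
interface ResizeEvent {
  bucket: string;    // hypothetical event payload
  imageKey: string;
}

export const handler = async (event: ResizeEvent) => {
  // Do one simple, self-contained task per function...
  console.log(`Resizing ${event.imageKey} from bucket ${event.bucket}`);
  // ...and return a result. The platform scales instances up and down
  // automatically and bills per invocation ("pay as you go").
  return { statusCode: 200, body: "ok" };
};
```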

Next we stepped into the world of cyber security with Dr. Jessica Barker (@drjessicabarker), who talked about tackling vulnerabilities, specifically those introduced by negligence on the part of an end user. She talked about educating users on security instead of treating them as the weakest link in the chain and ‘dumbing things down’. She made her case in light of the Pygmalion Effect, according to which higher expectations lead to better performance. A common problem when building human friendly security guidelines is that the user is treated as a dumb entity and that leads to the user acting like a dumb entity.

Frank Lyaruu (@lyaruu) then came in with an anecdote about how, as a child, he wanted a Swiss Army knife that did everything, and ended up with an utterly useless one. It was quite easy to see the analogy here… we have all faced feature bloat, we’ve all wanted a framework that does everything and then been frustrated with the abstractions that make customisation a nightmare. Frank introduced the concept of ‘fullstack databases’. The key idea? Identify your use case and use the right database for it. While a SQL database may work for one scenario, a graph database would be much better in another. The takeaway:

“Your thinking influences your choice of tools and your choice of tools influences your thinking.”

A representative from Booking.com, Sahil Dua (@sahildua2305), then told us how Booking.com handles their deep learning models in production. The problem they need to solve is that different data scientists need access to independent environments for training. The training script lives in a container, a container runs on every server that is needed, and Kubernetes manages the load across these containers. This was indeed a lesson in how to manage many containers and independent environments with very high performance needs.
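
I didn’t note the exact setup, but a one-off training run as described could be expressed as a Kubernetes Job along these lines – the job name, image and resource numbers are made up for illustration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model-42              # hypothetical training run
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: registry.example.com/ml/trainer:latest  # hypothetical image
          resources:
            limits:
              cpu: "4"
              memory: 16Gi
      restartPolicy: Never          # a training run either finishes or fails
```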

As Software Engineers, we know one thing for sure, and that is that things will, at some point, fail.

“There are two kinds of systems: those which have failed and those which will.”

Aishraj Dahal (@aishraj) walked us through chaos management. Some of the useful principles he talked about:

  • Automate what you can, and have a framework for dealing with incidents.
  • Define what a “minor” and a “major” incident means.
  • Define business failures in terms of business metrics, for example the amount of revenue lost per hour of downtime.
  • Single Responsibility Principle: one person should be responsible for one task in an incident. If everyone is combing through the git history looking for the last stable commit, that’s redundant work.
  • Never hesitate to escalate.
  • You need an incident commander: the person who orchestrates the efforts to get back on track.

Day 2: Architecture

The second day of FullStack Fest began with an exciting talk by John Graham-Cumming (@jgrahamc) on the Internet of Things as a vector for DDoS attacks. He showed how vulnerable IoT devices are, with simple lapses like having telnet open on port 23. These devices are exploited to send small HTTP requests to a server – a LOT of them – demanding large responses targeted at a victim. As an employee of Cloudflare he could shed some light on how network patterns are used to distinguish legitimate requests from malicious ones. Some ways to protect yourself against DDoS attacks are to install something that does rate limiting, block every entry point that you do not need, and use DDoS protection tools from a vendor such as Cloudflare.
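
As a rough illustration of the rate-limiting idea (a toy sketch, not Cloudflare’s actual machinery), a token bucket gives each client a burst budget that refills over time:

```typescript
// Toy token-bucket rate limiter: each client may burst up to `capacity`
// requests, refilled at `refillPerSec` tokens per second.
class TokenBucket {
  private tokens: number;
  private last = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  allow(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // over the limit: drop or delay this request
  }
}

// One bucket per client IP (hypothetical usage inside a request handler).
const buckets = new Map<string, TokenBucket>();
function allowRequest(ip: string): boolean {
  if (!buckets.has(ip)) buckets.set(ip, new TokenBucket(10, 5));
  return buckets.get(ip)!.allow();
}
```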

One of my favourite talks from Day 2 was James Burns’ (@1mentat) introduction to chaos engineering and distributed tracing. He began by defining a practical distributed system as one that is observable and resilient. Observability comes with tracing, whereas resilience can be tested through chaos engineering, i.e. intentionally causing a system to fail as a “drill” and having the engineers on board try to fix it without knowing the cause of the problem, or even what the problem is. If you run many such drills, the team will be well prepared to tackle real chaos when it hits.

Chris Ford (@ctford) took the stage and talked about a hipster programming language called Idris which can be used to specify distributed protocols. In Ford’s words, his 10th rule of microservices is:

“Any sufficiently complicated microservice architecture contains an ad-hoc, informally-specified, bug-ridden, slow implementation of a distributed protocol.”

A distributed protocol’s specification can be tricky to get right. With a language like Idris, whose compiler checks the types, where functions are values and even types are values, the level of strictness when specifying a protocol is greatly increased, and the chance of runtime bugs is reduced because the compiler is smart enough to catch protocol violations. A protocol can be thought of as a finite state machine, and that is how it is specified in Idris. Be forewarned though, this is still ongoing research and definitely not production ready!
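
Idris can enforce such a state machine at compile time with dependent types. As a rough (and much weaker) flavour of the same “protocol as a finite state machine” idea, here is a made-up request/reply protocol sketched in TypeScript, where each state only offers the transitions that are legal from it, so out-of-order steps fail to type-check:

```typescript
// Hypothetical protocol: idle -> awaitingReply -> done.
type Idle          = { state: "idle" };
type AwaitingReply = { state: "awaitingReply"; requestId: number };
type Done          = { state: "done"; reply: string };

// Each transition consumes one state and produces the next one.
function sendRequest(conn: Idle, requestId: number): AwaitingReply {
  return { state: "awaitingReply", requestId };
}

function receiveReply(conn: AwaitingReply, reply: string): Done {
  return { state: "done", reply };
}

// A legal protocol run type-checks:
const done = receiveReply(sendRequest({ state: "idle" }, 1), "pong");

// An illegal run does not – you cannot receive a reply while idle:
// receiveReply({ state: "idle" }, "pong"); // compile-time error
```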

We then dove into philosophy, the nature of order, and structure-preserving transformations with Jerome Scheuring (@aethyrics). He talked about identifying the core of the application and then building transformations around it, the key being that the structure of your application remains the same when more layers are added onto it. He hinted at functors as a tool for achieving such transformations of architecture.

After some lightning talks and a tutorial on ‘hacking’ into systems that only exist for a few milliseconds (lambdas which are only alive for the scope of a single execution), and then on how to defend such systems, the backend part of the conference came to a close.

The conference was a pretty cool look into research topics meeting the software industry and creating some innovative solutions to existing problems. Though I haven’t listed all the talks here, you can check them out on YouTube: https://www.youtube.com/watch?v=_V4tGx85hUA&t=536s.

I left Barcelona feeling that I had gazed into the future of technology and seen the wheels set in motion for many advancements to come in the next few years. Though the conference could have been even better with some topics related more explicitly to everyday software development, I feel that I walked out a more knowledgeable person than before.


Broadening one’s horizons beyond the scope of one’s job description is not only intellectually stimulating but also makes for a more content and productive mind. Small Improvements, by sponsoring this trip (and many others for their employees’ learning and development), is creating a happier and smarter workplace. I am back in Berlin, at my desk, ready to tackle more challenges and apply the numerous things I gleaned from FullStack Fest. Looking forward to the next conference!

Reflections on CSSconf EU 2017 (Berlin)


Recently, three of our developers attended CSSconf EU 2017 in Berlin. The talks were inspiring, and once again it became clear to us what a mature language CSS has become. The steady addition of new features continues to amaze, and the enthusiasm of the community is infectious. The conference itself was well organized (the food was awesome!) and we appreciate that the organizers took so much care in creating a safe and diverse environment for everyone.

In this blog post we reflect on our learnings from the conference and share our experiences on developing efficient CSS at scale.

There are a lot of guidelines on how to write modular JavaScript and tons of books about how to make Java code efficient. But what does good CSS code actually look like? The answer is probably not that different from that for other programming languages. The first step towards modular, reliable and reusable CSS code is to perceive CSS as a regular programming language. At CSSconf, Ivana McConnell raised the question: “What makes a good CSS developer?”. She pointed out that CSS still isn’t included as a programming language in the Stack Overflow survey, and that in many companies there is even a hierarchy between CSS developers and “real” developers.

“We can always change the hierarchies. CSS is real development – let’s make that a given.”
(Ivana McConnell)

There are still developers and managers who think that CSS is just about putting on fancy colors and setting some margin here and there. However, CSS has become a powerful tool, and new features are continuously added, not to mention the various preprocessors that have become a quasi-standard over the last years. Things like grids, animations and filters are first-class citizens by now and already widely supported by browsers.

To showcase the feature richness of CSS, Una Kravets performed a live coding session in which she built a simple yet fun and interactive browser game just by making use of regular CSS. Mathias Bynens did something similar at CSSconf EU 2014, where he presented a mock of a shooter game that consisted of only a single line of HTML. The point here is not that CSS should replace JavaScript. On the contrary – while CSS and JavaScript are and always will be different things, it’s interesting to see the borders blurring and how both languages influence each other.

Make it scale, but keep it consistent

At Small Improvements we work on a large single-page application. In three feature teams we maintain roughly 40k lines of LESS code. Two of our biggest ongoing challenges are keeping our styling consistent and our CSS code modular and reusable.

Maintaining a style guide and establishing a design culture

Especially when multiple teams work on one and the same application, there is a certain risk that each team comes up with a slightly different style of implementation. There are numerous examples of this, and conducting an interface inventory (as suggested by Brad Frost) can yield surprising results. Achieving consistent styling is even more difficult if the implementation of the frontend is not technically homogeneous. Even though we implement all new frontend features at Small Improvements in React, we still have a lot of Angular code and even some old Wicket pages lingering around. The user doesn’t care about these details, so the question is: how do we keep track of all the various patterns that we use across the app and provide a seamless design language?

In her talk “Scaffolding CSS at scale”, Sareh Heidari shared how visual patterns are discovered and extracted at the BBC News website. We can confirm that we have had good experiences with a similar approach. We recently set out to build a new style guide for our app that allows us to stay aware of all the different patterns we use. It helps us compose new features out of existing components, but the main value for us is not the style guide itself – it is the process around it: keeping a close eye on everything that’s newly built and coming together frequently to talk about how to integrate these additions into the bigger picture. We perceive the style guide as a trigger for discussion; you could even say that it’s an artless byproduct of our design process.

Project setup and implementation

For us it works best to structure our code base in a domain-driven way. We follow this approach in our entire app and can fully recommend it. For the frontend we decided to use CSS Modules (in the form of LESS files) that we put right next to our React components. That way the component itself always comes with its own (inward) styling. There are various approaches in the React community to this kind of project layout. (It has even become popular to go further by using some sort of inline styling, see Mark Dalgleish’s talk for an overview.) CSS Modules worked well for us since we used LESS previously, which allowed for a convenient and gradual migration path.
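
As a minimal sketch of that layout (the component and class names are made up), the styles live right next to the component that uses them, and the imported class names are scoped to it:

```tsx
// Avatar/Avatar.tsx – a hypothetical component with co-located styles.
import * as React from "react";
// CSS Modules scopes the class names in Avatar.less to this component.
import styles from "./Avatar.less";

export const Avatar = ({ url }: { url: string }) => (
  <img className={styles.avatar} src={url} alt="User avatar" />
);
```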

Glenn Maddern – who heroically stepped in last-minute for Max Stoiber – updated us on the most recent changes in the Styled Components project. But no matter whether you prefer CSS Modules or Styled Components, it is crucial to understand the motivation behind these libraries in order to build large-scale applications: Glenn Maddern’s talk at ColdFront16 gives a good insight into this way of thinking and why it’s beneficial.

The one thing that makes us glance jealously over at Styled Components is the ability to parametrize CSS code so easily. We are therefore looking forward to CSS variables being better supported in browsers, because that would be the native solution to the problem. David Khourshid demonstrated the handover between JavaScript and CSS in his talk “Getting Reactive with CSS”. With this solution, the hassle at the JS-CSS intersection falls right into place.
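
The handover boils down to JavaScript writing a CSS custom property that the stylesheets then consume. A minimal sketch (the property name is made up):

```typescript
// JS side of the handover: write a CSS custom property in response to input.
document.addEventListener("mousemove", (e: MouseEvent) => {
  document.documentElement.style.setProperty("--mouse-x", `${e.clientX}px`);
});

// CSS side (in a stylesheet): consume the property reactively, e.g.
//   .tooltip { transform: translateX(var(--mouse-x)); }
```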

Takeaway

We don’t have a catchy conclusion to draw. There is certainly no right or wrong in what approach works best or which library helps most. For us it was nice to see a lot of our current assumptions confirmed, and if we were asked to write down three of them, then it would be these:

  1. CSS is a fully-fledged programming language – for sure! Stay up to date with all new features and take advantage of them once they are commonly supported.
  2. Keep styling and markup closely together. This promotes reusability best. Leverage component interfaces in order to create clear and predictable boundaries.
  3. Talk, talk, and talk frequently about visual patterns to ensure consistency in the long term. Some sort of process or documentation can help here.

The development team here at Small Improvements have done their fair share of conferences in the past (thanks to the individual learning budgets we are offered). It’s awesome now that our design team has grown to the point that we’re also attending designer-developer related conferences such as CSSconf EU. Bring on next year!

Looking Back at GOTO 2016

By Peter Crona and Michael Ruhwedel


First of all, it was an amazing conference as always. None of us presented this year, but look for us in the future. Many of us at Small Improvements tend to go to more specific conferences, such as React Europe, DockerCon or JSUnconf. GOTO is more of a generic software engineering conference, focusing on issues such as architecture, security and new trends in the field. It doesn’t go as deep into the topics as the specialized conferences, but it serves well to give an overview and an introduction to interesting topics. Some of the most interesting and most popular topics were, as expected, microservices, data science, security and ethics. Let’s start with microservices.

Microservices are the Future

Something interesting about the future is that it is also always in the present, just initially hiding a bit in the corners. A clear message from Mary Poppendieck was that microservices are the future. Whether we want it or not, we need to learn them and will eventually use them.

Susanne Kaiser from Just Software talked about their ongoing journey from a monolith to microservices. She warned us against doing too much at once, but concluded that going from a monolith to microservices was worth it in the end. She also told us about the importance of not underestimating the effort required to do so. Later on, Ilya Dmitrichenko walked us through Socks Shop, a demo application that shows what an application built with microservices can look like. He also showed us how a microservice-based application is deployed.

I urge you to read up on microservices if you haven’t. It is truly fascinating how convenient the configuration is nowadays, and if you’ve been around for a while, you will find it interesting to compare with how we did it in the good old days. Have a look at this configuration for example – lovely, isn’t it? Let’s move on to another topic in which I have a very strong interest, namely data science.

Seeing into the Future

It is truly fascinating how quickly data science has become popular and advanced. One of the first talks I went to was “Applied data science and engineering for local weather forecasts” by Nikhil Podduturi from MeteoGroup. He took us through how they started using machine learning, running everything on their own laptops, and then moved into the cloud. He showed us a bit of their architecture, which processes more than a terabyte of data daily. I enjoyed his talk very much and had a chat with him afterwards, in which he pointed out that, when getting started with data science, it is sensible to start with the basics, learning/repeating the mathematics, and then move on to hot techniques such as deep learning. This makes it easier to develop an intuition for which technique to use when, and how to find the best parameters. He recommended using Python since it has a very mature ecosystem for machine learning.

Robert Kubis from Google tutored us in TensorFlow by working through the “Hello World” of machine learning: classification of handwritten digits. He pushed the success rate of a neural network up to an impressive 98% while touching on the basics of the Python API. This was a very interesting and hands-on talk, showing how to use TensorFlow and giving an introduction to deep learning.
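
The talk used TensorFlow’s Python API; as a rough sketch of the same kind of model, here is a TensorFlow.js equivalent, with random placeholder tensors standing in for the real MNIST data:

```typescript
import * as tf from "@tensorflow/tfjs";

// A small dense network for 28x28 grayscale digit images (10 classes).
const model = tf.sequential({
  layers: [
    tf.layers.flatten({ inputShape: [28, 28, 1] }),
    tf.layers.dense({ units: 200, activation: "relu" }),
    tf.layers.dense({ units: 10, activation: "softmax" }),
  ],
});

model.compile({
  optimizer: "adam",
  loss: "categoricalCrossentropy",
  metrics: ["accuracy"],
});

// Placeholder data – in the real exercise these are MNIST images and labels.
const xs = tf.randomNormal([64, 28, 28, 1]);
const ys = tf.oneHot(tf.randomUniform([64], 0, 10, "int32"), 10);

model.fit(xs, ys, { epochs: 3 }).then(() => console.log("training done"));
```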

How to find insights without using machine learning was the topic of Michael Hunger’s talk (Neo Technology). He demonstrated how data can be modelled and queried as a graph. His talk focused on how Neo4j was used by journalists to analyze the Panama Papers.

Even your code repository is a data source that can be mined. This concept was presented by Dr. Elmar Juergens. By color-coding newly added code and the coverage achieved by functional tests, he clearly demonstrated that a dev department and a test department at one of his clients had a serious communication problem: there was little overlap between what was tested and what was newly implemented.

The last two talks about data science focused a bit more on possibilities, philosophy and ethics. “Deep Stupidity: What Neural Networks Can and Cannot do…” by Prof. J. Mark Bishop discussed whether we can build general intelligence or not. “Consequences of an Insightful Algorithm” by Carina C. Zona focused on the importance of thinking through the ethical aspects when developing algorithms and using them. We are giving a lot of power to algorithms, and algorithms tend to reinforce prejudices and do not necessarily care about what is right, yet they are still used to make decisions that affect people’s lives. Let’s now have a look at the security talks.

A Secure Internet

When you learn a new concept, such as microservices, it is important to read up on security. It is easy to make mistakes that introduce vulnerabilities when you are new to a technology. Phil Winder talked about how to make your microservices secure. He was very practical and showed us common mistakes people make, such as running as root in containers and not setting up a sensible network policy. Dr. Jutta Steiner introduced us to blockchain technology. She pointed out how we can use techniques from safety-critical systems development, such as N-version programming, to implement it securely and minimize the risk of bugs. Unfortunately the talk did not go into the implementation details of blockchain technology itself, but she made it clear that the technology can be used for much more than just a currency such as Bitcoin. Finally, let’s have a look at the ethics-focused talks.

Ethics in Technology

The great thing about GOTO is that it covers not only the latest technology topics, but also how to better get along with your fellow human beings.

Jamie Dobson encouraged us to think beyond capitalism in his inspiring “Postcapitalism” talk. It’s possible that the power of 3D printing, small and large, can bring capital back and onshore work to developed countries again.

Beginning with a short meditation, Jeffery Hackert built a compelling argument for giving our full presence. With full awareness of ourselves and our workplace come better-informed observations, decisions and implementations. After all, if you’re ever involved in a trolley problem, it would be really unfortunate if you were focused on your cellphone and not the lever.

If you’ve been exhausted by office politics, Kate Gray and Chris Young can help you. Their great talk “How to Win Hearts and Minds” is about how the finesse of real-world politics was used to push a blocked IT project to success.

Talks ranging from microservices to ethics show you the great variety offered at GOTO; the conference really has a lot to offer.

Something for Everyone

Let’s end with some words about the conference itself. GOTO has five different tracks and the mix is very good, covering important and trending topics such as architecture (in particular microservices), security, data science and much more. In addition to this you find plenty of interesting people there to share ideas and pain points with. My only disappointment was that there was not a single talk about functional programming. But hey, you can’t fit everything into one conference.