5 Lessons From BuildStuff 2017

At Small Improvements we each have a learning budget and get to choose which conferences we attend. It might seem an odd choice, then, that I stepped off the plane in the Lithuanian capital Vilnius on a cold November afternoon. I was there for BuildStuff, a pretty special conference. The lineup was one of the best I’ve seen in a long time: Kevlin Henney, Simon Wardley, J.B. Rainsberger, Linda Rising, Dino Esposito and Jurgen Hoeller (to name but a few) all graced the stage over the three-day event. I was in my element and eager to learn. Here are a few of my top takeaways.

 

1. DevOps is the new legacy, serverless is the new norm

Simon Wardley gave the opening keynote at this year’s BuildStuff. He started by describing his journey from being a startup CEO with a generic business plan to discovering mapping. Maps are important because they guide you on a journey far better than a set of instructions. All maps share some basic elements: they are visual, context specific, have positional components (based upon an anchor) and show movement. Simon explained that current business planning tools (like SWOT) don’t fit these criteria, so he eventually devised his own method of mapping a business. You can read more about Wardley Maps here.

Using his technique, Simon mapped out the evolution of infrastructure and architectural practices. Practices move from left to right across the map as they gain adoption and later become institutionalised, while new evolutions of those practices start out on the left again. Using this technique we can see that DevOps is reaching the point of commodity. As each new technology abstracts away from the previous one (server racks -> compute -> PaaS -> serverless), we can see that serverless is an evolution of DevOps. Because the old “best practice” becomes industrialised and benefits from speed and efficiency, we tend to ignore the new “Genesis” practices for a long time (too long, some would say).

[Figure: Wardley map showing serverless as an evolution of DevOps]

2. There is no sprint zero

Martin Hinshelwood was there to remind us that there are no special sprints in Scrum. He described Scrum as the minimal framework required to implement agile, which means we should keep the Agile Manifesto in mind in everything we do. Even in a team that doesn’t use Scrum (we use Kanban in our team at Small Improvements), this still rings true.

The first of the twelve principles behind the Agile Manifesto states:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

By having a sprint zero, planning sprint, or other such planning phase we are breaking this first principle. Our aim is to get something in front of the user as quickly as possible, gather feedback and use that to iterate.

This is something we often forget in software development. We spend valuable time perfecting our new feature but until someone is using it we’ve no idea if it’s actually useful.

3. Everyone should be doing chaos engineering

[Image: Chaos Monkey]
Standing there with his electric guitar in hand, you’d be forgiven for thinking you’d made your way into a rock concert for the day two closing keynote. In fact Russ Miles (who can really play the guitar, by the way) treated us to a slice of chaos engineering. Most people are aware of this topic through Netflix’s Chaos Monkey project, but chaos engineering is actually an architectural practice that everyone can do. It’s important to note that it is not just terminating servers to see what happens. Setting out some guidelines, Russ explained:

  • Learn the healthy state of your application in production before attempting chaos engineering.
  • Always run experiments in production.
  • Run experiments where you expect the system can cope (not where you expect everything to blow up).
  • Start by varying real-world events to limit the blast radius.
  • Make sure everyone at the company knows it’s happening.

4. Functional programming is hot right now

There were at least four talks on functional programming, and many others mentioned functional paradigms. My favourite was by Sander Hoogendoorn, who reminded us of the power of monads. We’ve all been using monads during development without even realising it: Promise in JavaScript and Optional in Java are both monads (there’s some debate about the latter).

What impressed me about Sander’s talk was his explanation of monads. We can think of a monad as putting a value in a box. While the value is in the box, all the side effects (such as thrown exceptions) stay hidden inside the box.

We can apply functions to the value in the box, transforming it (map/flatMap), and at the end we can take the result out of the box.
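To make the box picture concrete, here is a minimal JavaScript sketch (my own illustration, not Sander’s code) of such a box with map and flatMap, where thrown exceptions stay inside the box:

// A tiny "box" type: illustration only, not a full or law-abiding monad.
const Failure = error => ({
  map: () => Failure(error),
  flatMap: () => Failure(error),
  getOrElse: fallback => fallback,   // taking the result out of the box
});

const Box = value => ({
  // Apply a function to the boxed value; a thrown exception stays in the box.
  map: fn => { try { return Box(fn(value)); } catch (e) { return Failure(e); } },
  // Like map, but for functions that already return a Box (avoids nesting).
  flatMap: fn => { try { return fn(value); } catch (e) { return Failure(e); } },
  getOrElse: () => value,
});

// Usage: transformations compose, and the failing path never escapes the box.
const result = Box(' 42 ')
  .map(s => s.trim())
  .map(s => JSON.parse(s))
  .map(n => n * 2)
  .getOrElse(0);   // 84; would be 0 if any step had thrown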

[Figure: a monad pictured as a box around a value]

In another talk, Einar Høst live-coded an Escher Square Limit in F#, based on the paper Functional Geometry by Peter Henderson. Einar’s talk was a real eye-opener to the beauty of functional programming. Using composition, he was able to write functions to scale, rotate and tile the base fish image to create the Square Limit in 45 minutes, and everyone was still following along at the end.

5. Reading code is harder than you thought

Think about your job as a programmer: which do you do more of, reading code or writing code? Of course it’s reading. Every time we fix a bug, we have to read the code and understand what is happening. Every time we add a feature to an existing code base, we read the code first. Romeu Moura’s talk on day three focused on reading code and how to keep our emotions out of the process.

When we read others’ code, we’re prone to thinking, “Who in their right mind would write code like this?”. Romeu started with Hanlon’s razor as a base for explaining why code is not always as it seems.

Never attribute to malice that which is adequately explained by stupidity

Maybe we can say it’s not stupidity but ignorance (a lack of knowledge) that caused the developer to write it like this. Or maybe it’s not even ignorance but a lack of attention. After all, we are all human beings, and we’re not blessed with infinite attention spans.

But what if it was not a lack of attention but unstated context that caused the code to be like this? We all write code we regret when put under pressure to finish features.

He went on to explain that the system in which the code is written (i.e. the development culture at the organisation) is the cause of bad code. Luckily, at Small Improvements we don’t suffer from this systematic bad code.

Grab your coat…

The three days of the conference were so full of content that I couldn’t possibly write it all up here. With multiple tracks, there was always an interesting talk to attend, not to mention a few hard choices. The conference itself had a constant flow of food, coffee, popcorn (and a wheelbarrow of oranges, for some reason) and an excellent party on the Thursday evening. So if you find yourself with a free week next November (who does anything in November anyway?), I strongly encourage you to book a flight to Lithuania and attend BuildStuff 2018. I will see you there.

Beyond Tellerrand 2017

The four of us (Charisse, Jan, Paulo & Timur) arrived early for coffee. Right before the first talk in the morning we were welcomed by a very happy DJ. As it turned out later, during the breaks he embedded snippets of the talks into songs of vastly different genres.

Over the course of the two-day conference there were a bunch of talks and almost all of them are available now on Vimeo. Most of them are quite good and worth watching, but we’ll go into details for the talks that we found to be most insightful, interesting and relevant to us.


Robin Christopherson / Out with accessibility – In with inclusive design
Vasilis van Gemert / Exclusive Design

Overall, Robin Christopherson’s talk was a nice summary of why inclusive design is important. Not only the people who are obvious to think of – the blind, the deaf, etc. – profit from it, but also the “temporarily disabled”, like the inebriated, people in a time crunch or the temporarily incapacitated (such as those holding a coffee cup). I found it a bit sad that this argument is needed: optimizing for people with impaired vision should be reason enough in itself. But if it helps, why not. He also took some time to explain how smartphones were a huge step for blind people and how excited he is about the next technological leap – voice interfaces. He’s so excited, in fact, that he even has a podcast about his Alexa.

A good follow-up on the next day was “Exclusive Design” by Vasilis van Gemert, who turns the notion of inclusive design on its head. It raises the question: what would it be like if the users who are usually just treated as a technical requirement (“is it accessible?”) were first-class citizens and got to enjoy their own version of the user interface? An exclusive design?

Both talks made us feel somewhat ashamed – we admit we’ve got some homework to do in this regard at Small Improvements. We took it as a wake-up call and in fact made it an action item in our design meeting before carrying it over to our PM.

Yves Peters / Type with character(s) – reclaiming control over OpenType fonts

Peters took part in an open letter to Adobe in 2014 asking for better support of OpenType. The original implementation of OpenType in InDesign, for example, was more of a hackathon project than a well-planned interface. They were heard, and now Adobe is actively adding better support for these fancy features. Google is also on board, with support for the latest and most exciting feature – variable fonts. These enable fine-grained control over several variables of a font: weight, slant, contrast, optical size, etc. This way request sizes can be minimized, because you don’t have to deliver each variant in a separate file. Second, you gain a lot of flexibility by not being bound to certain weights. Here at Small Improvements we always wished for something between Avenir Next Medium and Avenir Next Regular! He also shared an awesome website (axis-praxis.org) that shows these possibilities.

For now this still seems to be a thing of the future, but we’re hoping it gets the momentum it deserves, and maybe we’ll see better support across all browsers pretty soon.

Alla Kholmatova / From Purpose to Patterns

What surprised us most about this talk was its neutrality. Alla presented all things design systems, living style guides and modular design, with their pros and cons, but from the perspective of an unbiased researcher rather than an opinionated “this is how you should do it.” That was pretty refreshing and particularly relevant for us, as we’re now in the process of building our own style guide. She presented case studies such as Airbnb and TED, and showed how those design systems fall at different ends of several spectrums – strict/loose, modular/integrated, centralized/distributed. As always, when you ask “what’s the best solution?” the most prudent answer is “it depends”. And that’s where it ended, leaving us wanting to get her book “Design Systems”, where she delves into much more detail, in the hopes of getting a deeper understanding of how we should move forward with our own style guide.

See you next year!

To recap, it was a good two days packed with interesting talks and a lot of ideas. It made us wish we had more time and resources to bring them all to our teams and to the things we’ve been building. It’s surprising how Beyond Tellerrand manages to have such a solid quality of speakers, with subjects ranging from the freshest news in browser capabilities to the always inspiring career of Paula Scher.

Docker @ Small Improvements


Whalecome to this blog post 🐳. I want to share with you how we use Docker at Small Improvements, how it helps us to get our everyday tasks done and what we learned from working with it. For starters, I added an introductory section about what Docker actually is – if you are already familiar with it feel free to skip that part.

This blog post was inspired by my recent visit to DockerCon EU 17 in Copenhagen. I picked up some of the topics that were discussed at the conference and provided links to the corresponding talks for reference. At the very end you’ll find some general information about DockerCon in case you are interested in attending the next one yourself.

But enough words for now – let’s dive right in!

What is Docker anyway?

Docker is a container tool that lets you run applications in encapsulated environments (containers). But unlike a VM, a Docker container is just a thin layer on top of the operating system of the host computer. When you imagine a VM to be a house with all its own plumbing, heating and wiring, then a Docker container is just one unit within a large, multi-tenant building where all the core infrastructure is centralized and shared. (That analogy is borrowed from Mike Coleman’s talk “Docker?!? But I’m a SYSADMIN”.) As you can imagine, this is a lot more efficient: while a full-fledged Linux VM image usually consumes around 1,000 MB of disk space, a Docker image ranges from a couple of MB up to maybe a few dozen MB. And whereas a VM can take over a minute to start up, a Docker container is usually up and running in scarcely a second.

In case you are new to Docker and want to get going, here are some starting points for you:

Who is Docker interesting for?

Docker is not just fun to work with; there are numerous incentives to employ it in practice. The main target groups of Docker are:

Developers

As pointed out above, Docker is not just fast and efficient, it also consists of composable pieces that can be joined together in a very pleasing way. If you have a diverse app setup with various services, workers or databases, it can be quite painful to set up and maintain a local development environment. With Docker, however, you can define and spin up whole clusters with a single CLI command.

Ops / DevOps / Sys-Admins

The world of operations has changed vastly in the last decade. It was not too long ago that we SSHed into physical servers and apt-get-installed packages by hand. For a modern web app, though, this scenario is barely imaginable anymore. The DevOps movement has broken loose, and Docker goes hand in hand with tools like Terraform and is fully integrated into powerful cloud platforms such as AWS.

Management

Since Docker shares resources so efficiently, it can be attractive for companies from a purely financial perspective. PayPal, for instance, took the effort to migrate most of their infrastructure to Docker. Today, after two years of planning and executing, they run 150,000 containers in production and their cost savings are massive. The interesting bit about their story is that introducing Docker was a business decision in the first place – getting the developers on board was one of the last things they did.

How we use Docker

Continuous integration and deployment

We use TeamCity for continuous integration and continuous deployment (CI/CD). For each pull request that is opened on GitHub we run all the tests and code style checks to make sure that no regression gets introduced. When the build passes, we are able to push our app out into production with just a couple of clicks.

Infrastructure-wise, the main server and the build agents run in Docker containers. Build and deploy pipelines are separate from one another and also differ slightly in how they are set up: for the builds we let TeamCity take care of handling all the artefacts for the particular commit that is being processed. For the deploys we pack the artefacts into Docker volumes and use specialised containers for the various tasks (such as deploying to certain instances or migrating traffic between the instances).

That whole setup works basically fine for us: even though the performance could be better, we are able to push out releases multiple times a day without too much effort. However, we still run into some edge cases every now and then, and generally see room for improving our tooling:

  • The duality in our build setup is unnecessarily complicated, and we would prefer to compose one well-defined artefact per build that then gets passed around for both testing and deployment. Historically, the deploy pipeline was added later and was a bit experimental, so our whole CI/CD setup is not very consistent anymore.
  • We are not convinced that our decision to use volumes for packing artefacts is the optimal choice anyway. A better way would be to follow the best practice of multi-stage builds as described in the official documentation. One problem is the cleanup of these volumes, which we currently need to do by hand every few weeks. (Labelling them would probably allow us to automate that, but we would need to upgrade our Docker version first.)
  • We can do better in securing our production credentials for the live deployment. If we used cluster management tools like Kubernetes or Swarm, we could use their out-of-the-box features for safely transferring secrets, which is always better than taking care of the security mechanisms by hand. As we might need to revisit our build process anyway in the mid-term, this will certainly be one of our agenda points.

Local development setup

The Small Improvements application is a single-page app in the frontend that connects to a REST API. The backend mainly consists of a monolithic Java web server that is hosted on Google App Engine (standard runtime). Other, smaller external services run on SaaS/PaaS platforms as well. Therefore we are not using Docker in production and currently have no strong incentive to change that.

It’s important for us to replicate an environment locally (on developer machines) that comes as close as possible to production. That was quite simple back in the day when our app only consisted of one server: we just used the official App Engine plugin for IntelliJ IDEA and the built-in dev server. However, our infrastructure grew over time, so we have more external services by now. For instance, we use Elasticsearch for our activity stream feature, a mail server for sending notification mails and a microservice for handling user avatars. At some point we noticed that our local development setup no longer fully reflected our production infrastructure, and it became too complicated to take care of that manually.

That’s why we are currently working on complementing our local Java dev server with a Docker-based solution: in a docker-compose file we describe all the external services, and we provide a small tool on top of docker-compose for interacting with them in a well-defined way. That allows us – with just a single command – to fire up all our services, like Elasticsearch or the mail server. The dispatching behaviour of the GCloud load balancer is emulated by an HAProxy instance that serves as the entry point for all incoming requests.
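As a rough illustration only (this is a sketch, not our actual configuration; the service names, images and ports are made up for the example), such a docker-compose file could look something like this:

version: '3'
services:
  elasticsearch:                # powers the activity stream feature locally
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    ports:
      - "9200:9200"
  mail:                         # catches outgoing notification mails
    image: mailhog/mailhog
    ports:
      - "1025:1025"             # SMTP
      - "8025:8025"             # web UI to inspect mails
  avatars:                      # hypothetical image name for the avatar microservice
    image: small-improvements/avatar-service
    ports:
      - "8081:8080"
  haproxy:                      # emulates the dispatching of the GCloud load balancer
    image: haproxy:1.7
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    ports:
      - "8080:8080"

With a file like this, a single docker-compose up brings up the whole set of external services at once.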

We haven’t rolled it out yet, but the preliminary experiences are looking very promising: no manual installations of tools anymore, and no hassle with different operating systems either. It’s fascinating to see how one universal platform makes such an impact on developer convenience.

Our Docker roadmap

Frankly, we don’t have a Docker roadmap. When it comes to using Docker in production we don’t have a case that is strong enough to change our current infrastructure. We are still quite far away from losing sleep over the invoices that Google sends us every month. Of course our application performance could be better here and there, but using Docker wouldn’t solve these issues.

So, instead of extending our Docker usage, we would rather improve and refine what we already have. For instance, we are considering rethinking our CI/CD pipeline in order to solve the issues mentioned above. Also, we are about to roll out our Docker-based tool for the local development setup and will enhance that as we move along.

One thing that was interesting to see at the conference is what Docker offers you in case you have a large legacy Java or .NET application. They offer a special program for what they call “MTA (Modernizing Traditional Applications)”. The idea is to migrate large (and therefore ponderous) code bases to the cloud, preferably using Docker Enterprise Edition (EE). Without changing a single line of code, they containerize your existing app and help you set up an automated build and deploy pipeline. I have no experience with Docker EE myself, but the concept sounds interesting for companies that don’t have any cloud or DevOps knowledge but still want to move in that direction step by step with minimal risk.

About DockerCon EU

As I pointed out in the beginning, I was motivated to write this blog post by going to DockerCon EU in the first place. So let me drop a few words about the actual conference:

  • At its core there were two days of talks, plus the opportunity to go to the Moby Summit or the Docker EE Summit. The summits, however, are probably only interesting if you work with these technologies. In addition to the talks they also offered some workshops (which you need to pay extra for, though).
  • The talks were divided into multiple tracks, such as “Use cases”, “Environment” and “Community”. The topics covered everything from security and best practices to architecture, and addressed beginners and experts alike. Everyone can compose their own schedule according to their interests. All talks are also available online.
  • The venue was the spacious Bella Center in the south of Copenhagen. The event was excellently catered (they served both breakfast and lunch) and there were plenty of tables and seats to rest or work at between the talks.

All in all, I enjoyed the talks and the overall atmosphere. Even though there was no big revelation that I took away from the conference, I learned about a lot of smaller details that will certainly help us consolidate our work with Docker at Small Improvements. In addition, it’s quite interesting to see all the different use cases and experiences that people have. One rarely sees such a high diversity of professional backgrounds as at modern DevOps conferences. In that respect, DockerCon provides an excellent forum for interesting conversations and the exchange of ideas.

Apart from the professional side of things, Copenhagen is a beautiful city and well worth a visit – so make sure to plan some leisure time if you go there.

Looking back at the FullStack Fest

It was a couple of months ago that I came across this conference called FullStack Fest. Skimming through the agenda, I was immediately intrigued and thought, “I’ve got to check this out”. The coolest bit? The conference was taking place in the beautiful city of Barcelona.

September finally came around, and just as the Berlin air was getting chilly and bringing signs of the impending winter, I was flying off to the warmth of Spain. I got there a bit early and spent a nice Sunday roaming the streets, admiring the architecture and the history. The next day, the backend part of FullStack Fest began. It was interesting to step into the intricate world of the architecture of buildings one day and, after admiring it, to step into the equally intricate world of software architecture the next.

Sun-baked and refreshed, I went to the conference, all set with a notebook and a bag of goodies from the organisers. One must collect stickers for the laptop, after all.

The backend days of the conference were loosely divided into “DevOps” and “Architecture”, with the overarching topic being “Problems of Today, Wonders from the Future”. To describe the theme of the conference in a single phrase, I would say “distributed systems”.

Day 1: DevOps

The first talk was by Karissa McKelvey (@okdistribute). She talked about a project that allows people to share their scientific work without the consumers having to pay for it. A common problem in research is getting access to the required data, journals, publications, etc., because a lot of bureaucracy, censorship and corporate licensing get in the way of open-sourcing knowledge. Karissa and her team have been working on the Dat Project. It creates a distributed network of many data hosts (mostly universities), through which you can upload your files and download any file via a little identifier. You can access Karissa’s presentation from the conference using this little identifier (dat://353c5107716987682f7b9092e594b567cfd0357f66730603e17b9866f1a892d8) once you install the dat tool on your machine. Though this is still vulnerable to being used as an illegal file hosting service, it’s a good step towards making data and knowledge more reachable and transparent.

Following up on this was an interesting introduction to Ethereum as a way to enter ‘contracts’ without trusting a third party such as a notary; this is done by distributing trust amongst many participants. As Luca Marchesini (@xbill82) said in his talk:

“The machine is everywhere… The machine is nowhere.”

With the underlying power of the Nakamoto consensus protocol that drives the blockchain, and the added flexibility of Turing-complete capabilities that let you express the intent of your contract and its fulfilment in terms of an actual computer program, you can have a word of truth floating around in the world, verifiable and undeniable.

With the buzzwords “microservices” and “serverless” going around, one would of course expect a talk on these topics. Marcia Villalba (@mavi888uy) gave a great talk on what “serverless” really means… and no, it does not mean there is no server (of course). The idea of a serverless application is to utilise the cloud and write self-contained functions that do simple tasks (a minimal sketch of such a function follows the list below). Some highlights from the talk worth remembering:

  • Functions in the cloud are triggered by events and do not have state.
  • Pay as you go, scale automatically.
  • Create a proof of concept and then optimise your solution to take advantage of the cloud.
  • Automate your CI pipeline and your testing.
  • Reduce latency by being at the edge of the cloud.
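As a rough illustration of the “self-contained, event-triggered, stateless function” idea (a sketch in the AWS Lambda style, not code from Marcia’s talk; the event fields assume an HTTP trigger through API Gateway and are illustrative):

// A minimal serverless function: stateless, triggered by an event, and only
// billed while it runs. The event shape below is illustrative.
exports.handler = (event, context, callback) => {
  const params = event.queryStringParameters || {};
  const name = params.name || 'world';

  // Nothing persists between invocations; any state would live in an
  // external service (database, object storage, queue).
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  });
};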

Next we stepped into the world of cyber security with Dr. Jessica Barker (@drjessicabarker), who talked about tackling vulnerabilities, specifically those introduced by negligence on the part of an end user. She talked about educating users on security instead of treating them as the weakest link in the chain and ‘dumbing things down’. She made her case in light of the Pygmalion Effect, according to which higher expectations lead to better performance. A common problem when building human-friendly security guidelines is that the user is treated as a dumb entity, which leads to the user acting like a dumb entity.

Frank Lyaruu (@lyaruu) then came in with an anecdote about how, as a child, he wanted a Swiss Army knife that did everything, and ended up with an utterly useless one. It was quite easy to see the analogy: we have all faced feature bloat, we’ve all wanted a framework that does everything and then been frustrated by the abstractions that make customisation a nightmare. Frank introduced the concept of ‘fullstack databases’. The key idea? Identify your use case and use the right database for it. While SQL databases may work for one scenario, GraphQL would be much better in another. The takeaway:

“Your thinking influences your choice of tools and your choice of tools influences your thinking.”

A representative from Booking.com, Sahil Dua (@sahildua2305), then told us how Booking.com handles their deep learning models in production. The problem they need to solve is that different data scientists need access to independent environments for training. The training script lives in a container, and a container runs on every server that is needed, with the container workload managed by Kubernetes. This was a lesson in how to manage many containers and independent environments with very high performance needs.

As Software Engineers, we know one thing for sure, and that is that things will, at some point, fail.

“There are two kinds of systems: those that have failed and those that will.”

Aishraj Dahal (@aishraj) walked us through chaos management. Some useful principles that he talked about were to:

  • Automate what you can and have a framework for dealing with incidents.
  • Define what a “minor” and a “major” incident mean.
  • Define failures in terms of business metrics, for example the amount of revenue lost per hour of downtime.
  • Single Responsibility Principle: one person should be responsible for one task in an incident; if everyone is combing through the git history looking for the last stable commit, that’s redundant work.
  • Never hesitate to escalate.
  • You need an incident commander: this person orchestrates the efforts to get back on track.

Day 2: Architecture

The second day of FullStack Fest began with an exciting talk by John Graham-Cumming (@jgrahamc) on the Internet of Things as a vector for DDoS attacks. He showed how vulnerable IoT devices are, with simple lapses like having telnet open on port 23. These devices are exploited to send small HTTP requests to a server – a LOT of them – demanding large responses targeted towards a victim. As an employee of Cloudflare, he could shed some light on how network patterns are used to discern legitimate requests from malicious ones. Some ways to protect yourself against DDoS attacks are to set up rate limiting, block every entry point that you do not need, and use DDoS protection tools from a vendor such as Cloudflare.
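To make the rate-limiting suggestion a bit more concrete, here is a tiny, purely illustrative token bucket sketch written as Express-style JavaScript middleware (not from the talk; a real DDoS defence belongs at the network edge, not in application code):

// Tiny in-memory token bucket per client IP. Illustration only: real DDoS
// protection happens at the edge (CDN, load balancer), not in app code.
const buckets = new Map();
const CAPACITY = 20;       // maximum burst of requests
const REFILL_PER_SEC = 5;  // sustained requests per second

function rateLimit(req, res, next) {
  const now = Date.now();
  const bucket = buckets.get(req.ip) || { tokens: CAPACITY, last: now };

  // Refill tokens proportionally to the time elapsed since the last request.
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + ((now - bucket.last) / 1000) * REFILL_PER_SEC);
  bucket.last = now;

  if (bucket.tokens < 1) {
    buckets.set(req.ip, bucket);
    return res.status(429).send('Too Many Requests');
  }

  bucket.tokens -= 1;
  buckets.set(req.ip, bucket);
  next();
}

// Usage with a hypothetical Express app:
// const app = require('express')();
// app.use(rateLimit);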

One of my favourite talks from day 2 was James Burns’ (@1mentat) introduction to chaos engineering and distributed tracing. He began by defining a practical distributed system as one that is observable and resilient. Observability comes with tracing, whereas resilience can be tested through chaos engineering, i.e. intentionally causing a system to fail as a “drill” and having the engineers on board try to fix it without knowing the cause of the problem, or even what the problem is. If you have many such drills, the team will be well prepared to tackle real chaos when it hits.

Chris Ford (@ctford) took the stage and talked about a hipster programming language called Idris which can be used to specify distributed protocols. In Ford’s words, his 10th rule of microservices is:

“Any sufficiently complicated microservice architecture contains an ad-hoc, informally-specified, bug-ridden, slow implementation of a distributed protocol.”

A distributed protocol’s specification can be tricky to get right. With a language like Idris, whose compiler checks the types and where functions and even types are values, the level of strictness when specifying a protocol is greatly increased and the chance of runtime bugs reduced, as the compiler is smart enough to catch protocol violations. A protocol can be thought of as a finite state machine and is specified as such in Idris. Be forewarned though: this is still ongoing research and definitely not production ready!

We then dove into philosophy, the nature of order and structure-preserving transformations with Jerome Scheuring (@aethyrics). He talked about identifying the core of the application and then building transformations around it, the key being that the structure of your application remains the same as more layers are added onto it. He hinted at functors as a tool for achieving such transformations of architecture.

After some lightning talks and a tutorial on ‘hacking’ into systems that only exist for a few milliseconds (lambdas that are only alive for the scope of a single execution), and then on how to defend such systems, the backend part of the conference came to a close.

The conference was a pretty cool look into research topics meeting the software industry and creating some innovative solutions to existing problems. Though I haven’t listed all the talks here, you can check them out on YouTube: https://www.youtube.com/watch?v=_V4tGx85hUA&t=536s.

I left Barcelona feeling that I had gazed into the future of technology and seen the wheels set in motion for many advancements to come in the next few years. Though the conference could have been even better with a few more topics related explicitly to everyday software development, I feel I walked out a more knowledgeable person than before.


Broadening one’s horizons beyond the scope of one’s job description is not only intellectually stimulating but also makes for a more content and productive mind. By sponsoring this trip (and many others for its employees’ learning and development), Small Improvements is creating a happier and smarter workplace. I am back in Berlin, at my desk, ready to tackle more challenges and apply the numerous things I gleaned from FullStack Fest. Looking forward to the next conference!

How we develop software in teams

Here at Small Improvements we have 3 development teams. Each team is an autonomous unit that consists of frontend & backend developers, UI/UX developers and designers, so that they can build and ship features independently.

In this blogpost we want to share an insight into what the development process looks like in Team Green. We don’t follow any predefined scheme (like Scrum or Kanban). Rather, we pick the tools and methods that work best for us and adjust them constantly to our needs. Other teams work in a slightly different way, but the overall structure happens to be quite similar.

Iterations

The main building block of our team process is the weekly iteration. Unlike Scrum sprints, it’s not our primary concern to deliver exactly what we had planned; iterations rather act as a central clock that gives us a recurring temporal structure.

 

[Diagram: our iteration flow]

Weekly planning

Each iteration starts on Tuesday morning with our weekly planning meeting: we sit together as a team and review the last iteration in order to clean up leftovers from the previous week that need to be carried over. Afterwards, we fill the upcoming iteration with tasks from our team backlog until we feel we have found a good scope. Usually we tend to overplan slightly, so that nobody runs out of work. (Read the section about backlogs to find out how we know what to work on next.) Estimations help us find a meaningful size for our tasks, but we are not too strict about them: we aim for high throughput, but still value quality over delivery dates.

Retrospective

We do retrospectives on a fortnightly basis. They last one hour and are a place for discussion about our workflow and process. Everyone can talk about their thoughts for as long as they like. However, we don’t just aim for good conversations: our goal is to identify actionable things that we can improve on before the next retrospective. We don’t want to change everything at once; our philosophy is continuous improvement through small yet steady steps. Our retrospectives are facilitated by one team member, who is in charge of preparing, moderating and documenting the meeting.

All-Hands Meeting

Every Tuesday evening the entire company comes together for an all-hands meeting, including at least four of our employees who regularly work from different time zones. Every team (not just devs, but also marketing, customer success, etc.) reports their progress from the last week and announces their agenda for the upcoming week. This way we make sure that everyone is up to date on what’s being worked on at the moment. In contrast to a sprint review, this meeting is not about giving account; it’s rather a window in time where every team shares insight into their current status.

Long-term planning

Roughly every four weeks we conduct a long-term (or monthly) planning meeting to stock up our team backlog and discuss our longer-term roadmap.

Team backlog

We have four main sources of work that supply our team backlog.

 

[Diagram: the four sources of work for our team backlog]

Product and feature work

We have two dedicated product managers who maintain a product roadmap, where tasks and stories are prioritized and broken down into smaller pieces. Every developer is actively involved in product development, but our PMs do a lot of organization and planning upfront, which is a great relief. Every dev team has a designated Feature Coordinator who stays in constant touch with the PMs and arranges meetings and conversations when needed. Together we prepare features ahead of time and make sure that – once we start to work – everything is in place.

Tech work

Another big source of work is our tech roadmap, which is a joint venture of all developers. The tech roadmap usually contains refactoring and innovation projects that don’t necessarily create immediate customer value. We discuss and prioritize these projects together in our weekly Dev-Exchange meeting. Recent examples include:

  • Migrating our backend authorization logic to a predicate-based framework written in Groovy
  • Making further progress with our Angular-React migration: one step was to introduce React Router 4, which we did a couple of weeks ago
  • Building and launching a dedicated microservice that renders auto-generated user avatars for all users who didn’t upload an individual avatar yet

Apart from the tech roadmap, we also have a biweekly DevOps meeting. However, since we are hosted on Google App Engine, we are usually not too concerned with DevOps tasks so the workload varies a bit in that respect. That being said, we still have some fun tasks on our current agenda, such as dockerizing our local development setup.

Design work

All designers and frontend-addicted developers form a so-called “meta-team” that comes together every week for a design meeting. This yields smaller tasks such as the overhaul of graphics and icons, but they also work on bigger projects like revamping our style guide. Lucas (our UI/UX developer) usually brings tasks from the design meeting into our iterations.

Side projects

Apart from these three bigger backlogs, we have some smaller sources of work that can be summarized as “side projects”:

  • Every employee at Small Improvements is encouraged to take time for personal development. (In Team Green we believe heavily in manifesting our personal goals as Objectives.) If someone wants to take time to make progress on their Objectives, they are free to block out a slot in the iteration. For instance, Sharmeen is currently on a mission to increase awareness of possible vulnerabilities so we can maintain the security of our application, and recently wrote a security-oriented development guideline for the backend.
  • There is usually room for small spike and innovation projects in order to explore new technologies and ideas. As an example: a couple of weeks ago, Jan set out for one day to try out Flow type annotations in our frontend code. In the end we decided against it, but the experiment served as the basis for a valuable discussion.

Bringing it all together

The interesting question now is how these backlogs are balanced. The answer might be a bit disappointing, since we don’t have a secret formula that tells us where to pick from next. In fact, the team roadmaps are a matter of ongoing negotiation between all involved parties. Sometimes it makes sense for a team to take over a task because they already have expertise in that area and can deliver results fast. Other times we decide to assign the same project to a different team, because we think it’s a valuable opportunity to share knowledge and prevent silos. Small Improvements is still a comparatively small company, so we don’t need formal processes and heavy decision-making hierarchies. Most of the time we can figure it out by just talking to each other, which is a great privilege.


What’s a hackathon at SI like?

The first half of 2017 has been quite busy for us. With all the features that have been rolling out, fixes to deploy, and improvements to discuss, design and implement, it can be hard to organise an event that won’t disrupt everybody’s flow. Until we realized that there is never a “right” time.

Here at Small Improvements, we try to make sure that everyone has time to play around with their ideas. Earlier this year, we even decided to gather up the entire five-person design team for a two-day illustration hackathon, where we experimented with different illustration techniques and brainstormed ideas on how we can expand our Small Improvements universe even further. It was a good experience having all of the design team gathered in one room, bursting with ideas, energy and discussions. We learned a lot, not just about the art of illustrating ideas itself (and how challenging it can be), but also about ourselves and each team member’s strengths and weaknesses.

But first, breakfast.

We held our company-wide hackathon on August 17-18. At Small Improvements, hackathons are a way for everyone in the company to come together and build something that is somewhat related to the product.

Traditionally, a hackathon starts on a Thursday afternoon and runs until the end of the next day. But this time we made an exception and decided to run it for the entire two days, with the caveat that “normal” work like responding to emails and fixing bugs labeled critical would still get done.

As with most in-house events that start in the morning, we began the day with breakfast, followed by a kickoff where everyone could talk a little bit more about their idea. Generally, people are encouraged to write a mini-spec about their ideas at least a few days ahead, so that each participant has an idea of which project they want to work on before the kickoff day. Project teams can be a mixture of people from different departments – it doesn’t matter if you’re a combination of Marketing, Design or Development. And of course, going solo is totally acceptable too!

After the kickoff, everybody is free to work on their project however they want, wherever they want.

Presentation day

 

Hangouts, drinks and food make up much of the presentation hour. Each team is required to present a demo or a mockup of the project they worked on. Ideas ranged from an internal tool for tracking where and how budgets are spent, to sentiment analysis, and even a zen mode for writing feedback. While not every project will end up on the roadmap, it was still great to see cool and interesting ideas implemented in such a short amount of time!

Key takeaways

  • Having the opportunity to work together across teams is a tremendous help in getting insights and ideas that developers or designers might not otherwise think of.
  • Plan ahead! The last hackathon was announced almost a month in advance and the exact date was voted on using Doodle. This gives everyone a lot of wiggle room to sort out their schedules and think about the project they want to work on.
  • We realized that people had different perceptions of what a hackathon is, so we learned that we should make it clearer for everyone while still encouraging them to do cool and crazy projects.

Displaying a List of Items in React, but Composed

Displaying a list of items is a challenge you will encounter in most web applications. When using a view layer library such as React, you only have to iterate over the list of items and return elements. However, you often want a couple more features, such as filtering, sorting or pagination. Not every list component needs them, but it would be great to have these functionalities as opt-in features whenever you display a list of items.

We are excited to open source our in-house solution at Small Improvements that handles this use case: react-redux-composable-list. In our web application, customers often need to work with lists of data while managing their feedback, objectives or other content in our system. Our customers range from 20 to 2,000 active users, so it can happen that we need to display a lot of data yet have to keep it accessible for the people managing it.


The requirements for each list of data are different. One list is just fine with filter functionality; another mixes selectable and filterable items. The library we are open sourcing solves all the requirements we had in-house at Small Improvements. It is highly extendable and builds on composition, so you can come up with your own opt-in features.

Demo and Features

The react-redux-composable-list comes with the following features:

  • Filtering (AND filter, OR filter, multiple filters)
  • Selecting
  • Sorting
  • Magic Column (collapsing multiple columns in one column)
  • Pagination

There are two demo applications up and running to show the features of react-redux-composable-list.

While the former demonstrates all features in one real world example, the latter separates the examples by feature.

The Real World example shows that all features can be used together by composing them. To specify the opt-in features of your list components, you use React’s higher-order components.

const List = ({ list, stateKey }) => {
  ...
}

...

const EmptyBecauseFilter = () =>
  <div>
    <h3>No Filter Result</h3>
    <p>Sorry, there was no item matching your filter.</p>
  </div>

export default compose(
  withEmpty({ component: EmptyBecauseNoList }),
  withSelectables({ ids: [0] }),
  withPreselectables({ ids: [2, 3] }),
  withUnselectables({ ids: [4, 6] }),
  withFilter(),
  withEmpty({ component: EmptyBecauseFilter }),
  withSort(),
  withPaginate({ size: 10 }),
)(List);

You can find the implementation of both demo applications in the official GitHub repository. Further details about specific features can be found in the official documentation.

Getting Started

If you want to jump right into using the library, you should check out the Getting Started section in the official documentation.

For instance, a list of items with the option to select items can be accomplished with the following component:

import React from 'react';
import { components, enhancements } from 'react-redux-composable-list';
const { Enhanced, Row, Cell } = components;
const { withSelectables } = enhancements;

const ListComponent = ({ list, stateKey }) =>
  <Enhanced stateKey={stateKey}>
    {list.map(item =>
      <Row key={item.id}>
        <Cell style={{ width: '70%' }}>{item.title}</Cell>
        <Cell style={{ width: '30%' }}>{item.comment}</Cell>
      </Row>
    )}
  </Enhanced>

export default withSelectables()(ListComponent);

Afterwards it can be simply used by passing a list of items and a state key to identify the table state.

import React from 'react';
import SelectableListComponent from 'path/to/ListComponent';

const list = [
  { id: '1', title: 'foo', comment: 'foo foo' },
  { id: '2', title: 'bar', comment: 'bar bar' },
];

const App = () =>
  <SelectableListComponent
    list={list}
    stateKey={'MY_SELECTABLE_LIST'}
  />

If you want to dive deeper, you can check out the full documentation to learn more about the library and how to use it.

Extend it & Contribute

You can write your own enhancements and enhancers, because the library gives you access to its API. To be more specific, the library API is nothing more than action creators and selectors for the Redux store; all the state that is managed for the tables is organized in a Redux store. You will find everything you need to know about the API in each documented feature. In general, the documentation is a good place to get started and to read up on all the features.

We would love it if you gave it a shot and sent us feedback. In addition, we welcome contributions to the library.

Resources