Looking back at the FullStack Fest

A couple of months ago I came across a conference called FullStack Fest. Skimming through the agenda, I was immediately intrigued and thought “I’ve got to check this out”. The coolest bit? The conference was taking place in the beautiful city of Barcelona.

September finally came around, and just as the Berlin air was getting chilly and bringing signs of the impending winter, I was flying off to the warmth of Spain. I got there a bit early and spent a nice Sunday roaming around on the streets, admiring the architecture and the history. The next day began the Backend bit of the FullStack Fest. It was interesting to step into the intricate world of the architecture of buildings one day, and after admiring it, to step into the equally intricate world of Software Architecture the next.

Sun baked and refreshed, I went to the conference, all set with a notebook and a bag of goodies from the organisers. One must collect stickers for the laptop after all.

The backend days of the conference were abstractly divided into “DevOps” and “Architecture” with the topic being “Problems of Today, Wonders from the Future”. To describe the theme of the conference in a single word, I would say “Distributed Systems”.

Day 1: DevOps

The first talk was by Karissa McKelvey (@okdistribute). She talked about a project that allows people to share their scientific work without consumers having to pay for it. A common problem in research is getting access to the required data, journals, publications etc., because a lot of bureaucracy, censorship and corporate licensing gets in the way of open-sourcing knowledge. Karissa and her team have worked on the Dat Project. It creates a distributed network of many data hosts (mostly universities), through which you can upload your files and download any file through a little identifier. You can access Karissa’s presentation from the conference using this little identifier (dat://353c5107716987682f7b9092e594b567cfd0357f66730603e17b9866f1a892d8) once you install the dat tool on your machine. Though this is still vulnerable to being used as an illegal file hosting service, it’s a good step towards making data and knowledge more reachable and transparent.

Following up on this was an interesting introduction to Ethereum as a way to enter ‘contracts’ without trusting a third party such as a notary; instead, the idea of trust is distributed amongst many participants. As Luca Marchesini (@xbill82) said in his talk:

“The machine is everywhere.. The machine is nowhere”.

The Nakamoto consensus protocol that powers the blockchain, combined with Turing-complete capabilities, lets you express the intent of a contract and its fulfilment as an actual computer program. The result is a word of truth floating around in the world, verifiable and undeniable.

With the buzzwords “microservices” and “serverless” going around, one would of course expect a talk on these topics. Marcia Villalba (@mavi888uy) gave a great talk on what “serverless” really means… and no, it does not mean there is no server (of course). The idea of a serverless application is to utilise the cloud and write self-contained functions that do simple tasks. Some highlights from the talk worth remembering:

  • Functions in the cloud are triggered by events; they do not have state.
  • Pay as you go, scale automatically.
  • Create a proof of concept and then optimise your solution to take advantage of the cloud.
  • Automate: your CI pipeline and your testing.
  • Reduce latency by being at the edge of the cloud.
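To make the first point concrete, a stateless, event-triggered function can be sketched as follows (an AWS Lambda-style signature; the handler name and event shape are my assumptions, not from the talk):

```javascript
// Minimal sketch of a stateless cloud function: all input arrives via
// the event, and nothing is kept between invocations.
const handler = async (event) => {
  const name = (event && event.name) || 'world';
  return { statusCode: 200, body: `Hello, ${name}!` };
};

module.exports = { handler };
```

Because the function holds no state, the platform can spin up as many copies as the event volume demands, which is where the automatic scaling comes from.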

Next we stepped into the world of cyber security with Dr. Jessica Barker (@drjessicabarker), who talked about tackling vulnerabilities, specifically those introduced by negligence on the part of an end user. She talked about educating users on security instead of treating them as the weakest link in the chain and ‘dumbing things down’. She made her case in light of the Pygmalion Effect, according to which higher expectations lead to better performance. A common problem when building human-friendly security guidelines is that the user is treated as a dumb entity, which leads to the user acting like a dumb entity.

Frank Lyaruu (@lyaruu) then came in with an anecdote about how, as a child, he wanted a Swiss Army knife that did everything, and ended up with an utterly useless one. It was quite easy to see the analogy… we have all faced feature bloat, we’ve all wanted a framework that does everything and then been frustrated by the abstractions that make customisation a nightmare. Frank introduced the concept of ‘fullstack databases’. The key idea? Identify your use case and use the right database for it. While a SQL database may work for one scenario, a graph database may be much better in another. The takeaway:

“Your thinking influences your choice of tools and your choice of tools influences your thinking.”

Sahil Dua (@sahildua2305) from Booking.com then told us how they handle deep learning models in production. The problem they need to solve is that different data scientists need access to independent environments for training. Each training script lives in a container, a container runs on every server that is needed, and the container load is managed by Kubernetes. This was a lesson in how to manage many containers and independent environments with very high performance needs.

As Software Engineers, we know one thing for sure, and that is that things will, at some point, fail.

“There are two kinds of systems: those which have failed and those which will.”

Aishraj Dahal (@aishraj) walked us through chaos management. Some useful principles that he talked about were to:

  • Automate what you can and have a framework for dealing with incidents.
  • Define what a “minor” and a “major” incident means.
  • Define failures in terms of business metrics, for example, the amount of revenue lost per hour of downtime.
  • Single Responsibility Principle: one person should be responsible for one task in an incident; if everyone is combing through the git history looking for the last stable commit, that’s redundant work.
  • Never hesitate to escalate.
  • You need an incident commander: this person orchestrates the efforts to get back on track.

Day 2: Architecture

The second day of FullStack Fest began with an exciting talk by John Graham-Cumming (@jgrahamc) on the Internet of Things as a vector for DDoS attacks. He showed how vulnerable IoT devices are, with simple lapses like leaving telnet open on port 23. These devices are exploited by sending small HTTP requests to a server, and sending A LOT of them, demanding large responses targeted at a victim. As an employee of Cloudflare, he could shed some light on how network patterns are used to distinguish legitimate requests from malicious ones. Some ways to protect yourself against DDoS attacks are to set up rate limiting, block every entry point that you do not need, and use DDoS protection tools from a vendor such as Cloudflare.
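A rate limiter of the kind mentioned above might be sketched like this (a naive fixed-window toy for illustration, nothing like Cloudflare's actual machinery):

```javascript
// Naive fixed-window rate limiter: allow at most LIMIT requests
// per client per one-minute window.
const WINDOW_MS = 60 * 1000; // window length
const LIMIT = 100;           // max requests per window per client
const hits = new Map();      // client id -> { windowStart, count }

function allowRequest(clientId, now = Date.now()) {
  const entry = hits.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First request in a fresh window: reset the counter.
    hits.set(clientId, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= LIMIT;
}
```

Real DDoS mitigation operates at the network edge and on aggregate traffic patterns rather than per-client counters, but the principle of bounding request rates is the same.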

One of my favourite talks from Day 2 was James Burns’ (@1mentat) introduction to chaos engineering and distributed tracing. He began by defining a practical distributed system as one that is observable and resilient. Observability comes with tracing, whereas resilience can be tested through chaos engineering, i.e. intentionally causing a system to fail as a “drill” and having the engineers on board try to fix it without knowing the cause of the problem, or even what the problem is. If you run many such drills, the team will be well prepared to tackle real chaos when it hits.

Chris Ford (@ctford) took the stage and talked about a hipster programming language called Idris which can be used to specify distributed protocols. In Ford’s words, his 10th rule of microservices is:

“Any sufficiently complicated microservice architecture contains an ad-hoc, informally-specified, bug-ridden, slow implementation of a distributed protocol.”

A distributed protocol’s specification can be tricky to get right. With a language like Idris, whose compiler checks the types and where functions and even types are values, the level of strictness when specifying a protocol is greatly increased, and the chance of runtime bugs is reduced because the compiler is smart enough to capture protocol violations. A protocol can be thought of as a finite state machine, and that is how it is specified in Idris. Be forewarned though, this is still ongoing research and definitely not production ready!
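To see what a protocol-as-state-machine looks like, here is a plain JavaScript sketch; unlike Idris, it can only catch violations at runtime, which is exactly the gap the talk addressed (the states and messages are made up for illustration):

```javascript
// A protocol as a finite state machine: which message is legal
// depends on the current state.
const transitions = {
  idle:      { connect: 'connected' },
  connected: { send: 'connected', close: 'idle' },
};

function step(state, message) {
  const next = transitions[state] && transitions[state][message];
  if (!next) {
    // In Idris this would be a compile-time type error;
    // here it only surfaces when the program runs.
    throw new Error(`Protocol violation: "${message}" in state "${state}"`);
  }
  return next;
}
```

A dependently typed compiler can prove that no reachable code path ever calls `step` with an illegal combination, instead of relying on the exception firing in production.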

We then dove into philosophy, the nature of order, and structure-preserving transformations with Jerome Scheuring (@aethyrics). He talked about identifying the core of the application and then building transformations around it, the key being that the structure of your application remains the same when more layers are added onto it. He hinted at functors as a tool for achieving such transformations of architecture.

After some lightning talks and a tutorial on ‘hacking’ into systems that only exist for a few milliseconds (lambdas which are only alive for the scope of a simple execution) and then on how to defend such systems, the backend bit of the conference came to a close.

The conference was a pretty cool look into research topics meeting the software industry and creating innovative solutions to existing problems. Though I haven’t listed all the talks here, you can check them out on YouTube: https://www.youtube.com/watch?v=_V4tGx85hUA&t=536s.

I left Barcelona feeling that I had gazed into the future of technology and seen the wheels set in motion for many advancements to come in the next few years. Though the conference could have been even better with more topics explicitly related to everyday software development, I feel that I walked out a more knowledgeable person than before.


Broadening one’s horizons beyond the scope of one’s job description is not only intellectually stimulating but also makes for a more content and productive mind. Small Improvements, by sponsoring this trip (and many others for its employees’ learning and development), is creating a happier and smarter workplace. I am back in Berlin, at my desk, ready to tackle more challenges and apply the numerous things I gleaned from FullStack Fest. Looking forward to the next conference!

How we develop software in teams

Here at Small Improvements we have 3 development teams. Each team is an autonomous unit that consists of frontend & backend developers, UI/UX developers and designers, so that they can build and ship features independently.

In this blog post we want to share an insight into what the development process looks like in Team Green. We don’t follow any predefined scheme (like Scrum or Kanban). Rather, we pick the tools and methods that work best for us and adjust them constantly to our needs. Other teams work in a slightly different way, but the overall structure happens to be quite similar.


The main building block of our team process is our weekly iteration. Unlike Scrum, these aren’t sprints: it’s not our primary concern to deliver exactly what we had planned. Rather, they act as a central clock that gives us a recurring temporal structure.


Our iteration flow

Weekly planning

Each iteration starts on Tuesday morning with our weekly planning meeting: we sit together as a team and review the last iteration in order to clean up leftovers from the last week that need to be carried over. Afterwards, we fill the upcoming iteration with tasks from our team backlog until we feel we have found a good scope. Usually we tend to overplan slightly, so that nobody runs out of work. (Read the section about backlogs to find out how we know what to work on next.) Estimations help us find a meaningful size for our tasks, but we are not too strict about them: we aim for high throughput, but still value quality over delivery dates.


Retrospectives

We do retrospectives on a fortnightly basis. They last one hour and are a place to discuss our workflow and process. Everyone can talk about their thoughts for as long as they like. However, we don’t just aim for good conversations: our goal is to identify actionable things that we can improve on until the next retrospective. We don’t want to change everything at once. Our philosophy is continuous improvement through small yet steady steps. Our retrospectives are facilitated by one team member, who is in charge of preparing, moderating and documenting the meeting.

All-Hands Meeting

Every Tuesday evening the entire company comes together for an all-hands meeting, including at least four of our employees who regularly work from different time zones. Every team (not just the devs, but also marketing, customer success, etc.) reports its progress from the last week and announces its agenda for the upcoming week. This ensures that everyone is up to date on what’s being worked on at the moment. In contrast to a sprint review, this meeting is not about giving account; it’s rather a window in time where every team shares insight into its current status.

Long-term planning

Roughly every four weeks we conduct a long-term (or monthly) planning to stock up our team backlog and discuss our roadmap in the long run.

Team backlog

We have four main sources of work that supply our team backlog.



Product and feature work

We have two dedicated product managers who maintain a product roadmap, where tasks and stories are prioritized and broken down into smaller pieces. Every developer is actively involved in product development, but our PMs do a lot of organization and planning upfront, which is a great relief. Every dev team has a designated Feature Coordinator who stays in touch with the PMs and arranges meetings and conversations when needed. Together, we prepare features ahead of time and make sure that – once we start to work – everything is in place.

Tech work

Another big source of work is our tech roadmap, which is a joint venture from all developers. The tech roadmap usually contains refactoring and innovation projects that don’t necessarily create immediate customer value. We discuss and prioritize these projects together in our weekly Dev-Exchange meeting. As examples, we recently had:

  • Migrating our backend authorization logic to a predicate-based framework written in Groovy
  • Making further progress with our Angular-React migration: one step was to introduce React Router 4, which we did a couple of weeks ago
  • Building and launching a dedicated microservice that renders auto-generated user avatars for all users who didn’t upload an individual avatar yet

Apart from the tech roadmap, we also have a biweekly DevOps meeting. However, since we are hosted on Google App Engine, we are usually not too concerned with DevOps tasks so the workload varies a bit in that respect. That being said, we still have some fun tasks on our current agenda, such as dockerizing our local development setup.

Design work

All designers and frontend-addicted developers form a so-called “meta-team” that comes together every week for a design meeting. This yields smaller tasks such as overhauling graphics and icons, but they also work on bigger projects like revamping our style guide. Lucas (our UI/UX developer) usually brings tasks from the design meeting into our iterations.

Side projects

Apart from these three bigger backlogs, we have some smaller sources of work that can be summarized as “side projects”:

  • Every employee at Small Improvements is encouraged to take time for personal development. (In Team Green we believe heavily in manifesting our personal goals with Objectives). If someone wants to take time to make progress with their Objectives, they are free to file a time slot in the iterations. For instance, Sharmeen is currently on her mission to increase awareness about possible vulnerabilities so we can maintain the security of our application, and recently wrote a security-oriented development guideline for the backend.
  • Usually, there is always room for small spike and innovation projects in order to explore new technologies and ideas. As an example: a couple of weeks ago, Jan set out for one day to try out flow type annotations in our frontend code. In the end, we decided against it, but the experiment served as basis for a valuable discussion.

Bringing it all together

The interesting question now is how these backlogs are balanced. The answer might be a bit disappointing, since we don’t have a secret formula that tells us where to pick from next. In fact, the team roadmaps are a matter of ongoing negotiation between all involved parties. Sometimes it makes sense for a team to take over a task because they already have expertise in that area and can deliver results fast. Other times we decide to assign the same kind of project to a different team, because we think it’s a valuable opportunity to share knowledge and prevent silos. Small Improvements is still a comparatively small company, so we don’t need formal processes and heavy decision-making hierarchies. Most of the time we can figure it out by just talking to each other, which is a great privilege.


What’s a hackathon in SI like?

The first half of 2017 has been quite busy for us. With all the features being rolled out, fixes to deploy, and improvements to discuss, design and implement, it can be hard to organise an event that won’t disrupt everybody’s flow. Until we realised that there is never a “right” time.

Here at Small Improvements, we try to make sure that everyone has time to play around with their ideas. Earlier this year, we even decided to gather the entire 5-person design team for a 2-day illustration hackathon where we experimented with different illustration techniques and brainstormed ideas on how to expand our Small Improvements universe even further. It was a good experience having all of the design team gathered in one room, bursting with ideas, energy and discussions. We learned a lot, not just about the art of illustrating ideas itself (and how challenging it can be), but also about ourselves and the design team members’ strengths and weaknesses.

But first, breakfast.

We conducted our company-wide hackathon last August 17-18. At Small Improvements, hackathons are a way for everyone in the company to come together and build something that is somewhat related to the product.

Traditionally, a hackathon starts on a Thursday afternoon and runs until the end of the next day. But this time we made an exception: we ran it for the entire 2 days, with the condition that “normal” work like responding to emails and fixing bugs labeled critical would still get done.

As with most in-house events that start in the morning, we started the day with breakfast, followed by a kickoff where everyone could talk a little more about their idea. Generally, people are encouraged to write a mini-spec about their ideas at least a few days ahead, so that each participant has an idea of which project they want to work on before the kickoff day. The sub-teams can be a mixture of different teams: it doesn’t matter if you’re a combination of the Marketing, Design, or Development team. And of course, going solo is totally acceptable too!

After the kickoff, everybody is free to work on their project however they want, wherever they want.

Presentation day


Hangouts, drinks and food make up much of the presentation hour. Each team is required to present a demo or a mockup of the project they worked on. Ideas ranged from an internal tool for tracking where and how budgets are spent, to sentiment analysis, and even a zen mode for writing feedback. While not every project will end up on the roadmap, it was still great to see cool and interesting ideas implemented in such a short amount of time!

Key takeaways

  • Having the opportunity to work together across teams is a tremendous help in getting insight and ideas that otherwise developers or designers might not think about.
  • Plan ahead! The last hackathon was announced almost a month in advance, and the exact date was voted on using Doodle. This gives everyone plenty of wiggle room to sort out their schedules and think about the project they want to work on.
  • We realized that people had different perceptions of what a hackathon is. So we learned that we should make this clearer for everyone, while still encouraging them to do cool and crazy projects.

Displaying a List of Items in React, but Composed

Displaying a list of items is a challenge you will encounter in most web applications. When using a view-layer library such as React, you only have to iterate over the list of items and return elements. However, you often want a couple more features, such as filtering, sorting or pagination. Not every list component needs them, but it would be great to have these functionalities as opt-in features whenever you display a list of items.

We are excited to open source our in-house solution at Small Improvements that handles this use case: react-redux-composable-list. In our web application, customers often need to work with lists of data while managing their feedback, objectives or other content in our system. Our customers range from 20 to 2000 active users, so it can happen that we need to display a lot of data yet keep it accessible for the people managing it.


The requirements for each list of data are different. One list is just fine with filter functionality. Another mixes selectable and filterable items. Each displayed list has different requirements. The library we are open sourcing solves all the requirements we had in-house at Small Improvements. It is highly extendable and builds on composition: you can come up with your own opt-in features.

Demo and Features

The react-redux-composable-list comes with the following features:

  • Filtering (AND filter, OR filter, multiple filters)
  • Selecting
  • Sorting
  • Magic Column (collapsing multiple columns in one column)
  • Pagination

There are two demo applications up and running to show the features of react-redux-composable-list.

While the former demonstrates all features in one real world example, the latter separates the examples by feature.

The Real World example shows that all features can be used together by composing them. To specify the opt-in features of your list components you use React’s higher-order components.

const List = ({ list, stateKey }) =>
  <Enhanced stateKey={stateKey}>
    {list.map(item =>
      <Row key={item.id}>
        <Cell>{item.title}</Cell>
      </Row>
    )}
  </Enhanced>

const EmptyBecauseFilter = () =>
  <div>
    <h3>No Filter Result</h3>
    <p>Sorry, there was no item matching your filter.</p>
  </div>

export default compose(
  withEmpty({ component: EmptyBecauseNoList }),
  withSelectables({ ids: [0] }),
  withPreselectables({ ids: [2, 3] }),
  withUnselectables({ ids: [4, 6] }),
  withEmpty({ component: EmptyBecauseFilter }),
  withPaginate({ size: 10 }),
)(List);

You can find the implementation of both demo applications in the official GitHub repository. Further details about specific features can be found in the official documentation.

Getting Started

If you want to jump right into using the library, you should check out the Getting Started section in the official documentation.

For instance, having a list of items with the option to select items can be accomplished with the following component:

import { components, enhancements } from 'react-redux-composable-list';
const { Enhanced, Row, Cell } = components;
const { withSelectables } = enhancements;

const ListComponent = ({ list, stateKey }) =>
  <Enhanced stateKey={stateKey}>
    {list.map(item =>
      <Row key={item.id}>
        <Cell style={{ width: '70%' }}>{item.title}</Cell>
        <Cell style={{ width: '30%' }}>{item.comment}</Cell>
      </Row>
    )}
  </Enhanced>

export default withSelectables()(ListComponent);

Afterwards it can simply be used by passing a list of items and a state key to identify the table state.

import SelectableListComponent from 'path/to/ListComponent';

const list = [
  { id: '1', title: 'foo', comment: 'foo foo' },
  { id: '2', title: 'bar', comment: 'bar bar' },
];

const App = () =>
  <SelectableListComponent list={list} stateKey={'MY_TABLE'} />

If you want to dive deeper, you can check out the whole documentation to learn more about the library and how to use it.

Extend it & Contribute

You can write your own enhancements and enhancers, because the library provides you with access to its API. To be more specific, the library API is nothing more than action creators and selectors for the Redux store. All the state that is managed for the tables is organized in a Redux store. You will find everything you need to know about the API in each documented feature. In general, the documentation is a good place to get started and to read up on all the features.
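As a rough illustration of what “action creators and selectors” means here (the names and state shape below are hypothetical, not the library’s actual API):

```javascript
// An action creator: a plain function returning a Redux action object
// that describes a table interaction.
const selectItem = (stateKey, id) => ({
  type: 'TABLE_SELECT_ITEM',
  stateKey,
  id,
});

// A selector: a plain function reading the table state for one stateKey
// out of the Redux store, with a safe default.
const getSelectedIds = (state, stateKey) =>
  (state.select && state.select[stateKey]) || [];
```

An enhancement is then just a component that dispatches such actions and reads such selectors, which is why the whole system composes so freely.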

We would love it if you gave it a shot and sent us your feedback. In addition, we welcome contributions to the library.


Reflections on CSSconf EU 2017 (Berlin)


Recently three of our developers attended CSSconf 2017 in Berlin. The talks were inspiring, and once again it became clear to us what a mature language CSS has become. The steady addition of new features continues to amaze, and the enthusiasm of the community is infectious. The conference itself was well organized (the food was awesome!) and we appreciate that the organizers took such care in creating a safe and diverse environment for everyone.

In this blog post we reflect on our learnings from the conference and share our experiences on developing efficient CSS at scale.

There are a lot of guidelines on how to write modular JavaScript and tons of books about how to make Java code efficient. But what does good CSS code actually look like? The answer is probably not that different from other programming languages. The first step towards modular, reliable and reusable CSS code is to perceive CSS as a regular programming language. At CSSconf, Ivana McConnell posed the question: “What makes a good CSS developer?” She pointed out that CSS still isn’t included as a programming language in the Stack Overflow survey, and that in many companies there is even a hierarchy between CSS developers and “real” developers.

“We can always change the hierarchies. CSS is real development – let’s make that a given.”
(Ivana McConnell)

There are still developers and managers who think that CSS is just about putting on fancy colors and setting some margin here and there. However, CSS has become a powerful tool and new features are continuously added, not to mention the various preprocessors that have become a quasi-standard over the last years. Things like grids, animations and filters are first-class citizens by now and already widely supported by browsers.

To showcase the feature richness of CSS, Una Kravets performed a live coding session in which she built a simple yet fun and interactive browser game just by using regular CSS. Mathias Bynens did something similar at CSSconf EU 2014, where he presented a mock of a shooter game that consisted of one single line of HTML. The point here is not that CSS should replace JavaScript. On the contrary: while CSS and JavaScript are and always will be different things, it’s interesting to see the borders blurring and how both languages influence each other.

Make it scale, but keep it consistent

At Small Improvements we work on a large single page application. In three feature teams we maintain roughly 40k lines of LESS code. Two of our biggest ongoing challenges are to make our styling consistent and our CSS code modular and reusable.

Maintaining a style guide and establishing a design culture

Especially when multiple teams work on one and the same application, there is a certain risk that each team comes up with a slightly different style of implementation. There are numerous examples of this, and conducting an interface inventory (as suggested by Brad Frost) can yield surprising results. Achieving consistent styling is even more difficult if the implementation of the frontend is not technically homogeneous. Even though we implement all new frontend features at Small Improvements in React, we still have a lot of Angular code and even some old Wicket pages lingering around. The user doesn’t care about these details, so the question is: how do we keep track of all the various patterns we use across the app and provide a seamless design language?

In her talk “Scaffolding CSS at scale”, Sareh Heidari shared an example of how visual patterns are discovered and extracted at the BBC News website. We can confirm that we have had good experiences with a similar approach. We recently set out to build a new style guide for our app that allows us to stay aware of all the different patterns we use. This helps us not only to compose new features out of existing components. More importantly, the main value for us is not the style guide itself but the process around it: keeping a close eye on everything that’s newly built and coming together frequently to talk about how to integrate these additions into the bigger picture. We perceive the style guide as a catalyst for discussion; you could even say that it’s an artless byproduct of our design process.

Project setup and implementation

For us it works best to structure our code base in a domain-driven way. We follow this approach in our entire app and can fully recommend it. For the frontend we decided to use CSS Modules (in the form of LESS files) that we put right next to our React components. That way the component itself always comes with its own (inward) styling. There are various attempts in the React community at this kind of project layout. (It has even become popular to go further by using some sort of inline styling; see Mark Dalgleish’s talk for an overview.) CSS Modules worked well for us since we used LESS previously, which allowed for a convenient and gradual migration path.

Glenn Maddern – who heroically stepped in last-minute for Max Stoiber – updated us about the most recent changes in the Styled Components project. But no matter whether you prefer CSS modules or Styled Components, it is crucial to understand the motivation behind these libraries in order to build large scale applications: Glenn Maddern’s talk at ColdFront16 gives a good insight into this way of thinking and why it’s beneficial.

The one thing that makes us glance jealously over at Styled Components is the ability to parametrize CSS code so easily. We are therefore looking forward to CSS variables being better supported in browsers, because that would be the native solution to the problem. David Khourshid demonstrated the handover between JavaScript and CSS in his talk “Getting Reactive with CSS”. With that approach, the hassle at the JS-CSS intersection falls right into place.


We don’t have a catchy conclusion to draw. There is certainly no right or wrong in what approach works best or which library helps most. For us it was nice to see a lot of our current assumptions confirmed, and if we were asked to write down three of them, then it would be these:

  1. CSS is a fully-fledged programming language – for sure! Stay up to date with all new features and take advantage of them once they are commonly supported.
  2. Keep styling and markup closely together. This promotes reusability best. Leverage component interfaces in order to create clear and predictable boundaries.
  3. Talk, talk, and talk frequently about visual patterns to ensure consistency in the long term. Some sort of process or documentation can help here.

The development team here at Small Improvements have done their fair share of conferences in the past (thanks to the individual learning budgets we are offered). It’s awesome now that our design team has grown to the point that we’re also attending designer-developer related conferences such as CSSconf EU. Bring on next year!

Ladda – A New Library for Client-Side API Caching


In an ideal world, caching wouldn’t be something we have to care about. However, with more and more mobile users on slow and limited data plans, as well as more advanced applications, we can’t escape reality. We need caching. As a response to this we have invested quite some time in Ladda – a dependency-free client side library for caching, invalidation of caches and handling different representations of the same data. Ladda is implemented using JavaScript (ES2015), framework agnostic (works equally well with React, Vue, Angular or vanilla JavaScript) and designed to make it easy for you to implement a caching solution without increasing the complexity of your application code.

Read on to learn how Ladda can be useful for you, how it helps you implement a sophisticated caching solution, and for a comparison of Ladda with other popular solutions for client-side API caching.

Scenarios Where Ladda Can Help You

There’s no such thing as a free lunch. Caching speeds up your application, but it comes at a cost: it increases the complexity of your application code. The following examples will show you how Ladda can help you to reduce this cost in some common scenarios.

Just Caching

The most straightforward use of a cache is to cache a value and, if it has previously been cached, return it directly from the cache. Say you make an API call “getUsers”. A hand-rolled implementation of caching would look something like:

const getUsers = () => {
    if (!inCache(key) || hasExpired(ttl, key)) {
        const res = api.user.getUsers();
        putInCache(key, res);
    }
    return fromCache(key);
};

When using Ladda your application code would look like:

const getUsers = api.user.getUsers;

Note how we separate what we want to do (getting the users) from the caching logic, which is just an optimization. This is a pretty simple example, which might not be a sufficient motivation to add a library to your application, but it quickly gets quite complicated as we start to manipulate our data.
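To make that cost concrete, here is a minimal hand-rolled sketch of such a caching wrapper. The `withCache` helper and the fake `fetchUsers` call are hypothetical and purely illustrative – this is not Ladda’s API, just the kind of bookkeeping you would otherwise write yourself:

```javascript
// Hypothetical helper (not Ladda's API): wraps a promise-returning
// function so its result is reused for `ttl` milliseconds.
const withCache = (fn, ttl) => {
  let cached;
  let cachedAt = 0;
  return (...args) => {
    const now = Date.now();
    if (cached !== undefined && now - cachedAt < ttl) {
      return cached; // cache hit: reuse the remembered promise
    }
    cached = fn(...args); // cache miss: call through and remember
    cachedAt = now;
    return cached;
  };
};

// Fake API call so the example is self-contained:
let calls = 0;
const fetchUsers = () => { calls += 1; return Promise.resolve(['Alice', 'Bob']); };
const getUsers = withCache(fetchUsers, 60 * 1000);

getUsers();
getUsers(); // second call within the TTL: no new request, `calls` is still 1
```

Even this toy version mixes caching bookkeeping into the call site; Ladda moves that bookkeeping out of the application code entirely.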

Cache Invalidation

Stale data is your new enemy as soon as you introduce caching. Consider the example of users again. You are getting all users, but then you spot a typo in one user’s surname and correct it. Now you are left with two choices: either you update the cache used by “getUsers”, or you remove the cache and refetch the data the next time someone calls “getUsers”. Let’s consider the latter option first. It could look like:

const updateUser = (modifiedUser) => {
    const res = api.user.updateUser(modifiedUser);
    removeFromCache(key); // drop the cached list so the next getUsers refetches
    return res;
};

With Ladda it would look like:

const updateUser = api.user.updateUser;

Ladda clears the cache for you; you just need to tell Ladda what to invalidate in a configuration that lives outside of your application. By default, however, Ladda picks the harder option: it updates the cache for you. This comes with the benefit that after updating your user, you can call “getUsers” and get all the users directly from the cache – with your updated user, of course.
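For illustration, the manual invalidation approach can be sketched like this (hypothetical helpers, not Ladda’s internals – this is exactly the bookkeeping that Ladda’s configuration replaces):

```javascript
// Hypothetical sketch of manual cache invalidation (not Ladda's internals):
// a tiny cache that every write operation has to invalidate by hand.
const cache = new Map();

const getUsersCached = (fetchUsers) => () => {
  if (!cache.has('users')) {
    cache.set('users', fetchUsers()); // miss: fetch and remember
  }
  return cache.get('users');
};

const updateUserInvalidating = (putUser) => (user) => {
  cache.delete('users'); // the cached list is now stale: drop it
  return putUser(user);
};

// Fake API so the example is self-contained:
let fetches = 0;
const getUsers = getUsersCached(() => {
  fetches += 1;
  return Promise.resolve([{ name: 'Ada' }]);
});
const updateUser = updateUserInvalidating((user) => Promise.resolve(user));

getUsers();                       // first call fetches
getUsers();                       // served from the cache
updateUser({ name: 'Ada Lovelace' });
getUsers();                       // the update invalidated the cache, so this fetches again
```

With Ladda, both wrappers disappear: the `invalidates` entry in its configuration expresses the same relationship declaratively.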

Ladda has more to offer, but I’ll leave that for you to read about. You’ve heard a lot of promises and seen some simple code. But as you might have suspected, you still need to specify things somewhere: the TTL (time to live), what to invalidate, and which function updates the user and which one retrieves users.

How Does It Work?

The first claim – that Ladda allows you to add caching without making your application code more complex – is achieved by separating your application code from your caching logic. Ladda lets you express, in a concise and declarative way, the TTL you want for a specific entity, such as user, and what you want to invalidate when something happens. Going back to the simple updateUser example, where you simply invalidate the “getUsers” cache, it would look like:

{
    user: {
        api: userApi,
        invalidates: ['user']
    }
}

Of course, you don’t even have to specify that ‘user’ invalidates its own cache, since Ladda will update the cache in place for you, so you can simply write:

{
    user: {
        api: userApi
    }
}

And rely on Ladda to ensure that “getUsers” always gives you an up-to-date list of users. Now the only thing left is to create “userApi”. This is probably something you already have: just a bunch of functions communicating with your user endpoints. Let’s say you have a file:

export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

Ladda only requires you to specify the CRUD-operations:

getUsers.operation = 'READ';
export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

updateUser.operation = 'UPDATE';
export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

That is everything: you just add metadata directly to your functions and put your entity in a configuration object. There are of course plenty more options, such as the already mentioned TTL. You will find them all in the documentation. You’ll also find complete examples in the repo to make it easy to get started. Don’t forget to have a look at Search Hacker News with Ladda and this contact list (which uses all the supported CRUD operations) for examples you can play around with.

Before we move on, let’s have a quick look at a final example and the HTTP requests it results in:

api.user.getUsers()
  // GET-request was sent
  .then(() => api.user.updateSurname(user))
  // PUT-request was sent
  .then(() => api.user.getUsers());
  // No request was made! Directly from the cache.

A good caching solution tries to maximize the number of cache hits, and Ladda is no exception.

Fig 1. Sequence diagram showing the result of calling getUsers, then updating a user, and then calling getUsers again. Note that no HTTP request is made for the final getUsers call.

Ladda is not the first attempt to make caching simple. I believe it can be the best choice in some cases, but it is important to look into all available options. Let’s do a brief comparison between Ladda and some other popular caching solutions.

Comparison With Other Solutions

First off, keep in mind that I’m not an expert in the other technologies, but I’ve tried to compare them in an objective manner. One very popular solution is Relay. The big difference is that Relay is built for GraphQL, so Ladda and Relay are not really two alternatives to compare: if you have a GraphQL backend, Relay is without doubt the better choice; otherwise it isn’t an option at all.

Another solution is redux-query. One key difference is already revealed in the name, it is specifically designed for use with Redux. Ladda can be used with any framework as well as without a framework. But let’s assume we are using React and Redux to make a viable comparison. The most prominent difference is that redux-query influences how you write your application code. This means that it has a greater buy-in than Ladda, but it also means that it can handle more things for you. If you want a more complete solution and don’t mind the buy-in, redux-query might be the best choice. But if you have your own solutions in place and just want to speed up your application by caching, then Ladda is probably a better choice. You can potentially add or remove Ladda without changing a single line of application code.

But perhaps more importantly, it’s about which code style you prefer and which library can offer the features you need. Ladda lets you stay with simple function calls that are “magically” very quick sometimes (when you hit the cache). To get users you simply call “getUsers()”. Other solutions tend to use a more declarative approach, where you fetch your data by creating an object describing what data you want.

There are a bunch of other caching libraries in JavaScript, for example js-cache (https://github.com/dirkbonhomme/js-cache). These are more generic than Ladda. They don’t support automatic invalidation logic, views of entities, or many other pieces of functionality that are often required in a sophisticated caching solution.


We hope that you will find Ladda useful and keep it in mind next time you need client-side caching for your API layer. Ladda is dependency-free, only 14KB, has high test coverage and allows you to specify your caching logic in a declarative and very simple way. Give it a shot and let us know what you think!

Our journey migrating 100k lines of code from AngularJS to React (Chapter 1)



This is the first post of a series explaining the story and the technical learnings from the beginning of our migration from AngularJS to React. Check out the GitHub repo for examples and the full code.

Our frontend story so far

At Small Improvements we’re aiming to make meaningful feedback available for every employee in every organisation. That also means providing the best experience for our users. We were therefore early to adopt AngularJS over Wicket, and started rewriting our core features in AngularJS back in 2012. We saw great potential in having a dynamic single-page application.

In 2014, when the Angular team announced Angular 2, we already had a very large application and had gained a whole lot of knowledge using Angular. We were worried and excited at the same time. We faced a lot of challenges scaling Angular 1 and implementing best practices while moving fast.

In 2015 we sent almost all developers to AngularConnect in London, expecting the Angular 2 BETA release. Two of our developers gave a talk to share our approach to and learnings from writing a huge AngularJS application. We came back with the impression that Angular 2 was still very unstable and no clear migration strategy seemed to be available.

The Small Improvements Team in London at AngularConnect

Testing React plus Relay and GraphQL in the field

Our CEO has a strong engineering background, loves hackathons and ship-it weeks, and is always open to playing around with new technology. So he was happy to give React (with Relay and GraphQL) a chance. As a company, our approach to evaluating a new technology is to have one of our dev teams make an initial tech spike. In this case Team Green decided to experiment with the new stack in the field by coding a prototype of a new feature in it. We found React extremely promising, and it solved a lot of the challenges we had with Angular 1. Relay was cool, but lacked some core features at that time, such as support for invalidation or lazy loading of expensive fields.

Adopting Relay would also have meant a complete buy-in across our whole stack, frontend and API layer, due to the dependency on GraphQL.

So to sum it up, the outcome was: React: OMG!, Relay: Cool, but…

Our Reasons to go with React

  • Easy to write: it’s closer to vanilla JavaScript, and components come without any boilerplate configuration
  • Great for atomic components – in contrast to AngularJS, where every scope is “expensive”
  • Easier to understand: React is just the view library and has a slim API
  • Designed with performance in mind: the virtual DOM
  • Attractive for recruiting: new technology draws passionate developers who are keen to learn
  • New challenges for the dev team – learn and grow!

When we used it for a large feature – a new Activity Stream – the unclear focus multiplied our investment: we kept shifting between trying out the new technology and building the first iterations of the feature.

Lessons learned

Use a smaller feature as a playground when experimenting with a new technology.

The migration strategy

Having decided to move from AngularJS to React, we saw two options for a migration strategy: a complete rewrite of our frontend, or a slow transition. Let me rephrase that: we saw one option – a smooth and focused transition. Nobody wanted to spend months rewriting our whole application, although that would have been a fun argument to have with our CEO. At Small Improvements we have a strongly customer-centred culture, so we didn’t want to slow down our mission too much. Rewriting everything in a technology nobody was experienced with would also have been a high risk.

Each week, all software developers at Small Improvements meet for a developer exchange meeting. That’s where we share learnings, discuss ideas, and decide on larger undertakings. In this case, we discussed and decided on the migration strategy that a sub-team of developers had developed and presented.

The basic idea

A frontend application is built like a tree: HTML documents imply a structure of nested elements, and modern web applications are structured as nested components. A simplified mock of an application displaying a list of comments may look like this:

[Mock: an application displaying a list of comments]

The corresponding component tree looks like this:

[Component tree of the mock application]

We looked at how complex it would be to replace and rewrite this tree.


The main Application component is hard: it is usually wired up with complex logic like routing. The same goes for the Navigation component. The routing is tightly coupled to the main components and, in the case of AngularJS, a central piece of the framework. A NavItem is easier: it has some trivial logic like “am I active?” and displays a link with text. The content part of our app consists of a subtree displaying a list of comments. The CommentList is trickier, since it is hooked into the data layer and may contain state such as which item is selected. The Comment is again the easiest part of that tree, basically rendering a comment and handling user interaction. The Text component, for instance, is simply responsible for rendering text. Such a component is the easiest to rewrite in another technology.

Our conclusion was that the further down the component tree you go, the easier it gets to replace components. With that in mind, we defined guidelines and looked at requirements for that migration strategy.


How to tackle new features?

We wanted full buy-in, so we defined our first guideline:

  1. Every new feature will be built in React & Redux.

How to tackle existing code?

  1. If possible start to migrate leaf-first up to a whole component tree until you hit the routing module.
  2. If you touch old code or components, estimate how much it would cost to rewrite them. If it’s less than 30 minutes, rewrite; otherwise get a second opinion.

How to migrate common UI components?

The basic building blocks of an application are generic, reusable UI components like dropdowns, buttons, forms, etc. These are necessary for building new components with React.

  1. Rewrite generic UI components when you need them, and let other devs know that they now exist. Use that chance to improve the design/UX.


The migration strategy also relied on a few architectural requirements:

  • Component-based architecture
  • Angular directives structured as container/presenter components (read more here)
  • Separation of concerns into view/logic/service/communication layers, and injectable actions to encapsulate side effects like HTTP calls

Fortunately our frontend design already fulfilled the requirements. If you want more information on how to design and structure your application watch our talk How to design large AngularJS applications that scale from AngularConnect or Refactoring To Components by Tero Parviainen.

Building bridges

We found that it was easiest to start by replacing the leaves of our application component tree. The missing piece was a bridge between the “old” world and the “new” world – AngularJS and React, in our case. How can we use React to render the Text component and get its data from an AngularJS component?

Rendering React within AngularJS

A React component is, well, just another UI component. It gets data and actions via props and is rendered to the DOM. It is responsible for internal state and handles user interaction. So a simple concept for our bridge could be an AngularJS component acting as a thin layer whose responsibility is to pass data on to the React component.

Let’s aim to answer our first uncertainty: Can we use an AngularJS component to render a React component?

This is our AngularJS comment component:

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    template: '{{ $ctrl.comment.text }}',
    controller: function() {},
});

Our React version of a comment looks like this:

const Comment = (props) => {
    return (
        // wrap in a single element, since a React component must return one root
        <span>{ props.comment.text }</span>
    );
};

export default Comment;

The React component is rendered to the DOM by calling:

ReactDOM.render(<Comment />, element);

Let’s try to call this within an AngularJS component:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        ReactDOM.render(<Comment />, $element[0]);
    },
});

It works! This is the simple yet powerful starting point from where we can now build our AngularJS – React bridge. The elegant part is that we don’t need to mess around with DOM node ids or use the DOM API to query the element we want to render React to. We can directly pass the reference to the AngularJS element. You might have noticed a little detail – at the moment we’re only rendering the React component when this component is initialized. In a dynamic app we want dynamic components. So we want to trigger the rendering whenever the component changes. To achieve this we can use the lifecycle method $onChanges.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

const render = (element) => {
    ReactDOM.render(
        <Comment />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0]);
    },
});

Now whenever our AngularJS component receives changes we’re redrawing the React component.

With this working, we can tackle the next question: how can we pass data down to our React component?

Passing data from AngularJS to React

In React we use props as the interface to pass data to a component. An AngularJS component receives inputs via bindings, so we get the comment data from an outer component and pass it down to our React component. The full working bridge looks like this:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
    },
});

Fixing the possible memory leak

As described here, React will not automatically clean up components, which can lead to a memory leak. We can use the $onDestroy() lifecycle hook of our AngularJS component to unmount the React component.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
        $ctrl.$onDestroy = () => ReactDOM.unmountComponentAtNode($element[0]);
    },
});

Voilà! We’ve successfully passed data from AngularJS to a React component.

Completing the bridge from AngularJS to React

We’ve now found a way to wrap a React component with an AngularJS layer, so we can hook it up to the rest of our application.

This is a great starting point and a good proof of concept. Our current bridge is an interesting evolution of this first spark. In the next posts we will go into more technical detail, including what we do when the AngularJS component gets destroyed, and more.

To be continued…

A sneak peek into the next chapter, where we’ll take a closer look at:

  • Using AngularJS services in React
  • Improving the AngularJS-React bridge to work with Hot Reloading and avoid unnecessary re-renderings
  • Rendering AngularJS components in React

Stay tuned! 😉

Thanks for reading! If you have any questions or feedback, don’t be shy – reach out to @sfroestl. If you liked the post, please share!

About the author


Sebastian Fröstl

Team Lead. Software Engineer. Trainer. Coach. Speaker. Devoted to Personal Development. Organizer of @angular_berlin.