Ladda – A New Library for Client-Side API Caching


In an ideal world, caching wouldn’t be something we have to care about. However, with more and more mobile users on slow and limited data plans, as well as more advanced applications, we can’t escape reality. We need caching. As a response to this we have invested quite some time in Ladda – a dependency-free client side library for caching, invalidation of caches and handling different representations of the same data. Ladda is implemented using JavaScript (ES2015), framework agnostic (works equally well with React, Vue, Angular or vanilla JavaScript) and designed to make it easy for you to implement a caching solution without increasing the complexity of your application code.

Read on to learn how Ladda can be useful for you, how it helps you implement a sophisticated caching solution, and for a comparison of Ladda with other popular solutions for client-side API caching.

Scenarios Where Ladda Can Help You

There’s no such thing as a free lunch. Caching speeds up your application, but it comes at a cost: it increases the complexity of your application code. The following examples will show you how Ladda can help you to reduce this cost in some common scenarios.

Just Caching

The most straightforward usage of a cache is simply to cache a value and, if it has previously been cached, return it directly from the cache. Consider an API call “getUsers”. A naive implementation of caching would look something like:

// key, ttl and the cache helpers are schematic – the point is the boilerplate
const getUsers = () => {
    if (!inCache(key) || hasExpired(ttl, key)) {
        const res = api.user.getUsers();
        putInCache(key, res);
    }
    return fromCache(key);
}

When using Ladda your application code would look like:

const getUsers = api.user.getUsers;

Note how we separate what we want to do (getting the users) from the caching logic, which is just an optimization. This is a pretty simple example, which might not be a sufficient motivation to add a library to your application, but it quickly gets quite complicated as we start to manipulate our data.

Cache Invalidation

Stale data is your new enemy as soon as you introduce caching. Consider the example of users again. You are getting all users, but then you spot a typo in one user’s surname and correct it. Now you are left with two choices: either you update the cache used by “getUsers”, or you remove the cache and refetch the data the next time someone calls “getUsers”. Let’s consider the latter option first. It could look like:

const updateUser = (modifiedUser) => {
    api.user.updateUser(modifiedUser);
    clearGetUserCache();
}

With Ladda it would look like:

const updateUser = api.user.updateUser;

Ladda clears the cache for you; you just need to tell Ladda what to invalidate in a configuration that lives outside of your application code. By default, however, Ladda picks the harder option and updates the cache in place. This comes with the benefit that after updating your user, you can call “getUsers” and get all the users directly from the cache – including your updated user, of course.

Ladda has more to offer, but I’ll leave that for you to read about. You’ve heard a lot of promises and seen some simple code. But as you might have suspected, you still need to specify, somewhere, things such as the TTL (time to live), what to invalidate, and which function updates a user and which one retrieves them.

How Does It Work

The first claim, that Ladda allows you to add caching without making your application code more complex, is achieved by separating your application code from your caching logic. Ladda lets you express, in a concise and declarative way, what TTL you want for a specific entity, such as user, and what should be invalidated when something happens. Going back to the simple updateUser example, where you invalidate the “getUsers” cache, the configuration would look like:

{
    user: {
        api: userApi,
        invalidates: ['user']
    }
}

Of course, you don’t even have to specify that ‘user’ invalidates its own cache, since Ladda will update the cache in place for you, so you can simply write:

{
    user: {
        api: userApi
    }
}

And rely on Ladda to always ensure that “getUsers” gives you an up-to-date list of users. Now, the only thing left is to create “userApi”. But this is something you probably already have: just a bunch of functions communicating with your user endpoints. Let’s pretend you have a file:

export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

Ladda only requires you to specify the CRUD-operations:

getUsers.operation = 'READ';
export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

updateUser.operation = 'UPDATE';
export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

That is everything: just add metadata directly to your functions and put your entity in a configuration object. There are, of course, plenty more options, such as the already mentioned TTL. You will find them all in the documentation. You’ll also find complete examples in the repo to make it easy for you to get started. Don’t forget to have a look at Search Hacker News with Ladda and this contact list (which uses all the supported CRUD operations) for examples that you can play around with.
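Putting the pieces together, a setup could look roughly like the sketch below. Treat the exact import and the ttl option name as assumptions and check the documentation for the precise API.

// api/index.js – a minimal sketch; see the Ladda docs for the exact option names
import { build } from 'ladda-cache';
import * as userApi from './user';   // the file with getUsers and updateUser above

const config = {
  user: {
    api: userApi,
    ttl: 300   // assumed option: user cache entries are considered fresh for 300 seconds
  }
};

// The returned object mirrors your API: api.user.getUsers(), api.user.updateUser(user)
export const api = build(config);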

Before we move on, let’s just have a quick look at a final example and what HTTP-requests it will result in:

api.user.getUsers() 
  // GET-request was sent
  .then(() => api.user.updateSurname(user)) 
  // PUT-request was sent
  .then(api.user.getUsers); 
  // No request was made! Directly from the cache.

A good caching solution tries to maximize the number of cache hits, and Ladda is no exception.

Fig 1. Sequence diagram showing the result of calling getUsers, updating a user, and then calling getUsers again. Note that we do not make an HTTP request for the final getUsers call.

Ladda is not the first attempt to make caching simple. I believe it can be the best choice in some cases, but it is important to look into all available options. Let’s briefly compare Ladda with some other popular caching solutions.

Comparison With Other Solutions

First off, keep in mind that I’m not an expert in the other technologies, but I’ve tried to make the comparisons in an objective manner. One very popular solution is Relay. The big difference between Ladda and Relay is that Relay is built for GraphQL. Hence, Ladda and Relay are not really two alternatives to compare: if you have a GraphQL backend, Relay is without doubt the better choice; otherwise Relay isn’t an option at all.

Another solution is redux-query. One key difference is already revealed in the name: it is specifically designed for use with Redux. Ladda can be used with any framework as well as without a framework. But let’s assume we are using React and Redux to make a viable comparison. The most prominent difference is that redux-query influences how you write your application code. This means that it requires a greater buy-in than Ladda, but it also means that it can handle more things for you. If you want a more complete solution and don’t mind the buy-in, redux-query might be the best choice. But if you have your own solutions in place and just want to speed up your application by caching, then Ladda is probably a better choice. You can potentially add or remove Ladda without changing a single line of application code.

But perhaps more importantly, it’s about which code style you prefer and which library can offer the features you need. Ladda lets you stay with simple function calls that are “magically” very quick sometimes (when you hit the cache). To get users you simply call “getUsers()”. Other solutions tend to use a more declarative approach, where you fetch your data by creating an object describing what data you want.

There are a bunch of other caching libraries in JavaScript, for example js-cache (https://github.com/dirkbonhomme/js-cache). These are more generic than Ladda. They don’t support automatic invalidation logic, views of entities, or many other pieces of functionality that are often required in a sophisticated caching solution.

Conclusion

We hope that you will find Ladda useful and keep it in mind next time you need client-side caching for your API layer. Ladda is dependency-free, only 14KB, has high test coverage and allows you to specify your caching logic in a declarative and very simple way. Give it a shot and let us know what you think!

Our journey migrating 100k lines of code from AngularJS to React (Chapter 1)


Intro

This is the first post in a series about the story and technical learnings from starting our migration from AngularJS to React. Check out the GitHub repo for examples and the full code.

Our frontend story so far

At Small Improvements we’re aiming to make meaningful feedback available for every employee in every organisation. This also means providing the best possible experience for our users. That’s why we were early adopters of AngularJS over Wicket and started rewriting our core features in AngularJS back in 2012. We saw great potential in having a dynamic single-page application.

In 2014, when the Angular team announced Angular 2, we already had a very large application and had gained a whole lot of knowledge using Angular. We were worried and excited at the same time. We faced a lot of challenges scaling Angular 1 and implementing best practices while moving fast.

In 2015 we sent almost all developers to AngularConnect in London, expecting the Angular 2 BETA release. Two of our developers gave a talk to share our approach to and learnings from writing a huge AngularJS application. We came back with the impression that Angular 2 was still very unstable and no clear migration strategy seemed to be available.

The Small Improvements Team in London at AngularConnect

Testing React plus Relay and GraphQL in the field

Our CEO has a strong engineering background, is very open to playing around with new technology, and loves hackathons and ship-it weeks. That’s why he was happy to give React (with Relay and GraphQL) a chance. As a company, our approach to evaluating a new technology is to have one of our dev teams make an initial tech spike. In this case Team Green decided to experiment with the new technology in the field by coding a prototype for a new feature in the new tech stack. We found React extremely promising, and it solved a lot of the challenges we had with Angular 1. Relay was cool, but at that time it lacked some core features, such as support for invalidation or lazy loading of expensive fields.

Adopting Relay would also have meant a complete buy-in across our whole stack, frontend and API layer, due to the dependency on GraphQL.

So to sum it up, the outcome was: React: OMG!, Relay: Cool, but…

Our Reasons to go with React

  • Easy to write: it’s closer to vanilla JavaScript and components come without any boilerplate configuration
  • Great for atomic components, in contrast to AngularJS where every scope is “expensive”
  • Easier to understand: React is just the view library and has a slim API
  • Designed with performance in mind: the virtual DOM
  • Attractive for recruiting: new technology attracts passionate developers who are keen to keep learning
  • New challenges for the dev team – learn and grow!

When we used React for a large feature – a new Activity Stream – the unclear focus multiplied our investment: we kept shifting between trying out the new technology and building the first iterations of the feature.

Lessons learned

Use a smaller feature as a playground when experimenting with a new technology.

The migration strategy

Once we had decided to move from AngularJS to React, we saw two options for a migration strategy: a complete rewrite of our frontend or a gradual transition. Let me rephrase that: we saw one option – a smooth and focused transition. Nobody wanted to spend months rewriting our whole application, although that would have been a fun argument with our CEO. At Small Improvements we have a strongly customer-centred culture, so we didn’t want to slow down our mission too much. Additionally, rewriting everything in a technology nobody had experience with would have been a high risk.

Each week all software developers at Small Improvements meet for a developer exchange meeting. That’s the place where we share learnings, discuss ideas, and decide on larger undertakings. In this case we discussed and adopted the migration strategy that a sub-team of developers had developed and presented.

The basic idea

A frontend application is built like a tree, since HTML documents imply a structure of nested HTML elements. Modern web applications are structured in nested components. A simplified mock of an application displaying a list of comments may look like this:

Mock of an application displaying a list of comments

The corresponding component tree looks like this:
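Sketched in simplified text form (based on the components described below):

Application
  Navigation
    NavItem
    NavItem
  CommentList
    Comment
      Text
    Comment
      Text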

We looked at how complex it would be to replace and rewrite this tree.


The main Application component is hard: it is usually wired up with complex logic like routing. The same goes for the Navigation component. The routing is tightly coupled with the main components and, in the case of AngularJS, a central piece of the framework. A NavItem is easier: it displays a link with text and has some trivial logic like “am I active?”. The content part of our app consists of a subtree displaying a list of comments. The CommentList is trickier, since it is hooked into the data layer and may contain state, like which item is selected. The Comment, again, is one of the easiest parts of that tree, basically rendering a comment and handling user interaction. The Text component, for instance, is simply responsible for rendering text. That component is the easiest to rewrite in another technology.

Our conclusion was that the further down the component tree you go, the easier it gets to replace components. With that in mind, we defined guidelines and looked at requirements for that migration strategy.

Guidelines

How to tackle new features?

We wanted a full buy in, so we defined our first guideline:

  1. Every new feature will be built in React & Redux.

How to tackle existing code?

  1. If possible, migrate leaf-first up the component tree until you hit the routing module.
  2. If you touch old code/components, estimate how much it would cost to rewrite it: if less than 30 minutes, rewrite it; otherwise get a second opinion.

How to migrate common UI components?

The basic building blocks of an application are generic, reusable UI components, like Dropdowns, Buttons, Forms etc. Those are necessary to build new components with React.

  1. Rewrite generic UI components when you need them, and let other devs know that they now exist. Use that chance to improve the design/UX.

Requirements

  • Component-based architecture
  • Angular directives structured as container/presenter components (read more here)
  • Separation of concerns into view/logic/service/communication layers, and injectable actions to encapsulate side effects like HTTP calls

Fortunately our frontend design already fulfilled the requirements. If you want more information on how to design and structure your application watch our talk How to design large AngularJS applications that scale from AngularConnect or Refactoring To Components by Tero Parviainen.

Building bridges

We found that it was easiest to start by replacing the leaves of our application component tree. The missing piece was a bridge between the “old” world and the “new” world – meaning AngularJS and React, in our case. How can we use React to render the Text component and get its data from an AngularJS component?

Rendering React within AngularJS

A React component is, well, just another UI component. It gets data and actions via props and is rendered to the DOM. It is responsible for internal state and handles user interaction. So a simple concept for our bridge could be an AngularJS component working as a thin layer whose responsibility is to pass data on to the React component.

Let’s aim to answer our first uncertainty: Can we use an AngularJS component to render a React component?

This is our AngularJS comment component:

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    template: '{{ $ctrl.comment.text }}',
    controller: function() {
    }
});

Our React version of a comment looks like this:

const Comment = (props) => {
    return (
        <span>{ props.comment.text }</span>
    );
};
export default Comment;

The React component is rendered to the DOM by calling:

ReactDOM.render(<Comment />, element);

Let’s try to call this within an AngularJS component:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        ReactDOM.render(<Comment />, $element[0]);
    }
});

It works! This is the simple yet powerful starting point from which we can now build our AngularJS–React bridge. The elegant part is that we don’t need to mess around with DOM node IDs or use the DOM API to query the element we want to render React into: we can directly pass the reference to the AngularJS element. You might have noticed a little detail – at the moment we’re only rendering the React component when this component is initialized. In a dynamic app we want dynamic components, so we want to trigger the rendering whenever the component changes. To achieve this we can use the lifecycle method $onChanges.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

const render = (element) => {
    ReactDOM.render(
        <Comment />,
        element
    );
}

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0]);
    }
});

Now whenever our AngularJS component receives changes we’re redrawing the React component.

With this working we can tackle the next question: how can we pass data down to our React component?

Passing data from AngularJS to React

In React we use props as the interface for passing data to a component. An AngularJS component receives inputs via bindings, so we get the comment data from an outside component and pass it down to our React component. The full working bridge looks like this:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
}

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
    }
});

Fixing the possible memory leak

As described here, React will not automatically clean up the component, which can lead to a memory leak. We can use the $onDestroy() lifecycle hook of our AngularJS component to unmount the React component.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
}

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
        $ctrl.$onDestroy = () => ReactDOM.unmountComponentAtNode($element[0]);
    }
});

Voila! We’ve successfully passed data from AngularJS to a React component.

Completing the bridge from AngularJS to React

We’ve now found a way to wrap a React component with an AngularJS layer, so we can hook it up to the rest of our application.

This is a great starting point and a good proof of concept. Our current bridge is an interesting evolution of this first spark. In the next posts we will go into more technical detail, covering topics such as what we do when an AngularJS component gets destroyed, and more.

To be continued…

A sneak peek into the next chapter, where we’ll have a closer look at:

  • Using AngularJS services in React
  • Improving the AngularJS-React bridge to work with Hot Reloading and avoid unnecessary re-renderings
  • Rendering AngularJS components in React

Stay tuned! 😉

Thanks for reading! If you have any questions or feedback, don’t be shy – reach out to @sfroestl. If you liked the post, please share!

About the author


Sebastian Fröstl

Team Lead. Software Engineer. Trainer. Coach. Speaker. Devoted to Personal Development. Organizer of @angular_berlin.
@sfroestl
sebastianfroestl.de


Redesigning the Small Improvements emails

During Ship It Week, I took the opportunity to redesign our emails. The goal was to deliver a more modern and fluid layout in hopes of strengthening trust and creating a more pleasant user experience among our customers.

Before and After

Before and after images of Small Improvements emails

Design

According to research1, aesthetics play a big role in how people interact with things. And while the old email template was usable and performed its task well, it was outdated and not as attractive as the current state of the app itself.

“Attractive things make people feel good, which in turn makes them think more creatively”

Emotional Design by Don Norman

There are many factors that affect how a person feels when interacting with an email from Small Improvements. The key is to simplify it by making it easier for people to understand what the email is about. And since emotions change the way our mind operates – the happier we are, the better we can provide valuable feedback!

We want our users to feel excited when they receive an email feedback request, or whenever feedback has been made available to them. In the end, it’s not just about how a part of the tool looks – it’s also a way to connect individuals to special events that may happen during their time in a company.

Different mockups of invitation email

Technical Details

Automatic inline styling

Emails are best structured in tables, and styles work best when inlined. Inline styles can be a pain to maintain, so I looked for a way to make it easier to update these templates in the future.

The great thing about working in the tech industry is that solutions to some problems are just a few clicks away, because you can almost be certain that people have run into the same problem already. We used a little library called gulp-inline-css that does exactly what its name suggests.

Before inliner:

<table class="table-reset">
  <tr>
    <td align="left" class="logo-container padding-copy">
      <!-- header -->
     </td>
   </tr>
   <tr>
     <td align="left" class="article-container padding-copy">
       <!-- content -->
      </td>
   </tr>
</table>

 

After inliner:

<table class="table-reset" style="border: none; border-spacing: 0; padding: 0; width: 100%;">
  <tr>
    <td align="left" class="logo-container padding-copy" style="color: #353535; font-family: 'Avenir Next', 'AvenirNext', Helvetica, Arial, sans-serif; padding-bottom: 20px;">
      <!-- header -->        
    </td>
  </tr>
  <tr>
     <td align="left" class="article-container padding-copy" style="color: #353535; font-family: 'Avenir Next', 'AvenirNext', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; padding: 20px 0 0 0; text-align: left;">
       <!-- content -->
     </td>
  </tr>
</table>

All templates get a .responsive file extension so that the script knows which files to transform. It then outputs them with the correct filename that the accompanying Java file needs in order to render correctly. Now everybody can write a CSS file, as they are accustomed to, and the script will automatically inline those styles!
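The build step itself is small. Here is a rough sketch of what such a gulp task can look like; the file patterns, the gulp-rename usage and the output folder are assumptions for illustration, not our actual build file.

// gulpfile.js – a minimal sketch; paths and naming conventions are assumed
const gulp = require('gulp');
const inlineCss = require('gulp-inline-css');
const rename = require('gulp-rename');

gulp.task('inline-emails', function () {
  return gulp.src('emails/**/*.responsive.html')   // templates marked as .responsive
    .pipe(inlineCss())                             // inline the linked/embedded CSS
    .pipe(rename(function (path) {
      // drop the .responsive marker so the Java templates pick up the expected name
      path.basename = path.basename.replace('.responsive', '');
    }))
    .pipe(gulp.dest('emails/build'));
});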

Testing

During the entire process, I used a combination of a local mail server and an online email testing platform. The benefit of the local mail server (which we’d already set up) is that I get to see the email come in through my inbox when triggering events in the app, and can immediately see if something went wrong. Testing on the email testing platform, on the other hand, makes sure the templates are rendered correctly in as many email clients as possible.

Testing on different email clients

Next Steps

Changing the look and feel is just the first step to making our emails more pleasurable to interact with. The next step would be getting rid of the long links and replacing them with buttons and then revisiting emails that need a text overhaul!

Conclusion

Coding a template for emails is, without a doubt, very tricky. With hundreds of email clients and devices in use today, it’s no wonder that designing emails can quickly turn into a mess. But it can be done! The key is to make it simple and as straight to the point as possible. By combining that with rigorous testing, emails can be made almost as responsive as any website.

Endnotes

1. Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability, by Masaaki Kurosu and Kaori Kashimura;

Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues by Noam Tractinsky

A Facelift for the Font Family

Today we’re excited to announce a new makeover for the Small Improvements application; a new font family! Please welcome “Avenir Next”!

Avenir Next - our new font

Sample Objective within Small Improvements

This is all part of our mission to create a more enjoyable, engaging and enticing experience for Small Improvements users. The Design Team @ Small Improvements has had a big year: going responsive, updating colours, icons and badges, and now, for the final Christmas treat, we’ve introduced the smart, elegant and distinguished style of Avenir Next.

Watching our Weight

Part of the new font release has also been making better use of font weights (e.g. Regular vs. Bold), giving you a clearer view of what’s important on the page and making it easier to scan for the right things.

Keeping it Uniform

In the past, the default SI font varied across devices. That’s because we used ‘system fonts’ only, so some users had Helvetica, some Arial, and some had whichever default ‘sans-serif’ font their device provided.

Now all users will share the same experience across all devices.

Sample Message within Small Improvements – with new typography

 

 

Looking Back at GOTO 2016

By Peter Crona and Michael Ruhwedel


First of all, it was an amazing conference as always. None of us presented this year, but look for us in the future. Many of us at Small Improvements tend to go to more specific conferences, such as React Europe, DockerCon or JSUnconf. GOTO is more of a generic software engineering conference, focusing on issues such as architecture, security and new trends in the field. It doesn’t go as deep into the topics as the specialized conferences, but it serves well to give an overview and an introduction to interesting topics. Some of the most interesting and most popular topics were, as expected, microservices, data science, security and ethics. Let’s start with microservices.

Microservices are the Future

Something interesting about the future is that it is also always in the present, just initially hiding a bit in the corners. A clear message from Mary Poppendieck was that microservices are the future. Regardless of whether we want it or not, we need to learn it and will eventually use it.

Susanne Kaiser from Just Software talked about their ongoing journey from a monolith to microservices. She warned us against doing too much at once, but concluded that going from a monolith to microservices was worth it in the end. She also stressed the importance of not underestimating the effort required. Later on, Ilya Dmitrichenko walked us through Socks Shop, a demo application that shows what an application built with microservices can look like. He also showed us how a microservice-based application is deployed.

I urge you to read up on microservices if you haven’t. It is truly fascinating how convenient the configuration is nowadays, and if you’ve been around for a while, you will find it interesting to compare with how we did it in the good old days. Have a look at this configuration, for example – lovely, isn’t it? Let’s move on to another topic in which I have a very strong interest, namely data science.

Seeing into the Future

It is truly fascinating how quickly data science has become popular and advanced. One of the first talks I went to was “Applied data science and engineering for local weather forecasts” by Nikhil Podduturi from MeteoGroup. He took us through how they started using machine learning, running everything on their own laptops and then moving into the cloud. He showed us a bit of their architecture, which processes more than a terabyte of data daily. I enjoyed his talk very much and had a chat afterwards, in which he pointed out that, when getting started with data science, it is sensible to start with the basics, learning (or revisiting) the mathematics, and then move on to hot techniques such as deep learning. This will make it easier to develop an intuition for which technique to use when, and how to find the best parameters. He recommended using Python since it has a very mature ecosystem for machine learning.

Robert Kubis from Google tutored us in TensorFlow by working through the Hello World of machine learning, namely classification of handwritten digits. He pushed the success rate of a neural network up to an impressive 98% while touching on the basics of the Python API. This was a very interesting and hands-on talk, showing how to use TensorFlow and giving an introduction to deep learning.

How to find insights without using machine learning was the topic of the talk by Michael Hunger from Neo Technology. He demonstrated how data can be modelled and queried as a graph. His talk focused on how Neo4j was used by journalists to analyze the Panama Papers.

Even your code repository is a data source that can be mined. This concept was presented by Dr. Elmar Juergens. By coloring newly added code and the test coverage of functional tests, he clearly demonstrated that the dev and test departments at one of his clients had a serious communication problem: there was little overlap between what was tested and what was newly implemented.

The last two talks about data science focused a bit more on possibilities, philosophy and ethics. “Deep Stupidity: What Neural Networks Can and Cannot do …” by Prof J. Mark Bishop discussed whether we can build general intelligence or not. “Consequences of an Insightful Algorithm” by Carina C. Zona focused on the importance of thinking through the ethical aspects of developing and using algorithms. We are giving a lot of power to algorithms; they tend to reinforce prejudices and do not necessarily care about what is right, yet they are used to make decisions that affect people’s lives. Let’s now have a look at the security talks.

A Secure Internet

When you learn a new concept, such as microservices, it is important to read up on security: it is easy to make mistakes that introduce vulnerabilities when you are new to a technology. Phil Winder talked about how to make your microservices secure. He was very practical and showed us common mistakes people make, such as running as root in containers and not setting up a sensible network policy. Dr. Jutta Steiner introduced us to blockchain technology. She pointed out how we can use techniques from safety-critical systems development, such as N-version programming, to implement it securely and minimize the risk of bugs. The talk unfortunately did not go into implementation details of blockchain technology itself, but she made it clear that the technology can be used for much more than just a currency such as Bitcoin. Finally, let’s have a look at the ethics-focused talks.

Ethics in Technology

The great thing about GOTO is that it not only has the latest technology topics covered, but also how to better get along with your fellow human beings.

Jamie Dobson encouraged us to think beyond capitalism in his inspiring “Postcapitalism” talk. It’s possible that the power of 3D printing, small and large, can bring capital back and onshore work in developed countries again.

Beginning with a short meditation, Jeffery Hackert built a compelling argument for giving our full presence. With full awareness of ourselves and our workplace come better-informed observations, decisions and implementations. After all, if you’re ever involved in a trolley problem, it would be really unfortunate if you were focused on your cellphone and not the lever.

If you’ve been exhausted by office politics, Kate Gray and Chris Young can help you. Their great talk “How to Win Hearts and Minds” is about how the finesse of real-world politics was used to push a blocked IT project to success.

Talks ranging from microservices to ethics show the great variety on offer at GOTO – the conference really has a lot to offer.

Something for Everyone

Let’s end with some words about the conference itself. GOTO has five different tracks and the mix is very good, covering important and trending topics such as architecture (in particular microservices), security, data science and much more. In addition to this you find plenty of interesting people there to share ideas and pain points with. My only disappointment was that there was not a single talk about functional programming. But hey, you can’t fit everything into one conference.

Using Haskell to Find Unused Spring MVC Code


Not into reading text? Click here for the code.

Like a lot of people at Small Improvements, I’m fascinated by functional programming. After coming back from our company trip in San Francisco I had trouble beating jet lag because I spent the evenings reading about monad transformers – I’m not kidding, it actually kept me awake.

For a while I’ve been thinking about cleaning up our codebase a little, mainly the backend, which is written in Java. I have known for ages that Haskell is really good with abstract syntax trees (ASTs) and was playing with the thought of creating a Haskell tool that would help me with this. However, to not completely violate the “do not reinvent the wheel” rule, I first had a quick look at what’s already out there.

Finding An Existing Tool or Building My Own

Most of the developers at work use IDEA (for editing Java), which has built-in tools for finding unused code and doing all kinds of code analysis. I tried using it to find unused code a couple of times with different settings but didn’t manage to get acceptable results. The number of false positives was way too high for it to be useful, and in addition it was incredibly slow. I also tried FindBugs, without satisfying results.

I’m sure it’s possible to configure some existing software, but rather than spending more time finding a COTS tool I figured I might just code it myself; if it’s specific to our project, it shouldn’t be so hard. I quickly realized regular expressions wouldn’t be enough, or would be very tricky to use and would limit my flexibility. This left me with the choice of writing a custom parser or building a proper AST and working with that.

I have had bad experiences working with ASTs in Java, but Haskell is another story – traversing a tree is a piece of cake. I had a quick look at Hackage and noticed that someone had already written a Java parser in Haskell, so it was settled: I was starting Small Improvements’ first, albeit small, Haskell project. Finally I got to use Haskell at work!

My Solution For Finding Unused Code

It is actually quite simple to find unused Java code. Let’s have a look at my solution. In essence I’m reading all the .java-files in a folder, building an AST using language-java and then traversing the AST to collect information that can later be used to decide if a file is used or not.

The main information I’m looking for is whether any other file imports a file. However, since Java does not require an import statement if the dependency is within the same package, I also look for other things, such as method calls. After this I use the information to actually find unused files.

To find unused files I’m building a graph. Nodes are files and an edge means that a file is used by another file. So the challenge here is to actually add an edge every time a file is used. An obvious thing to do is to add an edge for every import statement.

To improve the result further I add edges for references within a package, e.g. classes or methods used within the package. However, this is not enough, since Spring MVC has a powerful dependency injection system that supports injecting dependencies while only relying on interfaces. You can get all classes of a type (interface) injected, or one specific instance, while still only depending on its interface.

When harvesting the AST I also collected autowired classes and superclasses. Using this I filtered out files that are autowired, either directly or via an interface. The result is not 100% perfect, but with a small blacklist of classes and some other trivial filtering I managed to make it good enough for it to be very useful. Everything I get from the AST is modeled using the following data structure:

data Result = Result { fileName :: String
                     , imports :: [String]
                     , references :: [String]
                     , topLevelAnnotations :: [String]
                     , methodAnnotations :: [String]
                     , implements :: [String]
                     , autowired :: [Autowiring]
                     } deriving (Show)

Have a look at the code and try it on your own Spring MVC project. Feel free to comment here if you need help or have suggestions for improvements. Let’s now compare coding in Haskell with the Java/JavaScript we normally write at Small Improvements.

Reflection of Development With Haskell

I’m a big fan of Haskell and have been for ages. One of the first things I noticed is the wonderful support you get from the compiler. When the compiler blesses your code it is very likely to just work. Once you have established that your code works, that it behaves correctly, then it is really difficult to accidentally change its behavior when refactoring. You might break it, as in making it not compile, but once it compiles again it is very likely to behave like before.

Composition is just beautiful. It strongly promotes breaking your program into trivial pieces and then gluing them together. Types are excellent documentation: the type signature together with the function name often makes it easy to guess exactly what the function does. It’s easy to write relatively clean code in Haskell; I think the purity and the composition of small functions almost automatically make it happen.

Actually, in Haskell it is a bit difficult to write functions that are hundreds of lines of code doing many different things. In Java or JavaScript that is what many people begin doing, and something they only unlearn as they become more skilled. I think it is possible to produce nice code in all languages, but Haskell helps you quite a lot to keep your code nice, not to mention hlint. Haskell does not guarantee that you produce good code, though – let’s look at some of my learnings from this project.

Learnings From This Project

One thing I learned is that type aliases are very useful; you should use them whenever they make your code more readable. Comments are in general not needed if the type signature and function name are good.

Naming your code increases readability – for example, extracting small pieces of code into the where clause of a function, or simply making them top-level functions in the module. Putting too many relatively complex functions in the where clause is a bad idea, because you lose the explicit type signature (you should always specify it for top-level functions), which makes it difficult to understand directly when they can be used and how they can be combined. A small example of a nice usage of the where clause is:

transformToEdges :: Result -> Node
transformToEdges r = (r, fileName r, outgoingEdges)
  where outgoingEdges = references r ++ imports r ++ implements r

Note the increased readability in the top level expression. The where-clause is used to hide the messy details of what outgoing edges are behind a simple name. By using where it is often possible to make the top level expression very easy to read.

Curried functions are just awesome; they make it possible to compose almost any functions. A good way to design them is to think of functions as being configured first and receiving what they operate on as the final argument.

Lazy evaluation is powerful. I still need to practice leveraging it fully, but it is important to be aware of it. For example, in my case I ran into problems when reading all files lazily: my program ended up with too many open file handles. It was easily solved, though, with a small hack to force the complete file to be read immediately:

import Control.Exception (evaluate)

readFileStrict :: FilePath -> IO String
readFileStrict path = do
  file <- readFile path
  _ <- evaluate $ length file  -- force the whole file to be read before returning
  return file

Recursion further promotes clean code (small functions) and is quite easy to work with when you think of it in terms of base-case and induction/normal case. An interesting thing is that a lot of principles and ideas can be transferred to other languages.

Transferable Knowledge

One example of a transferable idea is solving problems through the composition of many small functions; this can be done in JavaScript (e.g. using Lodash-fp or Ramda) quite easily. Composition promotes having many small functions solving simple subproblems, and often results in cleaner code.

It doesn’t end there: Hindley-Milner type signatures might be worth using in JavaScript as well, even if only as documentation. Without them, all the small functions you end up with can be quite difficult to read.

Currying is easy to use in JavaScript (e.g. with Lodash-fp or Ramda). I would go as far as to say that composition is not especially useful without curried functions.
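As a small illustration, composition plus currying in JavaScript can look like this (a sketch using Ramda, with made-up data and property names):

import * as R from 'ramda';

// R.curry turns a normal function into one that can be partially applied
const propOf = R.curry((key, obj) => obj[key]);

// Compose small functions (applied right to left): keep active users, then take their names
const activeUserNames = R.compose(
  R.map(propOf('name')),
  R.filter(user => user.active)
);

activeUserNames([
  { name: 'Ada', active: true },
  { name: 'Bob', active: false }
]); // => ['Ada']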

It is important to be aware of the differences between Haskell and other languages, though. For example, lazy evaluation is a fairly unique feature of Haskell; another is tail call optimization, which means you can use recursion without constantly worrying about your stack blowing up. I think there are a lot of other transferable learnings, but they are a bit deeper and you simply have to code Haskell to learn them. If you don’t want to walk the path via Haskell, you might find Professor Frisby’s Mostly Adequate Guide to Functional Programming useful for JavaScript.

Final Words

I would like to encourage every programmer to experiment with different languages and concepts. It is easy to just use what is immediately required for your daily job. But you miss out on a lot of ideas from other languages and risk getting caught in a small bubble, hindering you from developing as a developer.

At Small Improvements we get to spend around 20% of our time on other things, such as fixing pet peeves and working on side projects (for example this one). In addition to this we have hackathons and ship-it weeks. I would recommend that every company introduce these kinds of events, because I don’t think I’m the only developer who agrees that programming is way more fun when you keep learning new things and growing as a developer.

To be a good developer you need to keep learning and don’t be afraid of not being instantly awesome when picking up something new. Keep exploring the beautiful world of coding!

Onboarding your Team to React + Redux

Eventually the time will come when your team wants to use React + Redux for its frontend stack. We made that commitment some time ago at Small Improvements and have never had to regret it. Since we come from an Angular 1.x frontend application, we needed to decide between React (plus its ecosystem) and Angular 2. Staying with Angular 1.x was not an option for our three teams; we saw too many benefits in solutions like React, e.g. embracing functional programming. In the end we decided to go all the way with React + Redux, since most of our developers had already used it in their side projects.

This article should give other teams, or even companies, some learnings and insights for a smooth onboarding to the React + Redux ecosystem. While some learnings are specific to React + Redux, others are general insights about migrating to another technology.

React before Redux

In the beginning not everyone is familiar with React and its ecosystem. Give people time to understand, to experiment and to exchange thoughts. Introduce React without a state management library. Make use of the React lifecycle methods and internal state management (setState) to teach React itself. After a while you will want to introduce a state management library. It’s easy to get by without an external library in smaller side projects, but it isn’t when you are contributing to a larger code base. In general, don’t introduce a state management library before you know the problem it solves for you.

Introducing Redux

In Angular, state management got messy for us. Sharing state between components, watching state changes of components and services, storing server data – it soon became chaos. We knew the flaws of state management in Angular by the time the Flux pattern evolved. Perhaps that’s why so many developers at Small Improvements eventually got hooked on React + Redux.

Don’t overengineer Redux

Once you’ve started using Redux as your state management library, don’t overengineer it. In the beginning the Redux ecosystem itself can be overwhelming. Moreover, the way you deal with state management in Redux is different from what a lot of people are used to. Again, as with React, give your team a chance to understand Redux. Not everyone will already be familiar with the functional programming principles behind it.

Even one little library like redux-actions masks one of Redux’s strengths: that it sticks to vanilla JavaScript. With redux-actions, people might never experience plain actions and action creators. The createAction helper camouflages some basic usage of Redux. The same goes for the handleActions helper function: people might never experience a plain Redux reducer, which is itself just a plain JavaScript reducer function.
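For comparison, this is what those plain building blocks look like in vanilla JavaScript (a generic sketch with made-up action names, not our actual code):

// A plain action creator and reducer – just vanilla JavaScript, no helper library
const ADD_COMMENT = 'ADD_COMMENT';

const addComment = (comment) => ({
  type: ADD_COMMENT,
  payload: comment
});

const commentsReducer = (state = [], action) => {
  switch (action.type) {
    case ADD_COMMENT:
      return [...state, action.payload];
    default:
      return state;
  }
};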

We love using these little enhancements in Redux, but we made the mistake of introducing them too early. Nowadays it’s easy to npm install a package. Speak with your team about new packages; don’t take away their opportunity to learn the basics themselves.

Understand it, before you use it

Learn React before using Redux. In react-redux you use the connect functionality to literally connect the Redux store to your React components. The Provider component at the root level makes sure that the store is passed as context to the underlying components. Thus you can retrieve the store state in mapStateToProps and pass dispatchable actions on the store to your components in mapDispatchToProps.

But what is connect doing there? It’s too easy to simply call it magic. By learning React before using Redux, you can make sure that everyone on your team understands the concept of higher-order components (HOCs). After that, everyone has the chance to reproduce the underlying mechanics of connect in react-redux. Eventually everyone is aware of the hidden Redux store in the React context.
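A minimal connect looks like this (a sketch with a hypothetical state shape, component and action creator):

import { connect } from 'react-redux';
import CommentList from './CommentList';   // hypothetical presentational component
import { addComment } from './actions';    // hypothetical action creator

// Read the relevant slice of the Redux store state
const mapStateToProps = (state) => ({
  comments: state.comments
});

// Make dispatchable actions available as props
const mapDispatchToProps = (dispatch) => ({
  onAddComment: (comment) => dispatch(addComment(comment))
});

export default connect(mapStateToProps, mapDispatchToProps)(CommentList);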

Asynchronous Actions

We decided to use redux-saga when we had to introduce asynchronous actions. In our case some people had already used it before and felt that it was a great match. After some time using it, we don’t regret handling our side effects with generators.

But what is the best approach to introducing asynchronous actions? Not everyone is familiar with generators, after all, and they might add yet another level of complexity. Redux-thunk is a great way to begin with asynchronous actions: it allows you to dispatch delayed actions. When the whole team feels comfortable with asynchronous actions, you should decide whether you want to experiment with another solution. The way to deal with asynchronous actions should be a recurring discussion topic until you make the final decision; otherwise you will delay the decision and end up with bigger refactorings of your asynchronous layer.
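For reference, a thunk is nothing more than a function that receives dispatch. A sketch with hypothetical action types and a hypothetical API call:

import { fetchCommentsFromApi } from './api';   // hypothetical API call returning a promise

// Instead of a plain action object, we dispatch a function
const fetchComments = () => (dispatch) => {
  dispatch({ type: 'FETCH_COMMENTS_START' });

  return fetchCommentsFromApi()
    .then(comments => dispatch({ type: 'FETCH_COMMENTS_SUCCESS', payload: comments }))
    .catch(error => dispatch({ type: 'FETCH_COMMENTS_ERROR', error }));
};

// Usage (with the redux-thunk middleware applied): store.dispatch(fetchComments());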

Normalize your data?

We don’t normalize our data, even though we were aware of the recommendation to keep a flat state in Redux. Keeping a deeply nested state was a conscious decision. Since we already have a large Angular 1.x application, we are used to most of the data structures; normalizing would widen the gap between the two worlds, because we would have to get used to two different data structures. When we introduce new data structures, we keep the state flat from the beginning. We are not sure yet whether it was a bad decision. Still, we feel comfortable keeping our deeply nested state immutable by using ES6 spread operators. Moreover, reducer tests with deep-freeze help to ensure immutability.
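Such a test can be as small as the following sketch (assuming a Jest-style test runner and a hypothetical comments reducer):

import deepFreeze from 'deep-freeze';

const commentsReducer = (state = [], action) =>
  action.type === 'ADD_COMMENT' ? [...state, action.payload] : state;

it('adds a comment without mutating the previous state', () => {
  const stateBefore = [{ text: 'first' }];
  deepFreeze(stateBefore);   // recursively freezes stateBefore, so the reducer cannot mutate it

  const stateAfter = commentsReducer(stateBefore, {
    type: 'ADD_COMMENT',
    payload: { text: 'second' }
  });

  expect(stateAfter).toEqual([{ text: 'first' }, { text: 'second' }]);
});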

Feature Folders

In the beginning we had a technical folder separation, which everyone knows from React + Redux tutorials: folders for reducers, actions, components, constants etc. Very early on we noticed that this approach would never scale with independent teams, so we decided to move to feature folders. Now we have packages with clear boundaries. Take, for instance, a Table component package:

--Table/
----index.js
----components/
------Table/
--------index.js
--------container.js
--------presenter.js
--------style.less
------Cell/
--------index.js
--------presenter.js
--------spec.js
------Row/
--------index.js
--------presenter.js
--------spec.js
----ducks/
------index.js
------filter
--------index.js
--------spec.js
------sort
--------index.js
--------spec.js
------select
--------index.js
--------spec.js

One index file gives an entry point to each package. The ducks index file still exposes all necessary action creators and reducers. When another package wants to dispatch a Table action, it has to import it from “Table” and not from “Table/ducks”. The package has clear boundaries.
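For illustration, those entry points can be plain re-export files (hypothetical contents, assuming the structure above):

// Table/index.js – hypothetical entry point exposing the public API of the package
export { default as Table } from './components/Table';
export * from './ducks';

// Table/ducks/index.js – combines the smaller ducks into one public surface
export * from './filter';
export * from './sort';
export * from './select';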

Ducks Everywhere

“I saw you are using ducks?” Yes. We decided to use them when we introduced feature folders. The advantage is having everything in one place. But once you introduce ducks, you should decide on best practices to keep the duck files tidy. Standardize your naming for reducer and action creator functions to distinguish them.

Moreover, we noticed that ducks didn’t scale very well for us: the lines of code grew very fast. That’s why we decided to split the ducks’ responsibilities into smaller domains, as you can see in the ducks folder for the Table in the example above.

What about boundaries to legacy frameworks?

It’s silly that we already call Angular 1.x legacy, but that’s JavaScript today. Still, we had to figure out how to connect both worlds.

ReactDOM.render() is all you need to have a React component tree in Angular. Moreover, you can simply use the react-redux Provider component to pass the imported Redux singleton store as context to the component tree. You can dispatch actions on the store and read the state from everywhere in your non-React world, since you only need to import the store.
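Rendered from an Angular controller, that boils down to something like this sketch, reusing the bridge pattern from the migration post (the component, store module and props are placeholders):

import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import store from './store';                      // the imported Redux singleton store
import CommentSection from './CommentSection';    // placeholder root of a React component tree

// Called from an AngularJS controller, e.g. in $onChanges; $element is the injected element reference
const renderReactTree = ($element, props) => {
  ReactDOM.render(
    <Provider store={store}>
      <CommentSection {...props} />
    </Provider>,
    $element[0]
  );
};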

The other way around, we use a helper to render Angular components in React. Once you have a large code base with complex non-React components, you can’t easily rewrite them all at once. This approach gives us a stable migration path from Angular 1.x to React: we can still reuse Angular components, and once we refactor a component from Angular to React, we can easily swap it out in one place.

What about a synced cache to the legacy framework?

In the beginning we experimented with Relay to facilitate caching of our backend data. We even attempted to make Relay independent of React so we could use it in Angular as well. But very soon Relay felt like a foreign object to us. We stopped the Relay + GraphQL experiment and stuck with our RESTful solution.

Still, we had to figure out how to cache the server-side data in our single-page React + Angular application. Since we already used our own store architecture in Angular, we synced those stores to the Redux store. Everything we implement in the future uses the Redux store, but our old Angular pages still get the cached data from our store architecture.

Moreover, we introduced Ladda to cache requests to our API. It’s an in-house solution by one of our developers, which will be properly open sourced. Ladda introduces a JavaScript data fetching layer with caching and no dependencies. You can easily make requests from Angular or React.

Hack & Tell

You’ve read a lot here about giving people time to understand the ecosystem properly. Your whole team is in the same boat when introducing something novel; everyone is trying to achieve a scalable and maintainable code base in the new ecosystem. At Small Improvements we have weekly Hack & Tells to exchange our recent findings. We share learnings to get a mutual understanding of how to do things in React + Redux. In general, those Hack & Tells aren’t necessarily tied to one technology.

Knowledge you could exchange in a weekly Hack & Tell:

  • best practices
  • patterns
  • decisions like naming, folder structure etc.
  • reusable components / feature packages
  • new npm modules which solve a real problem in your code base
  • recent pull requests

Perhaps once a week isn’t even enough to exchange knowledge in a whole new ecosystem. Our code base is scaling well, even though we feel that we could refactor all the time. We don’t regret the step to migrate from Angular to React and its ecosystem.