Reflections on CSSconf EU 2017 (Berlin)


Recently three of our developers attended CSSconf EU 2017 in Berlin. The talks were inspiring, and once again they made clear what a mature language CSS has become. The steady addition of new features continues to amaze, and the enthusiasm of the community is infectious. The conference itself was well organized (the food was awesome!) and we appreciate how much care the organizers took to create a safe and diverse environment for everyone.

In this blog post we reflect on our learnings from the conference and share our experiences on developing efficient CSS at scale.

There are a lot of guidelines on how to write modular JavaScript and tons of books about how to make Java code efficient. But what does good CSS code actually look like? The answer is probably not that different from other programming languages. The first step towards modular, reliable and reusable CSS code is to treat CSS as a regular programming language. At CSSconf, Ivana Mc Connell raised the question: “What makes a good CSS developer?” She pointed out that CSS still isn’t included as a programming language in the Stack Overflow survey and that in many companies there is even a hierarchy between CSS developers and “real” developers.

“We can always change the hierarchies. CSS is real development – let’s make that a given.”
(Ivana Mc Connell)

There are still developers and managers who think that CSS is just about adding fancy colors and setting some margin here and there. However, CSS has become a powerful tool and new features are continuously added, not to mention the various preprocessors that have become a quasi-standard over the last few years. Things like grids, animations and filters are first-class citizens by now and already widely supported by browsers.

To showcase the feature richness of CSS, Una Kravets gave a live coding session in which she built a simple yet fun and interactive browser game using nothing but regular CSS. Mathias Bynens did something similar at CSSconf EU 2014, where he presented a mock of a shooter game consisting of only a single line of HTML. The point here is not that CSS should replace JavaScript. On the contrary – while CSS and JavaScript are and always will be different things, it’s interesting to see the borders blurring and how both languages influence each other.

Make it scale, but keep it consistent

At Small Improvements we work on a large single page application. In three feature teams we maintain roughly 40k lines of LESS code. Two of our biggest ongoing challenges are to make our styling consistent and our CSS code modular and reusable.

Maintaining a style guide and establishing a design culture

Especially when multiple teams work on the same application, there is a certain risk that each team comes up with a slightly different style of implementation. There are numerous examples of this, and conducting an interface inventory (as suggested by Brad Frost) can yield surprising results. Achieving consistent styling is even harder when the frontend implementation is not technically homogeneous. Even though we implement all new frontend features at Small Improvements in React, we still have a lot of Angular code and even some old Wicket pages lingering around. The user doesn’t care about these details, so the question is: how do we keep track of all the various patterns we use across the app and provide a seamless design language?

In her talk “Scaffolding CSS at scale”, Sareh Heidari shared an example of how to discover and extract visual patterns on the BBC News website. We can confirm that a similar approach has worked well for us. We recently set out to build a new style guide for our app that allows us to stay aware of all the different patterns we use. It not only helps us compose new features out of existing components; the key for us is not the style guide itself but the process around it: keeping a close eye on everything that’s newly built and coming together frequently to talk about how to integrate these additions into the bigger picture. We perceive the style guide as a prompt for discussion; you could even say it’s an artless byproduct of our design process.

Project setup and implementation

For us it works best to structure our code base in a domain-driven way. We follow this approach in our entire app and can fully recommend it. For the frontend we decided to use CSS Modules (in the form of LESS files) that we put right next to our React components. That way the component itself always comes with its own (inward) styling. There are various approaches in the React community for this kind of project layout. (It has even become popular to go further and use some form of inline styling; see Mark Dalgleish’s talk for an overview.) CSS Modules worked well for us since we used LESS previously, which allowed for a convenient and gradual migration path.
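
To make the setup more concrete, here is a minimal sketch of what such a colocated component can look like. The file names, class names and bundler setup are assumptions for illustration – it presumes css-loader is configured for CSS Modules so the imported LESS file is scoped locally:

// Comment/index.js – hypothetical component colocated with its LESS module
import React from 'react';
import styles from './Comment.less'; // e.g. contains `.comment { color: #353535; }`

const Comment = ({ comment }) => (
    // `styles.comment` resolves to a generated, collision-free class name
    <div className={styles.comment}>{comment.text}</div>
);

export default Comment;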

Glenn Maddern – who heroically stepped in last-minute for Max Stoiber – updated us about the most recent changes in the Styled Components project. But no matter whether you prefer CSS modules or Styled Components, it is crucial to understand the motivation behind these libraries in order to build large scale applications: Glenn Maddern’s talk at ColdFront16 gives a good insight into this way of thinking and why it’s beneficial.

The one place where we jealously glance over to Styled Components is its ability to parametrize CSS code so easily. We are therefore looking forward to CSS variables gaining better browser support, because that would be the native solution to the problem. David Khourshid demonstrated the handover between JavaScript and CSS in his talk “Getting Reactive with CSS”. With this approach, the hassle at the JS–CSS intersection largely disappears.
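
To give an idea of that handover, here is a minimal sketch (not taken from the talk – the property names and the selector are made up): JavaScript only publishes values as custom properties, and the stylesheet decides what to do with them.

// Assuming a rule like `.cursor { transform: translate(var(--mouse-x), var(--mouse-y)); }`
// exists in the stylesheet, JavaScript only has to publish the values:
document.addEventListener('mousemove', (event) => {
    const root = document.documentElement;
    root.style.setProperty('--mouse-x', `${event.clientX}px`);
    root.style.setProperty('--mouse-y', `${event.clientY}px`);
});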

Takeaway

We don’t have a catchy conclusion to draw. There is certainly no right or wrong in what approach works best or which library helps most. For us it was nice to see a lot of our current assumptions confirmed, and if we were asked to write down three of them, then it would be these:

  1. CSS is a fully-fledged programming language – for sure! Stay up to date with all new features and take advantage of them once they are commonly supported.
  2. Keep styling and markup closely together. This promotes reusability best. Leverage component interfaces in order to create clear and predictable boundaries.
  3. Talk, talk, and talk frequently about visual patterns to ensure consistency in the long term. Some sort of process or documentation can help here.

The development team here at Small Improvements has done its fair share of conferences in the past (thanks to the individual learning budgets we are offered). It’s awesome that our design team has now grown to the point where we’re also attending designer/developer conferences such as CSSconf EU. Bring on next year!

Ladda – A New Library for Client-Side API Caching


In an ideal world, caching wouldn’t be something we have to care about. However, with more and more mobile users on slow and limited data plans, as well as more advanced applications, we can’t escape reality. We need caching. As a response to this we have invested quite some time in Ladda – a dependency-free client-side library for caching, invalidating caches and handling different representations of the same data. Ladda is implemented in JavaScript (ES2015), framework agnostic (it works equally well with React, Vue, Angular or vanilla JavaScript) and designed to make it easy to implement a caching solution without increasing the complexity of your application code.

Read on to learn how Ladda can be useful for you, how it helps you implement a sophisticated caching solution, and for a comparison of Ladda with other popular solutions for client-side API caching.

Scenarios Where Ladda Can Help You

There’s no such thing as a free lunch. Caching speeds up your application, but it comes at a cost: it increases the complexity of your application code. The following examples will show you how Ladda can help you to reduce this cost in some common scenarios.

Just Caching

The most straightforward usage of a cache is simply to cache a value and, if it has been cached previously, return it directly from the cache. Say you make an API call “getUsers”. A hand-rolled caching solution would look something like:

// Hand-rolled cache around an API call. `key`, `ttl` and the cache helpers
// (inCache, hasExpired, putInCache, fromCache) are assumed to exist elsewhere.
const getUsers = () => {
    if (!inCache(key) || hasExpired(ttl, key)) {
        const res = api.user.getUsers();
        putInCache(key, res);
    }
    return fromCache(key);
};

When using Ladda your application code would look like:

const getUsers = api.user.getUsers;

Note how we separate what we want to do (getting the users) from the caching logic, which is just an optimization. This is a pretty simple example, which might not be sufficient motivation to add a library to your application, but things quickly get complicated once we start to manipulate our data.

Cache Invalidation

Stale data is your new enemy as soon as you introduce caching. Consider the example of users again. You are getting all users, but then you spot a typo in one user’s surname and correct it. Now you are left with two choices: either you update the cache used by “getUsers”, or you remove the cache and refetch the data the next time someone calls “getUsers”. Let’s consider the latter option first. It could look like:

const updateUser = (modifiedUser) => {
    api.user.updateUser(modifiedUser);
    clearGetUserCache();
}

With Ladda it would look like:

const updateUser = api.user.updateUser;

Ladda clears the cache for you; you just need to tell Ladda what to invalidate in a configuration that lives outside of your application code. By default, however, Ladda picks the harder option: it updates the cache for you. This comes with the benefit that after updating your user, you can call “getUsers” and get all the users directly from the cache – including your updated user, of course.

Ladda has more to offer, but I’ll leave that for you to read about. You’ve heard a lot of promises and seen some simple code. But as you might have suspected, you still need to specify a few things somewhere, such as the TTL (time to live), what to invalidate, and which function updates a user and which one retrieves users.

How Does It Work

The first claim – that Ladda allows you to add caching without making your application code more complex – is achieved by separating your application code from your caching logic. Ladda lets you express, in a concise and declarative way, what TTL you want for a specific entity, such as user, and what you want to invalidate when something happens. Going back to the simple updateUser example, where you simply invalidate the “getUsers” cache, it would look like:

{
    user: {
        api: userApi,
        invalidates: ['user']
    }
}

Of course, you don’t even have to specify that ‘user’ invalidates its own cache, since Ladda will update the cache in place for you, so you can simply write:

{
    user: {
        api: userApi
    }
}

And rely on Ladda to always ensure that “getUsers” gives you an up-to-date list of users. Now the only thing left is to create “userApi”. But this is something you probably already have: just a bunch of functions communicating with your user endpoints. Let’s pretend that you have a file:

export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

Ladda only requires you to specify the CRUD-operations:

getUsers.operation = 'READ';
export function getUsers() { return doHttpGetRequestAndReturnPromise(); }

updateUser.operation = 'UPDATE';
export function updateUser(user) { return doHttpPutRequestAndReturn200(user); }

That is everything: just add metadata directly to your functions and put your entity in a configuration object. There are, of course, plenty more options, such as the TTL already mentioned. You will find them all in the documentation. You’ll also find complete examples in the repo to make it easy to get started. Don’t forget to have a look at Search Hacker News with Ladda and this contact list (which uses all the supported CRUD operations) for examples that you can play around with.
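
As a rough sketch of how the pieces fit together, the wiring might look like the following. The entry point and option names (such as a per-entity ttl in seconds) reflect our reading of the Ladda documentation, so please double-check them there:

// api/index.js – hypothetical wiring, assuming the `ladda-cache` package
// exposes a `build` function that turns the config into a cached API.
import { build } from 'ladda-cache';
import * as userApi from './user';

const config = {
    user: {
        api: userApi,
        ttl: 300,             // assumed: entries are considered fresh for 300 seconds
        invalidates: ['user'] // optional here, since Ladda updates the user cache in place
    }
};

// Elsewhere: api.user.getUsers(), api.user.updateUser(user), ...
export const api = build(config);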

Before we move on, let’s just have a quick look at a final example and what HTTP-requests it will result in:

api.user.getUsers() 
  // GET-request was sent
  .then(() => api.user.updateSurname(user)) 
  // PUT-request was sent
  .then(api.user.getUsers); 
  // No request was made! Directly from the cache.

A good caching solution tries to maximize the number of cache hits, and Ladda is no exception.

Fig 1. Sequence diagram showing the result of calling getUsers, then updating a user, and then calling getUsers again. Note that no HTTP request is made for the final getUsers call.

Ladda is not the first attempt to make caching simple. I believe it can be the best choice in some cases, but it is important to look into all the available options. Let’s do a brief comparison between Ladda and some other popular caching solutions.

Comparison With Other Solutions

First off, keep in mind that I’m not an expert in the other technologies, but I’ve tried to make the comparisons in an objective manner. One very popular solution is Relay. The big difference between Ladda and Relay is that Relay is built for GraphQL. Hence, Ladda and Relay are not really two alternatives to compare: if you have a GraphQL backend, Relay is without doubt the better choice, but otherwise it isn’t a choice at all.

Another solution is redux-query. One key difference is already revealed in the name: it is specifically designed for use with Redux. Ladda can be used with any framework, or without one. But let’s assume we are using React and Redux to make a viable comparison. The most prominent difference is that redux-query influences how you write your application code. This means that it requires a greater buy-in than Ladda, but it also means that it can handle more things for you. If you want a more complete solution and don’t mind the buy-in, redux-query might be the best choice. But if you have your own solutions in place and just want to speed up your application with caching, then Ladda is probably a better choice. You can potentially add or remove Ladda without changing a single line of application code.

But perhaps more importantly, it’s about which code style you prefer and which library offers the features you need. Ladda lets you stick with simple function calls that are “magically” very quick at times (when you hit the cache). To get users you simply call “getUsers()”. Other solutions tend to use a more declarative approach, where you fetch your data by creating an object describing what data you want.

There are a bunch of other caching libraries in JavaScript, for example js-cache (https://github.com/dirkbonhomme/js-cache). These are more generic than Ladda. They don’t support automatic invalidation logic, views of entities, or many other pieces of functionality that are often required in a sophisticated caching solution.

Conclusion

We hope that you will find Ladda useful and keep it in mind next time you need client-side caching for your API layer. Ladda is dependency-free, only 14KB, has high test coverage and allows you to specify your caching logic in a declarative and very simple way. Give it a shot and let us know what you think!

Our journey migrating 100k lines of code from AngularJS to React (Chapter 1)


Intro

This is the first post in a series about the story and technical learnings from starting our migration from AngularJS to React. Check out the GitHub repo for examples and the full code.

Our frontend story so far

At Small Improvements we aim to make meaningful feedback available for every employee in every organisation. That also means providing the best possible experience for our users. We were therefore early to adopt AngularJS over Wicket and started rewriting our core features in AngularJS back in 2012. We saw great potential in having a dynamic single page application.

In 2014, when the Angular team announced Angular 2, we already had a very large application and had gained a whole lot of knowledge using Angular. We were worried and excited at the same time. We faced a lot of challenges scaling Angular 1 and implementing best practices while moving fast.

In 2015 we sent almost all our developers to AngularConnect in London, expecting the Angular 2 beta release. Two of our developers gave a talk to share our approach to, and learnings from, writing a huge AngularJS application. We came back with the impression that Angular 2 was still very unstable and that no clear migration strategy was available.

The Small Improvements Team in London at AngularConnect

Testing React plus Relay and GraphQL in the field

Our CEO has a strong engineering background, so he loves playing around with new technology, hackathons and ship-it weeks. That’s why he was happy to give React (with Relay and GraphQL) a chance. As a company, our approach to evaluating a new technology is to have one of our dev teams make an initial tech spike. In this case Team Green decided to experiment with the new technology in the field by coding a prototype for a new feature in the new tech stack. We found React extremely promising, and it solved a lot of the challenges we had with Angular 1. Relay was cool, but was lacking some core features at that time, such as support for invalidation or lazy loading of expensive fields.

Also, adopting Relay would have meant a complete buy-in across our whole stack, frontend and API layer, due to the dependency on GraphQL.

So to sum it up, the outcome was: React: OMG!, Relay: Cool, but…

Our Reasons to go with React

  • Easy to write: it’s closer to vanilla JavaScript and components come without any boilerplate configuration
  • Great for atomic components, in contrast to AngularJS where every scope is “expensive”
  • Easier to understand: React is just the view library and has a slim API
  • Designed with performance in mind: the concept of a virtual DOM
  • Attractive for recruiting: new technology attracts passionate developers who are keen to learn
  • New challenges for the dev team – learn and grow!

When we then used it for a large feature – a new Activity Stream – it multiplied our investment, due to an unclear focus: we kept shifting between trying out the new technology and building the first iterations of the feature.

Lessons learned

Use a smaller feature as playground when experimenting with a new technology.

The migration strategy

Once we had decided to move from AngularJS to React, we saw two options for a migration strategy: a complete rewrite of our frontend, or a slow transition. Let me rephrase that: we saw one option – a smooth and focused transition. Nobody wanted to spend months rewriting our whole application, although that would have been a fun argument with our CEO. At Small Improvements we have a strong customer-centred culture, so we didn’t want to slow down our mission too much. Rewriting everything in a technology that nobody has experience with is also highly risky.

Each week all software developers at Small Improvements meet for a developer exchange meeting. That’s the place where we share learnings and discuss ideas, but also decide on larger undertakings. In this case we discussed and agreed on the migration strategy that a sub-team of developers had developed and presented.

The basic idea

A frontend application is built like a tree, since HTML documents imply a structure of nested HTML elements. Modern web applications are structured in nested components. A simplified mock of an application displaying a list of comments may look like this:

[Mock of an application displaying a list of comments]

The corresponding component tree looks like this:

[Diagram of the corresponding component tree]

We looked at how complex it would be to replace and rewrite this tree.


The main Application component is hard: it is usually wired up with complex logic like routing. The same goes for the Navigation component. The routing is tightly coupled with the main components and, in the case of AngularJS, a central piece of the framework. A NavItem is easier: it displays a link with some text and has some trivial logic like “am I active?”. The content part of our app consists of a subtree displaying a list of comments. The CommentList is trickier, since it is hooked up to the data layer and may contain state such as which item is selected. Again, the Comment is the easiest part of that tree, basically rendering a comment and handling user interaction. The Text component, for instance, is simply responsible for rendering text. That component is the easiest to rewrite in another technology.
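
For illustration only, the nesting described above might be sketched roughly like this (the props and markup are made up, this is not our actual code):

import React from 'react';

const Text = ({ text }) => <span>{text}</span>;
const Comment = ({ comment }) => <li><Text text={comment.text} /></li>;
const CommentList = ({ comments }) => (
    // in the real app this component talks to the data layer and holds selection state
    <ul>{comments.map((c) => <Comment key={c.id} comment={c} />)}</ul>
);
const NavItem = ({ label }) => <a href="/comments">{label}</a>;
const Navigation = () => <nav><NavItem label="Comments" /></nav>;

const App = ({ comments }) => (
    <div>
        <Navigation />
        <CommentList comments={comments} />
    </div>
);

export default App;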

Our conclusion was that the further down the component tree you go, the easier it gets to replace components. With that in mind, we defined guidelines and looked at requirements for that migration strategy.

Guidelines

How to tackle new features?

We wanted a full buy in, so we defined our first guideline:

  1. Every new feature will be built in React & Redux.

How to tackle existing code?

  1. Where possible, migrate leaf-first, working up the component tree until you hit the routing module.
  2. If you touch old code/components, estimate how much it would cost to rewrite them; if it’s less than 30 minutes, rewrite, else get a second opinion.

How to migrate common UI components?

The basic building blocks of an application are generic, reusable UI components, like Dropdowns, Buttons, Forms etc. Those are necessary to build new components with React.

  1. Re-write generic UI components when you need them, and let other devs know that they now exist. Use that chance to improve the design/UX.

Requirements

  • A component-based architecture
  • Angular directives structured as container/presenter components (read more here)
  • Separation of concerns: view, logic, service and communication layers, plus injectable actions to encapsulate side effects like HTTP calls

Fortunately our frontend design already fulfilled these requirements. If you want more information on how to design and structure your application, watch our talk How to design large AngularJS applications that scale from AngularConnect or Refactoring To Components by Tero Parviainen.
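
As a rough illustration of the container/presenter split (the module, component and service names here are invented, not our production code): the container knows about services and side effects, while the presenter only receives data and callbacks via bindings.

// Container: fetches data via a service and passes plain data down.
angular.module('ngReactExample').component('commentListContainer', {
    template: '<comment-list comments="$ctrl.comments" on-select="$ctrl.select(comment)"></comment-list>',
    controller: function(CommentService) {
        const $ctrl = this;
        $ctrl.$onInit = () => CommentService.fetchAll().then((comments) => { $ctrl.comments = comments; });
        $ctrl.select = (comment) => CommentService.select(comment);
    }
});

// Presenter: a pure view, no services injected.
angular.module('ngReactExample').component('commentList', {
    bindings: { comments: '<', onSelect: '&' },
    template: `
        <ul>
            <li ng-repeat="comment in $ctrl.comments" ng-click="$ctrl.onSelect({ comment: comment })">
                {{ comment.text }}
            </li>
        </ul>`
});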

Building bridges

We found that it was easiest to start by replacing the leaves of our application’s component tree. The missing piece was a bridge between the “old” world and the “new” world – AngularJS and React, in our case. How can we use React to render the Text component and have it get its data from an AngularJS component?

Rendering React within AngularJS

A React component is, well, just another UI component. It gets data and actions via props and is rendered to the DOM. It is responsible for its internal state and handles user interaction. So a simple version of our bridge could be an AngularJS component working as a thin layer whose responsibility is to pass data on to the React component.

Let’s aim to answer our first uncertainty: Can we use an AngularJS component to render a React component?

This is our AngularJS comment component:

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    template: '{{ $ctrl.comment.text }}',
    controller: function() {
    }
});

Our React version of a comment looks like this:

import React from 'react';

const Comment = (props) => {
    // wrapping element assumed here; JSX must return a single element
    return (
        <span>{ props.comment.text }</span>
    );
};

export default Comment;

The React component is rendered to the DOM by calling:

ReactDOM.render(<Comment />, element);

Let’s try to call this within an AngularJS component:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        // render the React component into this AngularJS component's DOM node
        ReactDOM.render(<Comment />, $element[0]);
    }
});

It works! This is the simple yet powerful starting point from which we can now build our AngularJS–React bridge. The elegant part is that we don’t need to mess around with DOM node IDs or use the DOM API to query the element we want to render React into – we can directly pass the reference to the AngularJS element. You might have noticed a little detail: at the moment we’re only rendering the React component when the AngularJS component is initialized. In a dynamic app we want dynamic components, so we want to trigger the rendering whenever the component changes. To achieve this we can use the lifecycle method $onChanges.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './Comment';

const render = (element) => {
    ReactDOM.render(
        <Comment />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        // re-render the React component whenever the bindings change
        $ctrl.$onChanges = () => render($element[0]);
    }
});

Now whenever our AngularJS component receives changes we’re redrawing the React component.

With this working we can tackle the next question: how can we pass data down to our React component?

Passing data from AngularJS to React

In React we use props as the interface to pass data to a component. An AngularJS component receives inputs via bindings, so we get the comment data from an outside component and pass it down to our React component. The full working bridge looks like this:

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        // pass the AngularJS binding down to React as a prop
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
    }
});

Fixing the possible memory leak

As described here, React will not automatically clean up the components, which can lead to a memory leak. We can use the lifecycle hook $onDestroy() of our AngularJS component to unmount the React component.

import React from 'react';
import ReactDOM from 'react-dom';
import Comment from './presenter';

const render = (element, props) => {
    ReactDOM.render(
        <Comment { ...props } />,
        element
    );
};

module.exports = angular.module('ngReactExample.comment', [
]).component('comment', {
    bindings: {
        comment: '<',
    },
    controller: function($element) {
        const $ctrl = this;
        $ctrl.$onChanges = () => render($element[0], { comment: $ctrl.comment });
        // unmount the React component when the AngularJS component is destroyed
        $ctrl.$onDestroy = () => ReactDOM.unmountComponentAtNode($element[0]);
    }
});

Voila! We’ve successfully passed data from AngularJS to a React component.

Completing the bridge from AngularJS to React

We’ve now found a way to wrap a React component with an AngularJS layer, so we can hook it up to the rest of our application.

This is a great starting point and a good proof of concept. Our current bridge is an interesting evolution of this first spark. In the next posts we will go into more technical detail, including what we do when the AngularJS component gets destroyed, among other topics.

To be continued…

A sneak peek into the next chapter, where we’ll have a closer look at:

  • Using AngularJS services in React
  • Improving the AngularJS-React bridge to work with Hot Reloading and avoid unnecessary re-renderings
  • Rendering AngularJS components in React

Stay tuned! 😉

Thanks for reading! If you have any questions or feedback, don’t be shy and reach out to @sfroestl. If you liked the post, please share!

About the author


Sebastian Fröstl

Team Lead. Software Engineer. Trainer. Coach. Speaker. Devoted to Personal Development. Organizer of @angular_berlin.
@sfroestl
sebastianfroestl.de


Redesigning the Small Improvements emails

During Ship It Week, I took the opportunity to redesign our emails. The goal was to deliver a more modern and fluid layout, in hopes of strengthening trust and creating a more pleasant user experience for our customers.

Before and After

Before and after images of Small Improvements emails

Design

According to research¹, aesthetics play a big role in how people interact with things. And while the old email template was usable and performed its task well, it was outdated and not as attractive as the current state of the app itself.

“Attractive things make people feel good, which in turn makes them think more creatively”

Emotional Design by Don Norman

There are many factors that affect how a person feels when interacting with an email from Small Improvements. The key is to simplify it by making it easier for people to understand what the email is about. And since emotions change the way our mind operates – the happier we are, the better we can provide valuable feedback!

We want our users to feel excited when they receive a feedback request by email, or whenever feedback is made available to them. In the end, it’s not just about how one part of the tool looks – it’s also a way to connect individuals to special events that may happen during their time at a company.

Different mockups of invitation email

Technical Details

Automatic inline styling

Emails are best structured in tables and styles work best when inlined. Inline styles can be a pain to maintain so I looked for a way to make it easier to update these templates in the future.

The great thing about working in the tech industry is that solutions to some problems are just a few clicks away, because you can be almost certain that other people have run into the same problem already. We used a little library called gulp-inline-css that does exactly what it’s supposed to.

Before inliner:

<table class="table-reset">
  <tr>
    <td align="left" class="logo-container padding-copy">
      <!-- header -->
    </td>
  </tr>
  <tr>
    <td align="left" class="article-container padding-copy">
      <!-- content -->
    </td>
  </tr>
</table>


After inliner:

<table class="table-reset" style="border: none; border-spacing: 0; padding: 0; width: 100%;">
  <tr>
    <td align="left" class="logo-container padding-copy" style="color: #353535; font-family: 'Avenir Next', 'AvenirNext', Helvetica, Arial, sans-serif; padding-bottom: 20px;">
      <!-- header -->
    </td>
  </tr>
  <tr>
    <td align="left" class="article-container padding-copy" style="color: #353535; font-family: 'Avenir Next', 'AvenirNext', Helvetica, Arial, sans-serif; font-size: 16px; line-height: 25px; padding: 20px 0 0 0; text-align: left;">
      <!-- content -->
    </td>
  </tr>
</table>

All templates get a .responsive file extension so that the script knows which files to transform. It then outputs them with the correct filename that the accompanying Java file needs in order to render correctly. Now everybody can write a CSS file, as they are accustomed to, and the script will automatically inline those styles!
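
A minimal gulp task along these lines might look like the following sketch. The glob pattern and renaming rule are assumptions about our setup, and gulp-rename is only pulled in for illustration:

// gulpfile.js – hypothetical sketch of the inlining step
const gulp = require('gulp');
const inlineCss = require('gulp-inline-css');
const rename = require('gulp-rename');

gulp.task('inline-emails', () =>
    gulp.src('emails/*.responsive.html')   // assumed location of the .responsive templates
        .pipe(inlineCss())                 // moves the CSS into style attributes
        .pipe(rename((path) => {
            // assumed naming convention: strip ".responsive" so the Java templates find the file
            path.basename = path.basename.replace('.responsive', '');
        }))
        .pipe(gulp.dest('build/emails'))
);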

Testing

During the entire process, I used a combination of a local mail server and an online email testing platform. The benefit of the local mail server (which we had already set up) is that I get to see the email arrive in my inbox when triggering events in the app, and can immediately see if something went wrong. Testing on the email testing platform, on the other hand, makes sure the templates render correctly in as many email clients as possible.

Testing on different email clients

Next Steps

Changing the look and feel is just the first step to making our emails more pleasurable to interact with. The next step would be getting rid of the long links and replacing them with buttons and then revisiting emails that need a text overhaul!

Conclusion

Coding a template for emails is, without a doubt, very tricky. With hundreds of email clients and devices available within our grasp today, it’s no wonder that designing emails can quickly turn into a mess. But it can be done! The key is to make it simple and as straight to the point as possible. By combining it with rigorous testing, emails can be made almost as responsive as any website.

Endnotes

¹ “Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability” by Masaaki Kurosu and Kaori Kashimura;

“Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues” by Noam Tractinsky

A Facelift for the Font Family

Today we’re excited to announce a new makeover for the Small Improvements application: a new font family! Please welcome “Avenir Next”!

Avenir Next - our new font

Sample Objective within Small Improvements

This is all part of our mission to create a more enjoyable, engaging and enticing experience for Small Improvements users. The Design Team @ Small Improvements has had a big year: going responsive, updating colours, icons and badges, and now, for the final Christmas treat, we’ve introduced the smart, the elegant, the distinguished style of Avenir Next.

Watching our Weight

Part of the new font release has also been making better use of font weights (e.g. Regular vs. Bold) – giving you a clearer view of what’s important on the page and making it easier to scan the page to find the right things.

Keeping it Uniform

In the past, the default SI font varied across devices. That’s because we used ‘system fonts’ only, so some users saw Helvetica, some Arial, and some got whichever default ‘sans-serif’ font their device had.

Now all users will share the same experience across all devices.

Sample Message within Small Improvements – with new typography


Looking Back at GOTO 2016

By Peter Crona and Michael Ruhwedel


First of all, it was an amazing conference, as always. None of us presented this year, but look out for us in the future. Many of us at Small Improvements tend to go to more specific conferences, such as React Europe, DockerCon or JSUnconf. GOTO is more of a general software engineering conference, focusing on issues such as architecture, security and new trends in the field. It doesn’t go as deep into the topics as the specialized conferences, but it serves well to give an overview of and introduction to interesting topics. Some of the most interesting and most popular topics were, as expected, microservices, data science, security and ethics. Let’s start with microservices.

Microservices are the Future

Something interesting about the future is that it is also always in the present, just initially hiding a bit in the corners. A clear message from Mary Poppendieck was that microservices are the future. Regardless of whether we want it or not, we need to learn it and will eventually use it.

Susanne Kaiser from Just Software talked about their ongoing journey from a monolith to microservices. She warned us against doing too much at once, but concluded that going from a monolith to microservices was worth it in the end. She also stressed the importance of not underestimating the effort required to do so. Later on, Ilya Dmitrichenko walked us through Socks Shop, a demo application that shows what an application built with microservices can look like, and showed us how a microservice-based application is deployed.

I urge you to read up on microservices if you haven’t. It is truly fascinating how convenient the configuration is nowadays, and if you’ve been around for a while, you will find it interesting to compare with how we did it in the good old days. Have a look at this configuration, for example – lovely, isn’t it? Let’s move on to another topic I have a very strong interest in, namely data science.

Seeing into the Future

It is truly fascinating how quickly data science has become popular and advanced. One of the first talks I went to was “Applied data science and engineering for local weather forecasts” by Nikhil Podduturi from Meteogroup. He took us through how they started using machine learning, running everything on their own laptops, and then moved into the cloud. He showed us a bit of their architecture, which processes more than a terabyte of data daily. I enjoyed his talk very much and had a chat with him afterwards, in which he pointed out that, when getting started with data science, it is sensible to start with the basics – learning or revisiting the mathematics – and then move on to hot techniques such as deep learning. This makes it easier to develop an intuition for which technique to use when, and how to find the best parameters. He recommended using Python since it has a very mature ecosystem for machine learning.

Robert Kubis from Google tutored us in TensorFlow by working through the “Hello World” of machine learning, namely classification of handwritten digits. He pushed the success rate of a neural network up to an impressive 98% while touching on the basics of the Python API. This was a very interesting and hands-on talk, showing how to use TensorFlow and giving an introduction to deep learning.

How to find insights without using machine learning was the topic of the talk by Michael Hunger from Neo Technology. He demonstrated how data can be modelled and queried using a graph. His talk focused on how Neo4j was used by journalists to analyze the Panama Papers.

Even your code repository is a data source that can be mined. This concept was presented by Dr. Elmar Juergens. By coloring new additions of code and the coverage of functional tests, he clearly demonstrated that a dev and a test department at one of his clients had a serious communication problem: there was little overlap between what was tested and what was newly implemented.

The last two talks about data science focused a bit more on possibilities, philosophy and ethics. “Deep Stupidity: What Neural Networks Can and Cannot do …” by Prof J. Mark Bishop discussed whether we can build general intelligence or not. “Consequences of an Insightful Algorithm” by Carina C. Zona focused on the importance of thinking through the ethical aspects of developing and using algorithms. We are giving a lot of power to algorithms, and algorithms tend to reinforce prejudices and do not necessarily care about what is right, yet they are still used to make decisions that affect people’s lives. Let’s now have a look at the security talks.

A Secure Internet

When you learn a new concept, such as microservices, it is important to read up on security; it is easy to make mistakes that introduce vulnerabilities when you are new to a technology. Phil Winder talked about how to make your microservices secure. He was very practical and showed us common mistakes people make, such as running as root in containers and not setting up a sensible network policy. Dr. Jutta Steiner introduced us to blockchain technology. She pointed out how we can use techniques from safety-critical systems development, such as N-version programming, to implement it securely and minimize the risk of bugs. The talk unfortunately didn’t go into the implementation details of blockchain technology itself, but she made it clear that the technology can be used for much more than just a currency such as Bitcoin. Finally, let’s have a look at the ethics-focused talks.

Ethics in Technology

The great thing about GOTO is that it doesn’t just have the latest technology topics covered, but also how to better get along with your fellow human beings.

Jamie Dobson encouraged us to think beyond capitalism in his inspiring “Postcapitalism” talk. It’s possible that the power of 3D printing, small and large, can bring capital and onshore work back to developed countries.

Beginning with a short meditation, Jeffery Hackert built a compelling argument for giving our full presence. With full awareness of ourselves and our workplace come better-informed observations, decisions and implementations. After all, if you’re ever involved in a trolley problem, it would be really unfortunate if you were focused on your cellphone and not the lever.

If you’ve ever been exhausted by office politics, Kate Gray and Chris Young can help you. Their great talk “How to Win Hearts and Minds” is about how the finesse of real-world politics was used to push a blocked IT project to success.

Talks ranging from microservices to ethics show the great variety offered at GOTO – the conference really has a lot to offer.

Something for Everyone

Let’s end with some words about the conference itself. GOTO has five different tracks and the mix is very good, covering important and trending topics such as architecture (in particular microservices), security, data science and much more. In addition to this you find plenty of interesting people there to share ideas and pain points with. My only disappointment was that there was not a single talk about functional programming. But hey, you can’t fit everything into one conference.

Using Haskell to Find Unused Spring MVC Code


Not into reading text? Click here for the code.

Like a lot of people at Small Improvements, I’m fascinated by functional programming. After coming back from our company trip to San Francisco I had trouble beating jet lag because I spent the evenings reading about monad transformers – I’m not kidding, it actually kept me awake.

For a while I’ve been thinking about cleaning up our codebase a little, mainly the backend, which is written in Java. I have known for ages that Haskell is really good with abstract syntax trees (ASTs) and was toying with the idea of creating a Haskell tool that would help me with this. However, to not completely violate the “do not reinvent the wheel” rule, I first had a quick look at what’s already out there.

Finding An Existing Tool or Building My Own

Most of the developers at work use IDEA (for editing Java), which has built-in tools for finding unused code and doing all kinds of code analysis. I tried using it for finding unused code a couple of times with different settings but didn’t manage to get acceptable results. The number of false positives was way too high for it to be useful, and in addition it was incredibly slow. I also tried FindBugs without satisfying results.

I’m sure it’s possible to configure some existing software, but rather than spending more time finding an off-the-shelf tool I figured I might just code it myself. I was thinking that if it’s specific to our project it shouldn’t be that hard. I quickly realized regular expressions wouldn’t be enough, or would be very tricky to use and limit my flexibility. This left me with the choice of writing a custom parser or building a proper AST and working with that.

I have had bad experiences working with ASTs in Java, but Haskell is another story: traversing a tree is a piece of cake. I had a quick look at Hackage and noticed that someone had already written a Java parser in Haskell, so it was settled – I was starting Small Improvements’ first, albeit small, Haskell project. Finally I got to use Haskell at work!

My Solution For Finding Unused Code

It is actually quite simple to find unused Java code. Let’s have a look at my solution. In essence I’m reading all the .java files in a folder, building an AST using language-java and then traversing the AST to collect information that can later be used to decide whether a file is used or not.

The main information I’m looking for is whether any other file imports a file. However, since Java does not require an import statement if the dependency is within the same package, I also look for other things such as method calls. After this I use the collected information to actually find unused files.

To find unused files I’m building a graph. Nodes are files and an edge means that a file is used by another file. So the challenge here is to actually add an edge every time a file is used. An obvious thing to do is to add an edge for every import statement.

To improve the result further I’m adding edges for references within a package, e.g. classes or methods used within the same package. However, this is not enough, since Spring MVC has a powerful dependency injection system. It supports injecting dependencies while only relying on interfaces: you can get all classes of a type (interface) injected, or one specific instance, while still depending only on its interface.

When harvesting the AST I also collected autowired classes and superclasses. Using this I filtered out files that are autowired, either directly or via an interface. The result is not 100% perfect, but with a small blacklist of classes and some other trivial filtering I managed to make it good enough for it to be very useful. Everything I get from the AST is modeled using the following data structure:

data Result = Result { fileName :: String
                     , imports :: [String]
                     , references :: [String]
                     , topLevelAnnotations :: [String]
                     , methodAnnotations :: [String]
                     , implements :: [String]
                     , autowired :: [Autowiring]
                     } deriving (Show)

Have a look at the code and try it on your own Spring MVC project. Feel free to comment here if you need help or have suggestions of improvements. Let’s now compare coding Haskell with Java / JavaScript that we normally do at Small Improvements.

Reflection of Development With Haskell

I’m a big fan of Haskell and have been for ages. One of the first things I noticed is the wonderful support you get from the compiler. When the compiler blesses your code it is very likely to just work. Once you have established that your code works, that it behaves correctly, then it is really difficult to accidentally change its behavior when refactoring. You might break it, as in making it not compile, but once it compiles again it is very likely to behave like before.

Composition is just beautiful. It strongly promotes breaking your program into trivial pieces and then gluing them together. Types are excellent documentation: the type signature together with the function name often makes it easy to guess exactly what the function does. It’s easy to write relatively clean code in Haskell; I think the purity and composition of small functions almost automatically make it happen.

Actually, in Haskell it is a bit difficult to write functions that are hundreds of lines long and do many different things. In Java or JavaScript that is what many people begin by doing, and something they only unlearn as they become more skilled. I think it is possible to produce nice code in any language, but Haskell helps you quite a lot to keep your code nice, not to mention hlint. Haskell does not guarantee that you produce good code, though; let’s look at some of my learnings from this project.

Learnings From This Project

One thing I learned is that type aliases are very useful; you should use them whenever they make your code more readable. Comments are generally not needed if the type signature and function name are good.

Naming things increases readability – for example, extracting small pieces of code into the where clause of a function, or simply making them top-level functions in the module. Putting too many relatively complex functions in the where clause is a bad idea, because you lose the explicit type signature (you should always specify it for top-level functions), which makes it difficult to understand directly when they can be used and how they can be combined. A small example of nice usage of the where clause is:

transformToEdges :: Result -> Node
transformToEdges r = (r, fileName r, outgoingEdges)
  where outgoingEdges = references r ++ imports r ++ implements r

Note the increased readability in the top level expression. The where-clause is used to hide the messy details of what outgoing edges are behind a simple name. By using where it is often possible to make the top level expression very easy to read.

Curried functions are just awesome: they make it possible to compose almost any function. A good way to design them is to think of functions as being configured, receiving what they operate on as the final argument.

Lazy evaluation is powerful. I still need to practice how to leverage it fully, but it is important to be aware of it. In my case, for example, I ran into a problem when reading all files lazily: my program ended up with too many open file handles. It was easily solved, though, by forcing each file to be read completely right away:

import Control.Exception (evaluate)

readFileStrict :: FilePath -> IO String
readFileStrict path = do
  file <- readFile path
  -- forcing the length forces the whole file to be read, so the handle can be closed
  _ <- evaluate $ length file
  return file

Recursion further promotes clean code (small functions) and is quite easy to work with when you think of it in terms of base-case and induction/normal case. An interesting thing is that a lot of principles and ideas can be transferred to other languages.

Transferable Knowledge

One example of a transferable idea is solving problems through composition of many small functions; this can be done in JavaScript (e.g. using Lodash-fp or Ramda) quite easily. Composition promotes having many small functions solving simple subproblems, and often results in cleaner code.

It doesn’t end here: Hindley–Milner type signatures might be worth using in JavaScript as well, even if they aren’t used for more than documentation. Without them, all the small functions you end up with can be quite difficult to read.

Currying is easy to use in JavaScript too (e.g. with Lodash-fp or Ramda). I would go as far as to say that composition is not especially useful without curried functions.
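
A small sketch of what this looks like with Ramda (the data and property names are made up for illustration):

// Ramda functions are curried, so partially applying them yields
// small building blocks that compose cleanly.
const R = require('ramda');

const users = [
    { name: 'Ada', active: true },
    { name: 'Grace', active: false },
];

// Each step is a curried function waiting for its final argument (the data).
const activeUserNames = R.compose(
    R.map(R.prop('name')),      // [user] -> [name]
    R.filter(R.prop('active'))  // [user] -> [active users]
);

console.log(activeUserNames(users)); // => ['Ada']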

It is important to be aware of the differences between Haskell and other languages, though. For example, lazy evaluation is a fairly unique feature of Haskell; another is tail call optimization, which means you can use recursion without constantly worrying about your stack blowing up. I think there are a lot of other transferable learnings, but they are a bit deeper and you simply have to code Haskell to learn them. If you don’t want to walk the path via Haskell, for JavaScript you might find Professor Frisby’s Mostly Adequate Guide to Functional Programming useful.

Final Words

I would like to encourage every programmer to experiment with different languages and concepts. It is easy to just use what is immediately required for your daily job, but then you miss out on a lot of ideas from other languages and risk getting caught in a small bubble, hindering your growth as a developer.

At Small Improvements we get to spend around 20% of our time on other things, such as fixing pet peeves and working on side projects (for example this one). In addition to this we have hackathons and ship-it weeks. I would recommend every company to introduce these kinds of events, because I don’t think I’m the only developer who would agree that programming is way more fun when you keep learning new things and growing as a developer.

To be a good developer you need to keep learning, and you shouldn’t be afraid of not being instantly awesome when picking up something new. Keep exploring the beautiful world of coding!