I worked as a developer at the Financial Times for a few months earlier in the year. I found many positive aspects to the way they're set up for front end engineering, and would like to delve into a few.
All the developers share the same guiding principles towards front end development. This is rarer than it sounds and a huge win - little time is wasted having the same irreconcilable discussions, and instead everyone can push forward with feature development in a mood of confidence and lower stress.
There are still discussions and disagreements at the FT of course, but the major tenets are in place. So what are they?
Progressive enhancement

An aspect of progressive enhancement that is often overlooked - it's an extremely agile way to develop. You can ship a basic working feature - eg a form post / page reload - to your users quickly. Then iterate - eg enhance it into an ajax form post.
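That iteration step can be tiny. A minimal sketch (the form id and update logic here are hypothetical, not FT code): without JavaScript the form performs a normal POST and full page reload, and the enhancement script, when it runs, upgrades the same form to submit via fetch.

```javascript
// Progressive enhancement sketch. The basic feature - a plain form POST -
// works with no JavaScript at all; this script only adds the nicer behaviour.
function enhanceForm(form) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // take over from the native POST
    const response = await fetch(form.action, {
      method: "POST",
      body: new FormData(form),
    });
    const html = await response.text();
    form.outerHTML = html; // eg swap the returned fragment in, no reload
  });
}

// Enhance only when a browser environment (and the form) is present;
// otherwise the basic POST behaviour remains untouched.
if (typeof document !== "undefined") {
  const form = document.querySelector("#signup-form");
  if (form) enhanceForm(form);
}
```

If the script fails to load or throws, users still have a working feature - which is exactly why this style of development is so low-risk to ship early.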
Accessibility

Building accessibly was not only the right thing to do, but a smart business decision - client contracts worth millions of pounds hinged on the site being fully accessible.
Page load performance
Make the critical rendering path as short and quick as possible and defer everything else.
Again, this sounds like a no-brainer, but you wouldn't believe the rationales sometimes given for neglecting page performance. I was once told by an engineering lead that it didn't matter if a page actually loaded slowly, as there were well-known 'UX techniques' that would make the user feel that the page was fast (he was referring to spinners!)
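One common way to keep the critical path short - sketched here with hypothetical module names, not the FT's actual setup - is to dynamically import everything non-essential only once the browser is idle:

```javascript
// Defer non-critical work so it never competes with first render.
// Falls back to a short setTimeout where requestIdleCallback isn't available.
function deferUntilIdle(task) {
  if (typeof requestIdleCallback === "function") {
    requestIdleCallback(task);
  } else {
    setTimeout(task, 1);
  }
}

// Critical markup and styles render first; these load afterwards.
// (Module names are illustrative.)
if (typeof document !== "undefined") {
  deferUntilIdle(() => import("./analytics.js"));
  deferUntilIdle(() => import("./comments-widget.js"));
}
```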
Shared UI component library
The FT's set of UI components is called Origami.
Of course UI component libraries are common - companies either build their own or adopt / adapt existing ones like Bootstrap. But a few things at the FT make their library particularly effective:
Dedicated dev resource
A handful of developers maintain and expand the component set and tooling (plus PRs are readily accepted from feature teams). This is obviously a serious investment but the paybacks include the increased speed at which feature teams can develop, the consistency of branding, and high quality, eg accessibility.
I noticed it could be challenging to develop components outside of the context in which they'll be used. From a technical POV you don't know what the edge cases are - eg how much space will this component be able to occupy? How much content will it have to contain? Those parameters can be conveyed by a designer, but it's more effective (and, I'd have thought, more motivating) to see how new components work in a real user flow.
Many companies won't justify dedicated resource, but giving feature team developers the remit and breathing space to centralise as much shared UI as possible could be an easier sell.
The library is the source of truth
Another thing that stood out at the FT - the designers that I worked with were fully on board with the library in code as the source of truth. I've seen elsewhere that designers might be only slightly aware of the library, or not at all: they don't reference known components or patterns in their designs, or do so only loosely.
The source of truth, I think, has to be the living, breathing implementations of the components, not a Sketch file.
Share by default
It's so easy to make style modifications in consuming apps and not push those back up to the central copy of the component, but the cost is a proliferation of app-specific styles, less code reuse, and a lack of consistency. Even the FT isn't immune to this.
Something that I often see missing from UI libraries is 'micro-layout' utilities. eg flexbox helpers to horizontally align / justify multiple components, or dedicated classes for spacing. You'll sometimes hear 'this is a component library, layout utilities don't have a place here'. But that means the layout styles will be written and rewritten across multiple apps. It's a missed opportunity.
Use live services where possible during local development
In the past few years I've experienced a couple of approaches to doing front end development in a microservices setup.
- By default, run all dependent services locally.
- By default, run only the service you're working on locally - use production services for all other dependencies.
Obviously there will be scenarios where one of these is far more suitable than the other. If, in the same ticket, you're going to work on a slice through a UI app plus another microservice - say a thin proxy API through to one or more other APIs - and you're building the request path from one to the other, then option 1 is what you need.
However, the vast majority of tickets that I see on team boards require work on either a UI app or an API, but not both at the same time. For this reason the default should be option 2, with the option to switch to option 1 when required.
Your reaction to this observation might well be 'meh' at this point, but I do believe it has big implications for the default tooling adopted by a dev team and therefore the simplicity of a developer's day-to-day workflow.
For example, if the favoured workflow in your team is to run all dependent microservices locally by default, you'll probably want tooling to pull changes to those repos, spin them up together, possibly use something like Docker to sync and run any DB technologies. Rather a lot going on there for something that, remember, isn't the most common scenario.
For option 2, on the other hand, you simply use a router microservice that runs locally during development: for every request it determines whether there's a locally-running service to handle it and, if so, routes the request there; otherwise it routes to the live copy of the service. That's simpler and quicker to build, run and maintain, with fewer moving parts to go wrong during the developer's day-to-day workflow.
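The routing decision at the heart of that router can be sketched in a few lines (the service names, ports and live domain below are hypothetical, not the FT's):

```javascript
// Option-2 routing: prefer a locally-running service when the developer has
// registered one, and fall through to the live deployment otherwise.
const LIVE_BASE = "https://api.example.com";

function resolveTarget(serviceName, localServices) {
  const local = localServices[serviceName];
  return local !== undefined ? local : `${LIVE_BASE}/${serviceName}`;
}

// The developer is working on "search" locally; everything else hits live.
const running = { search: "http://localhost:3001" };
resolveTarget("search", running);   // -> "http://localhost:3001"
resolveTarget("articles", running); // -> "https://api.example.com/articles"
```

The real router would proxy the full HTTP request to whichever base URL this returns, but the core of the tool really is this small.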
The FT has gone with option 2 and development is certainly simpler for it.
The flipside of this is that the engineer has to be online, with a half-decent connection, to get much of anything done. For teams that are committed to enabling offline and remote work, that could be a dealbreaker.
Local development to live
The FT approach is 'straight to live' - once you've pushed code and merged your PR, it will be in production in around 10 minutes, potentially exposed to hundreds of thousands of users. Sounds terrifying? Two protective devices are used to mitigate risks:
Feature flags (aka toggles)
Feature flags are another technique commonly used outside the FT, but I haven't seen them implemented as well, or relied on as heavily, elsewhere.
All new feature code must be wrapped in a flag. A dashboard allows access to all flags and lets the developer override them either for local development, or to test a new feature in live.
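The shape of that arrangement - global defaults plus per-developer overrides - might look something like this (flag names and storage are illustrative; the FT's real implementation sits behind its dashboard):

```javascript
// Minimal feature-flag lookup. Overrides win over the globally-configured
// defaults, so a developer can turn a flag on for local development or to
// test a new feature in live without changing it for everyone.
const flagDefaults = { newArticlePage: false, stickyHeader: true };

function isFlagOn(name, overrides = {}) {
  if (name in overrides) return overrides[name];
  return flagDefaults[name] === true; // unknown flags default to off
}

// All new feature code sits behind a check like this:
if (isFlagOn("newArticlePage", { newArticlePage: true })) {
  // render the new article page ...
}
```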
The maintenance burden, confusion and friction of deploying through separate environments is all blown away.
A perfect solution? Not completely - I've seen areas of code and config that aren't within the reach of flags, in other words the only way to test them is by releasing to live.
Gradual rollouts

In place for certain key microservices, eg the router - a live release initially receives only a small proportion of traffic. When it is proved to be stable via metrics / logging, it is scaled up to all traffic.
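The core of a percentage-based rollout like this is a stable bucketing decision - sketched below with an illustrative hash function, not the FT's actual mechanism. Each user lands in a bucket 0-99, and the new release serves only buckets below the current rollout percentage, which is raised as metrics show the release holding up.

```javascript
// Map a user/request id to a stable bucket in [0, 99].
function bucketFor(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple stable string hash
  }
  return hash % 100;
}

// Serve the new release only to buckets below the rollout percentage.
// Because the hash is stable, the same users keep seeing the new release
// as the percentage is scaled up.
function inRollout(userId, percentage) {
  return bucketFor(userId) < percentage;
}

// At 10% rollout, roughly one in ten users is routed to the new release.
inRollout("user-1234", 10);
```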
No setup is perfect, but the FT did seem to get a lot right - the consistency of approach, the focus on fast, painless deployment, and the effort put into shared code and tooling all made for an environment where, as a developer, you can move quickly and with confidence.