Scaling Up from Seven to Seventy

In June 2016 I was working for a venture-capital-backed media startup called The Tab. As CTO, one of my core focus areas was enabling a team of seven engineers to reinvent student journalism across the UK and US. Having come to the end of a venture funding round, I had a decision to make: stay and help with the next phase of growth, or go elsewhere and focus on my own growth.

It was one of the most difficult professional decisions I have made, due to how deeply I had sunk into the problem space and the amazing people I was working with. However, my desire was to learn how to run multiple high-performing teams concurrently at scale, and I had confidence that the team at The Tab was strong enough to do it without me.

I started at Photobox on 1st August 2016, and since then I have been iterating on people, process and principles, with the goal of enabling over 70 people embedded in more than 10 teams to have maximum impact within a private equity investment cycle. By no means has this journey been straightforward, nor is it finished, but it feels like a good time to reflect on my thoughts so far.

Start with principles

Starting with a set of principles which explain the how and why is a good way to begin a process of alignment around change.

High trust environments enable more effective change

Conversations around change are best had in a safe space with high trust. My experience is that this safety most often already exists within established team structures. You get much better feedback and can tailor responses to the team's context, which keeps the conversation relevant and keeps people engaged.

Start small and iterate

Start small and iterate applies to teams, org structures and operating models as much as it does to releasing software. Big bang changes tend to need the most support and have the potential to cause the most negative impact. We have now moved to a model where we change one team at a time and carry the learning into the next. This has allowed us to refine the change process as we go, embed feedback and build out good content for wider sharing.

Capacity for change is not uniform

Change affects different people and cultures in different ways. When change is constant, people can become stuck in the trough of uncertainty, which massively reduces their ability to have impact. Understanding that change affects me differently from others helped me take a much more empathetic approach.

Write things down

Writing down how you want to work, then debating it with wider and wider circles, is a great way to iron out the kinks in an operating model. Google Docs' collaboration features helped us work through and collate feedback along the way.

Compromise is necessary for aspirational principles

With significant size, scale and legacy, it will be hard to respect all of your principles all of the time. Knowing what to compromise, where and how, is key to making sure that trust stays high throughout the change process.

Matrix management reduces alignment

Matrix management structures are really hard to scale effectively, especially when managers have different goals from the individuals they are accountable for, and neither matches the team's goals.

Alignment increases impact

To have the most impact build teams of aligned individuals with a shared goal they helped create and believe they can achieve. Then help them to reduce dependencies and remove impediments.

Build collaborative processes

The best structures external to the teams are informal ones that are representative, meet regularly to collaborate on issues and maintain standards. However, you need a leader to drive this for it to continue over the longer term. If you can create collaborative processes where people learn together, regularly talk about issues and are then given time to address them, change becomes a groundswell rather than an exercise in pushing water uphill.

Aligned autonomy takes time

Scaling teams to become high performing through aligned autonomy takes time. You need to let teams test their boundaries so they get comfortable with where the constraints on their autonomy lie. This process is best facilitated through regular, structured communication with stakeholders.

Consistency can have a high cost

The larger the number of teams, the greater the overhead required for consistency. Coming from an engineering background, this was one of the hardest things for me to understand: people scale very differently from code. The right balance between standardisation and customisation is key, and it changes over time, so it should be inspected and adapted as you go.

Structured external communication is key

Teams should be able to define and improve their own processes as long as they communicate in a consistent way with stakeholders. I think about this much like a contract for an API: define the interactions with the outside world first, and as long as those are respected it doesn't matter that the internal workings differ from team to team.
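
To make the analogy concrete, here is a minimal sketch, in plain JavaScript, of what writing such a "contract" down might look like. Every field and value here is illustrative rather than a prescription:

```javascript
// A hypothetical team contract: the interactions the outside world can rely on.
// How the team works internally is deliberately absent.
const teamContract = {
  team: 'checkout',                                  // illustrative team name
  goal: 'Increase checkout conversion by 5% this quarter',
  cadence: {
    demo: 'fortnightly',                             // when stakeholders see progress
    stakeholderUpdate: 'weekly written summary',
  },
  intake: 'requests arrive as tickets, triaged twice a week',
  metrics: ['conversion rate', 'checkout error rate'],
};

// Stakeholders depend only on the contract being honoured, never on the
// team's internal process, so each team stays free to change how it works.
function honoursContract(contract) {
  return ['team', 'goal', 'cadence', 'intake', 'metrics']
    .every(field => field in contract);
}

console.log(honoursContract(teamContract)); // true
```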

Metrics are hard

Finding the right metric for a team to focus on also takes time and iteration. Teams need to be given the appropriate time to go deep into the problem space to figure this out. This often requires engineering effort and a good reliable data platform.

Balancing metrics are key

Commercial goals need to be balanced with quality, performance and security measures that the whole team believe in. This takes time and a level of maturity to get right.

Accountability

If multiple people are accountable then no one is accountable.

Hire smart people

Make sure you hire leaders smarter than yourself, co-create a vision, collaborate on a goal and give regular feedback to enable them to have maximum impact.

Centre of excellence

A centre of excellence is a great way to ensure that areas with deep expertise but limited resources are utilised well and share knowledge effectively. We have moved to a model where they draw up a contract with teams based on a menu of their services, which ensures that expectations are clear and met. This has really helped as we have moved away from having dedicated resources, such as agile coaches and testers, embedded in teams.

Full Stack

Full stack is a mindset that can really enable teams to have more impact, and it is best to both train for it and hire for it. Generalists can also have a negative impact on quality, so it is a hard balance to get right.

Effective communication underpins everything

Communicate, communicate, communicate. The need for communication scales exponentially; I do not.

Autonomy, Mastery & Purpose

Autonomy, mastery and purpose are key to setting teams and individuals up for success. However, getting the balance right with autonomy takes time, teams and individuals take a while to find their purpose, and you need to think about the balance of experience levels in teams with regard to mastery.

The key learning from the last two and a half years is how hard significant change can be at an organisational level. Timing is key, which means approaches need to evolve over time, as there is no single right answer to a lot of questions. Persistence, belief and drive are needed to reach the tipping point where change itself becomes a natural part of the process.

Lessons from 15 months at The Tab, a venture-capital-backed media startup

The Tab Team

The last twelve months have been a very turbulent time in the media industry, with profit forecasts and jobs being slashed across both digital and print. Growing a sustainable business in the modern, diversified world of media is a hard nut to crack. It is even harder when you have to raise funds every 12 months in order to survive. Now that my tenure at The Tab has come to a close, I have reflected on what being involved in this for the last fifteen months has taught me.

Consistent revenue growth is more important than fast audience growth

This is the statement which has changed the most in the last fifteen months. When The Tab raised in 2015 it was to drive audience growth via an American expansion. The revenue following that growth was a secondary concern. Having since been involved in a funding round I can say that this attitude has changed. Clear, consistent and diversified revenues are a huge part of owning your destiny or being able to raise additional funds to grow.

The free growth years are over

As most people have noted the free growth years for media are over. Some say Facebook is capping the amount of traffic that it is sending to publishers. They are definitely focused on optimising the type of content displayed to each of their users. Also the more you are locked into their ecosystem the less attractive you are for investment.

However there are still opportunities for growth

As Facebook rolls out new initiatives there are still windows of opportunity for growth. Ripped video content, for example, has been very effective at growing page likes. However, you usually need to skate close to the line to make this happen, and quickly backfill with proper content once your knuckles get rapped.

Facebook Instant Articles provide a decent revenue stream

Considering all the alternatives, early indications are that Facebook Instant Articles are a decent revenue stream. With programmatic advertising on mobile still 12-18 months away from where it is on desktop, and users demanding a fast experience (as well as Facebook using speed as a News Feed ranking factor), this is good news. With auto-fill rates above 95% and decent CPMs so far for The Tab's audience, this does seem like a good play until other mobile units catch up.

Throwing resources at growth does not build a sustainable business

What is the key driver of your business and revenue? Find that and focus on it relentlessly. It is much easier to build new things rather than fix core issues. Especially when flush with venture capital. Doing more does nothing but move you further away from what you actually need to fix.

Focus is hard when you are unsure of what you are being measured on

What matters most? Uniques, revenue, page views, stories published or cost per story? There are so many things you can be measured on that at times it can be bewildering. The bottom line is that you need a clear, repeatable, scalable and cost-effective growth model, including a plan for profitability. Everything else should fall out of that.

There is still a large appetite for great original content

So much of the media industry has been focused on the most efficient way to repackage content, which means everyone is fighting over the same eyeballs. However, there are still a lot of amazing original stories out there; you just need an efficient way to find them and deliver them to a relevant audience.

Raising money is a full time job

The time and effort that goes into raising funds is a full time job. It can take many months, hundreds of meetings and due diligence to close out a round. It is a non-linear process as well with many people you meet along the way introducing you to more people to talk to.

VCs with aligned portfolios

When selecting a venture capitalist, one with a decent-sized, aligned portfolio can provide access to learning and introductions that others can't. In such a competitive industry these insights can be gold dust, and they are definitely worth considering as you search for investment.

Negative aspects of raising

Not knowing if you are going to have enough money to continue, as your runway shortens and term sheets are close but not signed, can really dampen momentum. It takes very strong belief to keep driving forward in the face of such uncertainty. The volume of feedback you get from pitches can be overwhelming at times, and it can easily become a distraction which really hurts focus and belief.

Positive aspects of raising

Putting your company on the line, distilling everything you do down to a slide deck and presenting it hundreds of times can really sharpen your focus. Some of the feedback you receive throughout this process will be valuable; the trick is to figure out which bits. The pressure that comes with the process either makes you or breaks you, and at times it can do both in a single day.

More money, more problems

One of the biggest losses that comes with fast growth on VC money can be the creativity that comes from constraint. Getting the balance right between hitting short-term targets and addressing long-term issues is hard.

I am very proud of what we achieved with The Tab and I have every confidence in the team going on to achieve great things. These lessons should give me a great foundation for my next challenge.

Competitively actionable analytics at The Tab

Competitively actionable analytics was a core element of the technology and product strategy at The Tab. Below I detail how this approach translated into product, and how that went on to affect user behaviour. The Tab had built a set of microservices to data-mine Facebook, WordPress and Google Analytics, creating a rich dataset in its data warehouse. This was of little value until it was put in front of contributors, driving their learning and retention through competitively actionable analytics.

The Tab Team

The Tab's Team analytics platform was created with the main goal of providing actionable analytics data in a competitive framework for individuals and teams. The product aimed to get authors excited about how their individual efforts contributed to their team's overall success. What you see in red is an author's impact on team performance. Authors are given a clear overview of how each of their recent stories has performed. At the highest level, an overview of the top performers is presented as a leaderboard, broken down by country and then team, author and story, allowing multiple layers of competition and learning across our network.

Author Impact in Team Dashboard

Team, Reporter and Story Leaderboards

Personal

Personal Dashboard

Prior to this project, data access was restricted to a small set of users with a Google Analytics login and a high-level understanding of custom filters. WordPress also had some stats, but only at a team level, and they were fairly well hidden within the dashboard.

Take the data to the people

The initial launch was a web-based front end; the next step was an upgrade of the notification system to send detailed data summaries via email. Distributing data via email increased Team's user numbers by 200%, with close to 100% of our active authors logging in monthly. These emails have maintained a 65% open rate and an over 10% click-through rate after being sent thousands of times.

Weekly Stats Email

Monthly Stats Email

Usage growth after email prompt (1st Feb)

Team Retention

Data everywhere

Analytics are a natural form of gamification, and authors were constantly posting screenshots in Facebook groups or on Snapchat. To further increase visibility and use, stats were embedded on every page of the site for logged-in users. The goal was to show as much data as possible within the interface that was being used daily. A completely open approach to the numbers allowed authors to learn from each other's success; there are no restrictions on who can see what data once someone is logged in.

Homepage Stats (Logged In)

Story Stats (Logged In)

Author Stats (Public)

Real time notifications

Team has been running successfully since November 2015, delivering a clear path to re-engagement and forming a core part of The Tab's retention strategy. Since then, notifications have been extended to deliver real-time reader stats as well as historical page views. These notifications are delivered by an iOS app, The Tab +, as well as Facebook Messenger. Push notifications enabled The Tab + to reach a very similar level of use and retention to Team. The number of interactions with users has increased as the data is broken down into multiple smaller messages, sent as soon as thresholds are reached. This helps authors stay excited about an existing story for longer, and the more they enjoy the process the more likely they are to repeat it.
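
A minimal sketch of that threshold logic, with hypothetical milestone values and a generic send function standing in for the iOS and Messenger delivery channels:

```javascript
// Page view milestones that each trigger their own small message (illustrative values).
const THRESHOLDS = [1000, 5000, 10000, 50000];

// Return the milestones crossed between two successive stats readings.
function milestonesCrossed(previousViews, currentViews) {
  return THRESHOLDS.filter(t => previousViews < t && currentViews >= t);
}

// Breaking the data into many small notifications keeps authors engaged for longer.
async function notifyAuthor(story, previousViews, currentViews, send) {
  for (const milestone of milestonesCrossed(previousViews, currentViews)) {
    await send(story.authorId, `"${story.title}" just passed ${milestone} readers!`);
  }
}
```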

The Tab +

Messenger Push Notification

Tabitha Messenger Interface

The Tab + Consistent Engagement

The Tab + Retention

Actionable analytics for editorial decision making

In a bid to take the data to where editors gather, a Slack bot was built to provide access to detailed Google and Facebook data just by pasting in a URL. Slack channels were created where real-time publish and stats updates allowed editors to make decisions about what to promote across our network. Previously, editors had to log into multiple systems to gather this information and spend a lot of time filtering it. Decisions can now be made and discussed in near real time.
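
As a flavour of how such a bot can be wired up (a sketch only, not The Tab's actual implementation; the internal stats endpoint is hypothetical), a small Node.js service listening to the Slack Events API might look like this:

```javascript
const express = require('express');
const fetch = require('node-fetch');

const app = express();
app.use(express.json());

// Slack's Events API POSTs channel messages to this endpoint.
app.post('/slack/events', async (req, res) => {
  if (req.body.type === 'url_verification') return res.send(req.body.challenge); // Slack handshake
  res.sendStatus(200); // acknowledge immediately, then process

  const event = req.body.event;
  if (!event || event.type !== 'message' || event.bot_id || !event.text) return;

  const match = event.text.match(/https?:\/\/[^\s>|]+/); // a pasted story URL
  if (!match) return;

  // Hypothetical internal service aggregating Google/Facebook stats per URL.
  const lookup = await fetch(`http://stats.internal/lookup?url=${encodeURIComponent(match[0])}`);
  const stats = await lookup.json();

  await fetch('https://slack.com/api/chat.postMessage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.SLACK_TOKEN}`,
    },
    body: JSON.stringify({
      channel: event.channel,
      text: `Views: ${stats.views} | FB interactions: ${stats.facebook} | Tweets: ${stats.tweets}`,
    }),
  });
});

app.listen(3000);
```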

Tabitha Slack Story Stats

Tabitha Slack Channel to Promote Post

Big Screen Dashboards

With authors being served data in a competitively actionable analytics framework, editors within the office were the next to be targeted. Utilising Geckoboard, custom APIs and Google Sheets, The Tab designed big-screen dashboards to drive competition and growth centrally.

UK Editorial Dashboard

US Editorial Dashboard

Global Sharing Dashboard

It was a lot of effort to get the data into the right place and then expose it in creative ways to drive learning and performance, but now that competitively actionable analytics has permeated The Tab's culture there is no going back.

The geeky bit

Millions of rows of data a day are processed using Node.js, Amazon Simple Queue Service, Elastic Beanstalk workers and MySQL running on RDS. The data is cleaned and then transformed into day, month and year increments, as well as aggregated for users and teams. This preprocessing allows very fast recall of any dataset. Amazon API Gateway with a Node.js Lambda-based serverless API layer is used for data retrieval. This handles a lot of the standard API concerns, like security and caching, which kept the focus on data and user experience. The front end was built in React.js, utilising Chart.js for graphing.
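
To illustrate the shape of that pipeline, here is a minimal Node.js sketch of a worker draining stats messages from SQS into a pre-aggregated MySQL table. The queue, table and message fields are all hypothetical, and a real Elastic Beanstalk worker would typically receive these messages as HTTP POSTs from the platform's sqsd daemon rather than polling SQS itself:

```javascript
const AWS = require('aws-sdk');
const mysql = require('mysql2/promise');

const sqs = new AWS.SQS({ region: 'eu-west-1' });
const QUEUE_URL = process.env.STATS_QUEUE_URL; // hypothetical queue

async function pollOnce(db) {
  const { Messages = [] } = await sqs.receiveMessage({
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20, // long polling
  }).promise();

  for (const message of Messages) {
    const stat = JSON.parse(message.Body); // e.g. { storyId, authorId, day, views }

    // Upsert into a daily rollup table so later reads are a single indexed lookup.
    await db.execute(
      `INSERT INTO story_stats_daily (story_id, author_id, day, views)
       VALUES (?, ?, ?, ?)
       ON DUPLICATE KEY UPDATE views = views + VALUES(views)`,
      [stat.storyId, stat.authorId, stat.day, stat.views]
    );

    await sqs.deleteMessage({
      QueueUrl: QUEUE_URL,
      ReceiptHandle: message.ReceiptHandle,
    }).promise();
  }
}

(async () => {
  const db = await mysql.createConnection(process.env.DATABASE_URL);
  while (true) await pollOnce(db);
})();
```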

The team

Big up to Richard Coombes for helping with the backend magic, Matteo Gildone for helping with the frontend magic, Serge Bondarevsky for design and Charlie Gardner-Hill for the dashboards. My focus was on the product management, data collection, data transformation and building the API layer.

Metro10: Going native and growing on Android

Metro10 Play Store

I spent nine months in 2014 designing, building and iterating an Android app called Metro10 for the UK Newspaper Metro. This taught me a lot about the benefits and frustrations of native app development and marketing.

The benefits of app users are huge in terms of their engagement and propensity to return daily. Being able to target users via push notifications is a great way to trigger habitual consumption of your content. These work best when targeted, relevant and contextual; otherwise they are quickly turned off.

However, these advantages can come at a high cost in terms of acquiring new users. We ran extensive internal banner advertising campaigns on metro.co.uk with limited success. Banners are not the greatest advertising medium, especially mobile banners with a call to action. I think we were naive to think that people would want to install an app whilst browsing the web.

You can get much better results from retargeting campaigns based on device and previous visits to your site, especially when retargeting via Facebook app install ads. However effective they are at acquiring customers, without a clear ROI they were a cost we could not bear.

Metro already had multiple apps for newspaper-based consumption, and being yet another app in a constellation hurts when you are the new kid on the block. Visibility in Google Play search also really hurt us, due to Metro being a very common name. The beta nature of our approach also ruled out using our contacts for store promotion.

While developing I released as often as I could, pushing a daily alpha build and using it on different devices from the ones I was developing on. Builds could be quickly rolled out to production every other day once bugs were fixed. The fact that automation meant it only took hours to get these into the hands of users was a great advantage of Android.

I ended up with one phone and one seven-inch tablet for coding, and one phone and one seven-inch tablet for testing releases. I also managed to build a good group of beta testers in our Google+ community for feedback.

Google Analytics is amazing for tracking performance, and especially bugs. Crashes only get sent to the Play Store if people submit them, but they show up in Google Analytics regardless. Given the limits of our testing (I was both the developer and the tester), this feedback was invaluable. The sheer number of devices on offer also made release-and-fix a necessary approach.

I put a lot of effort into tracking all of the actions that people took within the app. This was a really useful dataset for making some big product decisions, and I would recommend the approach for all development. It wasn't a large amount of effort to set up either.

Push notifications are very important, not just to be clicked on but as a visual reminder of your app on the phone. We utilised Parse from Facebook to get this capability set up without much engineering effort. However, these had to be sent manually due to engineering constraints.

The best growth hack we did was to run a competition asking for feedback via a Google Doc. For some reason everyone who left us feedback also left us a positive review, which really helped us avoid the cold-start problem with reviews.


We had a small vocal minority who reached out to us via Play Store reviews. Quickly fixing their issues and responding helped us turn a few reviews around and got us some great product feedback. My biggest amazement is the number of people who never upgraded to a new version even though the one they were on was buggy.

With all of our efforts we only managed to get a few hundred daily users. They were, and probably still are, a very loyal bunch, but not enough to sustain ongoing development. Plus I left for a new challenge, which didn't help momentum.

My gut feeling, however, is that the real issue with apps based around single-brand content consumption is that people's habits have changed. Users want to dip in and out of news from their social feeds and friends' updates. Without a large marketing budget, hard to justify given the lack of significant revenue uplift, getting onto users' home screens is a struggle, and building enough differentiation and content to keep them engaged is a struggle too.

How Docker Containers simplify Microservice management and deployment

My recent personal and professional development efforts have taken a microservices approach. This escalated to having 8 services running 3 different languages across 5 different frameworks. After banging my head against the command line for a few days trying to get these to coexist, I decided to try Docker containers to streamline and simplify the process.

A software container is a lightweight virtualisation technology that abstracts away the complexity of the operating system and simply exposes ports to the host it runs on. You can run containers on most operating systems and platforms, including all major PaaS providers. Keeping the complexity within the container means that host systems can focus on scaling and management. You also get a high level of consistency, allowing you to ship containers across different servers or platforms with ease: essentially you build once, save the image and then pull it onto each of your environments for testing and deployment.

Utilising the microservice-instance-per-container pattern is a great way to manage a set of services, with the following benefits: increased scalability, by changing the number of container instances; a great level of encapsulation, meaning all services can be stopped, started and deployed in the same way; and the ability to limit CPU and memory at a service level. Finally, containers are much faster to work with than fully fledged virtual machines and simpler to ship around to platforms for deployment. Amazon, for example, has built-in support via its Container Service as well as Elastic Beanstalk. I have used Ansible for deployment as it has a really nice Docker wrapper which makes starting, stopping, pushing and pulling images between servers only a couple of lines of code.

One of the things to watch out for is image size, as containers can get quite large. Base images are a way to minimise this, promote reuse and control the underlying approach across multiple repositories without code duplication. Each step within a Docker build is cached, so only the stages that have changed need to be rebuilt and pushed. Be careful about the order in which you run scripts, leaving the stages that change most often till last. Docker Hub is like a GitHub for built images and makes pushing and pulling images require minimal infrastructure and learning. You can pay for private repositories in the same way that GitHub allows you to, or you can set up your own registry if you are that way inclined.
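
A minimal Dockerfile sketch of that layer-ordering advice, assuming a Node.js service (the base image and commands are illustrative):

```dockerfile
FROM node:4                     # or a shared in-house base image

WORKDIR /usr/src/app

# Dependencies change rarely, so install them first: this layer stays cached.
COPY package.json .
RUN npm install

# Application code changes on almost every build, so copy it last; only the
# layers from here down are rebuilt and pushed when the code changes.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```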

Running Docker containers locally on a Mac is pretty straightforward with Boot2Docker, which spins up a local VirtualBox VM with the Docker daemon running on it and allows you to easily test, build, push and query the main Docker repository. Kitematic was also recently acquired by Docker as an alternative for people who are averse to the command line. There is also a large set of officially maintained base images for running Node, Jenkins and WordPress, amongst many others. You do need to understand some patterns around persisting data, as if you remove a Docker container you lose the data within it. Data-only containers are a way around this: a pattern that lets you persist data without binding it too closely to the underlying operating system.

Microservices allow us to choose the right tool for the job, and Docker containers abstract away some of the complexity of this approach. Utilising base images promotes reuse, and the Dockerfiles themselves are checked in with the projects, so anyone who pulls a project can see how it is built, which is a huge bonus. Most downsides, like persistence and the size of containers, have strategies to minimise their impact.

I would be very interested to hear your thoughts and experiences with microservices and containerisation.

Lessons from virtualising local development environments


While working at ANDigital we used multiple languages, frameworks and web servers for both internal projects and external client work. Our goal was to be able to continuously deploy each of the services we were involved with. I came to the conclusion that scripted virtualisation of local development environments put us on the right track to achieving this goal. Below is what I learned getting the process working.

Vagrant wrapped VirtualBox for local virtualisation

Vagrant is a handy wrapper for VirtualBox which allows you to programmatically set up the underlying OS, mapped drives and provisioning scripts. It also configures your local networking so you can reach the machine as localhost. The Vagrantfile can be checked into your source control for ease of distribution as well as built-in change control.
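
A minimal Vagrantfile sketch of this setup, with an illustrative box name, port mapping and playbook path:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Forward the app's port so the service appears on localhost:8080.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Map the project directory into the VM so code edits are picked up live.
  config.vm.synced_folder ".", "/vagrant"

  # Provision with the same Ansible playbook used for the other environments.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```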

Ansible for provisioning

Ansible is a configuration management tool that connects via SSH and programmatically runs scripts for you. You use a simple language to define what each step will do, and it has default helpers for a lot of different tools, such as Docker. The great thing about Ansible is that you can use it both for provisioning servers and for deploying code. This is also checked into a repository for distribution and change control.
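
A minimal playbook sketch showing the task-based style (package, repository and service names are hypothetical):

```yaml
# playbook.yml
- hosts: all
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Check out the application code
      git:
        repo: https://github.com/example/service.git
        dest: /opt/service
        version: master

    - name: Ensure the application service is running
      service:
        name: example-service
        state: started
```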

Same scripts locally and at all stages

The advantage of using Ansible is that you can utilise very similar scripts for provisioning across every environment. The task-based approach allows you to accommodate subtle differences while increasing confidence in consistency. Automating deployment also helps ensure that, from your local environment through every stage, code lives in the same places and is executed in the same way.

Package, build and tests management

Ensuring that your development machine has the correct versions of packages and build and test tools can be quite a challenge, especially when working on multiple projects with subtle differences. Having a script that configures these takes away a lot of the upfront setup and ensures you are running the same versions as your continuous integration servers.

It works on my machine

This is such a common cry from developers, and something we are slowly moving away from. As our local servers run a VM that is almost identical to development and production, testing locally is much more realistic. It also forces developers to consider that their code will be running on a server, an important mindset that helps move away from this issue.

My IDE does that for me

The main pushback I have encountered is from people using a fully fledged IDE which contains a web server. I think these two approaches can work in tandem, with a push to your local server before checking code in as an extra step. I have also put an IDE within the server for an even higher level of consistency.

Cost and productivity boosts

The quicker you can find a problem, the easier and cheaper it is to solve. The lack of context switching while you await feedback from test and regression is also a real bonus.

No longer can it be the operations team's responsibility to push code to production; developers should comprehend the impact their code makes on all environments, and local virtualisation really helps this mindset. Being able to switch between different environments with a simple vagrant up is definitely a future I want to be part of.

What 275 days of intensive care taught me about managing complex projects

Jack Jensen

For 275 days after his birth my son Jack lived in the intensive care units of various London hospitals. A large team of consultants, doctors, nurses and cleaners worked together 24/7 to support Jack’s daily needs whilst solving his long term health problems. Managing complex projects is a large part of my job so being a geek I couldn’t help but analyse the key mindsets and approaches that positively contributed to Jack’s journey. I believe that the concepts below can be applied to managing any complex long term problem and I hope you find them useful.

Set the right long term goal to give context to all decisions

In Jack's case this was the ability for him to go to a normal school without any assistance. This context made the harder short-term decisions easier to take.

Complex questions rarely have a 100% answer

Doctors are rarely able to give you a definitive answer to complex questions. I have come to appreciate this way of thinking, as it avoids setting you up for disappointment and reflects the reality of unknown and changeable environments.

The more people involved the harder consistent communication becomes

Communication between all of the parties involved in 24/7 care is a constant challenge. This can be helped by writing things down, putting them on walls and doing as much as possible face to face.

Start from the worst case scenario and work backward

Planning for the worst ensures you really think about all options. This is a great mind hack to be happier with outcomes that aren’t the best case scenario.

Capture as much information as possible

Over longer periods of time it is essential to write down decisions and observations so anyone can revisit the context and data around decisions if they weren’t involved in them at the time.

Establishing baselines and thresholds helps autonomous decision making

Every baby is unique, and collecting data is a great way to understand their current state compared to their history. Once you have established a baseline, it is easier to empower people to act when thresholds are broken. Baselines for the overall population are also useful for the longer-term view.

Monitoring should be visual and constant

All monitors should be highly visible and when something deviates from the established baseline then they should alarm. Alarms should have clear levels between their various states.

Daily stand ups are essential

A daily conversation with all of the people who are going to be involved in the care of the child is essential. This, coupled with data, enables distributed decision making. Face-to-face conversation ensures everyone gets the chance to contribute.

Choosing the right option when many are available is difficult

There is a decent amount of trial and error in solving complex problems. There are standard approaches which give you options for the next step but only by trying and measuring will you actually find out how effective they are.

A clear path of escalation is essential

Knowing who to ask if you are blocked or have an emergency is essential. This coupled with having access to people with greater levels of experience can really help move things forward.

The last 8 months have been an incredible journey and I am unbelievably grateful for everyone who has helped us along the way. This process has broadened my approach, understanding and mindset for managing complex projects. I am thankful to the systems that have enabled Jack to have the smile that now warms my heart on a daily basis.

Scarcity and the trap of the daily deadline


For the past four years I have been working in an editorial environment at Metro, the third-largest daily UK newspaper. Over this time I have been amazed at the number of times serious change has been attempted and has failed. This seems to be a common problem with newspapers. Initially it confused me, as there are a lot of very clever, passionate and motivated individuals involved. Over time, however, I have come to believe that there are three main components behind it.

Having to fill a set number of pages by a daily deadline creates scarcity of time. The focus required to achieve this creates tunnel vision that both helps and hinders: it helps editorial climb their daily mountain, but the bandwidth tax of doing so reduces their cognitive capacity for change. This theory formed after reading Scarcity by Sendhil Mullainathan and Eldar Shafir. The Guardian's review sums up the main premise of the book well:

“Scarcity captures the mind,” explain Mullainathan and Shafir. It promotes tunnel vision, helping us focus on the crisis at hand but making us “less insightful, less forward-thinking, less controlled”. Wise long-term decisions and willpower require cognitive resources. Poverty (the book’s core example) leaves far less of those resources at our disposal.

The editorial process is driven by risk aversion, due to the inability to change a printed product after the deadline. Banks of subeditors check and recheck work, a single person runs each section, and an overall editor sits above them all. This creates multiple bottlenecks and significant overhead which, coupled with low levels of autonomy, increases the queuing further. Most of the paper remains a work in progress until the last possible minute.

Many of the systems that enable editorial processes are very old and not built for the world we now live in. Complex to change and expensive to run, they are the final piece holding back progress. Change requires running multiple concurrent systems, each tightly coupled with other complex systems, in an environment where risk aversion is high. The cost of achieving such change is very high in monetary terms, training and complexity.

Complex processes coupled with complex systems and a reduced cognitive capacity for anything outside the daily deadline have been holding back editorial progress for years. I believe this is one of the reasons newer entrants without this legacy have fared so well: they have none of these constraints. Change is possible, but it is usually underestimated due to the multiple layers of complexity and the need to keep the existing process running whilst building the new one.

21 product development tips from the trenches


Over the past four years at Metro we have delivered one replatform, four redesigns and multiple native apps, and built and sold an online casino. From these experiences we have iteratively built a process and environment that aid product development. An Agile mindset has helped the development team achieve a consistent output. This, coupled with Lean thinking, delivered growth that convinced the business to fully embrace our process. Below are 21 product development tips that were hewn in the trenches of failure we call learning.

1: Ensure everyone has a clear vision of the end goal

This needs to be concise, measurable and most importantly achievable. A strong reason behind why that goal was chosen will help motivation. Everyone should be clear on what they can do to affect the goal. This should be the main job of leadership.

2: Use small, cross-functional, self-organising teams

Teams should have as much autonomy as possible in how they affect their goal. This is key to enabling faster decision making, which increases learning velocity. Proximity is the best hack to maximise face-to-face communication, the most effective way to ensure a common understanding.

3: Timing is everything

The biggest challenge is building the right product/feature, at the right time, using the right technology. The biggest waste is building things that people don’t want or need now.

4: Focus on figuring out the next releasable step that takes you closer to your goal

Ensure it is small and releasable, and that it both takes you closer to your goal and provides valuable feedback.

5: Prototypes are a great way to improve early and ongoing feedback

Paper/whiteboards are a great place to start as they allow the quickest iteration. Later prototypes are most effective when viewed in the medium they will be delivered in e.g. In browser/device.

6: Project plans should be as high level as possible

They are great for making a high level view of major deliverables visible. If they constantly need updating they are too granular.

7: UX/Design should be 2-4 weeks ahead of development

Designing and building prototypes with a goal of getting as much feedback as possible before development begins.

8: What is designed and what is built are two separate things

Both should inform the other but there is no master due to constraints on both sides.

9: Just in time is the best approach to detailed planning

Any earlier can be wasteful due to risk of new data from earlier releases or prototypes. Pair up on ticket writing using face to face communication and attach any prototypes/mockups/wireframes.

10: Less is more with process once you have a mature team

An agile journey must start somewhere and a fixed process like Scrum/Kanban is a great place to begin. However as the team and process matures your aim should be to reduce this to the minimum possible for your environment.

11: Centralised communication is best done outside of email

Slack/Trello are great examples of products that allow a participatory conversation without the cognitive overload of email.

12: Evolutionary architectural approach works best, complexity shows where work is needed

The simpler you start the quicker you can get real feedback. Avoiding over architecture allows you to combat scaling issues when required, usually by following established patterns. This avoids adding unnecessary complexity early on which can seriously hamper your ability to learn fast.

13: Microservices are a great pattern for a service based architecture

The ability to pick up, modify and release a service with complete confidence that it won’t impact anything else helps you move faster. They also allow you to prototype new technologies in a production environment. Focus on automation up front to minimise overheads of running multiple services.

14: Focus

The less we build the better we build what we do. Cognitive overload of working on multiple things at a time has a huge impact on quality.

15: Limiting work in progress is the best way to speed up delivery

Queues are very inefficient: the more queues you have and the more items they contain, the worse they perform.

16: Product feedback should happen on a regular basis and have all stakeholders attend

Feedback should be constructive, open and honest. As we release every day, we have two demos a week and our Slack channel is constantly open for feedback. This is where work is prioritised, discussed and tweaked.

17: Data wins arguments

It's OK to lose a battle based on opinion and come back to win the war with data. Make sure you are measuring the right things; this takes time but is worth the investment. Then look for anomalies, or what I call "data smells". Following these to their cause will give you great insight into your product.

18: Innovation needs time and space to happen which initially needs to be forced

Hack days and afternoons are a great way to kick-start this process. Give people a fixed time with a clear direction and see what they come up with.

19: Beware of the curse of knowledge

Don’t get frustrated, embrace the fact that some people aren’t as far along the journey as you and help them take those next steps.

20: People should have strong opinions that are weakly held

An opinion is always useful to speed up the ideation phase. Logic around environmental constraints should shape the final decision.

21: Embrace your constraints

Each environment has a unique set of constraints. Use these to aid quick decision making. This should give you time to focus on what you need to change long term to be most effective.

During my time at Metro the only constant was change. We were able to embrace this and use it to our advantage. Iterative learning based approaches helped us maintain consistent growth. Delaying every decision until we had the best data possible kept our failures small and valuable. Building great products is about forming a team around an achievable goal and iterating based on the best feedback available at each stage.

Evolution of the Metro.co.uk homepage over the past four years


How software ate manual content placement on Metro.co.uk

Trending and Newsfeed Automatic Placements


The majority of content placement on metro.co.uk is now managed by software. This has been a long journey based on real-world feedback and the incremental addition of complexity. My goal has always been to take a developer's view of the editorial process and optimise it where possible. Looking at the numbers, it became clear that for large areas of the site a disproportionate amount of time was being spent on content placement relative to the value it returned. My previous post covered the first part of the journey; now I will explain how we extended the approach to run the majority of the site.

We gather a lot of information from WordPress, Facebook, Twitter and Omniture into a MySQL database. This data is passed through a stored procedure that returns five different sorts:

Score

((Tweets + Facebook Interactions) * Social Multiplier) + Views

Trending

Score Now – Score 30 Mins Ago

Social

Tweets + Facebook Interactions

Date

Date descending

Coefficient

((((Tweets + Facebook Interactions) * Social Multiplier) + Views + Editorial Boost) * Hour Coefficient) + Tag Boost

We also pass in the following parameters to modify the results depending on the channel (e.g. news) the data sits within.

Hours to return

e.g. From 24-336 since publishing

Filter Subcategories

e.g. Football,Oddballs,Weird,Food

Boost Tag

e.g. arsenal-fc

Social Multiplier

e.g. 10

Content Type

e.g. Blog or Post

Remove Tags

e.g. tottenham-hotspur-fc

Coefficient

e.g. hours 0-4 * 3, 4-12 * 1, 13-24 * 0.5, 25-48 * 0.3

The coefficient sort now takes input from the articles that editors place at the top of each channel page of the site. This editorial boost allows us to keep everything feeling much fresher until the data catches up. We have also built our own real-time page view tracking system with Node and Redis to get around the 30-60 minute lag in Omniture.
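
Expressed in code, the coefficient sort looks something like the following JavaScript sketch, using the formula and example hour bands above; the field and settings names are illustrative:

```javascript
// The example bands above: hours 0-4 => 3, 4-12 => 1, 13-24 => 0.5, 25-48 => 0.3.
function hourCoefficient(hoursSincePublish, bands) {
  const band = bands.find(b => hoursSincePublish <= b.maxHours);
  return band ? band.multiplier : 0; // anything older than the last band scores nothing
}

// ((((Tweets + Facebook Interactions) * Social Multiplier) + Views + Editorial Boost)
//   * Hour Coefficient) + Tag Boost
function coefficientScore(story, settings) {
  const social = story.tweets + story.facebookInteractions;
  const base = social * settings.socialMultiplier + story.views + story.editorialBoost;
  const hours = (Date.now() - story.publishedAt) / (1000 * 60 * 60);
  return base * hourCoefficient(hours, settings.hourBands) + story.tagBoost;
}

// Sorting a channel's stories then becomes:
// stories.sort((a, b) => coefficientScore(b, settings) - coefficientScore(a, settings));
```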

We recently centralised all of the settings so that they are easy to view and change. I focused on optimising the cluster of content returned, and the timeframes to retrieve within, to match the publishing patterns of each channel. The ability to cluster content from similar channels has helped ensure we offer a wider variety of content at different stages of the user's journey.

Using different sort methods coupled with this clustering has reduced duplication. The design has also helped, by using different image ratios and colours for sections of the page that may contain the same content. We have standardised the bottom of all pages to be the same, which means that any performance improvement is felt across the entire site.

The last addition was the ability to boost by tag, which has made the article-level algorithm much more relevant. At this level of granularity we decided context is much more important than freshness. Moving the tag boost outside of the coefficient allows boosted content to be clustered at the top, but we limit this to the five most recent related articles.

Our API is able to deliver everything the front end needs for rendering, including title, image and excerpt. This has also enabled us to use the data in multiple places, such as the native tablet and phone editions and the glance journalism experiments Metro10 and MetroVision. Our feeds finally go through an additional layer that allows us to add sponsored content at fixed positions for traffic-driving purposes.

The great part of all of this is that the maths is still very simple and can be explained to anyone who is interested. Having a set of values to tweak per channel has given us enough options to slice the data for use in multiple contexts. It has taken 12 months and a full redesign to really see this come to life, but I hope it will be a part of Metro for years to come.

Home Page

Metro Homepage Placements

Article Page

Metro Article Page Placement

My talk at the WordPress VIP Big Media Meetup on this.