Competitively actionable analytics at The Tab

Competitively actionable analytics was a core element of the technology and product strategy at The Tab. The following details how this approach translated into product, and how that went on to affect user behaviour. The Tab had built a set of microservices to mine data from Facebook, WordPress and Google Analytics, creating a rich dataset in its data warehouse. That data was of little value until it was put in front of contributors, where it could drive their learning and retention.

The Tab Team

The Tab’s Team analytics platform was created with one main goal: to provide actionable analytics data in a competitive framework for individuals and teams. The product aimed to get authors excited about how their individual efforts contributed to their team’s overall success. What you see in red in the dashboard below is an author’s impact on team performance. Authors are given a clear overview of how each of their recent stories has performed. At the highest level, an overview of the top performers is presented as a leaderboard broken down by country and then by team, author and story, allowing multiple layers of competition and learning across the network.

Author Impact in Team Dashboard

Team, Reporter and Story Leaderboards

Personal

Personal Dashboard

Prior to this project, data access was restricted to a small set of users with a Google Analytics login and a high-level understanding of custom filters. WordPress also had some stats, but only at a team level, and they were fairly well hidden within the dashboard.

Take the data to the people

The initial launch was a web-based front end; the next step was an upgrade of the notification system so it could send detailed data summaries via email. Distributing data via email increased Team’s user base by 200%, with close to 100% of active authors logging in monthly. These emails have maintained a 65% open rate and over a 10% click-through rate after being sent thousands of times.

Weekly Stats Email

Monthly Stats Email

Usage growth after email prompt (1st Feb)

Team Retention

Data everywhere

Analytics are a natural form of gamification, and authors were constantly posting screenshots in Facebook groups and on Snapchat. To further increase visibility and use, stats were embedded on every page of the site for logged-in users. The goal was to show as much data as possible within the interface that was being used daily. A completely open approach to numbers allowed authors to learn from each other’s success: there are no restrictions on who can see what data once someone is logged in.

Homepage Stats (Logged In)

Story Stats (Logged In)

Author Stats (Public)

Real-time notifications

Team has been running successfully since November 2015, delivering a clear path to re-engagement and forming a core part of The Tab’s retention strategy. Since then, notifications have been extended to deliver real-time reader stats as well as historical page views. These notifications are delivered by an iOS app, The Tab +, as well as by Facebook Messenger. Push notifications enabled The Tab + to reach a very similar level of use and retention to Team. The number of interactions with users has increased, as the data is broken down into multiple smaller messages and sent as soon as thresholds are reached. This helped authors stay excited about their existing story for longer, and the more they enjoy the process the more likely they are to repeat it.
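To illustrate the threshold approach, here is a minimal Node.js sketch. The milestone values and the getPageViews/sendPush helpers are hypothetical placeholders rather than The Tab’s actual implementation; the idea is simply that a small message fires the first time a story crosses each threshold.

```javascript
// Hypothetical sketch of threshold-based story notifications.
const MILESTONES = [1000, 5000, 10000, 50000]; // illustrative page view thresholds

// `story` is assumed to carry `lastNotified`, the highest milestone already sent (0 to start).
async function checkStoryMilestones(story, getPageViews, sendPush) {
  const views = await getPageViews(story.id); // e.g. read from the pre-aggregated stats
  for (const milestone of MILESTONES) {
    if (views >= milestone && story.lastNotified < milestone) {
      await sendPush(
        story.authorId,
        `"${story.title}" just passed ${milestone.toLocaleString()} views!`
      );
      story.lastNotified = milestone; // persist this so each milestone only fires once
    }
  }
}
```

Running a check like this whenever a new aggregate lands means messages go out as thresholds are crossed, rather than on a fixed schedule.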

The Tab +

Messenger Push Notification

Tabitha Messenger Interface

The Tab + Consistent Engagement

The Tab + Retention

Actionable analytics for editorial decision making

In a bid to take the data to where editors gather, a Slack bot was built to provide access to detailed Google and Facebook data simply by pasting in a URL. Slack channels were created where real-time publish and stats updates allowed editors to decide what to promote across the network. Previously, editors had to log into multiple systems to gather this information and spend a lot of time filtering it. Decisions can now be made and discussed in near real time.
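A minimal sketch of this kind of bot, written with Slack’s Bolt framework for Node.js (not the original implementation, which predates Bolt); fetchStoryStats is a hypothetical helper standing in for a call to the stats API.

```javascript
const { App } = require('@slack/bolt');

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Hypothetical stand-in for a lookup against the internal stats warehouse.
async function fetchStoryStats(url) {
  return { title: url, pageViews: 0, shares: 0 };
}

// Reply with stats whenever a message contains a story URL.
app.message(/https?:\/\/\S+/, async ({ context, say }) => {
  const url = context.matches[0];
  const stats = await fetchStoryStats(url);
  await say(`*${stats.title}*: ${stats.pageViews} page views, ${stats.shares} Facebook shares`);
});

(async () => {
  await app.start(process.env.PORT || 3000);
})();
```

Pointed at the warehouse API, the same handler pattern can serve both the story-stats lookups and the promotion channels described above.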

Tabitha Slack Story Stats

Tabitha Slack Channel to Promote Post

Big Screen Dashboards

With authors being served data in a competitively actionable analytics framework, editors within the office were the next to be targeted. Utilising Geckoboard, custom APIs and Google Sheets, The Tab designed big-screen dashboards to drive competition and growth centrally.
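One way to feed such dashboards is to expose the pre-aggregated numbers as a small JSON endpoint that a dashboard widget or a Google Sheets script can poll. This is a generic sketch, not The Tab’s actual API; the table and field names are assumptions.

```javascript
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({ host: 'localhost', user: 'stats', database: 'warehouse' });

// Today's top stories, suitable for polling by a big-screen dashboard widget.
app.get('/dashboard/top-stories', async (req, res) => {
  const [rows] = await pool.query(
    `SELECT title, page_views FROM story_day_stats
     WHERE day = CURDATE() ORDER BY page_views DESC LIMIT 10`
  );
  res.json({ updatedAt: new Date().toISOString(), stories: rows });
});

app.listen(3001);
```

A dashboard tool or a scheduled Google Sheets script can then pull this feed, so the screens stay current without manual updates.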

UK Editorial Dashboard

US Editorial Dashboard

Global Sharing Dashboard

It was a lot of effort to get the data into the right place and then expose it in creative ways to drive learning and performance, but now that competitively actionable analytics has permeated The Tab’s culture there is no going back.

The geeky bit

Millions of rows of data a day are processed using Node.js, Amazon Simple Queue Service, Elastic Beanstalk workers and MySQL running on RDS. The data is cleaned and then transformed into day, month and year increments, as well as aggregated for users and teams. This preprocessing allows very fast recall of any dataset. Amazon API Gateway with a Node.js Lambda-based serverless API layer is used for data retrieval. This handles a lot of the standard API concerns, like security and caching, which kept the focus on the data and the user experience. The front end was built in React.js, utilising Chart.js for graphing.
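Elastic Beanstalk’s worker tier delivers each SQS message to your application as an HTTP POST, so the ingestion side can be a small Express handler that upserts pre-aggregated rows. The sketch below assumes hypothetical table and message shapes and is not The Tab’s actual code.

```javascript
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
app.use(express.json());
const pool = mysql.createPool({ host: process.env.RDS_HOST, user: 'stats', database: 'warehouse' });

// The Elastic Beanstalk worker daemon POSTs each SQS message body to this endpoint.
// Assumed message shape: { storyId, authorId, day: 'YYYY-MM-DD', pageViews }
app.post('/ingest/pageviews', async (req, res) => {
  const { storyId, authorId, day, pageViews } = req.body;
  await pool.query(
    `INSERT INTO story_day_stats (story_id, author_id, day, page_views)
     VALUES (?, ?, ?, ?)
     ON DUPLICATE KEY UPDATE page_views = page_views + VALUES(page_views)`,
    [storyId, authorId, day, pageViews]
  );
  res.sendStatus(200); // a non-2xx response would make the worker retry the message
});

app.listen(process.env.PORT || 8080);
```

Month, year, author and team rows can be maintained with the same upsert pattern, which is what makes later reads fast enough for leaderboards and emails.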

The team

Big up to Richard Coombes for helping with the backend magic, Serge Bondarevsky for design and Charlie Gardner-Hill for the dashboards. My focus was on the product management, data collection, data transformation and building the API layer.

Metro10: Going native and growing on Android

Metro10 Play Store

I spent nine months in 2014 designing, building and iterating an Android app called Metro10 for the UK newspaper Metro. This taught me a lot about the benefits and frustrations of native app development and marketing.

The benefits of app users are huge in terms of their engagement and propensity to return daily. Being able to target users via push notifications is a great way to create a trigger for habitual consumption of your content. These work best when targeted, relevant and contextual; otherwise they are quickly turned off.

However, these advantages can come at a high cost in terms of acquiring new users. We ran extensive internal banner advertising campaigns on metro.co.uk with limited success. Banners are not the greatest advertising medium, especially mobile banners with a call to action. I think we were naive to think that people would want to install an app whilst browsing the web.

You can get much better results from retargeting campaigns based on device and previous visits to your site, especially if retargeting via Facebook app install ads. However effective they are at acquiring customers, without a clear ROI they were a cost we could not bear.

Metro already had multiple apps for newspaper-based consumption. Being yet another app in a constellation hurts when you are the new kid on the block. Visibility from search in the Google Play store also really hurt us, due to Metro being a very common name. The beta nature of our approach also ruled out using our contacts for app store promotion.

While developing I released as often as I could, pushing a daily alpha build and using it on different devices from the ones I was using for development. Builds could be quickly rolled out to production every other day once bugs were fixed. The fact that it only took hours to get these into the hands of users, thanks to automation, was a great advantage of Android.

I ended up with one phone and one seven-inch tablet for coding, and another phone and seven-inch tablet for testing releases. I also managed to build a good group of beta testers in our Google+ community for feedback.

Google Analytics is amazing for tracking performance and especially bugs. Crashes only get sent through to the Play Store if people submit a report, but they show up in Google Analytics regardless. Given the limits of our testing (I was the developer and the tester), this feedback was invaluable. The sheer number of devices out there also made a release-and-fix approach necessary.

I put a lot of effort into tracking all of the actions that people took within the app. This was a really useful dataset that helped make some big product decisions. I would recommend this approach for all development. It wasn’t a large amount of effort to set up either.

Push notifications are very important, not just to be clicked on but as a visual reminder of your app on the phone. We utilised Parse from Facebook to get this capability set up without much engineering effort. However, these had to be sent manually due to engineering constraints.
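For reference, sending a push through Parse looks roughly like the snippet below, using the Parse JavaScript SDK; the keys, server URL, channel name and message are illustrative placeholders, not what Metro10 actually used.

```javascript
const Parse = require('parse/node');

Parse.initialize('APP_ID', 'JS_KEY', 'MASTER_KEY'); // placeholder credentials
Parse.serverURL = 'https://example.com/parse'; // hypothetical self-hosted Parse Server

// Send a push to everyone subscribed to the "news" channel.
Parse.Push.send(
  {
    channels: ['news'],
    data: { alert: 'Your lunchtime briefing is ready in Metro10' },
  },
  { useMasterKey: true }
).catch((err) => console.error('Push failed:', err));
```

In our case a call like this was triggered by hand rather than automated, as noted above.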

The best growth hack we did was running a competition asking for feedback via a Google Doc. For some reason everyone who left us feedback also left us a positive review, which really helped us avoid the cold-start problem with reviews.

We had a small but vocal minority who reached out to us through Play Store reviews. Quickly fixing their issues and responding helped us turn a few reviews around and got us some great product feedback. What amazed me most was the number of people who never upgraded to a new version, even though the one they were on was buggy.

With all of our efforts we only managed to get a few hundred daily users. They were, and probably still are, a very loyal bunch, but not enough to sustain ongoing development. Plus I left for a new challenge, which didn’t help momentum.

My gut feeling, however, is that the real issue with apps based around single-brand content consumption is that people’s habits have changed. Users want to dip in and out of news from their social feeds and friends’ updates. Without a large marketing budget, hard to justify given the lack of significant revenue uplift, getting onto users’ home screens is a struggle, and it is equally hard to build enough differentiation and content to keep them engaged.

How Docker Containers simplify Microservice management and deployment

My recent personal and professional development efforts have taken a microservices approach. This escalated to having eight services running three different languages across five different frameworks. After banging my head against the command line for a few days trying to get these to coexist, I decided to try Docker containers in an attempt to streamline and simplify the process.

A software container is a lightweight virtualisation technology that abstracts away the complexity of the operating system and simply exposes ports to the host it runs on. You can run containers on most operating systems and platforms, including all major PaaS providers. Keeping the complexity within the container means that host systems can focus on scaling and management. You also get a high level of consistency, allowing you to ship containers across different servers or platforms with ease: essentially building once, saving the image and then pulling it onto each of your environments for testing and then deployment.

Utilising the microservice-instance-per-container pattern is a great way to manage a set of services, with several benefits. Scalability increases, because you simply change the number of container instances. Encapsulation is strong, which means all services can be stopped, started and deployed in the same way. You are able to limit CPU and memory at a service level. Finally, containers are much faster to work with than fully fledged virtual machines and simpler to ship around to platforms for deployment. Amazon, for example, has built-in support via its Container Service as well as Elastic Beanstalk. I have used Ansible for deployment as it has a really nice Docker wrapper which makes starting, stopping, pushing and pulling images between servers only a couple of lines of code.

One thing to watch out for is image size, as containers can get quite large. Base images are a way to minimise this, promote reuse and let you control the underlying approach across multiple repositories without code duplication. Each step within a Docker image build is cached, so only the stages that change need to be rebuilt and pushed. Be careful about the order in which you run your build steps, leaving the stages that change most often until last. Docker Hub is like a GitHub repository for built images and makes pushing and pulling images require minimal infrastructure and learning. You can pay for private repositories in the same way that GitHub allows you to, or you can set up your own registry if you are that way inclined.

Running Docker containers locally on a Mac is pretty straightforward with Boot2Docker, which spins up a small local virtual machine running the Docker daemon and allows you to easily test, build, push and query the main Docker registry. Kitematic was also recently acquired by Docker as an alternative for people who are averse to the command line. There is also a large set of officially maintained base images for running Node, Jenkins and WordPress, amongst many others. You need to understand some patterns around how best to persist data, as data written inside a container is lost when the container is removed. Data-only containers are a way around this: a pattern that lets you persist data without binding it too closely to the underlying operating system.

Microservices allow us to choose the right tool for the job, and Docker containers abstract away some of the complexity of this approach. Utilising base images promotes reuse, and the Dockerfiles themselves are checked in with the projects, so anyone who pulls a project can see how it is built, which is a huge bonus. Most downsides, like persistence and the size of containers, have strategies to minimise their impact.

I would be very interested to hear your thoughts and experiences with microservices and containerisation.

Lessons from virtualising local development environments

It’s complicated

While working at ANDigital we used multiple languages, frameworks and web servers for both internal projects and external client work. Our goal was to be able to continuously deploy each of the services we were involved with. I came to the conclusion that scripted virtualisation for local development put us on the right track to achieving this goal. The below outlines what I learned getting this process working.

Vagrant wrapped VirtualBox for local virtualisation

Vagrant is a handy wrapper for VirtualBox which allows you to programmatically set up the underlying OS, mapped drives and provisioning scripts. It also configures your local networking so you can reach the VM as localhost. The Vagrantfile can be checked in to your source control for ease of distribution as well as built-in change control.

Ansible for provisioning

Ansible is a configuration management tool that connects via SSH and programmatically runs scripts for you. You use a simple language to define what each step will do, and it has built-in modules for a lot of different tools such as Docker. The great thing about Ansible is that you can use it for both provisioning servers and deploying code. This is also checked in to a repository for distribution and change control.

Same scripts locally and at all stages

The advantage of using Ansible is that you can use very similar scripts for provisioning across every environment. The task-based approach allows you to accommodate subtle differences while increasing confidence in consistency. Automating deployment also helps ensure that, from your local environment through each stage, code lives in the same places and is executed in the same way.

Package, build and tests management

Ensuring that your development machine has the correct versions of packages, build tools and test tools can be quite a challenge, especially when working on multiple projects with subtle differences. Having a script that configures these takes away a lot of the upfront setup and ensures you are running the same versions as your continuous integration servers.

It works on my machine

This is such a common cry from developers, and something we are slowly moving away from. As our local servers run a VM which is almost identical to development and production, testing locally is much more realistic. It also forces developers to consider that their code will be running on a server, an important mindset that helps move away from this issue.

My IDE does that for me

The main pushback I have encountered is from people using a fully fledged IDE which contains a web server. I think these two approaches can work in tandem, with a push to your local server before checking code in as an extra step. I have also put an IDE within the server for an even higher level of consistency.

Cost and productivity boosts

The quicker you can find a problem, the easier and cheaper it is to solve. The reduced context switching while awaiting feedback from test and regression runs is also a real bonus.

No longer can it be solely the operations team’s responsibility to push code to production; developers should understand the impact their code makes on all environments, and local virtualisation really helps this mindset. Being able to switch between different environments with a simple vagrant up is definitely a future I want to be part of.

What 275 days of intensive care taught me about managing complex projects

Jack Jensen

For 275 days after his birth my son Jack lived in the intensive care units of various London hospitals. A large team of consultants, doctors, nurses and cleaners worked together 24/7 to support Jack’s daily needs whilst solving his long term health problems. Managing complex projects is a large part of my job so being a geek I couldn’t help but analyse the key mindsets and approaches that positively contributed to Jack’s journey. I believe that the concepts below can be applied to managing any complex long term problem and I hope you find them useful.

Set the right long term goal to give context to all decisions

In Jack’s case this was the ability for him to go to a normal school without any assistance. This context helped in making the harder short-term decisions.

Complex questions rarely have a 100% answer

Doctors are rarely able to give you a 100% answer to complex questions. I have come to appreciate this way of thinking, as it avoids setting you up for disappointment. It reflects the reality of unknown and changeable environments.

The more people involved the harder consistent communication becomes

Communication between all of the parties involved in 24/7 care is a constant challenge. This can be helped by writing things down, putting them on walls and doing as much as possible face to face.

Start from the worst case scenario and work backward

Planning for the worst ensures you really think about all options. This is a great mind hack to be happier with outcomes that aren’t the best case scenario.

Capture as much information as possible

Over longer periods of time it is essential to write down decisions and observations so anyone can revisit the context and data around decisions if they weren’t involved in them at the time.

Establishing baselines and thresholds helps autonomous decision making

Every baby is unique, and collecting data is a great way to understand their current state compared to their history. Once you have established a baseline it is easier to empower people to act if thresholds are broken. Overall population baselines are also useful for a longer-term view.

Monitoring should be visual and constant

All monitors should be highly visible and when something deviates from the established baseline then they should alarm. Alarms should have clear levels between their various states.

Daily stand ups are essential

A daily conversation with all of the people who are going to be involved in the care of the child is essential. This, coupled with data, enables distributed decision making. Face-to-face conversation ensures everyone gets the chance to contribute.

Choosing the right option when many are available is difficult

There is a decent amount of trial and error in solving complex problems. There are standard approaches which give you options for the next step but only by trying and measuring will you actually find out how effective they are.

A clear path of escalation is essential

Knowing who to ask if you are blocked or have an emergency is essential. This coupled with having access to people with greater levels of experience can really help move things forward.

The last 8 months have been an incredible journey and I am unbelievably grateful for everyone who has helped us along the way. This process has broadened my approach, understanding and mindset for managing complex projects. I am thankful to the systems that have enabled Jack to have the smile that now warms my heart on a daily basis.

Scarcity and the trap of the daily deadline


For the past four years I have been working in an editorial environment at Metro, the third-largest daily UK newspaper. Over this time I have been amazed at the number of times serious change has been attempted and failed. This seems to be a common problem with newspapers. Initially it confused me, as there are a lot of very clever, passionate and motivated individuals involved. Over time, however, I have come to believe that there are three main components behind it.

Having to fill a set number of pages by a daily deadline creates scarcity of time. The focus required to achieve this creates tunnel vision that both helps and hinders: it helps editorial climb their daily mountain, but the bandwidth tax of doing so reduces their cognitive capacity for change. This theory formed after reading Scarcity by Sendhil Mullainathan and Eldar Shafir. The Guardian’s review sums up the book’s main premise well:

“Scarcity captures the mind,” explain Mullainathan and Shafir. It promotes tunnel vision, helping us focus on the crisis at hand but making us “less insightful, less forward-thinking, less controlled”. Wise long-term decisions and willpower require cognitive resources. Poverty (the book’s core example) leaves far less of those resources at our disposal.

The editorial process is driven by risk aversion, because a printed product cannot be changed after the deadline. Banks of subs check and recheck work, a single person runs each section, and an overall editor sits above them. This creates multiple bottlenecks, which adds significant overhead; coupled with low levels of autonomy, it increases queuing further. Most of the paper remains a work in progress until the last possible minute.

Many of the systems that enable editorial processes are very old and not built for the world we now live in. Complex to change and expensive to run, they are the final piece holding back progress. Change requires running multiple concurrent systems, each tightly coupled with other complex systems, and risk aversion is high. The cost of achieving this change is very high in monetary terms, training and complexity.

Complex processes coupled with complex systems and a reduced cognitive capacity for anything outside the daily deadline have been holding back editorial progress for years. I believe this is one of the reasons why newer entrants without this legacy have fared so well: they don’t have any of these constraints. Change is possible, but it is usually underestimated due to the multiple layers of complexity and the need to keep the existing process running whilst building the new one.

21 product development tips from the trenches


Over the past four years at Metro we have delivered one replatform, four redesigns and multiple native apps, and built and sold an online casino. From these experiences we iteratively built a process and environment that aids product development. An Agile mindset helped the development team achieve consistent output. This, coupled with Lean thinking, delivered growth that convinced the business to fully embrace our process. Below are 21 product development tips that were hewn in the trenches of failure we call learning.

1: Ensure everyone has a clear vision of the end goal

This needs to be concise, measurable and most importantly achievable. A strong reason behind why that goal was chosen will help motivation. Everyone should be clear on what they can do to affect the goal. This should be the main job of leadership.

2: Use small cross-functional, self-organising teams

Teams should have as much autonomy as possible in how they affect their goal. This is key to enabling faster decision making, which increases learning velocity. Proximity is the best hack to maximise face-to-face communication, which is the most effective way to ensure a common understanding.

3: Timing is everything

The biggest challenge is building the right product/feature, at the right time, using the right technology. The biggest waste is building things that people don’t want or need now.

4: Focus on figuring out the next releasable step that takes you closer to your goal

Ensure it is small, releasable and gets you both closer to your goal and provides valuable feedback.

5: Prototypes are a great way to improve early and ongoing feedback

Paper/whiteboards are a great place to start as they allow the quickest iteration. Later prototypes are most effective when viewed in the medium they will be delivered in e.g. In browser/device.

6: Project plans should be as high level as possible

They are great for making a high level view of major deliverables visible. If they constantly need updating they are too granular.

7: UX/Design should be 2-4 weeks ahead of development

Designing and building prototypes with a goal of getting as much feedback as possible before development begins.

8: What is designed and what is built are two separate things

Both should inform the other but there is no master due to constraints on both sides.

9: Just in time is the best approach to detailed planning

Any earlier can be wasteful due to risk of new data from earlier releases or prototypes. Pair up on ticket writing using face to face communication and attach any prototypes/mockups/wireframes.

10: Less is more with process once you have a mature team

An agile journey must start somewhere and a fixed process like Scrum/Kanban is a great place to begin. However as the team and process matures your aim should be to reduce this to the minimum possible for your environment.

11: Centralised communication is best done outside of email

Slack/Trello are great examples of products that allow a participatory conversation without the cognitive overload of email.

12: Evolutionary architectural approach works best, complexity shows where work is needed

The simpler you start the quicker you can get real feedback. Avoiding over architecture allows you to combat scaling issues when required, usually by following established patterns. This avoids adding unnecessary complexity early on which can seriously hamper your ability to learn fast.

13: Microservices are a great pattern for a service-based architecture

The ability to pick up, modify and release a service with complete confidence that it won’t impact anything else helps you move faster. They also allow you to prototype new technologies in a production environment. Focus on automation up front to minimise overheads of running multiple services.

14: Focus

The less we build the better we build what we do. Cognitive overload of working on multiple things at a time has a huge impact on quality.

15: Limiting work in progress is the best way to speed up delivery

Queues are very inefficient; the more queues you have and the more items they contain, the worse they perform.

16: Product feedback should happen on a regular basis and have all stakeholders attend

Feedback should be constructive, open and honest. As we release everyday we have two demos a week and our Slack channel is constantly open for feedback. This is where work is prioritised, discussed and tweaked.

17: Data wins arguments

It’s OK to lose a battle based on opinion and come back to win the war with data. Make sure you are measuring the right things; this takes time but is worth the investment. Then look for anomalies, or what I call “data smells”. Following these to their cause will give you great insight into your product.

18: Innovation needs time and space to happen which initially needs to be forced

Hack days and afternoons are a great way to kick-start this process. Give people a fixed amount of time with a clear direction and see what they come up with.

19: Beware of the curse of knowledge

Don’t get frustrated, embrace the fact that some people aren’t as far along the journey as you and help them take those next steps.

20: People should have strong opinions that are weakly held

An opinion is always useful to speed up the ideation phase. Logic around environmental constraints should shape the final decision.

21: Embrace your constraints

Each environment has a unique set of constraints. Use these to aid quick decision making. This should give you time to focus on what you need to change long term to be most effective.

During my time at Metro the only constant was change. We were able to embrace this and use it to our advantage. Iterative learning based approaches helped us maintain consistent growth. Delaying every decision until we had the best data possible kept our failures small and valuable. Building great products is about forming a team around an achievable goal and iterating based on the best feedback available at each stage.

Evolution of the Metro.co.uk homepage over the past four years
