I have spent the last three years at Metro constantly tweaking our development process to fit our environment and our teams' maturity. Over the same period the product development process remained largely unchanged: someone comes up with an idea, we all debate how good or bad we think it will be, and someone's opinion wins the day. In the last nine months, however, we have begun to iterate on this part of the process too, making it more of a metrics-driven product development process. It has been slower going than I anticipated, so I thought I would share some of what we learned along the way.
I think the lack of an established template to follow, as there is with Scrum, has made this harder, since there is nothing to start from. To get it right, the metrics also need to be driven by the business, so you are dependent on the business being engaged. The toolset that supports this kind of approach is lacking too, and in the end we had to build/hack our own.
Our journey began with the business setting one key goal, backed by a clear metric the whole business could understand. This was easier said than done, and there were definitely parts of the business that felt left out by this singular approach. However, having just one key metric at the beginning is very important, as it aids focus. Metro's was to average 700,000 daily mobile visitors in September 2013.
The next phase was to look at all the data that makes up the key metric and break it down further, showing each area of the business what it could affect. Being open and honest about where the numbers come from at this stage is essential. There will definitely be questions about how the decision was made, especially if it is a large goal, and openness is the best way to address them (even if it is a vanity/stretch target).
As the largest part of the goal was going to come from metro.co.uk, and that was partly my team's responsibility, we started by looking at our web analytics (Omniture) to see what data we were already collecting. We also looked to the content and search teams to see what metrics they were tracking, as only by working together would we be successful. Common goals should bring people together to talk about frustrations and improvements, and this alignment should deliver major benefits. Yet it still surprises me how little people in large businesses actually talk to each other.
I think part of the problem is that so many parts of the business have their own language. Development contributes to this as much as anyone: it takes quite a lot of time to learn to converse with most developers. Marketing, sales and content all have their own ways of working and their own language, which can be just as daunting for developers. Getting the translation right between these groups, so they share enough of a common language to communicate effectively, is a major business benefit. I think this is one of the reasons startups can iterate faster: a limited set of people doing a lot of different things, all speaking the same language.
Having looked at all the existing metrics, we decided there weren't enough product-specific feedback mechanisms to measure the success of our work. So, to understand the product better, we had to build our own measurement/tracking framework on top of Omniture. This took some work to get right, and even longer to roll out across the entire site. But once we could see the results of each improvement, both in terms of clicks and in overall impact on the wider numbers, we had created a really important feedback mechanism.
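To give a flavour of what a tracking layer like this can look like, the core is often just a naming convention for clickable modules plus a small helper that reports each click to the analytics tool. The sketch below is purely illustrative (the `page:module:position` scheme and the helper names are my assumptions, not Metro's actual implementation); `s.tl()` is the standard Omniture custom-link call.

```javascript
// Hypothetical sketch of a click-tracking convention layered on top of a
// web analytics tool such as Omniture. Each clickable module on the page
// gets a stable name, and every click is recorded as "page:module:position"
// so reports can later be broken down by component.

// Build the tracking value for a click. Kept as a pure function so it is
// easy to test without a browser.
function buildClickTag(page, moduleName, position) {
  // Normalise to lower case and collapse anything that would corrupt reports.
  var clean = function (s) {
    return String(s).toLowerCase().replace(/[^a-z0-9-]+/g, "-");
  };
  return [clean(page), clean(moduleName), position].join(":");
}

// Example wiring: on click, send the tag as an Omniture custom link event.
// (window.s is the traditional Omniture global; guarded because this is
// only a sketch and the variable may not exist.)
function trackClick(page, moduleName, position) {
  var tag = buildClickTag(page, moduleName, position);
  if (typeof window !== "undefined" && window.s && window.s.tl) {
    window.s.tl(true, "o", tag); // "o" = custom link type in Omniture
  }
  return tag;
}
```

With a convention like this in place, every module on the site reports clicks in the same shape, which is what makes "click perspective" comparisons between components possible.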
Measuring the impact was only the first part; we also needed to get the data into a format everyone could see and understand. This became a common theme throughout the process: you go from having no data to having too much data that doesn't make a whole lot of sense. Dedicating people and time to building dashboards that turned this data into meaning was the next step on the journey. We then created regular meetings to talk through the data and put meaning around it. Product Performance was born, scheduled every two weeks: all of the product owners and stakeholders involved in product development get into a room and talk through the performance of the previous two weeks. We also hold a product-based standup twice a week, where all stakeholders stand in front of the Kanban board and discuss what is being done and what is about to drop into the process. Lots of small, regular conversations provide far more value and leverage for change than large, infrequent meetings.
We also put large televisions showing real-time traffic up in visible locations, so everyone could see what was going on and feel the excitement when a big story took off. Real-time data is the next step in this process: the faster you can see the data, the faster you can react to it. Once everything was measured, the next step was to begin A/B testing. This again proved much harder than we initially thought. As our site is news-based and constantly changing, we found we couldn't really use a JavaScript-based framework like Optimizely, as things changed too much, and the cost was prohibitive for a site of our size. So we ended up building a simple framework that extended our measurement in Omniture.
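A lightweight A/B layer of this kind can be as simple as deterministically hashing a visitor ID into a variant and recording the assignment alongside the analytics data. The sketch below shows the idea only; the function names, the hash choice, and the split are my assumptions, not the framework we actually built.

```javascript
// Hypothetical sketch of deterministic A/B bucketing. Hashing the visitor
// ID (e.g. from a cookie) means the same visitor always sees the same
// variant, with no server-side state, and the assignment can be reported
// to the analytics tool for later comparison.

// Small string hash (djb2 variant); good enough for splitting traffic.
function hashString(str) {
  var h = 5381;
  for (var i = 0; i < str.length; i++) {
    h = ((h * 33) + str.charCodeAt(i)) >>> 0; // keep as unsigned 32-bit
  }
  return h;
}

// Assign a visitor to one of the named variants for a given experiment.
// Including the experiment name in the hash de-correlates experiments,
// so being in variant "b" of one test says nothing about the next test.
function assignVariant(visitorId, experiment, variants) {
  var h = hashString(experiment + ":" + visitorId);
  return variants[h % variants.length];
}
```

The assigned variant would then be written into a custom analytics variable with each page view, so the two populations can be compared in ordinary reports rather than needing a separate testing tool.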
The other key learning: when you release multiple changes every day, what do you test and what do you just measure? We got to a place where incremental improvements were A/B tested and everything else was simply measured. We were also surprised that sometimes the A/B results were 50:50, but when we made the change anyway it had a positive longer-term effect on user behaviour that wasn't captured during the test.
We also had some issues with our Google News sitemap and indexation; over several months it became our most experimented-on feature. We ended up creating a dedicated swim lane on our Kanban board just for it, running one test at a time and then iterating to the next. For harder, more complex problems, running just one test at a time really helped us focus on the results of each before moving on.
Running too many tests at the same time proved confusing, so I made one of the team the chief growth hacker, and he became the go-to person for what tests to run and how to run them. With a solid testing framework and approach in place, it was time to see if it would work for a more major site redesign. We needed to break the big design down into testable chunks that would prove our hypothesis that people would interact more by scrolling than by swiping. Creating stories that were purely for learning felt like a natural evolution of a lean mindset.
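Testing a scrolling-versus-swiping hypothesis comes down to measuring engagement depth per interaction style. One common approach (a sketch under my own assumptions, not our actual instrumentation) is to bucket how far down the page each visitor scrolled and report the bucket as an analytics event:

```javascript
// Hypothetical sketch of scroll-depth measurement, the kind of signal you
// would compare across a scrolling variant and a swiping variant.

// Pure calculation so it can be tested without a browser. In a page you
// would feed it window.pageYOffset, window.innerHeight and the document
// height on scroll events.
function scrollDepthPercent(scrollTop, viewportHeight, pageHeight) {
  if (pageHeight <= viewportHeight) return 100; // whole page visible at once
  var seen = scrollTop + viewportHeight;        // lowest pixel the user saw
  return Math.min(100, Math.round((seen / pageHeight) * 100));
}

// Bucket into the bands you would actually report on (25/50/75/100),
// since per-pixel depth is too noisy to compare between variants.
function scrollDepthBucket(percent) {
  if (percent >= 100) return "100";
  if (percent >= 75) return "75";
  if (percent >= 50) return "50";
  if (percent >= 25) return "25";
  return "0";
}
```

Each learning story then only needs to ship enough UI to move this one number, which is what makes a big redesign divisible into small testable chunks.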
Conclusion
It has been a really interesting nine months, and we have hit a goal that seemed completely unachievable only 12 months ago. I think we have created a measurement mindset, where everyone asks what we are measuring before starting development. Combine that with everyone pointing in the same direction and a process that allows constant improvement and promotes communication, and the results have been amazing. You should give it a go.
- Start with one key metric
- Break metric down to provide relevance
- Measure everything
- Create dashboards to distill data
- Make the data visible to everyone
- Regular meetings (short and often) to create common language
- Try running some A/B tests
- Keep changes small and regular
- Communicate, communicate, communicate
- Enjoy your successes
Further Reading
This blog post was picked up by The Media Briefing, who interviewed me and Jamie Walters for a slightly different angle on the above. The post How the Metro uses metrics to create flexible and effective digital products is well worth a read.
Music to write this code to
A nice bit of LTJ Bukem-style drum and bass to get your head nodding to a new process.