Lessons from virtualising local development environments

It's complicated

While working at ANDigital we used multiple languages/frameworks/web servers for both internal projects and external client work. Our goal was to be able to continuously deploy each of the services that we were involved with. I came to the conclusion that scripted virtualisation for local development put us on the right track to achieving this goal. What follows outlines what I learned getting this process working.

Vagrant wrapped VirtualBox for local virtualisation

Vagrant is a handy wrapper for VirtualBox that allows you to programmatically set up the underlying OS, map drives and define provisioning scripts. It also configures your local networking so you can reach the VM as localhost. The Vagrantfile can be checked into your source control for ease of distribution as well as built-in change control.

Ansible for provisioning

Ansible is a configuration management tool that connects via SSH and programmatically runs scripts for you. You use a simple language to define what each step will do, and it comes with default helpers for a lot of different tools such as Docker. The great thing about Ansible is that you can use it for both provisioning of servers and deployment of code. This is also checked into a repository for distribution and change control.

Same scripts locally and at all stages

The advantage of using Ansible is that you can utilise very similar scripts for provisioning across every environment. The task based approach allows you to accommodate subtle differences while increasing confidence in consistency. Automation of deployment also helps ensure that, from your local environment through each stage, code lives in the same places and is executed in the same way.

Package, build and tests management

Ensuring that your development machine has the correct versions of packages, build and test tools can be quite a challenge, especially when working on multiple projects with subtle differences. Having a script that configures these takes away a lot of the upfront setup and ensures you are running the same versions as your continuous integration servers.

It works on my machine

This is such a common cry from developers and something we are slowly moving away from. As our local servers run a VM that is almost identical to development and production, testing locally is much more realistic. It also forces developers to consider that their code will be running on a server. This is an important mindset that helps move away from the issue.

My IDE does that for me

The main pushback I have encountered is from people using a fully fledged IDE which contains a web server. I think that these two approaches can work in tandem, with a push to your local server before checking code in as an extra step. I have also put an IDE within the server for an even higher level of consistency.

Cost and productivity boosts

The quicker you can find a problem, the easier and cheaper it is to solve. Not having to context switch while awaiting feedback from test and regression is also a real bonus.

It can no longer be solely the operations team's responsibility to push code to production; developers should comprehend the impact their code makes on all environments, and local virtualisation really helps this mindset. Being able to switch between different environments with a simple vagrant up is definitely a future I want to be part of.

What 275 days of intensive care taught me about managing complex projects

Jack Jensen

For 275 days after his birth my son Jack lived in the intensive care units of various London hospitals. A large team of consultants, doctors, nurses and cleaners worked together 24/7 to support Jack’s daily needs whilst solving his long term health problems. Managing complex projects is a large part of my job so being a geek I couldn’t help but analyse the key mindsets and approaches that positively contributed to Jack’s journey. I believe that the concepts below can be applied to managing any complex long term problem and I hope you find them useful.

Set the right long term goal to give context to all decisions

In Jack’s case this was the ability for him to go to a normal school without any assistance. This context made the harder short term decisions easier to approach.

Complex questions rarely have a 100% answer

Doctors are rarely able to give you a 100% answer to complex questions. I have come to appreciate this way of thinking as it avoids setting you up for disappointment. It reflects the reality of the unknown and changeable nature of environments.

The more people involved the harder consistent communication becomes

Communication between all of the parties involved in 24/7 care is a constant challenge. This can be helped by writing things down, putting them on walls and doing as much as possible face to face.

Start from the worst case scenario and work backward

Planning for the worst ensures you really think about all options. This is a great mind hack to be happier with outcomes that aren’t the best case scenario.

Capture as much information as possible

Over longer periods of time it is essential to write down decisions and observations so anyone can revisit the context and data around decisions if they weren’t involved in them at the time.

Establishing baselines and thresholds helps autonomous decision making

Every baby is unique and collecting data is a great way to understand their current state compared to their history. Once you have established a baseline it is easier to empower people to act if thresholds are broken. Overall population baselines are also useful over a longer term view.

Monitoring should be visual and constant

All monitors should be highly visible and when something deviates from the established baseline then they should alarm. Alarms should have clear levels between their various states.

Daily stand ups are essential

A daily conversation with all of the people that are going to be involved in the care of the child is essential. This, coupled with data, enables distributed decision making. Face to face conversation ensures everyone gets the chance to contribute.

Choosing the right option when many are available is difficult

There is a decent amount of trial and error in solving complex problems. There are standard approaches which give you options for the next step but only by trying and measuring will you actually find out how effective they are.

A clear path of escalation is essential

Knowing who to ask if you are blocked or have an emergency is essential. This coupled with having access to people with greater levels of experience can really help move things forward.

The last 8 months have been an incredible journey and I am unbelievably grateful for everyone who has helped us along the way. This process has broadened my approach, understanding and mindset for managing complex projects. I am thankful to the systems that have enabled Jack to have the smile that now warms my heart on a daily basis.

Scarcity and the trap of the daily deadline


For the past four years I have been working in an editorial environment at Metro, the third largest daily UK newspaper. Over this time I have been amazed at the number of times that serious change has been attempted and failed. This seems to be a common problem with newspapers. Initially it confused me, as there are a lot of very clever, passionate and motivated individuals involved. Over time, however, I have come to believe that there are three main components behind this.

Having to fill a set number of pages by a daily deadline creates scarcity of time. The focus required to achieve this creates tunnel vision that both helps and hinders. It helps editorial climb their daily mountain but the bandwidth tax of doing this reduces their cognitive capacity for change. This theory formed after reading Scarcity by Sendhil Mullainathan and Eldar Shafir. The Guardian review sums up the main premise of the book well:

“Scarcity captures the mind,” explain Mullainathan and Shafir. It promotes tunnel vision, helping us focus on the crisis at hand but making us “less insightful, less forward-thinking, less controlled”. Wise long-term decisions and willpower require cognitive resources. Poverty (the book’s core example) leaves far less of those resources at our disposal.

The editorial process is driven by risk aversion, due to the inability to change a printed product after the deadline. Banks of subs check and recheck work, and a single person runs each section, with an overall editor above them. This creates multiple bottlenecks that add significant overhead, and low levels of autonomy increase the queuing further. Most of the paper remains a work in progress until the last possible minute.

Many of the systems that enable editorial processes are very old and not built for the current world we live in. Complex to change and expensive to run, they are the final piece holding back progress. Change requires running multiple concurrent systems, each tightly coupled with other complex systems, and risk aversion is high. The cost of achieving this change is very high in monetary terms, training and complexity.

Complex process coupled with complex systems and a reduced cognitive capacity for anything outside of their deadline has been holding back editorial progress for years. I believe this is one of the reasons why newer entrants without this legacy have fared so well. They don’t have any of these previous constraints. Change is possible but usually underestimated due to the multiple layers of complexity and need to continue the existing process whilst building the new one.

21 product development tips from the trenches


Over the past four years at Metro we have delivered one replatform, four redesigns, multiple native apps and built and sold an online casino. From these experiences we have iteratively built a process and environment that aids product development. An Agile mindset has helped the development team achieve a consistent output. This coupled with Lean thinking delivered growth that convinced the business to fully embrace our process. The below are 21 product development tips that were hewn in the trenches of failure we call learning.

1: Ensure everyone has a clear vision of the end goal

This needs to be concise, measurable and most importantly achievable. A strong reason behind why that goal was chosen will help motivation. Everyone should be clear on what they can do to affect the goal. This should be the main job of leadership.

2: Use small cross functional, self organising teams

Teams should have as much autonomy as possible in how they affect their goal. This is key to enabling faster decision making, which increases learning velocity. Proximity is the best hack to maximise face to face communication, which is the most effective way to ensure a common understanding.

3: Timing is everything

The biggest challenge is building the right product/feature, at the right time, using the right technology. The biggest waste is building things that people don’t want or need now.

4: Focus on figuring out the next releasable step that takes you closer to your goal

Ensure it is small and releasable, gets you closer to your goal and provides valuable feedback.

5: Prototypes are a great way to improve early and ongoing feedback

Paper/whiteboards are a great place to start as they allow the quickest iteration. Later prototypes are most effective when viewed in the medium they will be delivered in, e.g. in the browser or on the device.

6: Project plans should be as high level as possible

They are great for making a high level view of major deliverables visible. If they constantly need updating they are too granular.

7: UX/Design should be 2-4 weeks ahead of development

Designing and building prototypes with a goal of getting as much feedback as possible before development begins.

8: What is designed and what is built are two separate things

Each should inform the other, but neither is the master, due to constraints on both sides.

9: Just in time is the best approach to detailed planning

Any earlier can be wasteful due to risk of new data from earlier releases or prototypes. Pair up on ticket writing using face to face communication and attach any prototypes/mockups/wireframes.

10: Less is more with process once you have a mature team

An agile journey must start somewhere and a fixed process like Scrum/Kanban is a great place to begin. However as the team and process matures your aim should be to reduce this to the minimum possible for your environment.

11: Centralised communication is best done outside of email

Slack/Trello are great examples of products that allow a participatory conversation without the cognitive overload of email.

12: Evolutionary architectural approach works best, complexity shows where work is needed

The simpler you start, the quicker you can get real feedback. Avoiding over-architecting allows you to tackle scaling issues when required, usually by following established patterns. This avoids adding unnecessary complexity early on, which can seriously hamper your ability to learn fast.

13: Micro services are a great pattern for a service based architecture

The ability to pick up, modify and release a service with complete confidence that it won’t impact anything else helps you move faster. They also allow you to prototype new technologies in a production environment. Focus on automation up front to minimise overheads of running multiple services.

14: Focus

The less we build the better we build what we do. Cognitive overload of working on multiple things at a time has a huge impact on quality.

15: Limiting work in progress is the best way to speed up delivery

Queues are very inefficient: the more queues you have and the more items they contain, the worse they perform.

16: Product feedback should happen on a regular basis and have all stakeholders attend

Feedback should be constructive, open and honest. As we release every day, we have two demos a week and our Slack channel is constantly open for feedback. This is where work is prioritised, discussed and tweaked.

17: Data wins arguments

It’s ok to lose a battle based on opinion and come back to win the war with data. Make sure you are measuring the right things; this takes time but is worth the investment. Then look for anomalies, or what I call “data smells”. Following these to their cause will give you great insight into your product.

18: Innovation needs time and space to happen which initially needs to be forced

Hack days/afternoons are a great way to kick start this process. Give people a fixed time with a clear direction and see what they come up with.

19: Beware of the curse of knowledge

Don’t get frustrated, embrace the fact that some people aren’t as far along the journey as you and help them take those next steps.

20: People should have strong opinions that are weakly held

An opinion is always useful to speed up the ideation phase. Logic around environmental constraints should shape the final decision.

21: Embrace your constraints

Each environment has a unique set of constraints. Use these to aid quick decision making. This should give you time to focus on what you need to change long term to be most effective.

During my time at Metro the only constant was change. We were able to embrace this and use it to our advantage. Iterative learning based approaches helped us maintain consistent growth. Delaying every decision until we had the best data possible kept our failures small and valuable. Building great products is about forming a team around an achievable goal and iterating based on the best feedback available at each stage.

Evolution of the Metro.co.uk homepage over the past four years

How software ate manual content placement on Metro.co.uk

Trending and Newsfeed Automatic Placements



The majority of content placement on metro.co.uk is now managed by software. This has been a long journey based on real world feedback and incremental addition of complexity. My goal has always been to take a developer’s view of the editorial process and optimise where possible. Looking at the numbers it became clear that for large areas of the site a disproportionate amount of time was spent on content placement for the value it returned. My previous post covered the first part of the journey and now I will explain how we extended this to run the majority of the site.

We gather a lot of information from WordPress, Facebook, Twitter and Omniture into a MySQL database. This data is passed through a stored procedure to return five different sorts:

Score

((Tweets + Facebook Interactions) * Social Multiplier) + Views

Trending

Score Now – Score 30 Mins Ago

Social

Tweets + Facebook Interactions

Date

Date descending

Coefficient

((((Tweets + Facebook Interactions) * Social Multiplier) + Views + Editorial Boost) * Hour Coefficient) + Tag Boost
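
The five sorts above can be sketched in Python. The function and parameter names here are illustrative rather than the stored procedure's actual columns, but each formula follows the definitions above term by term:

```python
def score(tweets, fb_interactions, views, social_multiplier):
    """Base popularity score: weighted social interactions plus page views."""
    return (tweets + fb_interactions) * social_multiplier + views

def trending(score_now, score_30_mins_ago):
    """Velocity of the score over the last half hour."""
    return score_now - score_30_mins_ago

def social(tweets, fb_interactions):
    """Raw social interactions, unweighted."""
    return tweets + fb_interactions

def coefficient(tweets, fb_interactions, views, social_multiplier,
                editorial_boost, hour_coefficient, tag_boost):
    """Score plus editorial boost, decayed by article age, plus tag boost."""
    base = (tweets + fb_interactions) * social_multiplier + views
    return (base + editorial_boost) * hour_coefficient + tag_boost
```

The date sort is simply an ORDER BY on publish date, so it needs no formula.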

We also pass in the following to modify the results depending on the channel, e.g. news, that the data sits within.

Hours to return

e.g. from 24 to 336 hours since publishing

Filter Subcategories

e.g. Football,Oddballs,Weird,Food

Boost Tag

e.g. arsenal-fc

Social Multiplier

e.g. 10

Content Type

e.g. Blog or Post

Remove Tags

e.g. tottenham-hotspur-fc

Coefficient

e.g. 0-4 hours * 3, 4-12 * 1, 13-24 * 0.5, 25-48 * 0.3
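
Assuming the parameters above, a channel's settings might be modelled as follows. The channel name and values are illustrative (taken from the examples), and the coefficient buckets map hours since publishing to the age-decay multiplier used in the coefficient sort:

```python
# Illustrative settings for a hypothetical sport channel.
SPORT_CHANNEL = {
    "hours_to_return": (24, 336),        # window since publishing
    "filter_subcategories": ["Football", "Oddballs", "Weird", "Food"],
    "boost_tag": "arsenal-fc",
    "social_multiplier": 10,
    "content_type": "Post",
    "remove_tags": ["tottenham-hotspur-fc"],
    # (upper bound in hours, multiplier) pairs, checked in order
    "coefficient_buckets": [(4, 3.0), (12, 1.0), (24, 0.5), (48, 0.3)],
}

def hour_coefficient(hours_since_publish, buckets):
    """Return the age-decay multiplier for an article."""
    for upper_bound, multiplier in buckets:
        if hours_since_publish <= upper_bound:
            return multiplier
    return 0.0  # older than the last bucket: effectively drops out
```

A two-hour-old article gets a 3x boost, while a two-day-old one is damped to 0.3x, which is what keeps the channel pages feeling fresh.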

The coefficient sort now takes input from the articles that the editors place at the top of each of the channel pages of the site. This editorial boost allowed us to keep everything feeling much fresher until the data catches up. We have also built our own real time page view tracking system with Node and Redis to get around the time lag in Omniture of 30-60 mins.

We recently centralised all of the settings so that they are easy to view and change. I focused on optimising the cluster of content returned, and the timeframes to retrieve within, to match each channel's publishing patterns. The ability to cluster content from similar channels has helped ensure we offer a wider variety of content at different stages of the user’s journey.

Using different sort methods coupled with this clustering has reduced duplication. The design has also helped, by using different image ratios and colours for sections of the page that may contain the same content. We have standardised the bottom of all pages to be the same, which means that if we are able to improve performance, it is felt across the entire site.
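
One simple way to reduce that duplication is to fill each page module in turn, skipping any article already placed higher up the page. This is a simplified sketch, not the production logic:

```python
def fill_modules(sorted_lists, slots_per_module):
    """Fill page modules from differently sorted article lists,
    skipping any article ID already placed in an earlier module."""
    placed = set()
    page = []
    for articles in sorted_lists:
        module = []
        for article_id in articles:
            if article_id in placed:
                continue  # already shown higher up the page
            module.append(article_id)
            placed.add(article_id)
            if len(module) == slots_per_module:
                break
        page.append(module)
    return page
```

Feeding in, say, the trending sort followed by the date sort yields modules with no repeated articles, even when both sorts rank the same story highly.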

The last addition was the ability to boost by tag. This has enabled the article based algorithm to be much more relevant. At this level of granularity we decided context is much more important than freshness. Moving the tag boost outside of the coefficient enabled this to be clustered at the top but we limit this to the five most recent related articles.

Our API is able to deliver everything that the front end needs for rendering, including title, image and excerpt. This has also enabled us to use the data in multiple places, such as the native Tablet and Phone Editions and the glance journalism experiments Metro10 and MetroVision. Our feeds finally go through an additional layer that allows us to add sponsored content at fixed positions for traffic driving purposes.

The great part of all of this is that the maths is still very simple and can be explained to anyone who is interested. Having a set of values to tweak per channel has enabled us to have enough options to slice the data for use in multiple contexts. It has taken 12 months and a full redesign to really see this come to life but I hope it will be a part of Metro for years to come.

Home Page

Metro Homepage Placements


Article Page

Metro Article Page Placement


My talk at the WordPress VIP Big Media Meetup on this.

21 product guidelines forged while growing Metro.co.uk 400%

Metro Quarterly Traffic Growth

For the last two years I have been focused on the design, build and growth of Metro.co.uk utilising the WordPress VIP platform. Our approach consists of constant experimentation with both product and content which has returned a large set of data mixed with editorial feedback. This has been refined into a list of product guidelines to help us remain focused on growth. These are based on my experiences and our audience so yours may differ.

1: Good editorial content will deliver more growth than any product based approach

With a single well written/planned/timed story able to deliver millions of page views and course through the veins of social networks for weeks, this should be the number one focus.

2: Good UX turns the dial more than any product hacks

The better the experience of product and content the more likely people are to visit your site, share your content and form habits around its consumption.

3: The closer to the main content area of the page the more related the content should be

Our data has shown that the closer content is to the article body or the top of channel pages, the better contextually related content performs. Once you are below these areas, users are more open to a wider set of content to continue their journey.

4: Where content is placed on the page is almost as important as the content that is placed there

Our testing revealed content placement is almost as important as content selection (as long as it is relevant and recent). This is one of the reasons we have moved to an algorithmic approach for large areas of the site.

  • Nothing beats the value of an editorially selected contextual link within the article body
  • The area just after the article delivers a lot of value, as users have finished reading and can be easily tempted into something else
  • Sidebars aren’t shown on mobile, and banner blindness often makes desktop users tune them out, so they are not an area we focus on

5: Fill dead space with content; people like to scroll, it’s the natural behaviour of the web

Our newsfeed delivers over 10% of the page views of our site, which is pretty impressive considering it used to be blank space at the bottom of every article and channel page.

6: Don’t mess with the natural way that the web works

We tried and failed with this during our swipe phase. 5-7% of users delivered 20% of our page views, but that didn’t increase their overall time on site. However it complicated everything we built, hampering our ability to learn fast. It also didn’t quite fit into the commercial or editorial strategies. This frustration/learning inspired the algorithm and scroll based newsfeed you now see.

7: Algorithms are great but need help from humans to perform at their best

Simple algorithms are a great way to optimise editorial workflows, especially around content positioning. However, they are only as good as the data behind them, and often you have to wait for that data to be gathered before acting on it. Using editorial intuition is a great way to shortcut this process, especially if you can make it run off existing priorities, so no process change is required to participate.

8: Whatever Google/Facebook ask you to do just do it

They deliver so much of your traffic that you shouldn’t question it; just do what they recommend.

9: Feed the beast

Google and Facebook are always hungry for quality content, and gaining momentum requires constant feeding. They both score the overall domain as well as individual article URLs, so keeping these scores high gives a better chance to gain and then maintain momentum.

10: Think of every page as a funnel, you lose users as they scroll but the lower they get the more open to their next engagement they become

The higher up the page something is placed the more people will see it. However the lower down the page someone is the more open they are to being tempted by some more content, advertising or interactions (e.g. poll vote, comments)

11: A mobile first approach is a great way to approach product prioritisation

Most of our traffic comes from mobile rather than desktop so it is logical to prioritise. This has formed a major part of our growth strategy.

12: Goals need to be concise, measurable and focus on why

The more people understand the goal and are able to affect it the more powerful it is. A goal that contains a why will always beat a goal that just contains a what.

13: Product specific performance should be broken down to actions per daily active users for comparison

This gives a much better overview of actual performance and allows you to factor out traffic fluctuations; just make sure you have enough data.
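
The normalisation itself is a simple ratio, and the (made-up) numbers below show why it matters: a traffic spike can double a feature's raw action count while per-user engagement stays flat:

```python
def actions_per_dau(actions, daily_active_users):
    """Feature engagement normalised by audience size for the day."""
    return actions / daily_active_users

# Hypothetical figures: raw actions double on the busy day,
# but engagement per daily active user is unchanged.
quiet_day = actions_per_dau(5_000, 100_000)
busy_day = actions_per_dau(10_000, 200_000)
```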

14: A week seems to be the minimum amount of data required to see if a feature has worked

This is due to fluctuations in traffic and browsing habits. It is also good to look at monthly and quarterly trends over longer periods, as quite often they exhibit patterns that aren’t found at lower levels. Asking questions about unexpected trends and data taught me the most about product growth.

15: Distribute weekly reports to show trends and give your stakeholders an overview of how the product is performing

Have these scheduled for your team and stakeholders via email. They are also very useful if you break something when fixing something else: a great safety net to minimise impact and spot any unexpected growth.

16: Any new feature needs to be taken in context of how it fits in the editorial work flow. The closer it is to the existing process the more likely it will be adopted.

The best way to change a habit is build off an existing trigger. New features that leverage existing habits will get much higher adoption than building new habits/process.

17: Consider the user’s current journey and their emotional state in all features

Segmenting users based on mindset is a great way to understand data. For example, social visitors are likely on a multi-site journey in a chromeless browser on a mobile device, looking for a single story from your site, so optimise for that. There is no point worrying about pages per visit; focus on getting more return visits via a social follow.

18: When coming from social users are often looking to enhance their social status

Our top share buttons get clicked four times more than our bottom share buttons. Social proof showing how many others have already shared also promotes more sharing.

19: When coming from search users are usually in a topic based mindset

They are more likely to click on related articles, in-article links and masthead channel links. Continue to deliver great content around a niche to form habits; this is particularly useful around passion centres, e.g. Premier League clubs.

20: It’s better to have 100 amazing tag pages that look and feel like a destination than 10,000 that feel like they were made for Google

Quality trumps quantity every time; Google knows whether your users are clicking through.

21: People click on headlines 4x more than they click images

This is why A/B testing headlines is a great idea. It is the single piece of the editorial process that can have the biggest impact on growth. We also have SEO and socially optimised headlines to ensure we cater to both needs.

These are the principles that I have applied to the product development of metro.co.uk over the past two years. The key takeaway is that constant experimentation is a great way to unlock growth if your environment supports it. The hard part is achieving that without adding too much complexity. Complexity inhibits your ability to learn and learning is central to any successful product growth strategy. Building a set of guidelines has enabled us to move faster and helped foster our continued growth.

One for the future.

Micro interactions help drive habitual use

We don’t have a lot of data on this yet but there seems to be a correlation between micro interactions such as poll votes and habitual use. My theory is that by engaging different parts of the brain you become more memorable. These simple actions form the basis of new habits around content consumption. I think this is a major opportunity for future growth.

The thoughts and process behind Metro10


Metro 10 was born out of a desire to experiment with native mobile news reading experiences that solved a different problem to our already fully responsive website. We had an algorithm that allowed customisation and decided to use this as our data source. The concept of restricting the volume of content would help us differentiate from our infinite web experience. We also wanted our users to engage with each article fully before moving on and not just get into a state of skimming which we had seen with infinite lists online.

Our goal was to minimise the time to launch, so we decided a hybrid approach would help: native elements for caching and navigation, but a web view for the article body. As our CMS stores all of the HTML and other markup in a single blob, stripping this out was too much work for the first iteration of an unproven idea. Our initial ideas were based on simple mobile interaction patterns: swipe to dismiss, interaction based personalisation and full bleed images, as our tablet edition had experimented successfully with these. We decided on an Android first approach as we had Java skills in house and we also wanted to iterate quickly once it had launched; the turnaround in the Play store would allow us to do this.

I hadn’t written any code in four years and getting back in the saddle was a rewarding experience. Learning a new language and platform at the same time took a little while, but the tools and ecosystem are pretty mature. Initially I got out my little black Moleskine notebook and started sketching. Putting something down on paper helped me start to visualise flow and basic function. I am no UX designer but I find sketching a really valuable first step in any design process.

Once we had a basic idea we jumped into a rapid prototyping phase. This involved a lot of copying and pasting from Stack Overflow and some working from home for the focus needed to get the bare bones of an app together. It was buggy as hell, but being able to show people on a phone and see their reaction was priceless. I then involved Matt, our UX designer, and he started to iterate on the design. We were lucky that the concept was simple and allowed very quick iterations. The design quickly morphed from a list that looked very web based to a full bleed image that felt more mobile first.

It was at this stage I got a few more of the team involved. When rapid prototyping it was good to be solo and really iterate quickly, but once we were locked on an idea we needed stability and to start refactoring towards a proper architecture. I believe in emergent design, where you don’t over architect things up front but build what is most appropriate as you learn more about the system and domain. This can save a lot of time as you only build what is necessary and refactor once you are sure of its value.

The next phase of development to our first alpha was frustrating at times but also very rewarding. We kept the requirements process very fluid as things would change on a daily basis. This was a challenge, but having written user stories that quickly went out of date and fought with test automation, we decided keeping things simple was the best approach. Guerrilla usability testing has been key to shaping the UX. As the initial idea was pretty out there, putting it in real people’s hands in the real world was priceless. We quickly learnt that people liked the limited number of stories and enjoyed the full screen design, but having to dismiss every story Snapchat style wasn’t something they felt comfortable with. The paradox of choice was blinding them: with 10 possible stories to read, only one visible at a time and only one chance to read each, they couldn’t decide whether a story was interesting enough.

We decided to strip the user experience back to its most basic and only give people 10 stories at a time that they can scroll through left and right. With this approach people were quickly able to grasp the concept and understand the navigation. Calling it Metro10 ensured that we kept to our goal of just the 10 most popular stories right now. Throughout this process there was a lot of tension over what constituted our MVP. Due to a fixed timeframe we had to make some compromises around fluidity of design. The biggest was to leave all interactions as gestures rather than fluid animations, as the level of complexity rose exponentially with the latter.

We were able to get validation of our ideas without them. Learning whether the concept works is the most important thing, and all of the underlying data, service and business layers need to work no matter what the front end looks like. Overall it has been a great experience taking a concept from paper to reality in less than two months. There have been lots of zigs and zags along the way, but that is what makes this job interesting.

MVP


Later iteration
