How to launch a scalable optimisation programme for digital products

Key considerations for successfully launching and running a digital product optimisation programme.



Picture this. You’ve spent months toiling away, building a fantastic new product. You’ve identified your users, you have your value proposition nailed, you’ve conducted extensive research into user needs and feature ideas, you’ve built a scalable technical infrastructure that is secure and future-proofed, and you’ve now launched your baby into the wild (figuratively, mind; I’m definitely not suggesting you literally launch your baby. There are laws against that).

A job well done. So, what comes next? 

According to Gartner, 73% of buyers said the experience was the number one factor in loyalty, not price or brand. If continually improving product experiences is key to customer retention and loyalty, how do companies look to address this after the launch of their products? Over the years I’ve witnessed three main scenarios play out:

Scenario 1: Project Thinking

Some companies believe this is where it ends. The project has been completed and released, they now have a BAU support structure in place and therefore move on to other top priorities. These companies still operate with a “project-thinking” mindset and haven’t yet explored or adopted a “product-thinking” mindset.

This approach relies on you getting it 100% right in the planning phase, leading to a product 'big bang' if you like. It also means that your product will never be better than the day it launches. And with competitors ever improving their experiences, your product will immediately start to depreciate in value.

The added downfall with this scenario is that technologies can quickly go out of date, and without continually monitoring, analysing or improving your product you’ll find yourself having to rebuild it from scratch in three to five years’ time, mainly to keep up with your competitors and ever-evolving technologies.

In the meantime, your users may have been feeling increasingly frustrated with a degrading user experience and decided to move their loyalties elsewhere, meaning a higher churn rate and lost potential revenue for your company.

Scenario 2: Feature Shopping List

More customer-focused companies do see the benefit in continually adapting and building on their initial product and therefore have a team, or multiple teams, with a backlog of feature improvement ideas.

These teams will work tirelessly to prioritise and build their features to improve the customer experience. Sometimes this might be done via “gut-feel” or because the CEO has requested a particular feature and made this a priority.

More evolved companies will conduct user research and have tested their new feature ideas with a sample of end users before prioritising and releasing.

More often than not, teams will then rely on a centralised analytics team, or in smaller companies a sole analyst, to monitor the feature release over time and report back on its success.

This is definitely a step ahead of scenario 1; however, this way of working isn’t without its pitfalls:

  1. Releasing a new feature to 100% of your traffic is risky. Whilst you may have great qualitative data to back up the desirability of a feature, there’s no telling how it may perform when it’s put in front of a much larger audience. What if you eventually see a drop in conversion? By then, it’s almost too late. You have already invested the time and resources to get this new feature out, and now have to spend more time trying to figure out what went wrong and how to rectify the situation.

  2. Not every day, week or month of the year is the same. If your analyst is comparing user activity before and after a new feature is released to determine its success, the comparison can be riddled with ambiguity. Outside influences such as events, campaigns, peak trading times or even the weather can hugely affect user behaviour on site, and if you’re comparing the success of a feature update from month X to month Y, you may as well be comparing Trump to Obama (or, as the old saying goes, apples with oranges. I’ll let you guess which one might be which here). This is a weak method of analysis: the success metrics aren’t contained within a controlled environment, so they can be complicated to unpick and can’t really be trusted. The sentiment is there; the science, however, is not (a sketch of what a controlled comparison looks like follows this list).

  3. Don’t get me wrong, having an analyst provide data on feature success metrics is a fantastic, data-driven approach. However, if you’re counting on an analyst in another department to produce results, there are two main pitfalls: time and context.
    1. You are not that analyst’s only priority. They may well have a backlog as long as their arm themselves, so will you be able to get feedback on your feature release in a reasonable timeframe? What if your feature is performing poorly? If the analyst doesn’t have the time or the capacity to prioritise your work, that under-performing feature may be out there for a while before you have the data or the knowledge to do anything about it. Or worse, you only find out about a problem when you’re contacted by angry customers.
    2. Also, in my experience, if that analyst isn’t embedded as part of your team on a day-to-day basis, the context of what you’re trying to achieve may get lost in translation. Requests or briefs often come through a ticketing system and the analyst won’t speak to the product team, meaning you may only receive back a report without recommendations on what to improve and how. That’s mainly because the analyst doesn’t have the context and isn’t living and breathing the same goals as your team.
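
To make the “controlled environment” point in item 2 concrete, here is a minimal, hypothetical sketch of how a randomised A/B split is typically analysed. Both groups see the site over the same period, so campaigns, seasonality and the weather affect them equally, and a simple two-proportion z-test indicates whether the difference between them is more than noise. The figures below are made up purely for illustration.

```python
# Minimal sketch: comparing conversion between a control and a variant
# that ran over the same period (a controlled A/B split), rather than
# comparing month X to month Y. All figures are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1_150, 1_240]    # control, variant
visitors    = [25_000, 25_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value (commonly < 0.05) suggests the difference is unlikely
# to be down to chance. With a before/after comparison you could never
# rule out campaigns, seasonality or the weather as the real cause.
```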

Scenario 3: Test & Learn cycle

So now we come to scenario 3. This scenario mainly occurs after a company has been operating within scenario 2 for some time, yet is now getting frustrated by the lack of feature success, the lack of meaningful or trustworthy data, or simply because they want to ensure they are more data-driven in all they do.

These companies want strong evidence that something is going to be a success before they even start to build it, in turn reducing up-front time, investment and resources, and ensuring they’re doing the right thing for their customers.

This is where an optimisation strategy comes into play. Adopting a “test-and-learn” approach to improve any product is a path to major success, and even better when conducted scientifically.

A scalable optimisation programme, done right, is a sure-fire way to achieve the dual goal of increasing customer satisfaction whilst improving business performance. Sounds good, right? But where do you even begin?

Starting an optimisation programme from scratch

Initial hurdles

To get an optimisation programme off the ground, one of the first main hurdles to overcome is getting “buy-in” from senior stakeholders.

Their two main concerns tend to be cost: “Why do I need to spend money on a new tool or a new team, and what will we get out of it?”, and fear of change: “Our teams already have a backlog of features that are prioritised and we’ve been releasing features fine up until now, why add a new process in? Won’t it just slow us down?”.

Both are valid concerns, and ones I’ve heard many times. Without knowing or understanding the benefits of an optimisation strategy, it can seem an unnecessary cost and a hindrance to delivery.

However, what if you could tell your boss (or the person who holds the purse strings) that you could give them hard, quantitative evidence that the features you’re looking to build will improve conversion, and perhaps revenue, before they’re even released, whilst at the same time saving time and money by not investing in ideas that aren’t going to work?

Couple this with qualitative research findings (to ensure that customers are not being disadvantaged by any improvements in conversion), and you have very powerful evidence to support prioritising any backlog.

Ownership is usually a hurdle as well. Who owns a testing programme? Is it Marketing, Product or Tech?

These teams will normally want to have their say, which is why it’s good practice to establish clear ways of working, communication and visibility from the very beginning.

This can be done through visible Kanban boards, sharing your roadmap, or holding regular CRO forums with key stakeholders from multiple departments to present results and give visibility on what’s coming up next.

This is also a good opportunity to discuss and understand whether running a test at a certain time may affect other activity on the site, such as a marketing campaign, helping you gauge the business viability of each test.

Team & Tools

It can be relatively easy to start testing your website; you just need the right mixture of skills and tools. You don’t have to go in all guns blazing: you can start relatively small, and once the strategy has been crafted and the programme has found its groove, you can look to scale.

From the outset, your optimisation team and toolkit might look a little like this:

Tools

There is a myriad of A/B testing tools on the market, and most offer the ability to create simple tests without any coding experience, although some come with a hefty price tag.

If you’re just starting out on your CRO (conversion rate optimisation) journey, you might be considering the most cost-effective solution in order to first prove the benefits testing has to offer your business and to get that initial buy-in from your stakeholders.

As an example of cost-effectiveness, Google offers Optimize, a free tool as part of its stack, where you can set up desktop and mobile tests using a WYSIWYG interface. It also offers integrations, such as with Google Analytics, to help you expand your analysis.

Team

When it comes to CRO testing, most businesses tend to start out by trying out a few ad-hoc tests in perhaps one area or team.

More often than not, this responsibility is handed to someone who has the grounding to be a Conversion Optimisation specialist (an analyst, marketing specialist or digital strategist, for example) and will normally do this activity part-time, amongst their other day-to-day tasks.

This is what I would classify as the “Initiate” phase; the initial step to start trialling out how A/B testing can work as part of your business. 

This initial start is a great opportunity to catch people's attention internally and start the snowball effect of being able to grow an optimisation strategy as one of your core digital functions over time.

Whilst running, say, one or two simple tests a month may not sound ground-breaking, if you’re choosing the right problems to solve and designing solutions that could have the biggest impact, then you’re already well on your way to proving how powerful A/B testing can be.

However, and here’s the controversial bit, expect around half of your tests to fail! It would be odd if they were all positive, really, because the point of testing is to learn, not to be right all the time. “Failure” is as healthy as a win, because in essence you’ve learnt what doesn’t work for your customers, and therefore you’re not going to be providing them with features they don’t want.
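
One or two tests a month is also a realistic cadence because each test needs enough traffic to be trusted. As a rough, hedged illustration, here is a back-of-the-envelope sample-size estimate for a conversion A/B test; the baseline rate and uplift are assumptions you’d replace with your own numbers.

```python
# Rough sample-size estimate for a two-sided, two-proportion z-test.
# The baseline conversion rate and target uplift below are illustrative.
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline, relative_uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect the uplift."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2)

# e.g. a 5% baseline conversion rate and a hoped-for 10% relative uplift
print(visitors_per_variant(baseline=0.05, relative_uplift=0.10))
# Roughly 31,000 visitors per variant, which on many sites takes weeks.
```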

Growing your programme over time

If you’ve been conducting ad-hoc tests for a while and have presented some stellar wins to your senior stakeholders, you’re probably now in a good position to expand and mature your testing programme into the “Accelerate” phase.

Team

If you’re looking to accelerate your testing capabilities then one lonely, part-time resource isn’t going to be sustainable any longer. Now is the time to galvanise a team of experts (insert cheesy Avengers reference here if you wish!).

To kick-start a regular testing programme, it helps to have a full-time, cross-functional team who can strategise, plan, design, build, execute and analyse. I’ve worked within optimisation teams of various sizes, and they vary based on the size of the business and the opportunity that testing can add to an overall digital strategy.

Here’s one example of what a full-time team can look like:

  • Optimisation Manager (Product Manager)
  • Delivery Lead
  • Analyst(s)
  • UX / UI Designers
  • Developers
  • QA

As mentioned, this can be scaled up or down depending on demand. Ideally, if your programme is fully embedded then this cross-functional team works best when they’re all fully dedicated resources.

However, that sometimes isn’t possible, and there are pros and cons to borrowing part-time resources from other departments. You may work with certain UX / UI designers when testing on different parts of your website, or you may feed into another team's development backlog; the latter, however, can have its drawbacks when it comes to prioritisation and speed of delivery.

In any case, one key element of scaling your optimisation programme is having the right team working together to deliver the best tests, at the right time.

Tools

Now is a good time to add more tools to your testing arsenal. This doesn’t necessarily mean you have to invest in a shinier testing platform, although if you want one which offers more integrations or more custom code building options then it’s time to splash some cash.

Just looking at the “what happened” with your testing results doesn’t always give you the full story. By adding in the element of “why it happened”, you can create richer narratives and more compelling outcomes. 

Heatmaps and session recording tools are great for identifying where your users are experiencing pain points in your product and highlighting problem areas that can lead to a test hypothesis.

They can also add value when embedded as extra elements of the test itself, giving a visual representation of how each variant performs. These tools can also be linked to your product analytics, allowing you to drill down into specific groups of users.
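
As a small, hedged illustration of what drilling down into specific groups can look like once test results sit alongside your analytics data, here is a sketch using a hypothetical per-visitor export; the column names and figures are assumptions, not any particular tool's schema.

```python
# Sketch: conversion rate by variant and device from a hypothetical
# per-visitor export. Column names and figures are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "variant":   ["control", "variant_b", "control", "variant_b"],
    "device":    ["mobile", "mobile", "desktop", "desktop"],
    "visitors":  [12_000, 12_100, 8_000, 7_900],
    "converted": [530, 610, 420, 415],
})

df["conversion_rate"] = df["converted"] / df["visitors"]
print(df.pivot(index="device", columns="variant", values="conversion_rate"))
# A variant can win overall yet underperform for one segment (say, mobile),
# which is exactly the kind of finding heatmaps and session recordings
# can then help explain.
```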

You may also be thinking about expanding your testing capabilities beyond your website. You may have an app that you want to apply the same testing strategy to. Testing on apps comes with a slightly different set of rules and tools, and it pays to look into tools that either cater solely for apps or that at least have app-friendly testing capabilities.

A/B/n vs. MVT 

All the letters and acronyms here! If you’re growing your testing strategy then you’ll want to diversify the types of tests you’re running. Simple A/B tests, where you test just one element on a page of your website, can evolve into testing multiple elements on a page, or even across multiple pages. It’s about choosing the right method at the right time.
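
To make the distinction concrete: an A/B/n test compares n alternative versions of a single element, whereas a multivariate test (MVT) tests combinations of changes to several elements at once, so the number of variants multiplies quickly. The sketch below uses made-up page elements purely to show the counting.

```python
# Sketch: variant counts for A/B/n vs multivariate (MVT) testing.
# The page elements and options below are made up for illustration.
from itertools import product

# A/B/n: one element (the headline) with three alternative versions
headlines = ["control", "benefit-led", "urgency-led"]
print(f"A/B/n variants: {len(headlines)}")        # 3

# MVT: every combination of changes to several elements, tested together
buttons     = ["Buy now", "Add to basket"]
hero_images = ["lifestyle", "product-only"]
combinations = list(product(headlines, buttons, hero_images))
print(f"MVT combinations: {len(combinations)}")   # 3 x 2 x 2 = 12
# Each combination receives a smaller share of traffic, which is why
# MVT generally needs far more volume than a simple A/B/n test.
```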

In summary, starting an optimisation programme from scratch doesn’t have to be difficult. The most important thing is that you’re continually learning, always being curious and putting users and data at the heart of everything you do and decide.

By adopting this mindset, your optimisation programme will grow in both scale and velocity, alongside increased satisfaction for your users and real benefit to your business.
