UX Design: Getting user feedback into the backlog

As UX designers we are always aiming to balance the needs of the user with the requirements of the business and with what’s technologically feasible.

A key element of this is understanding and addressing user needs, which can be done by talking to users, testing prototypes with them and giving them the opportunity to provide feedback.

Within large product teams, the likelihood is that there will be an abundance of feedback collected from a range of sources, which is great, as feedback from our users is essential to creating products people want and can use. But what do we do when this feedback is scattered across 40 or 100 people in multiple teams? How do we get the more general feedback that isn’t directly related to a current piece of work back into the backlog?

In this post, I will run through the types of feedback that you can get from users and give you a framework that can help you turn this data into small pieces of work.

What types of feedback are there?

Firstly, I just want to clarify what we mean by feedback and how we categorise it. When I talk about feedback I am referring to any kind of response that happens as a direct result of a user using a product.

We can separate the feedback we get from our users into two categories: qualitative and quantitative data.

Qualitative data

Insights in the form of opinions, feelings and behaviours (e.g. “I didn’t expect that to happen when I clicked there, I feel slightly disorientated”).

Usually gathered through observational research methods that require some form of direct communication with a user, and used to help define a problem or to explore an idea or concept.

These kinds of insights are generally harder to get hold of, as they require more effort to gather. However, there are a number of ways to find these insights whilst still keeping your process lean. Here are some examples:

  • Cafe/guerrilla testing
  • Feedback forms
  • Social media channels (Twitter, Facebook)
  • Reviews (TripAdvisor, App Store)

Quantitative data

Insights that can be depicted by numbers (e.g. 80% of users bounced on the signup page). These sorts of insights are gathered through measurement methods including surveys and analytics tools. We often use quantitative research methods as a way of quantifying a problem, in order to validate its severity/prevalence. (1)
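To make “quantifying a problem” concrete, here is a minimal sketch (the event shape and numbers are invented for illustration, not taken from any particular analytics tool) that derives a bounce rate from raw page-view data:

```typescript
// Hypothetical page-view events; the shape and numbers are made up for illustration.
type PageView = { sessionId: string; page: string };

const views: PageView[] = [
  { sessionId: "a", page: "/signup" },
  { sessionId: "b", page: "/signup" },
  { sessionId: "b", page: "/welcome" },
  { sessionId: "c", page: "/signup" },
];

// Group page views by session.
const sessions = new Map<string, string[]>();
for (const view of views) {
  sessions.set(view.sessionId, [...(sessions.get(view.sessionId) ?? []), view.page]);
}

// A session "bounces" on /signup if it saw that page and nothing else.
const signupSessions = Array.from(sessions.values()).filter((pages) => pages.includes("/signup"));
const bounced = signupSessions.filter((pages) => pages.length === 1);

console.log(`Bounce rate on /signup: ${Math.round((bounced.length / signupSessions.length) * 100)}%`);
// Bounce rate on /signup: 67%
```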

So…how do we turn all this data into something that we can actually work on?

1. Firstly, gather all your data in one place

Round up all the data that you have, including data from analytics tools and comments from user testing: all the most recent feedback that you have. (Grab some post-its too.)

Ensure the data is summarised in a bite-size format with one insight per post-it, focussing purely on observations with no solutions at this stage.

This is important because the purpose of this exercise is to translate random pieces of data into actionable pieces of work, so it’s essential that the data is written in short, simple language that you can communicate easily to the rest of your team. Find a large wall that you can use to pin up your data, and then gather your team.
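If your team also tracks feedback digitally rather than only on post-its, the same principle of one insight per item can be captured in a simple shared shape. A rough sketch, with invented field and source names:

```typescript
// One record per insight, whatever its source; the field names here are illustrative.
type FeedbackSource = "user-testing" | "analytics" | "app-store-review" | "social-media";

interface Insight {
  id: string;
  source: FeedbackSource;
  observation: string; // what was seen or said, with no solution attached
  collectedOn: string; // ISO date, handy for keeping only recent feedback
}

const wall: Insight[] = [
  { id: "1", source: "user-testing", observation: "Didn't expect a modal to open after clicking 'Save'", collectedOn: "2016-05-02" },
  { id: "2", source: "analytics", observation: "80% of users bounced on the signup page", collectedOn: "2016-05-01" },
];

console.log(`${wall.length} insights on the wall`);
```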

2. Secondly, identify themes

What you’ve got so far is a big pool of data, and in order to have something actionable at the end of this session you need some way of filtering this data down.

A great way to do this is by affinity mapping the data/feedback (sorting it into groups). As a group, sort the data into similar categories, and name each category once all the data has been categorised.
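The physical sorting is the valuable part, but if you want to record the outcome afterwards, it boils down to a map from theme name to the insights grouped under it. A small, hypothetical sketch:

```typescript
// Affinity mapping captured in code: theme name -> the observations grouped under it.
function groupByTheme(assignments: Array<{ theme: string; observation: string }>): Record<string, string[]> {
  const themes: Record<string, string[]> = {};
  for (const { theme, observation } of assignments) {
    (themes[theme] ??= []).push(observation);
  }
  return themes;
}

// Example assignments agreed by the team during the session.
const themedWall = groupByTheme([
  { theme: "Signup friction", observation: "80% of users bounced on the signup page" },
  { theme: "Unexpected navigation", observation: "I didn't expect that to happen when I clicked there" },
]);

console.log(Object.keys(themedWall)); // [ 'Signup friction', 'Unexpected navigation' ]
```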

3. Thirdly, write user stories/actions

Once you’ve got your main themes, you can then begin to write actions based on each of the themes.

The most likely output from this would be user stories, but it most certainly isn’t limited to that. The data you have gathered might well provoke further questions that require some form of validation, so a user story that is purely focused on exploring or experimenting might be another great outcome from this exercise.
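If it helps to keep the output consistent, each theme can be turned into a story stub using the familiar “As a… I want… so that…” template. A minimal sketch with made-up example values; the split between “feature” and “exploration” stories is just one way of flagging the experiments mentioned above:

```typescript
// A story stub derived from a theme; "exploration" stories capture open questions to validate.
interface UserStory {
  theme: string;
  kind: "feature" | "exploration";
  text: string;
}

function storyFromTheme(theme: string, want: string, why: string, kind: UserStory["kind"] = "feature"): UserStory {
  return { theme, kind, text: `As a user, I want ${want} so that ${why}.` };
}

const stories: UserStory[] = [
  storyFromTheme("Signup friction", "fewer fields on the signup form", "I can get started quickly"),
  storyFromTheme("Unexpected navigation", "to know why people feel lost after clicking 'Save'", "we can validate the problem", "exploration"),
];

console.log(stories.map((s) => `[${s.kind}] ${s.text}`).join("\n"));
```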

Feedback wall at Red Badger: an example of a space we helped create, where we gathered insights, identified themes and created actionable user stories.

4. Last but not least, prioritise within the backlog

With the Product Owner, or whoever organises each team’s backlog, go through the user stories that you have created and prioritise them alongside the other work that is in the backlog. There are a couple of ways you can do this:

The MoSCoW (Must, Should, Could, Would) method, whereby you gather the user stories that you have created and vote on which category each story falls into. Differentiating between the categories is vitally important; I find these definitions a great starting point:

Must – features that must be included before the product can be launched.

Should – features that are not critical to launch, but are considered to be important and of a high value to the user.

Could – features that are nice to have and could potentially be included without incurring too much effort or cost. These will be the first features to be removed from the scope if the project’s timescales are later at risk.

Would – features that have been requested but are explicitly excluded from the scope for the planned duration, and may be included in a future phase of development. (2)
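To show how the result of the vote might then be used, here is a minimal sketch that sorts a backlog by its agreed MoSCoW categories (the story titles are illustrative):

```typescript
// Categories in priority order; a lower index means it is worked on sooner.
const moscowOrder = ["Must", "Should", "Could", "Would"] as const;
type Moscow = (typeof moscowOrder)[number];

interface BacklogItem {
  title: string;
  category: Moscow; // the category the team voted for
}

function prioritise(backlog: BacklogItem[]): BacklogItem[] {
  return [...backlog].sort(
    (a, b) => moscowOrder.indexOf(a.category) - moscowOrder.indexOf(b.category)
  );
}

const backlog: BacklogItem[] = [
  { title: "Explore confusion after clicking 'Save'", category: "Could" },
  { title: "Simplify the signup form", category: "Must" },
  { title: "Offline mode", category: "Would" },
];

console.log(prioritise(backlog).map((item) => `${item.category}: ${item.title}`));
```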

Alternatively, you can use the prioritisation matrix described in Jeff Gothelf’s Lean UX.

Our goal is to have a prioritised list of things to work on. This matrix helps us create that list, based on how risky a piece of work is and how well it is understood. If a piece of work is not well understood and is high risk or expensive, we want to find out earlier rather than later how it will affect us, so any user stories that are sorted into the high-risk, least-understood quadrant (the top right of the matrix) should be worked on first. (3)
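As a rough sketch of the same idea in code (the axes and scoring are a simplification, not Gothelf’s exact framing): judge each story on risk and on how well it is understood, and surface the high-risk, poorly understood work first:

```typescript
// Each story is placed on the matrix by two rough judgements from the team.
interface ScoredStory {
  title: string;
  highRisk: boolean;       // risky or expensive if we have got it wrong
  wellUnderstood: boolean; // do we already know enough about it?
}

// Lower number = work on it sooner; the scary unknowns come first.
function quadrant(story: ScoredStory): number {
  if (story.highRisk && !story.wellUnderstood) return 0; // high risk, poorly understood
  if (story.highRisk && story.wellUnderstood) return 1;
  if (!story.highRisk && !story.wellUnderstood) return 2;
  return 3; // low risk and well understood can safely wait
}

const matrix: ScoredStory[] = [
  { title: "Copy tweak on the welcome email", highRisk: false, wellUnderstood: true },
  { title: "Rework the checkout flow", highRisk: true, wellUnderstood: false },
];

console.log([...matrix].sort((a, b) => quadrant(a) - quadrant(b)).map((s) => s.title));
// [ 'Rework the checkout flow', 'Copy tweak on the welcome email' ]
```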

This way of gathering, sorting, defining and prioritising data is just one method that we have experimented with, but we’d love to hear from you if you’ve had any experiences, good or bad, of getting user feedback back into the backlog.

References

1. http://www.surveygizmo.com/survey-blog/quantitative-qualitative-research/

2. http://www.allaboutagile.com/prioritization-using-moscow/

3. Lean UX - Jeff Gothelf
