The Magic Formula: Gather -> Understand -> Model -> Track
User feedback in the form of open-ended comments is a critical part of a product’s success, yet it’s often under-utilised because extracting insights from the data is tricky. In this article we cover the types of user feedback a business should be keeping track of, the typical applications of those insights, and a magic formula for going from raw feedback to consistently actionable insights.
Here we will specifically refer to user feedback as open-ended comments left by users about a product. A simple example of a piece of user feedback is “I enjoyed the cake but was disappointed I didn’t get to keep it too.” — Bob. Note here that we’re excluding other types of user feedback such as numerical data (e.g. thumbs-up and thumbs-down) as these are generally less common and significantly easier to work with.
User feedback can come in many shapes and sizes, some of the key ones that are common across technology products include:
- In-product bug reporting: many products give users a way to leave feedback if the system crashes or they navigate somewhere unexpected.
- In-product feedback: products can ask for more general feedback in specific locations or always give users an option in site menus.
- Social media: thanks to the wide penetration of social media, it’s common to find feedback about products online. You can track specific tags on Twitter, monitor your company’s Facebook or Instagram profiles or check your executive’s posts.
- Internal feedback: many people working internally in a company will have a view of how the product behaves and capturing feedback from these people is also important.
- User research: companies can actively seek user feedback through user research which often includes recruiting individuals and asking questions directly.
As you can see, feedback comes in many different forms. These vary in volume and usefulness, with those at the top of this list generally being the most actionable.
In general, feedback should be considered in the context of the team’s goals. For example, if your team is building a new checkout experience, you’ll want to focus on feedback around the checkout funnel; this is an example of a practical question. Alternatively, you can also look at feedback that might change the way you think about your product, such as general feedback about a shopping experience; this is an example of a strategic question. You should think about both practical and strategic types of feedback.
Steve Jobs famously said that “People don’t know what they want until you show it to them. That’s why I never rely on market research.” So why should we care about user feedback? While it may be true that users don’t always know what they want, there are often many, many useful nuggets of information hidden away in the noise.
These insights can:
- Inform product direction: if users are consistently asking for a specific feature or requesting more types of content, then the product may very well benefit from following their advice. Despite Jobs’s scepticism about market research, there are plenty of situations where listening to user feedback has led to great product gains.
- Identify bugs: when something goes wrong, users are often happy to complain about it. Monitoring user feedback is a great way to get quick insights into potential problems with your product.
- Understand product sentiment: product teams will often ask themselves, how are we doing? There are many ways to answer this question including benchmarking against similar products and looking at retention metrics, but an equally effective method is to understand product sentiment through user feedback. This can help you know when you may need to invest more in building a better product, or if you can go heavily into marketing spend.
Some key outcomes from investing in user feedback can include:
- Self-service dashboards: users across a company can use the dashboard to get insights on feedback.
- Regular reporting: having a good understanding of the feedback data can lead to automatic generation of reports which can drastically reduce the time to actually understand feedback and get insights out of it.
- Scalable datasets: datasets which can be used by employees across the company.
In order to turn raw data into insights and add business value, we want to follow this magic formula:
Gather -> Understand -> Model -> Track
First, we need to gather the data.
- Identify the key data sources you want to use, some of these are described above.
- Before even touching any of the data, ask yourself some user privacy questions: do we have good reason to be looking at the user data? Would we be violating any user privacy? In modern technology, user privacy is at the forefront so this should always be your first thought.
- Get all the data into one place, ideally with automated pipelines. For internal product data, this should be relatively easy, but external data can mean more complex imports with lots of data checks.
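As a rough sketch of the gathering step, the snippet below combines several sources into one table with pandas. The file names and the date/text column layout are assumptions for illustration; in practice each source would have its own import logic.

```python
import pandas as pd

# Hypothetical source files (assumptions); replace with your own exports or API pulls.
SOURCES = {
    "in_product": "in_product_feedback.csv",
    "social": "social_mentions.csv",
    "research": "user_research_notes.csv",
}

def gather_feedback(sources: dict) -> pd.DataFrame:
    """Combine feedback from several sources into one table with basic checks."""
    frames = []
    for source_name, path in sources.items():
        df = pd.read_csv(path)      # assumes columns: date, text
        df["source"] = source_name  # keep track of where each row came from
        frames.append(df)
    combined = pd.concat(frames, ignore_index=True)
    # Basic data checks: drop empty comments and parse dates.
    combined = combined.dropna(subset=["text"])
    combined["date"] = pd.to_datetime(combined["date"])
    return combined
```

Tagging each row with its source pays off later, since you’ll often want to compare volume and sentiment across channels.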
Next, we should begin to understand what the feedback looks like and how we can use it.
- For your product area, identify the keywords that users are likely to use; for example, if you’re running a clothing e-commerce store, they might be: “fit”, “style”, “expensive”, “ugly”, etc. Try to keep the total number to less than ten.
- Sample at least 100 pieces of feedback from the total population and do a simple text match with the terms identified in the step above.
- Manually scan through the sampled feedback. Did they all fall into the categories already thought of? Does the category actually match the content? For those missing a category, should there be a new category added to capture this?
- You can iterate on the three steps above until you’re happy with how the feedback looks.
- Plot the volume of reports matching your keywords. Do you see a consistent pattern? Are there increases around any product launches? Correlation with product launches and known bugs can help confirm the reports reflect what you’re interested in.
- Check the volume is high enough to be useful. You should aim to find a few hundred reports per day in order to get meaningful feedback. Of course, you can work with fewer if that’s all you have, but it will be harder to detect quick changes.
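The sampling and keyword-matching steps above can be sketched as follows. The keyword list extends the clothing-store example and is an assumption; the `feedback` DataFrame is assumed to have a `text` column.

```python
import pandas as pd

# Hypothetical keyword list for a clothing e-commerce store (assumption).
KEYWORDS = ["fit", "style", "expensive", "ugly", "shipping"]

def tag_keywords(feedback: pd.DataFrame, keywords: list) -> pd.DataFrame:
    """Flag each comment with the keywords it mentions (simple text match)."""
    text = feedback["text"].str.lower()
    for kw in keywords:
        feedback[kw] = text.str.contains(kw, regex=False)
    # Rows matching no keyword are candidates for new categories.
    feedback["uncategorised"] = ~feedback[keywords].any(axis=1)
    return feedback

# Sample ~100 comments for manual review, then tag them:
# sample = tag_keywords(feedback.sample(n=100, random_state=42), KEYWORDS)
```

Scanning the rows flagged `uncategorised` is a quick way to spot themes your keyword list is missing, which feeds the iteration loop described above.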
Modelling is an optional step. With the steps above we’re already well on our way to getting good insights, but we can get even more insights by applying some classical data science techniques:
- Create some wordclouds as an alternative way to get a quick visual on the feedback being received.
- Perform sentiment analysis to understand if users are positive or negative towards your product.
- Run topic modelling algorithms to have a more flexible understanding of the topics users are giving feedback on.
Finally, we want to make sure we’re tracking our feedback:
- This should include taking the analyses completed above and making timeseries to add to dashboards.
- Set up anomaly detection to be alerted if there are significant movements in the metric. This can be simple percentage movements in the key metrics that you identified above in the understand and modelling phases, or you can use more sophisticated algorithms such as ARIMA or Meta’s Prophet.
- It’s also good to create automated reports where possible, to summarise longer term trends, such as quarterly reports. You’ll always require some manual work, but setting up the queries and making it as easy as possible to action is important.
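A minimal sketch of the simple percentage-movement approach to anomaly detection might look like this, flagging days where volume deviates sharply from a trailing 7-day baseline. The 50% threshold is an assumption to tune for your own data; more sophisticated options like ARIMA or Prophet would replace this logic.

```python
import pandas as pd

def flag_anomalies(daily_counts: pd.Series, threshold: float = 0.5) -> pd.Series:
    """Flag days whose volume moves more than `threshold` (50%) vs a trailing 7-day mean."""
    # Baseline is the rolling mean of the *previous* days, hence the shift.
    baseline = daily_counts.rolling(7, min_periods=3).mean().shift(1)
    pct_change = (daily_counts - baseline) / baseline
    return pct_change.abs() > threshold
```

Wiring this to an alerting channel means a sudden spike in, say, checkout complaints surfaces within a day rather than in the next quarterly report.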
With that, we now have a well-oiled insight-generating user feedback machine!
We’d be remiss without talking about some of the difficulties of working with user feedback data:
- User feedback is typically biased: if people are unhappy with something, they’re more likely to shout about it than if they’re not fussed by it. This can lead to the feedback you actually receive being weighted heavily towards the extremities: both users complaining about negative experiences and strong advocates who loved your product. There’s no silver bullet for dealing with this, so it’s important that every analysis is caveated accordingly.
- User feedback is noisy: feedback comes from people, and people have very different circumstances when coming into contact with your product. A feature might be loved by one user and hated by the next. To this end, it’s important to try to get as much data as possible, and from different sources. Another important approach is to look deeper than the averages, rather than just the mean, look at the actual distribution of reviews — are there a few people with very negative ratings changing the average?
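To make the distribution-versus-mean point concrete, here’s a tiny example with hypothetical 1-to-5 star ratings for a polarising product:

```python
import pandas as pd

# Hypothetical 1-5 star ratings (assumption): a product people either love or hate.
ratings = pd.Series([5, 5, 5, 5, 1, 1, 5, 5, 1, 5])

print(ratings.mean())  # 3.8: the average suggests a middling product
print(ratings.value_counts().sort_index())  # the distribution reveals the split: 3 ones, 7 fives
```

The mean of 3.8 hides the fact that nobody actually rated the product anywhere near 4; the distribution shows a vocal unhappy minority worth investigating.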
If that has got you all excited, but you don’t currently have access to any customer data, you can play around with this dataset on Kaggle which has over 20K sample customer reviews from a women’s e-commerce clothing retailer. There’s also a great introductory notebook which takes you through many of the steps described above.
Thanks for reading, and of course, any feedback is welcome in the comments!
Republished from “How to Get Actionable Insights from Customer Feedback” at https://towardsdatascience.com/how-to-get-actionable-insights-from-customer-feedback-a922ec5b37e1