

User Feedback

What you can learn

This method involves making use of the unsolicited user feedback a company receives. That feedback can come in through contact forms, emails, social media, phone calls, or in-person conversations. Unlike live chat transcripts or feedback you have sought out in surveys, it can arrive at any time.

With this kind of user feedback you often learn the things that truly bother people, as they have taken the time to seek you out and write to complain or suggest something. They haven't just been asked a question in a survey, where they may have come up with something for the sake of giving an answer. It's powerful stuff, and I know of websites developed with this as pretty much their only evidence method that have resulted in products users absolutely love.

It is important to have an approach for collecting this evidence, as it can easily get lost if there is no system for flagging and storing it.

How to do it

Setting up a system to record this feedback will usually involve a bit of manual work. The majority of your feedback will come into customer service teams or from customer-facing staff. You will need them to log every time a customer complains about something or makes a suggestion for an improvement or new feature.

This can be done via CRM (Customer Relationship Management) software if you have the budget, or it can simply be a shared spreadsheet where staff enter a summary of the issue and an identifier for the user who raised it. The feedback this log gathers should be longer-term and focussed on improving features. Customer service staff need to be able to distinguish a request from a report that something is just plain broken, as the latter means raising a bug ticket with the development team.
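If you go the spreadsheet route, the log can be as simple as one row per piece of feedback. Here is a minimal sketch in Python writing to a shared CSV file; the filename and column names are purely illustrative, so adapt them to whatever your team actually records:

```python
import csv
from datetime import date

LOG_FILE = "feedback_log.csv"  # hypothetical shared log file
FIELDS = ["date", "user_id", "source", "summary", "is_bug"]

def log_feedback(user_id, source, summary, is_bug=False):
    """Append one piece of feedback to the shared log."""
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "user_id": user_id,
            "source": source,
            "summary": summary,
            # broken things become bug tickets for the dev team,
            # so flag them separately from feature requests
            "is_bug": is_bug,
        })

log_feedback("cust_123", "email", "Password reset emails are slow")
```

The `is_bug` flag mirrors the distinction above: requests stay in the log for product review, while anything marked as broken gets routed to development.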

Once you've got a solid system for logging ad-hoc feedback, someone (usually a product manager) will need to check it regularly and classify the issues to make spotting similar ones easier. Either use tags in a CRM or add an extra column in the spreadsheet specifying the category of each issue, for example 'password reset emails slow', 'wants wishlist functionality', 'bigger upload capacity', etc.

When you group issues like this you can keep a running total of the most requested or complained-about features. This leaderboard can form part of your suite of evidence for deciding the priority order for areas of your product to redesign. By checking it every week or so you can see whether there are sudden spikes in certain issues, or a leading problem that needs addressing more urgently.
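Once each piece of feedback carries a category, the running total can be tallied automatically. A minimal sketch using Python's Counter, with the example categories from above standing in for a real export of your log:

```python
from collections import Counter

# Categories assigned by the product manager when reviewing the log
categorised_feedback = [
    "password reset emails slow",
    "wants wishlist functionality",
    "password reset emails slow",
    "bigger upload capacity",
    "password reset emails slow",
    "wants wishlist functionality",
]

leaderboard = Counter(categorised_feedback)

# Most requested / most complained-about issues first
for category, count in leaderboard.most_common():
    print(f"{count:3d}  {category}")
```

Running the same tally week on week, and comparing the counts, is how you spot the sudden spikes mentioned above.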

Watch out for

Negativity bias

When dealing with this kind of feedback there will always be a bias towards the negative. As humans, frustrations and bad experiences stand out to us more than the smooth, easy ones, and as online user experiences improve, people become more accustomed to everything working well first time.

Just be aware that you're mostly going to see problems highlighted in this feedback, so be careful not to throw out everything that is working when you redesign to solve those issues.

Fear of change

Generally, people don't like change. A common trigger for a raft of complaints is releasing a big redesign of a major part of your website. If it's something lots of people use every day, don't be surprised to see confused and sometimes angry feedback when you first launch. This is to be expected while they adjust.

However, if you get an avalanche of negative feedback, or users are still making the same complaints a few weeks after launch, then you might have a real problem. At this point you should try to understand why the complaints are coming in (user testing can help you see them in context) so you can fix it.

Level of feedback

Not all complaints are created equal. Someone quickly sending a moaning Tweet is not as meaningful as someone taking the time to phone up and explain their problem. You might want to weight your feedback to reflect the source it came from.
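One simple way to apply that weighting is to score each piece of feedback by its source before totting up the leaderboard. A sketch of the idea follows; the weight values are purely illustrative, so pick numbers that reflect how much effort each channel takes for your users:

```python
from collections import defaultdict

# Illustrative weights: higher = more effort for the user to reach you
SOURCE_WEIGHTS = {"tweet": 1, "email": 2, "contact_form": 2, "phone": 3}

# (source, category) pairs from the feedback log
feedback = [
    ("tweet", "wants wishlist functionality"),
    ("phone", "password reset emails slow"),
    ("tweet", "password reset emails slow"),
    ("email", "wants wishlist functionality"),
]

scores = defaultdict(int)
for source, category in feedback:
    # unknown sources fall back to the lowest weight
    scores[category] += SOURCE_WEIGHTS.get(source, 1)

# Highest weighted score first
for category, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(score, category)
```

Note how a single phone call here outweighs three tweets; the raw count and the weighted score can tell quite different stories.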

Interpreting feedback

Be careful not to misunderstand the feedback. It can be hard to know what someone truly means when they've only written a few sentences. People describe online behaviour in all sorts of odd ways and probably won't know the correct technical terms.

Flag any feedback you're not 100% sure about and try to clear it up with the person who gathered it. Better still, if you have contact details for the user who raised it, get in touch with them directly for clarity.

Look for problems

Customers and users don't always know what they need, so focus more on their problems than on their suggestions for new features. They may think they need a big all-singing, all-dancing feature when a simple tweak would be just as good. It's your job to define the solution; don't just implement what they ask for.

Tools (and cost)

There are lots of different CRMs out there, and they're generally not cheap to implement. Some focus more on sales but usually incorporate customer service too. A few of the best known are Salesforce, Zoho, and Sugar.

The most basic free shared log you can use is a Google Spreadsheet (a tool I've mentioned a few times in this guide, and something I've seen many startups built on!).

How long does it take?

Checking and classifying 50-100 pieces of feedback a week should take a couple of hours.

How often should you use it?





Last updated on 14 September 2020
