A UX competitor analysis means assessing competitor sites to see how they design for their users, as they are likely to be solving similar user needs. Competitors can be companies operating in the same sector, or they can simply share features: a high-end jewellery brand competes with other high-end jewellers, for example, but might also offer customisation options similar to those of premium technology products.
If you’re new to working with a client, a competitor analysis is good for giving you the market context the company operates in. It can also tell you what users will expect if they’ve used similar sites before. If you’re working in-house you might be very clear on who the competitors and influences are, but a proper analysis lets you build up a deeper body of research that you can refer to in future projects.
A decent analysis will help you gain an objective overview rather than get fixated on specific features. You should first be very clear on the issues your site has and the problems you're looking to solve. You can then go to the competitors to understand *how* they have tackled these problems and can assess how well their solutions would help your users.
You might think you don't need to look at competitors as you want to solve your users' problem in a completely original way. However this can mean users will need to work harder to understand your site. People learn patterns through browsing several sites: a smart application of existing approaches will help users more intuitively know what to do.
First you need to decide what it is you want to find out about your competitors. This is defined by the challenges you’re dealing with in the project and what you want to improve—you should have learned about these from interviews, user testing, or visitor recordings. An example of a challenge would be getting users to sign up to a service.
It might be that you can look at the same competitor sites for solutions to every challenge, in which case you should aim for at least 6 to study. If your project involves several quite disparate features then you should look at the most relevant sites for each challenge—aim for 3-4 for each one.
When assessing how sites work you should screengrab or record your journeys so you have evidence of what you've found. Don't forget to do this for both mobile and desktop, as the two will often be quite different.
When you've gathered your raw materials, choose a document format you prefer for recording your findings: I like a simple slide presentation. Then create your report by going through each of your categories and writing notes backed up with visual evidence of how others are solving that problem. Cover the approaches that impressed you as a user the most, and the approaches you think should be avoided.
Finally, I like to summarise my recommendations for the most effective features I think my project could incorporate. This should provide plenty of inspiration so you can start designing solutions.
You should never study only one competitor, because a) this is ripping someone off, and b) you're missing the opportunity to learn a lot more. It’s easy for companies to get fixated on a market leader and want to copy them in the hope of the same success, but this is focussing on yesterday’s solutions rather than today’s user problems.
Clients and stakeholders can often feel the need to follow the pack and say things like: “We should have feature x, everyone else has so it must be good”. Maybe this is true but it’s possible everyone else just copied the market leader without thinking (see above).
First understand the needs your users have, then assess the possible solutions and determine which best solves your problem. If your users regularly use other sites then there would still be a strong reason to consider similar functionality, so they don’t have to learn something new.
It’s easy to fall into the trap of thinking “we don’t want to copy anyone, we’re unique”. Working on a completely new concept is rare—there’s usually someone out there doing something similar even if it’s not directly competing. Even if you intend to stand out, an analysis of others can at least help a company position itself and be clear on how it differs.
Don't study websites just because they are big players or you like them. For example just because Apple are the richest company in the world doesn't mean they're the right inspiration for your project. Your users could be completely different and have very different motivations. Make sure there is a solid reason for each website that you look at.
The main two tools you'll need are something to screengrab with and something to record your findings. My current screengrabbing extension of choice is called FireShot on Chrome (I've used various others in the past but they all seem to stop working). The good thing about Chrome is you can also easily spoof mobile devices and screenshot those.
For reporting findings, Google Slides (free) works well, as it allows others to comment on a shared document. You could also use a more visual approach like InVision (free and from $13/month) and add your notes to that.
To do a complete competitor analysis and report normally takes 1-2 working days.
A template for a lightweight UX competitor analysis. 12 pages including report introduction; contents list; competitor site list; section introduction; sheets for key findings within each section, with space for screenshots; and client recommendations.
Audience data is quantitative data about the users of your website. The best-known and most easily-accessible repository of this can be found in Google Analytics (GA).
Due to their massive reach, Google can put together some pretty accurate information on who makes up your audience. They do this through a mixture of inferences and real data about users.
The inferences come from knowing what people are searching for and clicking on, while the real data is personal details they have from users who are logged into Google services (like GMail) while browsing. It's anonymised so you can't tell who the individual users are (which gives some amount of privacy).
GA’s audience data gives you some basic demographic information, such as age, gender, and location, which can help you build a picture of who your users actually are. You can then use this data to segment your audience or decide who you interview or user test with. You can build on it by surveying your users in more detail.
For the GA approach you’ll first need to turn the demographics option on, to confirm you’d like to collect this information about your users. Once it’s being collected, you then just fire up GA and head to the Audience section, where you can find the following:
You can then either summarise this data to give an overview of who your users are (with something like a user statement) or feed it into personas built from user interviews. Personas help you view your audience as three or four types of person, rather than as a single entity.
Another fruitful area to combine into your personas is the acquisition channel—does one audience tend to reach you from social media, while another from Google searches? This can shape how you speak to your audience on those different channels.
On GA, the top right of the age, gender, and interests sections tells you what percentage of your users the data covers. The larger that number, the closer it is to your whole audience, and thus to reality. Small sample sizes on low-traffic websites can mean you’re getting a skewed representation of your audience.
Don't look at too short a time period either, or that can also twist your data. I like to look at the last three months' worth of data when assessing anything in GA, so it balances out any random fluctuations in traffic. Many sites see the make-up of their audience change at different times of the year, around big events like the Christmas holidays.
It's worth checking this data every few months to see if it has shifted (once per quarter is about right). Then update the summary or outline personas you have of your audience and circulate it around your team. You should flag any notable changes.
Just because you now know more about who your users are, be careful not to make big assumptions about them, e.g. "Most of our users are young so they can figure out complex functionality". You won't know that until you get more detailed knowledge by user testing or interviewing them.
As I've focussed on here, Google Analytics (free) offers this data for your site—though you'll need to turn on the Demographics option to capture all of it. In addition, online marketing platforms like Google Adwords and Facebook Ads give you very detailed audience information about who clicks on your adverts.
It takes only an hour or two to study the data, and record a summary user statement.
Net Promoter Score or NPS is a popular method for measuring customer satisfaction with your products or service and is something used across many industries. This allows for comparisons between very different businesses. It has become hugely popular in recent years and you’ve probably found yourself answering the NPS question for several companies.
At its heart is a simple one-question survey: 'How likely is it that you would recommend [brand] to a friend or colleague?'. The user answers on a scale of 0-10. A score of 0-6 is considered negative (a detractor), 7-8 is neutral (a passive), and 9-10 is positive (a promoter). A simple formula is applied to the results: the percentage of promoters minus the percentage of detractors, giving a total score between -100 (very bad) and 100 (excellent).
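The scoring described above is simple enough to sketch in a few lines of code. Here's a minimal Python version; the function name and the sample scores are my own, purely for illustration:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 answers.

    Promoters score 9-10, detractors 0-6; passives (7-8) only
    count toward the total. The result runs from -100 to 100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 2 passives, and 4 detractors cancel out to a score of 0.
print(nps([10, 9, 9, 10, 7, 8, 3, 5, 6, 2]))  # → 0
```

Note how harsh the banding is: a respondent answering 6 counts fully against you, and a 7 or 8 does nothing for you at all.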
The reason this score is seen as a good metric for understanding whether your customers like your service is because they are being asked whether they would put their reputation on the line and promote you to their nearest and dearest. Potentially it’s a good indication of how you compare to competitors (if you can get that data) and tracked over time it can tell you whether you're improving things or things are getting worse for your customers.
However, it is a bit of a strange system and does have its flaws (covered in more detail below). On its own it doesn't tell you much, but like most quantitative metrics it can give cause for investigation when things change. To be truly effective it should be part of a customer satisfaction survey that also gathers more detail on the reasons behind the score.
There are a few ways that companies tend to ask this question of their users, the main ones I've experienced are:
To use NPS data to inform the design process you can keep track of the scores in a spreadsheet with columns for the written feedback to go alongside it. The score would just act as a rough positive/negative indicator but the written feedback is where you should pay real attention.
Individual pieces of feedback aren't much use, but as the spreadsheet grows you can categorise the feedback, for example with labels like ‘struggles with search filters', ‘wants bigger product images', ‘stuck on sign up'. I then keep count of how often these issues appear and focus attention on those that cause the most problems.
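Once the spreadsheet gets big, that counting is easy to automate. A quick sketch in Python; the labels and rows here are made up for illustration:

```python
from collections import Counter

# Hypothetical rows of categorised NPS feedback: (score, issue label).
feedback = [
    (3, "struggles with search filters"),
    (6, "stuck on sign up"),
    (4, "struggles with search filters"),
    (9, "wants bigger product images"),
    (2, "struggles with search filters"),
    (5, "stuck on sign up"),
]

# Tally how often each issue label appears, most common first.
issue_counts = Counter(label for _, label in feedback)
for label, count in issue_counts.most_common():
    print(f"{count}x {label}")
```

The ranked tally gives you the same prioritised list of problem areas, without re-counting by hand every week.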
Combining this with live chat and general customer feedback helps give a sense of scale to issues on a website. Of course they would need investigating further with user testing to truly understand why people were struggling.
There are many problems with an over-reliance on NPS, with these being just some of them:
There are even more reasons to be cautious covered in this article.
You can just use your normal email provider to send the question out to users, or use Google Forms (free), Wufoo (free & from $15/mo), or SurveyMonkey (free & from $30/mo) for something more comprehensive.
Setting up the method of gathering will probably take about half a day. Checking the results and relevant feedback should be a job that takes about an hour per week.
User testing involves watching people use your site or app to see what difficulties they have. Guerrilla user tests can take a few different forms (covered below) but what they have in common is that they are relatively unplanned and quick to conduct. They're traditionally done in-person.
Guerrilla tests are good for checking a user journey in the form of a clickable/tappable prototype. They sit in-between design testing, which I would use for checking individual screens or elements, and a longer form of user testing such as remote testing, which is better for live websites.
The main purpose of these tests is for sense-checking your work as you go along and for testing out how a flow works together. You'll discover usability issues with your work and whether people understand what they're interacting with. It's missing some of the scientific rigour of other user testing methods but it certainly beats sitting around and theorising or guessing at what users are going to do.
First off, a bit of preparation: think about what you're testing and decide the tasks you want people to carry out on your design (two or three tasks is about right for this type of test). If you're just showing a single screen then decide what your primary question to them will be (and make sure it isn't a leading one).
It’s worth noting down in a simple script what you want to say so you're consistent with each person. Also make sure you have a way of recording the feedback: write it down immediately after or make a video recording with your phone.
The main challenge for this type of test is finding relevant people and getting them to try out your design or prototype. Five users should be enough to give you useful feedback but if it's quick to get a couple more then do so.
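The "five users" rule of thumb comes from a commonly cited model (Nielsen and Landauer's work on usability problem discovery) in which each extra tester finds a diminishing share of the remaining issues. A rough sketch, assuming their often-quoted average that a single user uncovers about 31% of the issues:

```python
def share_of_issues_found(testers, per_user_rate=0.31):
    """Estimated share of usability issues found by n testers,
    assuming each tester independently finds per_user_rate of them.
    (0.31 is the average Nielsen and Landauer reported; your own
    rate will vary by product and task.)"""
    return 1 - (1 - per_user_rate) ** testers

# Under this model, five testers already find roughly 84% of issues.
print(f"{share_of_issues_found(5):.0%}")  # → 84%
```

This is only a model, not a guarantee, but it explains why a sixth or seventh tester rarely adds much: most of what they'd find has already been found.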
Here are some options for finding people to test with:
After testing you should always spend a bit of time going over your results and writing them up in some form. You might think you can remember what the issues are but it's surprisingly easy to forget one or two a few days later.
Sort the usability issues into a rough priority order of severity, and also include the things that users liked about the site. You can put your test results in a lightweight testing report so you have a record for yourself and others to refer to in future.
Testing with other people who work on your project or in your team is fairly useless. They are going to have such a clear idea of what you're doing that they won't ask the ‘obvious questions' or test your assumptions.
Don't develop long tests with multiple tasks to go over. People's attention will wander pretty quickly and if you’re testing a prototype it will probably lack the realistic detail to sustain a long test.
You should be aiming to take about 5-10 minutes of their time. If you've got a big prototype then focus on the things you're most unsure about or run multiple guerrilla tests.
It can be hard to do it all on your own (especially in public) so you might need someone else to help. They can note-take or record while you engage with the user and build a bit of rapport. If there’s just one of you having to do both tasks then you can miss important things (and it can look like you’re ignoring the user).
A test that isn't recorded may as well have not happened, as you’ll have no evidence to point to if people challenge your findings. By getting it written down and having video clips you can share your findings with others. Definitely don't write a long report though or you're defeating the objective of speed.
You’ll need something to test on (your laptop, tablet, or phone) and you’ll need something to record with (another phone, QuickTime screen recording). Obviously these cost money but you’re likely to have access to them to design with so it shouldn’t cost any extra.
It can be as quick as an hour or two to run five tests but I recommend spending half a day to properly record and analyse the results.
Unless they're self-motivated projects, all design work has either an external client or an internal stakeholder commissioning or owning it (for the purposes of this guide I'll use the term client for both). As a designer it can be easy to be dismissive of them and think you know more about what needs doing. However if they are good at their job, the chances are they know more about their product and users than you do.
A good client is a valuable source of information who can save you time and energy on a design project. You should utilise their knowledge early in a project to understand the problems and issues that they hear about most from their users.
You can learn about the demographics of their users and their common behaviours—which you can double-check with audience data. On top of this they'll be able to tell you who their competitors are, so you can carry out a competitor analysis.
I find the best time to get a download of client knowledge is in a chunk at the start of a project, rather than in bits and pieces as the project goes on. It's important to make sure you and the client are on the same page and it's also important to dig into understanding what the problem is that they really want solving.
It's common for a client's starting point to be a request for a specific solution, when after a bit of exploration it turns out they actually need something else. Just like interviewing users, it's worth questioning until you get to the real problem.
To find real issues, I make sure my client discussions focus mainly on two areas:
The discussion should be problem and user-focused at this stage, not about diving into solutions. What you take from them should be statements about current behaviour and issues (e.g. 'Users mostly complain about feature x') or questions that need further research (e.g. 'What is the conversion rate of page y?').
During the meeting I make quick notes of every relevant nugget of knowledge and after the meeting I type this up into a Google Doc to share around with the client and their team. This states the main problems we're looking to solve with the new design and gives people another chance to chime in and clarify. We should then be agreed on what the problems are, what needs more investigation, and what we're looking to design.
The above outlines the main time I use client knowledge as a piece of research evidence. Of course they are also there throughout the design process to input into designs but it is important to keep everyone focused on the originally agreed issues that need solving.
Halfway through a project it's possible that a client will decide there’s a bigger issue to tackle and will move the goal posts. This is why you need agreement and sign-off on the real problem you're looking to solve at the beginning: get it written down.
Make sure the key decision-maker for your project is in the room at the beginning, otherwise everything you discuss could be scrapped if they haven't had their say. If it's tough to organise a group, you might need to run a separate meeting to get their input.
"We just want you to design one of these"—be wary of clients who come fully armed with a solution. They might be right but some clients see everything as a chance to design a brand new shiny thing, when in fact their users might just require a small fix to the existing product. Try and understand why they think this solution is so necessary and get to the underlying problem.
Be careful of client meetings that ramble and cover every idea they've ever had. Try and keep the session focused on one primary objective and don't get drawn into solving hypothetical future situations.
It's possible that by asking lots of questions you might expose that the client team aren't in agreement about what is wrong with their product and what needs solving. It can be worth stepping back or even pausing the project while they discuss further and perhaps do research of their own. It’s far better to wait than to do work that doesn’t get used.
To gather client knowledge you'll need something to take notes with (notebook, post-it notes, laptop, or tablet) and it could be worth recording on your computer or phone too. Then you'll need something for sharing these notes around; I like Google Docs (free).
Keep each client meeting focussed and take no more than an hour. Writing up and sharing for input should take about half a day.
An expert audit or review involves getting an expert in a particular field to assess and report on how your website or app is working based on their experience and a set of criteria. For the purposes of this guide we’re talking about experts in the fields of UX/usability, product, branding, copy, or conversion rate optimisation.
Someone who knows their stuff can tell you how your product stacks up against competitors, best practice, and user expectations. They should be able to tell you what data is important to watch and maybe even give an idea of what the conversion rates of similar websites look like. It should be a quick way to get lots of issues picked up and new ideas to try.
Whenever you bring an outside expert onto your projects you gain something very important: a fresh pair of eyes who can see things that you may have become blind to, and ask the obvious (and possibly awkward) questions.
Once they've found problems they can also save you a lot of time by helping you prioritise which issues need most effort. They should also be able to offer you suggestions for ways to improve your product and give names of people or software that can help further.
The most important part is finding your expert. You should look for someone who specialises in the area you work in—if you're an ecommerce site you'll want an ecommerce expert, if you're a financial app you'll want someone who knows finance. They'll be able to bring experience of what works in that sector and will understand what users would look for in your product.
Someone who comes recommended is always a good idea: ask around your community, for example if you're a startup try your investors or other companies they’ve invested in. If you’re a business who operates locally, ask similar sized businesses in your area.
Failing that, search online and look out for someone who can write or talk about their area of expertise. Do they keep a regular blog? Have they written books on the subject? Do they talk at conferences? Or teach what they know? These are good signs they will be able to explain things clearly.
Once you've found your person, talk to them about their experience and explain your business to them. If they seem switched on and have a few good references, agree a fee for the work. The amount will vary depending on the size of your site or the section of it being reviewed, but this shouldn't be something charged by the hour or day, or they're incentivised to drag things out. If they've done this a lot before they may well have a fixed price for the service, which is a sign they know what they're doing.
When the audit is complete they should be able to supply some kind of report (it doesn’t have to be long, look for actionable content). It’s also a good idea to get them to present this to you in a session where you can ask plenty of questions and get the most out of their knowledge.
This isn't something that you should need to use often. A good expert audit should leave you with plenty of things (6-12 months worth) to go off and design and put into development.
Experts who promise incredible results might sound impressive but anyone with decent experience should be pragmatic about what can be achieved and how change is dependent on the client’s actions afterwards. They should be able to talk you through the nuance of what they look for rather than speaking in vague terms.
Be worried if they don't ask questions or for supporting information when you first talk with them. They should want to get a clear sense of the business and the target audience. If they aren’t then it might be worth looking elsewhere as they’re probably going to give you a generic report that misses some of the context of your website.
Request that they share their findings in an easily accessible form with you. I like online documents that are easy to refer to. Don’t just let them present to you on a call—get them to share their presentation and any supporting findings or documentation, so you can use it afterwards.
Experts who just tell you to do something (often because it’s best practice) but can’t explain why aren’t much help. It suggests they don’t really know the subject area too well. They should be able to back up their assertions with some evidence.
In this case they should bring the tools, though they might want access to any quantitative data you've been gathering or any previous research reports.
Cost is going to vary massively, but ask yourself how much it is worth to find areas for improvement and potentially big revenue gains. This ‘worth’ will differ depending on whether you are a tiny company or a massive one.
It's going to depend on just how deep they go, but expect a standard expert audit turnaround time to be a week or two.
There are three main types of heatmap that show where users spend their attention on a web page. The name heatmap comes from the colour-coding to show which areas are getting user attention (generally dark red for the most, light blue for the least). Each type will tell you different things:
Click heatmaps visualise where users click or tap on a web page. They may also give you a number showing the percentage of users visiting that page who engaged with a given link.
Some older types of click heatmap only record clicks on interactive elements that trigger an action, so if users click an image that doesn't do anything that click would not be tracked. It’s more informative to see all the places that users are clicking, as it’s very helpful to know if they are clicking things that look like links but aren’t. This can represent an opportunity to make something interactive.
On the other hand, if an important link is only receiving a few clicks, then you have to question if it is well designed. You'll learn if what you consider to be the most important link on the page is being seen that way by your users.
These tell you how far your users are moving down your web pages (either by scrolling or swiping). They display horizontal bands showing the percentage of users that reached each part of the page.
They can give you an indication of what content is going down well with users and what content is being skipped over. In my experience most pages will just show you that the longer a page goes on, the fewer people stay on the page. This is normal and you'd expect a fairly even drop-off rate as the scroll continues.
Where scroll heatmaps are most useful is when they show a sudden drop in the percentage of users at a point near the top or the middle of your page. This means that a combination of content and design has caused users to stop scrolling and could be the sign of a 'false floor’, where the design makes it look as if the page has finished.
It can also mean a link is directing people elsewhere and so they are exiting the page rather than continuing. Either way it’s often reason for further investigation through user testing to understand why this is happening.
This type of heatmap shows which parts of the page users are most hovering their mouse over and thus which elements are getting the most interest. As touch-screen devices can’t (yet) detect where fingers are hovering above the screen this is not data you can get from phones or tablets.
Mouse movement is a good proxy for eye-tracking as research has shown the user’s attention tends to be where the cursor is, so you can learn what content users are reading and it can save on potentially expensive eye-tracking studies. This offers more precision than a scroll heatmap so you can see exactly what areas users are being drawn to in a block of content.
You'll need to put a code snippet on your site for heatmap software to track your users visits to your pages. There are a few options for this, explained in the tools below.
Once you've left it to gather some data (possibly for a few weeks to get something meaningful) then you can check in on your pages and look for the stories the patterns might tell you. If you have a lot of pages then focus on ones where conversion rate is lower than you'd wish or check pages you've just launched to see how users are reacting to them.
Heatmap data can help you think about whether your designs are correctly focussed. For example, if you have important information but only a small number of people are seeing it then you'll probably want to move or redesign it.
In my experience most heatmaps don't tend to change much unless you change the design, so it's not something you need to be constantly checking. They can be useful to revisit if management or stakeholders want quantitative data on the size of a problem: you can use your heatmaps to back up findings from user tests.
Don't read too much into scroll behaviour on its own, it only gives you part of the picture and like all quant data only gives you the 'what' rather than the 'why'. Just because users aren't reaching a part of a page doesn't mean they aren't going on to have a successful journey.
Clicks are arguably the most important interaction that users carry out on your site, as they show high engagement and a desire to progress further. However, they don't quite tell the whole story on their own. Did the user take ages to find that link to click? Did the page actually match their expectations? Like a lot of web tracking data, in isolation it's just a clue to find out more.
Like all quant data, sample size is important here. If you're only tracking a few users then one user's eager clicking of every link available on the site can warp your metrics. Ideally make sure you have at least 100 users in your heatmap sample size.
If your website is responsive (as it should be!) then this needs taking into account. Links and page sections can move position or disappear altogether on certain devices, so make sure you look at desktop, tablet, and mobile data separately. Also if you can segment by traffic source this can reveal differences: search traffic might be looking for very different things to direct traffic.
There are several pieces of software out there that offer a suite of heatmap tools together (and often include session recordings too). Some of the popular ones are Hotjar (free & from $29/mo), Crazyegg (from $24/mo), Mouseflow (from $29/mo).
Once the software is set up, it only takes about an hour to assess each heatmap type for each device on a page.
Live chat is the little messenger window that sits in the bottom corner of a website, particularly popular on ecommerce sites and online services. It allows the user to chat directly to customer service teams and ask questions about things they may not understand.
It is much like a phone helpline, but it can be turned on and off by the company at will (and when no-one is operating it from the company side, it usually becomes an email message box). Live chats offer a useful insight into the problems that real website users and customers have, as you can pinpoint the place where they got stuck and turned to help (although not always; see the watch-outs below).
You should be able to spot if things like shipping costs or sign up instructions are not clear and are preventing some users from converting by themselves. Just the fact that they are looking for help rather than completing the task by themselves is a good indicator that something can be improved.
By looking at what users say on live chat you can also get a sense of whether they understand broader things, like what the company actually offers, or if they have found themselves on a site that isn't suitable for them. This can help you identify whether your marketing efforts are working to bring in the right kind of users.
You'll first need to get the live chat function set up on your site. Luckily there are lots of third party services to choose from, which require you to just put a snippet of code in the pages that you want the live chat to appear on. If you have a big site but not many staff then you don't need to put it everywhere; focus on landing pages or key conversion pages.
The company will then need someone to staff the live chat. If you are a small startup this could be your job but at a company with a customer service team, it should be them. It's best if it is someone who knows the product well and is used to answering customer queries so they can promptly respond without having to constantly find out what they should say!
This job tends not to be as intense as answering help lines as users only ask a question or two and can be quite slow in their responses. From what I’ve seen customer service teams can usually handle three or four users at a time.
It's not something you have to commit to for a long time, as live chats can easily be turned off. You might only have it on for a few hours per day or you might want to only gather feedback for a week and then assess it before running another week a few months later. It's a flexible tool.
You can then use it as an evidence source by analysing the transcripts. Going through written feedback can be time-consuming but if you dedicate a bit of time every week it shouldn’t be too hard. It’s a good idea to do a first pass to weed out any chats that are irrelevant or don't go anywhere (which can be quite common) and then a second one to categorise the feedback you get by sentiment, much like with other unplanned feedback.
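That second, categorising pass can be sketched in code. This is a minimal illustration, not a real sentiment tool: the transcripts, topic names, and keyword lists below are all invented.

```python
# First pass: weed out one-line chats that go nowhere; second pass:
# bucket the rest by simple keyword matching. Topics, keywords, and
# example transcripts are hypothetical.

TOPICS = {
    "shipping": ["shipping", "delivery", "postage"],
    "sign-up": ["sign up", "register", "account"],
    "payment": ["payment", "card", "checkout"],
}

def categorise(transcripts):
    # First pass: drop chats too short to carry any signal.
    kept = [t for t in transcripts if len(t.split()) > 3]
    counts = {topic: 0 for topic in TOPICS}
    counts["other"] = 0
    # Second pass: assign each chat to the first topic that matches.
    for chat in kept:
        text = chat.lower()
        for topic, keywords in TOPICS.items():
            if any(k in text for k in keywords):
                counts[topic] += 1
                break
        else:
            counts["other"] += 1
    return counts

chats = [
    "hi",                                              # filtered out
    "how much is shipping to France for two items?",
    "I can't work out how to sign up for an account",
    "my card keeps getting declined at checkout",
]
print(categorise(chats))
```

In practice a spreadsheet works just as well for small volumes; the point is the two-pass structure, not the tooling.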
With this document you can keep track of the most common issues that users have and create a record of which areas of your site are causing the most problems. The transcripts may immediately tell you what needs to be built or fixed, or they could be a starting point for gathering more evidence. Not all users will be able to identify why they are having a problem, but if you see repeated live chats being triggered on a certain page it suggests something there isn't working as well as it could be.
When you give users a window into which they can type anything you're going to get some odd comments in there from people who have no intention of using your site. Everything from 'what is this site?' to 'what are you wearing?' This is why it's worth filtering out the chaff before your analysis.
You can also find lazy users who don't want to work anything out themselves and use the chat to just ask for someone to find products for them. The presence of the chat window means they don’t behave as they normally would. These are probably ones to ignore but if you're getting a lot of them it could tell you that your search isn't intuitive or that it could be worth investing in a customer service phone line.
You should think about how and when your live chat appears to users. Be careful of having it automatically pop up and hassle everyone as soon as they arrive on the site. This will cause people to immediately close it before realising what it is. It's better to have it on screen in a minimal state for the user to choose to interact with—at most only expand it when someone has spent a long time on a particular page.
As ever when taking feedback directly from users you should focus on their problems rather than whatever solutions they may think they need. Only by gathering a few different sources of feedback will you be able to find the right fix for everyone.
There are a whole host of tools offering live chat, from the expensive, like Bold Chat (from $599/year), which offers video chat among other features, to the simpler and more startup-friendly, like Olark (from $15/mo), Zendesk (from $9/mo), and even tawk.to (free), which also lets you hire chat operators.
Setup should be a very quick dev task. You should then gather feedback for at least a week before dedicating half a day to sorting through it.
A conversion funnel shows the rate that users complete each step of a user journey to reach an overall goal. It is an important part of understanding how a website is performing and should be one of the core elements of measuring user interactions with your site. It is known as a funnel because it tends to start with a large number of users at the top, tapering to a smaller number at the bottom (though your aim is to get it to look less tapered).
It will show you over time whether users are doing what you want them to do. This usually means reaching a goal that is important to your business, like signing up to a form, downloading some content, making a purchase, etc.
Importantly it will show you where they are having difficulties on the way to reaching that goal, by showing you the steps that convert at the lowest rate.
It may also give you details of where users are going instead of your intended next step in the funnel. Depending on the software you can set it up to measure how many users are going onto different pages/URLs or you can measure different events that have been triggered, such as button clicks/taps.
Before getting into the temptations of picking your tool, you should define the user journey you want to track. This can be as simple as sitting down with a pen and paper and working out the ideal user journey you want someone to go through to reach your business goal. You can have a few of these per site/product for each different goal you want users to accomplish.
If you are in the very early days of a website this might mean you are deciding the shape of your entire product at this stage. If you have a site that already is up and running, you probably have a clear idea of the steps a user goes through. Either way, this journey will form your funnel.
To make sure you're not including unnecessary stages, it's a good idea to start at the goal itself and work backwards, defining the fewest steps required to reach it. This represents the ideal journey of a user, sometimes known as the ‘happy path’.
Then it's a case of picking your software, which will be dependent on what you're looking to track (see below). It will likely be the same tool you're using to gather page data and audience data. You install the code tag for this on your website so it is present on every page, which should be a quick dev task. Once you've checked this is up and running properly you can then set up your funnel to collect your data.
In several pieces of software the funnel will only gather data from the day it is set up, so it's a good idea to get it up and running as soon as you know what you want to track. You'll want to gather data for a few weeks to get a sense of what is 'normal' on your site (a.k.a. your baseline).
Once you have data you can look at improving your user flow by starting redesign efforts on the steps that have the lowest conversion rate.
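The arithmetic behind finding that weakest step is simple: divide the users reaching each step by the users at the previous one. A sketch with invented step counts:

```python
# Step-to-step conversion through a funnel. The step names and
# user counts below are invented for illustration.

funnel = [
    ("landing", 10000),
    ("product", 4000),
    ("checkout", 1200),
    ("purchase", 600),
]

# Compare each step with the one before it.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = 100 * n / prev_n
    print(f"{prev_name} -> {name}: {rate:.1f}%")
```

Here the product-to-checkout step converts worst, so that is where a redesign effort would start.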
Almost all software tracks user journeys and funnels in a different way (some look at sessions, others at users, some at events, and there are other variants besides). Thus it is quite common to have different funnels giving you different numbers for your conversion rate. You should learn how different software tracks users and what is most important to you and then stick to one.
For designers, conversion rates (the number of converters divided by the number of visitors) are better to follow than the absolute numbers completing your goal. This is because, in an ideal world, as traffic goes up and down on your site, a well-functioning design should still convert at a consistent rate. When the rate fluctuates it's worth checking whether the following have changed:
It is pretty common for different types of traffic to warp your conversion rates. If it suddenly drops, a good first port of call is to check with your marketing team if they have been buying in or gaining social media traffic that behaves differently (new users are often less likely to convert).
Be aware of seasonality, as pretty much all businesses are affected by it, ecommerce ones in particular. There are times of the year when people are less likely to buy, and that can be hard to know if you are a startup just setting out.
Once you've got a year's worth of data, it's a good idea to review it to see if there are any patterns you should keep an eye on in future. You can then compare it against future years.
As with all quantitative data it is just going to tell you what is happening on your website but it is never enough information to make design decisions. You're going to need to use other pieces of evidence like session recordings and user tests to learn why users are behaving that way, before you are in a position to make the right changes.
There are many pieces of software that offer conversion funnels. The classic Google Analytics (free & paid) is best for measuring URL visits at different steps and is often a good starting point for web projects.
Mixpanel (free & from $150/mo) measures user events that you specify, like clicks and taps, making it better for apps and non-URL based funnels. Hotjar (free & from $29/mo) also offers funnel tracking functionality as well as the likes of Kissmetrics (from $120/mo) and many more.
Setting up your funnel should only be a few hours work—the tools will all have help pages/videos to guide you. After that I find checking the data on a weekly basis works well.
This is a method of evidence gathering that I've included more as a warning. It's a popular method that you're going to come across when you work on a product; it just isn't a very good one.
It's also the feedback that many designers fear: "I've just shown this to my husband/wife/mother/son and they think it could be improved by doing x". Where x often involves re-working the whole project but the client values this opinion so much that they insist on it, trumping any rational, carefully gathered evidence that you might present.
It's not just something that comes from design-illiterate clients though. I've been in meetings where well-informed management have suggested changes based on ideas from someone in their family or an old friend. Sometimes they might even be right and at its best it could be an outside opinion that inspires great ideas. However they could be missing a vital bit of context that means it isn’t much help.
Importantly, this is not a method you can repeat reliably. It's a lottery that you can’t bank on: you might get something great but it could easily lead you nowhere.
Of course we all ask our partner, friends, or housemates for quick feedback from time to time. However in general, the opinions of friends and family shouldn’t be a part of your formal evidence-gathering process. It's the laziest and weakest form of research and there are plenty of other methods for evidence-gathering out there (check out the rest I've written about here).
To be honest, family will often give you opinions whether you want them or not. Alternatively, if you do push them into giving an opinion they'll probably just say something positive to shut you up and avoid hurting your feelings. Neither of these things is very helpful.
If you do come across someone else using these opinions in a meeting (usually when you’re least expecting it) I recommend saying something neutral like "that idea has potential, I'll look into it" or "I'll be sure to incorporate that feedback into the rest of our research". If you can, try and gather this kind of feedback early in the design process during the research phase, and make it clear that late feedback and changes will involve the project taking much longer.
Having explained why you should generally ignore this kind of evidence, there are a few times you can pay more attention to a family/friend opinion that comes your way:
Feedback from friends and family who have actually experienced the product as a normal user or customer should be taken on board like any other customer complaints or suggestions. If they aren't a customer then their issues possibly aren't real, and they're nowhere near as valuable as someone who wants your product/service and has been willing to pay for it.
If they're exactly the kind of people you're aiming at with your product then that can be worth incorporating with other customer feedback. Though it's not quite as valuable as a paying customer's thoughts, if they match your target audience then it’s useful to know if things appeal to them or not.
If the friend or family member has spotted something that is broken and you can recreate this error then you'd better fix it. It doesn’t matter where you find out about bugs from: their feedback is as good as anyone else's.
Whilst you should always be designing for end users not investors, if they have a fair chunk of money in the company, it can be worth considering what they say to keep them onside. This is especially true if it’s something small: save your battles for the big decisions.
There are no specialist tools you need here, and the opinions are all (too) free. Ideally get them to demonstrate any problems they think exist, as you might be able to find workarounds for them.
If you’re going to request this feedback anyway try to keep their thoughts very short and focussed on things you can action.
A/B tests are often seen as the ultimate method for an evidence-based and data-driven approach to designing websites and apps. This is because they are the purest form of scientific testing: a straight comparison of one design against another. It’s not always the right solution though, as I'll explain.
The theory behind them is that you take a new web page design (version B) and serve it up to some of your users whilst showing the rest of your users the original design (version A). The differences could be anything from a new version of a button to a complete page redesign. You then measure to see which version provides a better rate of conversion, and the winner is put live on the site to all of your users.
You can also measure secondary goals and other interactions to see if the new version has had an effect on more than just your main conversion rate. Something might not increase page conversions but might improve another desirable metric. You can also run multivariate tests, where you test more than just two options.
In principle this type of testing allows you to measure the success of your designs in the real world and with your actual users. In practice it is somewhat more complex than that (see the 'watch out for' section) and isn't something that should be undertaken lightly. To do so risks getting inaccurate results and can cause you to make the wrong decisions for your product so it's well-worth getting a professional data analyst to do the work.
You're going to need to install some code from your chosen testing tool and, as with analytics, it's a pretty straightforward task of copying and pasting. Once installed, you can then use the software to set up your A/B tests.
Define the hypothesis for your test. What are you changing and what do you think it will do? What is the primary metric you are looking to change? Are there any secondary metrics you’d be happy with improving?
Work out how much traffic you're going to need to get a result, and thus how long you need to run the test for. There are calculators to help you do this. This is important as many websites won't actually have enough traffic (see below), and you could discover that A/B testing is an impractical choice.
Check that the test works on a few different browsers and is being shown to the right subset of users. You often don’t want everyone to see a new variation—a lot of the time it makes sense to show changes to new users and not change the experience for returning ones.
Set the test running and try to leave it alone for the duration of the test. It's worth checking every so often to be sure it hasn't been a disaster and doesn't need stopping early. Otherwise let it run and don't be fooled into thinking you've got a result until the required number of people have gone through the test (the number your calculator indicated).
When you get a result, roll out the winner, in the exact form it was tested. Sometimes this will mean sticking to the existing design. Quite often there will be no meaningful difference between the two versions, so in theory it's your choice as to which you go forward with.
This is the biggest problem for a lot of startups and small sites and it's not as simple as knowing how much traffic you get to the website overall. Even with 100,000 users a month you may not have enough traffic to run the test you want in a reasonable time. Let me explain through an example:
Let's say you want to get more users to reach checkout from your product page and so you redesign it. Your current conversion rate of that page is 5% and to consider this a success you want that to increase by 10% to 5.5%. This means with a statistical significance of 90% (which isn't amazing, 95% is more commonly used) you need 30,000 individual visitors to go through your test per variation to be sure you know whether it's 10% better.
In an A/B test you'll need 60,000 users in total to go through your test. If you have decent traffic of 100,000 users visiting your site per month and your product pages only get about a third of that traffic, then you're going to need to run that test for two months before you have a result.
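Figures like these come from a standard sample-size approximation for comparing two proportions. The sketch below is illustrative only: the answer depends heavily on the statistical power you choose (which is why it won't land on exactly 30,000), and a proper calculator or statistician should confirm any number before you rely on it.

```python
from math import ceil

# Approximate visitors needed per variation for a two-proportion test.
# z_alpha=1.645 corresponds to 90% significance (two-tailed);
# z_beta=0.84 corresponds to 80% power. These are common defaults,
# not the only valid choices.

def sample_size_per_variation(baseline, relative_lift,
                              z_alpha=1.645, z_beta=0.84):
    # baseline: current conversion rate, e.g. 0.05 (5%)
    # relative_lift: improvement to detect, e.g. 0.10 (a 10% lift)
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_variation(0.05, 0.10)
print(n)      # tens of thousands per variation
print(2 * n)  # total users needed across both variations
```

Note how sensitive the result is: detecting a small lift on a low baseline rate is what drives the visitor requirement into the tens of thousands.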
Here's the problem. Two months is a long time for a lot of companies and they would probably be better off gathering several other forms of evidence (such as visitor recordings, guerrilla user testing, conversion funnels etc) in that time, which will give lots of areas for improvement.
The biggest problem with A/B testing is that people use it at the wrong time. Too often they have already redesigned their site and built it and then are just testing to see how much better it is than the current version.
Used at the very end of a project like this, when the company has already put the time and money in, the test isn't really there to find out whether the new design is actually worse. They just want a number to boast about how much better the new one is.
Ultimately, properly running A/B tests requires a good knowledge of statistics and experience of having done it before. There are lots of things to understand, like sample sizes, statistical significance, statistical power, and one/two-tailed tests, to know if you're doing the right things.
Taking a punt and doing it on your own almost guarantees that you'll make mistakes. I know, I've been there. Some software can be very reassuring and make you think you're getting great results but when you come to launch them you're left with something that doesn't work.
There are many other things to watch out for in A/B testing, which are solved when you get an A/B testing pro to help you out.
There are a lot of tools out there now for running A/B tests at many different price points. I used to use Optimizely but that has moved to be a more enterprise solution and not so budget friendly. I’d recommend digging into an article like this to find the right tool for you.
Once your designs are ready, setup should be a matter of an hour or two. Running a test can take a long time (often several weeks).
Visitor recordings or session recordings display the movements, clicks, scrolls, and other interactions of a real user's visit to your website. These are saved as video files, which you can play back later. It's akin to watching back a remote user testing video without the audio.
However this isn’t a user test that you have set up, it is a user going about their business because they’ve arrived on your site through their own choice (and are presumably interested in what you are offering).
It is a window onto how your real users behave when they visit and enables you to see how many of them are able to reach your goals, along with the pages that they go through to get there. Impressively you can even see what they are entering in form fields (not including passwords and credit card numbers) and how many times it takes them to get this right.
If you're wondering how it's done, it's not actually a screencast, just a recording of clicks, movements, and keystroke data overlaid on a snapshot of your website. Also not every single user gets recorded—for example, those using private browsing won't get tracked.
Once installed, you can tell the software what you want to record and specify any details (perhaps you only want to see user journeys that visit a certain URL). You then leave it to gather the data, and within a few days you should have some sessions to look at.
Even if you don't have a lot of traffic it won't be long before you have a few hundred sessions to watch, which can be a bit intimidating. You can either be very diligent and check daily to watch the latest videos and keep on top of them, or you can wait and watch a bundle at a time (my preference).
It’s a good idea to filter them to look at those with certain characteristics (especially if you've got a lot to get through). For example, try watching all the sessions that make it to your checkout and see what the common factors are. Or maybe look at all the ones that land on a particular page and try to work out what is causing them to bounce or continue.
When looking through the videos, the aim is to build a picture of the common user behaviours that you witness. When you see something interesting happen (like a user clicking an element) make a note of it and then tally up each time you see that again in future. After watching about 50 videos I usually have a set of common actions for a page that will give a strong idea of what users want from it.
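That running tally can live in a notebook or spreadsheet; as a sketch, the same bookkeeping in code (the observed actions are invented):

```python
from collections import Counter

# One entry per interesting action noted while watching sessions.
# The actions below are invented examples.
observed = [
    "clicked hero image", "scrolled past reviews", "clicked hero image",
    "opened size guide", "clicked hero image", "scrolled past reviews",
]

tally = Counter(observed)
for action, count in tally.most_common():
    print(f"{count}x {action}")
```

The most frequent actions at the top of the tally are your strongest signal of what users want from the page.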
These recorded sessions can lack context and whilst in some cases it might be obvious why a user is getting stuck (if they are struggling with a form perhaps), you won't always know what they are looking for or what they are thinking, so be careful about attributing causes.
Some behaviour might need further investigating with a user test to understand why it is happening. For instance you'll often see users repeatedly jumping between the same two pages, which suggests they're looking for a piece of information, but it won’t tell you what that information is.
Quite a few of the sessions may have no useful information in them, especially the ones with a single pageview: if users land, scroll, and leave, you won't know what it was they couldn't find or didn't like. It's usually worth disregarding the very short sessions.
When assessing visitor recordings you should focus on behaviours that you can’t tell from quant analytics. Don’t just look at pages people click on, but consider how long it takes them to find links, and what parts of the page they seem to engage with.
One thing to remember with this kind of evidence is to record what users don't do. For example, when watching for actions that users take, it can be easy to overlook that no users played a video (and thus you may not need that video).
These recordings are good but can have issues. Occasionally modal windows or burger menus can block the rest of the recording by staying overlaid on the video and not clearing when the user has moved on. And some drop down menus or hover effects won't appear at all.
Also sometimes if the CSS changes and you come back to watch a video it can look wrong, so it's best to watch them fairly soon after they have been recorded.
My preferred choice for session recording is Hotjar as it gives you this ability along with heatmaps for a very reasonable price (free & from $29/mo). Inspectlet have a very similar set of tools and are equally competitive in pricing (free and from $39/mo).
Watching a chunk of about 50 recordings takes 2-4 hours (I watch at normal speed with the 'skip pauses' in user interactions turned on).
For the purposes of this guide, 'page data' refers to the metrics describing user visits to any individual website page. A few of these numbers are covered by your conversion funnels (users, sessions, and the calculated conversion rate) but with page data we can find metrics that give more detail than just whether a user was present or not.
Whilst a conversion funnel should represent your primary metrics, these other numbers can form your secondary metrics. If you make a change that doesn’t improve conversion but it does lower bounce rate, you’ve seen a beneficial secondary effect.
These secondary metrics are worth studying to build a better picture of user behaviour on your website and can help you define what to look for in research such as user testing. For example, if you find a key information page has a high exit rate, it should be a task in your next user test to try and understand why.
To learn what is happening on your web pages the easiest free option is to install Google Analytics, which is by far the most popular tool for tracking this kind of data. Once installed, the following are a few pieces of key page performance data to consider:
Unique page views. Defined as the number of separate browsing sessions in which a user viewed your page. A user visiting your page multiple times within a session records only one unique page view—a session of browsing is reset after 30 minutes of inactivity.
Each session represents a period of intent for a user to achieve something on your site and isn’t the same as an individual user (this is something Google Analytics can only guess at).
Page views. Defined as how many times a page (i.e. a URL loading) has been viewed.
If this is a lot higher than your number of unique page views then you'll know that users are looking at that page many times per visit. This could suggest that the content on the page is so great they keep returning to it or that they can't work out where to go next.
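One quick way to surface that pattern is the ratio of page views to unique page views (the figures below are invented):

```python
# Views per visit: page views divided by unique page views.
# The closer to 1, the more users see the page only once per session;
# a high ratio means they keep coming back to it within a session.

def views_per_visit(page_views, unique_page_views):
    return round(page_views / unique_page_views, 2)

print(views_per_visit(1200, 1000))  # 1.2: mostly viewed once per session
print(views_per_visit(3000, 1000))  # 3.0: users revisit this page a lot
```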
Average time on page. Defined as the average length of a page view. This is often used as a measure of engagement with a web page, but whether you want it to be long or short depends on the type of page.
If it's a long blog article or 'about us' page you'll be hoping for users to spend several minutes on it, whereas if it's a checkout page, you'll be wanting people to whizz through in seconds. If it's the other way around then users aren't being intrigued by your content in the former and they're probably getting stuck working out how to enter payment details in the latter.
Bounce rate. Defined as the percentage of sessions in which someone landed on this page and then left without visiting another page on your site.
This is almost always seen as a ‘negative’ metric that you want to reduce and is most applicable to landing pages. If bounce rate is high on a homepage or landing page then your entrance experience to your site is turning people off, which is a strong sign that you should change something.
Exit rate. Defined as the percentage of sessions that saw someone leave your site from this page. Not to be confused with bounce rate, this is a bit more ambiguous, as the user could have visited several other pages before their exit and may have found everything they needed. After all, every journey has to end somewhere.
Obviously you won’t mind if the exit rate is high on pages that appear after a goal (like a post-purchase page). If it's high on a critical page in the middle of your flow, it's worthy of further investigation.
Event tracking. This is tracking that you manually set up for non-URL-based interactions; it tells you whether or not an event has been triggered (such as a click on a button or page element). Whilst it is a binary metric, you can attach metadata to each event to give you more detail, such as the name and type of button if there are multiple on a page.
It can be hard to find benchmark metrics for what represents a ‘good’ number of users or bounce rate, so take it with a pinch of salt when someone makes a blanket declaration that you should be targeting a certain figure. It’s more reliable to use this data to judge your pages in relation to each other or themselves over time. Use it to help you prioritise which pages need fixing before others and for spotting outliers and problems.
Looking at the raw metrics is a fine starting point for discovering website issues but to get more actionable data you need to segment your results. Try segmenting by device or by traffic source to see if users behave differently depending on where they’ve come from and how they’re viewing the page.
Be careful if you’re using regular expressions to track groups of pages via the Google Analytics API and looking at the totals. Due to the way users and sessions are counted by URL there may be some duplication in there because people who visited several pages may be counted multiple times. Use it as an indicative measure rather than a precise one.
Don’t make assumptions on what a piece of page data in isolation might mean. As outlined above, other than bounce rates, most metrics could be positive or negative depending on the context.
My standard disclaimer with quantitative data applies: it doesn't tell you 'why' something is occurring. Always investigate further with qualitative evidence such as heatmaps, session recordings, and remote user tests in order to build a more complete picture.
As mentioned above, I'd recommend Google Analytics (free) for gathering this data, because it's comprehensive, free, and hugely popular around the world (so you'll be able to compare your stats across different sites and find plenty of help guides). If you're tracking a product where individual page views aren't so important, then a lot of the above metrics won't apply and you might want something more event-based like Mixpanel. Beyond this there are plenty of other paid analytics apps out there with similar feature sets.
Once your tracking is set up, checking the data for a page takes only minutes. If you regularly track the same few stats then I recommend pulling them into a dashboard.
Surveys promise you the opportunity to gather the thoughts of lots of users without too much effort: you can potentially reach thousands of people with just a single form. You can ask them almost anything (though some types of question are better than others, as explained below).
They also offer the opportunity to quantitatively assess behaviours by asking how many people do certain things, which often appeals to the analytical folk (managers). This is something you should be very careful with: a good survey is better as a qualitative tool and a starting point for further investigation than as a quantitative instrument that delivers absolute truth. Unless you deeply understand the subject, it's best used for asking open-ended questions and uncovering the issues and thoughts your users have.
There are lots of types of survey out there, including short website feedback pop-ups. For the purposes of this I will cover the one-off type that you might run to research a subject or potential design project.
One of the things surveys are often used for is Net Promoter Score, which I cover separately.
There are plenty of tools that make creating online surveys easy and give you all the different field types you might need (I cover some below). Distribution shouldn't be a problem either: you can send it out to an email list, share it on social media, or put a link on a site/forum. Of course the users you choose to distribute to will affect your results, but the actual practicalities of running a survey are fairly straightforward.
Most of the work is actually in the planning and preparation. You need to know what you want to ask and to determine if a survey is the right approach to take. If you're after detailed behavioural understandings then perhaps an interview or user testing is a better bet. A survey can be a good starting point for research, which gives you ideas of the issues to investigate further with interviews.
When writing the questions, whether you keep them specific (for multiple-choice answers) or open-ended (for answers where you want the user's free text) will depend on what you want to learn. Open-ended questions are more useful for getting to the heart of real issues, as the user isn't limited in what they can say.
If all of your questions are multiple choice, then it's a lot harder to find out what you don't know. However if you want to survey a large number of people then using multiple choice questions will make life easier when it comes to analysing the results.
Unless you're paying your participants well, keep the survey on the shorter side (fewer than 10 questions) to maximise your response rate. Just because you have the opportunity to ask people anything doesn't mean that you should—try and keep questions on your particular subject of interest. This will help users stay focussed and go into more depth.
Finally, giving users the opportunity to answer anonymously can help them feel confident about opening up so they might tell you things they wouldn’t if the answers were attributable to them.
When it comes to how to analyse your results, it's going to depend on the size of your survey. If it's a small one (fewer than 50 respondents), you can read all of the answers and potentially act on them too.
If you have open-ended questions, it’s helpful to group responses by sentiment: go through each answer and try to categorise them. For example if you're asking people about their problems with a site then you should be able to group them into things like 'navigation', 'search', 'payment' etc. This should help you order the key things that need to be solved with any new design, and you can delve back into the written answers to get quotes and more detailed requirements.
If it’s a big survey (100+ respondents) then you’ll have to focus on doing quantitative analysis of the results. As well as seeing which answers performed better than others, you can dig deeper and segment your results to see which types of user were more likely to answer which way, and find patterns in the data.
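As a rough illustration of this kind of analysis, here is a minimal Python sketch that groups free-text answers into themes and breaks the counts down by user segment. The category names, keywords, and `(segment, answer)` data shape are all illustrative assumptions, not a real export format—tune them to the themes you actually see in your responses.

```python
from collections import Counter, defaultdict

# Illustrative keyword map: adjust to the themes that appear in your answers.
CATEGORIES = {
    "navigation": ["menu", "navigate", "find", "lost"],
    "search": ["search", "filter", "results"],
    "payment": ["checkout", "card", "payment", "pay"],
}

def categorise(answer):
    """Assign a free-text answer to the first category whose keywords match."""
    text = answer.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "other"

def summarise(responses):
    """responses: list of (segment, free_text_answer) tuples.
    Returns overall category counts and a per-segment breakdown."""
    overall = Counter()
    by_segment = defaultdict(Counter)
    for segment, answer in responses:
        category = categorise(answer)
        overall[category] += 1
        by_segment[segment][category] += 1
    return overall, by_segment

responses = [
    ("mobile", "I couldn't find the menu at all"),
    ("mobile", "Search results were irrelevant"),
    ("desktop", "Checkout kept rejecting my card"),
]
overall, by_segment = summarise(responses)
print(overall.most_common())  # ordered list of the biggest problem areas
```

The ordered counts give you the priority list described above, while the per-segment breakdown lets you spot which types of user raise which issues.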
As tempting as it may be, try not to turn naturally qualitative questions into quantitative ones. For example, avoid questions that ask how much people 'like' something (often on a scale of 'strongly like' to 'strongly dislike'). As this is all so subjective it can be pretty meaningless. You could get back a survey with people saying they love your site but they still may not be buying your product and you wouldn't know why. Erika Hall writes well about this here.
You can of course gather quantitative data with a survey, so asking how old people are, how much they earn, or whether they prefer x over y is good material for charts and graphs.
Just like when interviewing, try not to ask users to predict future behaviour. Don't ask 'how many times will you go to the gym in the next month?' because you'll just get back their ideal answer or one to impress you, whereas the reality is likely to be different. For more solid results ask about actual past behaviour instead ('How many times did you go to the gym last month?').
When reporting any survey results, make sure you explain who you surveyed, especially if they're not representative of your actual user base. Be careful not to take your results out of context and declare that 'all users think this'.
These days people are pretty over-surveyed and have inboxes packed with requests for feedback. To stand out you should offer some kind of benefit to people for completing the survey, otherwise they’re just not going to do it.
Do be careful about the level of reward however. If you offer too good a prize then you're likely to get people rushing through to complete it and not caring about what they write.
There are lots of tools for making forms and surveys out there. I've used several including Google Forms (free), which is fairly basic but collects your results in a spreadsheet for analysis. Others include Typeform (free and from $35/mo), which is arguably the best-looking form website out there, and Wufoo (free and from $14/mo), which I've used to create fairly complex forms with their conditional rules. Finally SurveyMonkey (free and from $29/mo) has some great analysis tools.
When it comes to surveying, the tool is generally less important than the content—as long as it's usable, people don't really care how the survey looks.
Writing a good survey isn't the quickest of tasks—it will depend on length but expect to spend half a day at least. Getting results can take up to a week.
User testing is arguably the most useful evidence gathering method of all. In my experience, the number of ideas for site improvements that come out of a session of user testing surpasses any other method.
If you've followed the process of gathering quantitative data first and you know you have conversion issues on your site, user testing can tell you why those issues are happening. You'll be able to watch users go through your flow and (providing your test is well designed) you'll see where they get stuck, and hear them tell you why they don't like or can't find something.
You can gain this knowledge from approaches such as lab user testing or even guerilla user testing. However I find remote testing has a few big advantages:
There’s no excuse not to set up a quick unmoderated remote test with a few users for each major feature you redesign or as a regular monthly/quarterly check-in.
There are three main methods of remote user testing: 1. facilitated & moderated by you; 2. facilitated (and possibly moderated) by someone else; 3. facilitated by you but unmoderated. Each of them works a bit differently, which I’ll explain here.
The moderated-and-facilitated-by-you approach involves the most work for you but can potentially cost nothing. You'll need to find the users, organise a time to have a video call, record the call, and then write up notes. I will ask clients to suggest users for me to contact, and will email them to book in a time for a call using the handy Calendly.
The call itself consists of using Skype or Zoom so they can share their screen with me. I then share a link to a prototype or website and can see and hear them as they navigate it. The call can be recorded with screen recording software (on Mac QuickTime is handy for this) and immediately after I write up my main observations.
The facilitated by someone else approach means hiring a company to set up and run your user test, which may or may not be moderated by them as well. Moderation is generally useful when you're testing a prototype or early version of something that requires a bit of explaining or isn't fully working.
Either way, your role is to specify what you want to test and to liaise with them as they develop the test. They'll then run and analyse it so you get a report at the end with the findings.
The unmoderated option consists of you setting up the test and putting it out to a panel of users who are ready to go. You then get back the videos of the users navigating the site for you to analyse and draw insights from. If you're testing a live website I think unmoderated is the best way to go as this is closer to the reality of how users actually browse the web.
You will need to develop some skills in putting together a decent test and you’ll need the patience to watch videos of people going through your site. As painful as this can be at times, as a UX designer or product manager, there are few better ways to understand what your users face.
An important part of writing a user test is to make sure you're not putting leading instructions in there. Like leading questions when surveying, you don't want to be pushing the users to do certain things or you'll never learn what they would naturally do. Keep tasks simple by saying things like 'show how you would search' rather than 'click the search button in the top right and fill out your dates and location'.
Some people simplify the whole test by only setting users one task like 'show how you would buy a product'. The danger here is that users whizz through the process and you don't get to see them interact with all parts of your site, hence why I prefer a bit of guidance with a task per step of the flow I’m testing.
Make sure you recruit accurately for your tests. You'll want people who match your actual users (you can use your audience data to discover this). It’s very rare that a website is designed to appeal to absolutely everybody so you want users who are going to provide authentic feedback.
Unless the flow is very complex, I've found testing with five users gives plenty of feedback and improvement ideas—adding more just tends to see repeated behaviours. Make sure you have five per major device category though, as people can behave very differently on each of them. For example, I most commonly test with five on desktop and five on mobile.
When it comes to analysing your own tests, try and stick to recording observed behaviours. Users might *say* that they don't like a feature (especially if it's new) only to be perfectly competent at actually *using* it. Quotes are useful in reports to explain behaviours but shouldn't be used if they don't reflect what actually happened.
Aim to watch all your tests through and annotate them first so you have a good sense of events, before summarising the repeated insights and critical issues in a lightweight report.
When it comes to unmoderated testing platforms that recruit for you, there are several pay-as-you-go options with different features to suit all budgets. Here are a few I've used:
Unmoderated testing can be done in a matter of 2-3 days for 5-10 user tests. If you're moderating it yourself then the extra organising tends to mean it takes about a week.
A decent portion of any designer's knowledge will probably come from blogs, videos, and content created by others. This is no bad thing as each person is limited in what they can work on and by sharing learnings we can all benefit from each other's experiences.
In recent years many companies—from startups to big corporations—have become good at writing up what they've discovered in the process of researching, designing, developing, and launching their products. These write-ups can often prevent you from making mistakes that others have already made. However you do have to exercise caution, as not all content is created equal and learned knowledge can be applied inappropriately.
There are obviously several types of content but I’m referring to those that describe a recommended design, a successful experiment, or offer guidelines that you could cite as evidence for making a design decision.
This is one method where there’s no specific process. You can build an RSS feed; create a list of trusted sources on Twitter; sign up to a selection of mailing lists; subscribe to podcasts or YouTube channels; even subscribe to magazines (very old school). Whatever your method, by setting up a regular content delivery mechanism you save yourself from going hunting for it and it instead finds its way to you.
I'm a big fan of the serendipity of Twitter and the chance to get content from publications you wouldn't otherwise look at in your stream. The content on there is also very current but the downside is you'll have to sift through a lot of noise.
When it comes to the content itself, there are some things to think about before deciding whether to apply the learnings to your product or company. You don't want to go telling everyone in your company you should do something and citing an article as evidence if it doesn't apply to your situation.
How solid is their evidence for what they are recommending? When blogs make big claims about how they 'increased conversion by 50% with one simple change' it's always worth asking how they measured that before rushing off to apply their findings. You may not get the full raw data due to privacy concerns but you should be able to get a sense of it.
Were their results from a focus group of 5 people in their office, or was it done as an A/B test with tens of thousands of users? If the former, take with a heavy pinch of salt and if the latter you might decide it's worth looking into. It doesn't just have to be based on large sums of quantitative data, if it has been learned through several rounds of user testing, that is also high quality research.
Is the business recommending this change in a completely different sector, do they have a different target audience, or are they at a different stage of growth to yours? If so then what they describe may not apply to your product at all. A video from a startup aimed at millennials that describes the perfect mobile navigation may not apply to enterprise software for the financial sector.
Plenty of companies just share what they've learned as a matter of giving back to the community but you'll come across some articles that are actually companies trying to push their software or service. In which case their software usually turns out to be the perfect tool for the job or the hero of their story. It's usually fairly obvious but can be done subtly using paid writers on third party sites.
Generally be very wary of content that makes broad sweeping statements about one design being better than another. I've lost count of the number of people who have asked me what the best colour for a button is because they read that "green increases conversions". Of course there is no such thing as a 'best colour' for conversions as it is going to depend so much on where it is positioned and the colours used around it.
Ultimately there are so many variables that it's hard to carry across statements about what would work on your site. But this doesn't mean you shouldn't keep reading, watching, listening, and getting inspiration for ideas.
One tool that is highly useful for getting through lots of articles is Pocket. Install the plug-in in your browser and the app on your phone; then when you come across something interesting you can easily save it to a list to digest in a quiet moment.
Reading an article doesn't tend to take long – this is something you can dip into at any time.
The following websites feature generally well-researched articles, useful for UX decisions:
First of all, a definition: what is design testing? I use it to mean running quick tests on designs that are still in progress and before they’re linked together as a prototype. This usually means showing screens individually rather than as a sequence or user flow, as you might with guerrilla user testing. This can be done by printing them out on paper or it can involve sharing the designs online.
Design testing offers a chance to gather user feedback early in the process and shape your design decisions with evidence before committing to building anything. It allows you to quickly test out a couple of options and solve arguments if it’s not clear what design would be best or your team can't agree on a way forward.
It's a lightweight method and won't give you lots and lots of insight but it is suitable for answering targeted questions and helping you course-correct. If you ask the right questions of a design you can save a lot of time in the long run.
I'm a big fan of UsabilityHub for carrying out this method (see tools), and for that reason I'll use their test types to cover the different ways to do design testing.
To use UsabilityHub you need to upload your exported design and then create a simple test around it with a few tasks before putting it out to testers. If you're doing this offline yourself then you can just show people the design and ask the questions verbally.
In all cases I'd recommend testing either a whole page design or a section of a page that would be visible within a viewport. Don’t test just individual elements like buttons without the context of the page around them.
Here are the types of test you can create with UsabilityHub and what I recommend using each for. You can combine these in a single test if you want to test a few things like first impressions (five second test) and whether people find your main button (click test).
When you get your results back, it’s worth doing a bit of analysis yourself—I find the ‘word clouds’ provided by default on UsabilityHub aren’t that useful. I export the results into a spreadsheet and do a bit of sentiment analysis. For example, if I’ve asked what people think of a certain design, I’ll classify the results as positive, negative, or neutral. This helps compare if I run the same test with another design.
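The sentiment comparison can be done with a tiny script once you've manually tagged each exported response. This is a hypothetical sketch, not a UsabilityHub feature: the labels are ones you assign yourself in the spreadsheet, and the sample data is made up.

```python
from collections import Counter

def sentiment_share(labels):
    """labels: manually assigned 'positive' / 'negative' / 'neutral' tags,
    one per response. Returns the share of each sentiment."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {s: counts[s] / total for s in ("positive", "negative", "neutral")}

# Illustrative tags for two versions of the same design test.
design_a = ["positive", "positive", "neutral", "negative", "positive"]
design_b = ["negative", "neutral", "negative", "positive", "negative"]

for name, labels in [("Design A", design_a), ("Design B", design_b)]:
    share = sentiment_share(labels)
    print(f"{name}: {share['positive']:.0%} positive, {share['negative']:.0%} negative")
```

Comparing the shares rather than raw counts keeps the two designs comparable even if different numbers of testers responded to each.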
The main challenges with running good design tests are to give users simple instructions and ask good questions. These are brief tests and to get useful data out of them you need to be specific, as you don't have the time of a full remote user test to cover a lot. With that in mind here are my tips for good task writing:
Before each test you get to set the scene for the user with an initial introduction. When showing them a webpage or app screen tell them what they would have just done to reach it (e.g. "searched for a bank loan", "shopping for jewellery").
One sentence is usually enough; don't go overboard with lots of detail about what they could be doing, as they'll never manage to digest it all and keep it in their head when going through the test.
People can see design testing as a chance to 'prove' that their idea is right and have some data behind it before showing the solution to management. One way you can bias the results in your favour is to write leading questions that suggest a solution or leave the user only with yes/no answers.
Instead you should write questions that allow the user to express their thoughts about a design ("what do you think...", "how useful is...") and let them give honest answers. If you skew the results your design might ‘win’ in the short-run but if you come to release it and it fails, you’ll have to deal with a much bigger problem.
These are short, simple tests and in the case of five second tests, the user barely gets a chance to see the design, so don't go overboard and ask too much. When you go into a design test you should have a single thing you’re looking to find out plus perhaps a related follow-up. Do the work to establish what this is and then stick to it in the test.
As mentioned above I find UsabilityHub is the perfect tool for testing at this stage, as the tests are super-quick to set up and quick to get responses to. It's also free if you have your own audience to distribute it to (perhaps on a mailing list). To use their panel, it costs $1 per random user or $3 per demographic-targeted user. I normally specify the demographics of my testers and run them past 25 people, which gives plenty of food for thought.
I'm not sure there are any alternatives other than creating your own design surveys. You can add images to most survey tools and get opinions from a group of users that way.
Setting up a test and getting back results from 25 users takes about half a day.
Whether over the phone or in-person, taking the time to interview your customers or potential users is a very useful method to truly understand their needs. The idea of just talking to people is a simple thing and can easily be overlooked. Very often it is the most powerful way to find a truth about their behaviour that you otherwise may not consider.
This behavioural truth is typically known as an insight. A good insight can be transformational in shaping how you present your offering or develop your service for users. The interviews can also give you real stories to illustrate those insights.
It's a chance to move away from your perception of the world and your company's internal views on how things are. Interviews give you the opportunity to see the reality of your customers' lives and how they think. Whilst the process isn't rocket science, good interviews do take a bit of time to organise.
There are a few stages to organising successful interviews:
It starts with deciding what you want to get out of the interviews as you can't cover every subject and you'll want to keep conversations on track. Define what the focus will be and the areas of behaviour you're looking to understand. It will usually be a particular process (e.g. building flatpack furniture) or thoughts about a product that you want to discuss.
Once the direction has been decided you should capture what you want to cover in a discussion guide. Essentially this is the list of questions that you want to ask your interviewees and the topics you want them to talk about.
Collaborate with others in your team to put this together and use it to get sign-off on the scope of the research. It will form the basis of your interviews but you don't have to follow it to the letter.
You'll also need to recruit your interviewees. You'll either want to find existing customers of your site or people you see as your target audience. For the former you should have contact details of people who have purchased from you or who have enquired, whom you can reach out to—though don't harass them if they don't respond.
To get potential users you might have to put out adverts to find them or get help from a user recruitment company. You'll need to offer an appropriate monetary reward for their time.
Whoever you target you'll almost certainly need to contact more than you actually speak to, so expect to spend a bit of time on this. I'd aim for about 10 people in a set of interviews but if you've got the budget and time you can go for more.
When it comes to doing the interview, it can be done either in-person or over the phone/Skype/Zoom/whatever. Start the chat off with some small talk and try to build a bit of rapport before launching into the questions. Then once you're in the conversation you should mostly stay quiet and encourage them to speak, as it's their experiences you're interested in.
If you're on your own you could record the interview so you can play it back and capture anything you missed later. Where possible I prefer to have two people do the interview: one can focus their attention on the interviewee and hold the conversation while the other can capture notes of interesting points. This saves recording, which I think is best avoided unless really necessary.
After the interview is done you should spend a bit of time reflecting and analysing what was said. This can just take the form of the two interviewers chatting and comparing thoughts, before noting down key points of feedback. It's best to do this straight after the interview or things can be forgotten.
If you then want to disseminate this knowledge more widely, the key findings from all your interviewees can then be written up into a report.
Also don't forget to analyse the process of interviewing itself: what worked well, what didn't? Are there any questions you could tweak or drop altogether? A lot of this you'll learn through doing; it's hard to get perfect first time.
Try and recruit a range of the types of people that use your site; don't just settle for the easiest ones to find. Sometimes the best insights are to be found in the extremes and the users that do odd/unlikely things. You also don’t want to just have the same interview over and over again.
Be careful to avoid our old friend 'leading questions' (just like when running design testing and surveys), which mean the interviews can just confirm what you already think. This is a great opportunity to go in with an open mind and see what surprising ideas can come up (not that you have to use them all).
Some interviewees can dry up or give short answers. It's your job to encourage them out of their shell and make them feel comfortable to speak further. Try simple things like saying 'tell me more about that', 'what do you mean by...' and of course, 'why?'. Often asking why several times gets you to the true reason behind a person's behaviour and that magical insight, so don't just settle for their well-rehearsed initial answer.
Be aware that people are bad at explaining what they would like and predicting the future. Don't get them to tell you what they want, instead get them to give you concrete examples of actual past behaviours. It's your job after this to identify a proper solution that will help meet theirs and your goals.
Insights may not be all that revelatory when they appear so don't only be on the look out for something earth-shattering. They can be quite simple but if they contain a kernel of truth about how users actually behave/think they can shape the direction of your project. Something as basic as 'customers will only buy if there's free shipping' can cause a huge shift in the way you sell.
A big part of creating a good interview experience for your interviewee is being organised with your dates, times, and email notifications. This will help things run smoothly, so don't ignore it—here's an example of how neglecting it created a poor interview experience.
Most of the tools for interviewing aren't going to be particularly fancy and are things you should feel comfortable using. It's a good idea to write your discussion guide on a collaborative writing tool like Google Docs.
Calendly is a very useful tool for allowing interviewees to book in times in your diary for the actual chat. The main tool for interviewing is just a way of recording it—these days your phone will do a decent job—and notes can obviously be taken on a device, or paper and post-its.
The online service User Interviews looks really useful, and will recruit people for you from $30 per person.
Writing and setting up the interviews is the time-consuming bit and can take a few weeks. Each interview should be 30-60 minutes.
This method involves making use of the unsolicited user feedback a company receives. It could come from sources like contact forms, emails, social media, phone calls, or in-person conversations. Unlike live chat or the feedback you've sought out with surveys, it can arrive at any time.
With this kind of user feedback you often learn the things that truly bother people, as they have taken the time to seek you out and write to complain or suggest something. They haven't just been asked a question in a survey, where they may have come up with something for the sake of giving an answer. It's powerful stuff, and I know websites that have been developed using this as pretty much their only evidence method, which have resulted in products that users absolutely love.
It is important to have an approach for collecting this evidence as it is the kind of thing that can get lost if there is no system for flagging and storing it.
Setting up a system to record this feedback will usually involve a bit of manual work. The majority of your feedback will come into customer service teams or from customer-facing staff. You will need them to log every time a customer complains about something or makes a suggestion for an improvement or new feature.
This can be done via CRM (Customer Relationship Management) software if you have the budget, or it can simply be a shared spreadsheet where staff can enter a summary of the issue and an identifier for the user that raised it. The feedback this log gathers should be more long-term and focussed on improving features. These customer service staff need to be able to distinguish when something isn't a request but is flagging something that is just plain broken—as this means a bug ticket needs raising with the development team.
Once you've got a solid system for logging ad-hoc feedback, someone (usually a product manager) will need to check it regularly and classify the different issues to make spotting similar ones easier. Either use tags in a CRM or add an extra column of data in a spreadsheet where you specify what category issue each thing is, for example 'password reset emails slow', ‘wants wishlist functionality', 'bigger upload capacity' etc.
When you group issues like this you can keep a running total of the most requested or complained about features. This leaderboard can be a part of your suite of evidence for deciding the priority order for areas of your product to redesign. By checking it every week or so you can see if there are sudden spikes in certain issues or if there’s a leading problem that needs addressing more urgently.
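The leaderboard and spike check described above can be sketched in a few lines, assuming your shared log exports as `(week, category)` rows—the week labels and category names here are invented for illustration.

```python
from collections import Counter

def leaderboard(rows, this_week, last_week):
    """rows: (week, category) pairs logged by customer-facing staff.
    Returns (overall ranking, per-category change since last week)."""
    overall = Counter(cat for _, cat in rows)
    current = Counter(cat for week, cat in rows if week == this_week)
    previous = Counter(cat for week, cat in rows if week == last_week)
    deltas = {cat: current[cat] - previous[cat] for cat in overall}
    return overall.most_common(), deltas

rows = [
    ("2024-W01", "wants wishlist functionality"),
    ("2024-W01", "password reset emails slow"),
    ("2024-W02", "password reset emails slow"),
    ("2024-W02", "password reset emails slow"),
]
ranking, deltas = leaderboard(rows, "2024-W02", "2024-W01")
print(ranking[0])   # most-complained-about issue overall
print(deltas)       # positive numbers flag sudden spikes in an issue
```

The overall ranking feeds your redesign priority list, while a large positive delta for a category is the "sudden spike" worth investigating urgently.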
When dealing with this kind of feedback there will always be a bias towards the negative. As humans, frustrations and bad experiences stand out to us more than smooth, easy ones, and as online user experiences improve, people become more accustomed to everything working really well first time.
Just be aware that you're mostly going to see problems highlighted in this feedback so be careful not to throw out everything that is working when redesigning to solve these issues.
Generally, people don't like change. A common time for a raft of complaints is when you release a big redesign of a major part of your website. If it's something lots of people use every day, don't be surprised to see confused and sometimes angry feedback when you first launch. This is to be expected as they adjust.
However if you get an avalanche of negative feedback or users are still making the same complaints a few weeks after launch then you might have a real problem. At this point you should try to understand why the complaints are coming in (user testing can help you see it in context) so you can fix it.
Not all complaints are created equal. Someone quickly sending a moaning Tweet is not as meaningful as someone taking the time to phone up and explain their problem. You might want to weight your feedback to reflect the source it came from.
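If you do decide to weight by source, one simple way is to score each piece of feedback by the effort behind it. The weights and source names below are purely illustrative assumptions—choose values that reflect your own channels.

```python
# Illustrative weights: a phone call shows more effort than a quick tweet.
SOURCE_WEIGHTS = {"phone": 3.0, "email": 2.0, "contact_form": 2.0, "tweet": 1.0}

def weighted_score(feedback):
    """feedback: list of (source, category) pairs.
    Returns a weighted total per category instead of a raw count."""
    totals = {}
    for source, category in feedback:
        totals[category] = totals.get(category, 0.0) + SOURCE_WEIGHTS.get(source, 1.0)
    return totals

feedback = [("tweet", "search"), ("phone", "search"), ("phone", "payment")]
print(weighted_score(feedback))  # {'search': 4.0, 'payment': 3.0}
```

Here a single phoned-in payment complaint nearly outranks two search complaints, which matches the intuition that higher-effort feedback deserves more attention.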
Be careful not to misunderstand the feedback. It can be hard to truly understand what someone means if they've only written a few sentences. People can have all sorts of odd ways of phrasing online behaviour and probably won’t know the correct technical terms.
Flag any feedback you're not 100% sure about and try and clear it up with the person that gathered it. Even better, if you have contact details for the user that raised it, then get in touch with them to get clarity.
Customers and users don't always know what they need so focus more on their problems than suggestions for new features. They may think they need a big all-singing, all-dancing feature but a simple tweak may be just as good. It's your job to define the solution, don't just implement what they ask for.
There are lots of different CRMs out there, and they’re generally not that cheap to implement. Some focus more around sales but usually incorporate customer service too and a few of the most well-known are: Salesforce, Zoho, and Sugar.
The most basic free shared log you can use would be a Google Spreadsheet (a tool I've mentioned a few times in this guide and something I’ve seen many startups be built on!).
To check on and classify 50-100 pieces of feedback a week should take a couple of hours.
A dashboard is a way of tracking your choice of quantitative data about your website. It is something I’ve found most useful for long-term projects or when working in-house with a company for an extended period of time.
It means you can define the important metrics for your project just once and then let live updates come in. You won't then have to manually go searching for the data in analytics software each time you want to check it. This dashboard can be your high-level view of a website, which enables you to easily spot anomalies in performance.
It should be something that is easy to share so everyone in a team or company is on the same page. This can be especially useful for people who may not have access to your analytics software or may not have the time or capability to go rooting around in it for the data they need.
The first thing to do is define what is important for you to see in your dashboard. Don't go creating it without a plan or you can easily end up tracking things for the sake of it.
The most important thing to track is your key metric or goal which determines success for your website or part of a website. For some that could be sales while for others it could be sign ups—either way it's likely to be something related to making the business money.
The next thing to have on your dashboard is conversion rates for the steps in the user flow to reach your goal. This could be the same as any funnels you've set up elsewhere but you can set your dashboard to be more or less granular as required.
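The step-to-step conversion rates for such a funnel are simple ratios of consecutive visitor counts. A minimal sketch, with invented step names and numbers:

```python
def step_conversion(funnel):
    """funnel: ordered list of (step_name, visitor_count) for the flow to your goal.
    Returns the conversion rate between each consecutive pair of steps."""
    rates = []
    for (name_a, count_a), (name_b, count_b) in zip(funnel, funnel[1:]):
        rates.append((f"{name_a} -> {name_b}", count_b / count_a))
    return rates

# Hypothetical weekly numbers for an e-commerce flow.
funnel = [("landing", 10_000), ("product", 4_000), ("checkout", 800), ("purchase", 400)]
for step, rate in step_conversion(funnel):
    print(f"{step}: {rate:.0%}")
```

The weakest link jumps out immediately—in this made-up data, the product-to-checkout step loses the most users and would be the first place to investigate.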
The other data to consider including on your dashboard are your important secondary or engagement metrics. These are ones that tell you a bit more about how a page is performing such as things like bounce rate, time on page, and in-page events.
These secondary metrics can be for just one or two key pages in the flow, depending on what you have learned is a useful indicator. It might be the case that different metrics matter for different steps in your flow. For example bounce rate will be important for landing pages while specific button clicks would matter more for form pages.
Exactly how you set up your dashboard will depend on the tool you use.
Once it is set up it’s just a case of checking back regularly and building up the data over time. One of the most useful features of a good dashboard is to easily be able to compare how a key metric has changed over weeks or months.
Don’t try to track too much detail with your dashboard. If you put in every stat you can get hold of for your website you might as well just use the standard interface for a web analytics package. The point is for it to give you key information at a glance.
Setting up a dashboard can take a bit of tweaking to get it performing correctly. Make sure the data that appears in your dashboard tallies with what's in your analytics tools, and that they're measuring the right things.
If you check your dashboard and spot anomalies or downturns in metrics, be sure that you are comparing like with like. If you see a conversion rate drop, check that the period you are comparing it against is the same length of time. You may need to check if it is the same time of year, as seasonality can be a big factor in conversions for many sites.
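A quick guard against comparing unlike periods is to compute the change only when both periods cover the same number of days. This is a minimal sketch, assuming you can export a list of daily conversion counts for each period; it won't catch seasonality for you, but it enforces the equal-length check.

```python
def period_change(current, previous):
    """Percentage change in a metric between two equal-length periods.

    Takes daily counts for each period; refuses to compare periods of
    different lengths, a common cause of false alarms on dashboards.
    """
    if len(current) != len(previous):
        raise ValueError("periods must cover the same number of days")
    cur, prev = sum(current), sum(previous)
    return round(100 * (cur - prev) / prev, 1)

# Hypothetical daily conversions: this week vs the same week last year
print(period_change([12, 9, 14, 11, 10, 8, 7], [10, 11, 12, 9, 9, 10, 8]))
```

Comparing against the same week last year, rather than the week before, also sidesteps the seasonality problem mentioned above.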
If your key conversion metrics really do drop then this should be the start of an investigation rather than a time to panic and make drastic changes. Look at page data to see what is happening on specific pages or browsers, view heatmaps, and check visitor recordings to see detailed user behavior.
In fact it can kick off a chain of different sorts of evidence-gathering as I explain through the framework in my redesign course. Ultimately it's a case of playing detective to get to the bottom of your issues.
To make one of your own for free you can use Google Analytics' own functionality under the ‘Customisation’ section, or the recently introduced Google Data Studio.
A tool I've used before is a plug-in such as Supermetrics (free, and from $39/mo) to pull the data into Excel or a Google Sheet and manipulate it there. This allows you to choose the exact resolution you want to see and means you can pull in lots of historic data.
Setting up the dashboard is a one-off task of about half a day; after that, check back weekly.
A persona is a profile that describes a type of user that a business has. There will typically be several that cover the characteristics of different types of user or customer. They are more of a second-hand form of evidence, because they are often distilled from a piece of user research like customer interviews.
However it is common to be brought onto a project and to be given personas, which were the output of previous user research. At this point they can become your main form of evidence about an audience. This often happens if you are working on a budget and you don't have the time to do a new piece of research.
If they are put together well they can tell you a lot about the people you should be designing for, and should be combined with your other sources of evidence like audience analytics data and surveys. On the other hand, if they're missing key pieces of information or they are getting old, you’ll need to use them carefully.
Sometimes they might not be called personas, but could instead be described as 'segments' or ‘customer groups'. Either way, they fulfil the role of summarising the types of people who use your product or service.
In this piece I won't go too much into how to create personas as there are plenty of articles out there that explain this (see resources). Instead I'm more interested in how to use them as a piece of evidence for gaining insight and helping you design.
First you should check if your personas contain enough information: they should include the person's demographics (age, location, etc.); their behaviours (particularly around technology); their motivations/goals; and their fears. It's important to have those fears and motivations in there to help you understand why people want your product or service. A good piece of original user research should have probed into this.
It’s good to critique them. Go through the information provided and see how it tallies with any other evidence you have—take notes or highlight sections. For example, demographic information should match what you can see on web analytics, and motivations should chime with survey responses. Where something seems wildly different it's worth flagging it and questioning it with your client or others on your team.
There should be more than one persona, so make sure to understand the differences. Whilst some products are laser-focussed on only one type of user, in reality most will appeal to a few different types of people.
One of the advantages of personas is they remind you that your audience consists of people with varying motivations. This helps you avoid the trap of designing for a non-existent perfect customer, or trying to design for 'everyone', which can mean you end up creating something for no-one.
Once you have understood the personas you should be able to create some goals that you need your designs to hit in order to satisfy these user types.
Whilst I do promote 'outline personas' as a quick way of summarising your audience analytics data, they aren't a replacement for full personas. They give you a sense of who your audience are and help you understand the different groups out there. They can be very helpful for knowing who to recruit for research or user tests and can tie groups of users to online behaviour. But without some qualitative research you'll always miss the important understanding of why people do what they do.
Be careful of excess demographic information in personas. It's useful to place people in your mind with an age, location, and job but any more than this and you can run the risk of drawing irrelevant conclusions.
Unless it's important to your product, ask yourself: does it matter that your users have three dogs, speak Swedish, and exercise at 5am? As humans we naturally make stories out of these things and create connections, even when a person's pets have nothing to do with them choosing a financial service (for example).
Watch out for being given too many personas or customer segments. If research was too broad, a client can end up with ten or more personas, which makes it very hard to avoid designing generically and creating lots of features to satisfy everybody.
Also over time a company may have amassed several rounds of personas. You should try to understand which are the most current and relevant ones and trim them down to no more than four or five that summarise the majority of your users.
Finally, do remember that if you are given personas to work with, they are second-hand and rarely replace actually talking to or watching users. If you can, try to speak to the people who created them and probe them on the details. It's usually worth doing some of your own first-hand research alongside to complement or disprove them.
Personas take no specialist software to create. Whilst I'm sure specific tools exist, they can just be text documents with subheadings or simply formatted PowerPoint, Keynote, or PDF files.
The process of reading through and understanding existing personas should be a quick one—an hour or two.
There are a lot of social media platforms available to communicate to and converse with your users: Facebook, Twitter, Instagram, Pinterest, and LinkedIn being a few of the big examples. Just like any online medium for communicating with people, the data it generates offers you opportunities to learn more about them.
Social media can very quickly give you plenty of quantitative data in the form of things like follows, likes, reactions, retweets, etc. A lot of companies will have one or more people watching these numbers to determine the success of campaigns and reactions to content.
It also provides a place to gather more qualitative feedback from individuals in the shape of comments, praise, and complaints. In both the qualitative and quantitative cases it’s easy to lose the signal amongst the noise, so you need a plan for digging through it all.
Just as there are lots of social media platforms there are also many tools for measuring impact. If you’re putting social content out daily it’s worth signing up to one of these in order to track your social performance (see tools below). Of course how you measure success on social media should be tightly related to what you are using it for.
However, for a UX designer, marketing metrics on their own don't do much to help you understand your audience. In terms of gathering evidence of behaviour, social media is more valuable as another source of unfiltered user opinion and feedback.
Are people complaining about features or bugs on your website? Are people reacting (positively or negatively) to the titles of articles you post? Are people praising or telling of frustrations with a service? You should have a system to record the repeated ones, and combine them with other user feedback, to use when redesigning.
It can also be a good starting place to discover real-world problems people have that you otherwise weren’t aware of, as you can be sure people will moan about them online. For example if you’re looking to improve airport parking, then a search for that term along with words like ‘annoyed’ or ‘stressed’ will throw up plenty of real life problems. You can then take these specifics and research them further by surveying or interviewing more people.
Be careful not to track vanity metrics—defined as numbers that don’t help you improve your product. If you're trying to promote an article then the meaningful action would be clicks through to read it. It doesn’t matter how many likes and retweets you get if no one actually sees the content.
Social media is full of noise and it's easy to think things are going well just because there is a lot of activity. It’s easy to convince yourself that having loads of followers is a big success but they are far from the same thing as active users or paying customers. It’s a better idea to tie this to your website analytics and track how many of those followers are going on to visit your site.
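The point about likes versus click-throughs can be made concrete with a quick calculation. This sketch uses entirely hypothetical post data and field names (most social analytics tools export impressions and link clicks in some form), and computes click-through rate per post rather than counting reactions.

```python
def click_through_rates(posts):
    """Compute click-through rate (%) per post from impressions and clicks.

    Field names ('title', 'impressions', 'clicks') are hypothetical;
    map them to whatever your analytics export actually provides.
    """
    return {
        p["title"]: round(100 * p["clicks"] / p["impressions"], 2)
        for p in posts
    }

posts = [
    {"title": "New feature", "impressions": 4000, "likes": 320, "clicks": 60},
    {"title": "Launch post", "impressions": 2500, "likes": 45, "clicks": 110},
]
print(click_through_rates(posts))
```

Note how the post with far fewer likes ("Launch post") actually sends many more people through to the content—exactly the kind of vanity-metric trap the numbers alone would hide.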
It’s also worth keeping in mind that what people say on social media is not necessarily the same as what they actually do. Take feedback on these platforms with a pinch of salt. It’s such an easy, low friction way to communicate that people can say things just to fill their timelines.
In addition the default on social media is for people to move to the extremes (like saying they’re super happy or very angry) and you tend to get much more negative emotions than you would from people on a phone call. If a user is complaining, try to move them to a more nuanced channel for feedback, where you can get to the bottom of their issue.
There are a lot of options for tracking social performance and I’ve only dabbled with a couple of them. Buffer (for decent analytics it starts at $99/mo) helps you learn what time of day your posts work and shows quant reaction data for each one. Hootsuite (from $25/mo) allows you to manage a dashboard of different social feeds and see analytics and reports on performance.
For finding comments you can always just use the search tools on social sites, which vary in quality. If you use Facebook and Twitter adverts you gain access to a lot of demographic data on users who engage, which can help you learn more about your audience.
Just checking social accounts for feedback and comments can be done in an hour a week.
A lot of these guides look at technology-enabled methods for gathering evidence but sometimes—no matter how many analytics tools you have—you need to go to the source and get some raw data. This means actually observing people and their behaviour ‘in the wild’.
This is particularly true if you are looking to design an online service that interacts with people's daily lives. If you're designing an app which helps people make healthier decisions when out shopping in supermarkets, you'd better go out and get a good understanding of how people actually shop first.
You can of course survey and interview people to find answers but if you're to truly understand what they actually do (rather than what they say they do), you should consider accompanying them when doing tasks. You can learn where they get stuck, why they have problems, how they find work arounds, which things they love, what they ignore, and more.
I’m calling this field research although you might also find this kind of thing being called ethnography.
The first challenge is going to be finding appropriate people to study. If you have existing customers you might be able to reach out to them and ask if they're willing to participate in research. If not you can always try putting out adverts in places like Gumtree or Craigslist. Try and make sure any non-customers match your audience.
You should pay people for their time, so come up with an appropriate sum for the duration of the task. If the people you need are just too specific or hard to find, then it might be worth using a recruitment company (more and more online ones are cropping up).
How many people to recruit is a question of how complex the product you’re looking to research is (the more there is to observe, the more people you’ll want) and how much budget you have. You need to build enough sample variety into your sessions so you reach the point where you see overlap with insights. 10-12 people is a decent target but just one or two is still better than none.
The next thing to be clear on is what you want to watch people doing. Is it a very specific task, like going to a doctor? Or is it a full process, from writing a shopping list to unpacking the contents of their shop into their kitchen? Once you've defined what you're interested in, don't try to structure it too much. It's your job to just observe and record what happens.
You then need to go out into the field and watch what they do. Make sure to have their permission to record the events. You might want to try and film the whole thing or just snap clips of key moments. Of course you can have a notepad with you to jot down questions or incidents to dig into later.
If it’s a task that they do a lot then asking questions as they go and getting them to think aloud is a good way to understand why they’re doing things. If you want to see them use a new product then try not to interrupt and influence their learning process but ask any questions at the end.
After the task is over you need to take a bit of time to record what happened. It’s worth setting up a report template with things like person description, stories, quotes, insights, highs, and lows. Fill this in for each participant, ideally with another person you researched with, so you can check you agree on what you saw.
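If it helps to keep reports consistent across participants, that template can even be a simple structured record. This is only a sketch: the field names mirror the suggestions above and can be renamed or extended to suit your study.

```python
from dataclasses import dataclass, field

@dataclass
class FieldReport:
    """Per-participant field research report; one instance per session.

    Fields follow the template suggested in the text (description,
    stories, quotes, insights, highs, lows) and are illustrative only.
    """
    participant: str
    description: str = ""
    stories: list = field(default_factory=list)
    quotes: list = field(default_factory=list)
    insights: list = field(default_factory=list)
    highs: list = field(default_factory=list)
    lows: list = field(default_factory=list)
```

Filling one in per participant, ideally with your co-researcher, makes it easy to compare sessions and spot where your notes disagree.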
The big thing to watch out for in field studies is being sure not to insert yourself into the research too much. Don't ask the participants lots of questions as you go along, or help them too much if they're struggling with something, or get in the way with recording things. You could end up biasing the results and missing out on opportunities to learn how they truly tackle problems.
If you do have lots of questions you want to dig into further then save them until the end of the observed period and ask them in a more formal interview then. Good questions to ask are 'why' they do things, especially if it isn't obvious from simply observing.
Don’t let perfect be the enemy of good in this kind of evidence gathering. If the task goes a bit off-course or the participant ends up doing something you didn’t expect it can still be a chance to learn. By its nature this kind of work can be messy but you can still spot very real insights.
You should also try to be in ‘observe’ mode during the research and note down everything without bias. Then take the time to reflect and analyse later, rather than jumping to immediate conclusions.
The tools for this kind of research are your classic 'reporter' tools of notebook and pen, camera, dictaphone, and practical clothing. A smartphone can contain all of these tools but it's still a manual job to capture the events.
You might not be able to video the whole thing but it’s at least worth taking photos of key moments to help remind yourself how things happened later.
dscout is a powerful enterprise tool to remotely get videos of users going about their daily lives and completing tasks.
This is going to completely vary depending on who and what you're looking to observe but budget a few weeks to run the whole thing.
Thanks to Paddy Long for his help putting this post together.
This method is a more traditional marketing approach to gathering evidence. You bring customers or people from your target audience into a room and get their opinions on existing or new products. It promises the chance to easily understand what your customers want in one session, and gives you a chance to probe them on the areas you are interested in.
Ideally a focus group will provide you with feedback and reasoning, so you can go away and address the concerns. They may even give you quotes that describe how much they like your product, which you can use to drive marketing.
However when used poorly a focus group can become the justification for sweeping assumptions and overconfidence, based on a few throwaway comments. Focus groups can be a good place to start your research and help direct it, but they shouldn’t represent the only evidence you find.
This is another method where recruiting the right participants matters. You want people who are either your actual audience or who match them.
If you already have personas defined then it’s worth trying to get representatives from each of those groupings and not just populate your group with one type. If everyone is too similar then you’ll potentially only hear a chorus of identical feedback.
Just like when interviewing or user testing, it’s important to write some kind of script or discussion guide, which captures the questions you want to ask. You don’t need to stick rigidly to it—part of the benefit of focus groups is letting the group evolve the discussion to areas you hadn’t thought of—but it’s there as a structure to fall back on.
Like interviews or user tests it’s a good idea to have a couple of people facilitate: one can talk and engage people while another takes notes and records. If you’ve not run any focus groups yourself before it can be worth getting an independent agency with a good track record to do so (they’ll help you avoid the mistakes outlined below).
The feedback you get from a focus group shouldn’t represent the end of your evidence-gathering, as it’s easy for them to come to skewed conclusions. It is better to take the outcomes (particularly any insightful comments) and use these as starting points for further research.
One reason to be wary of focus groups is that the results can be so easily manipulated, either by biased facilitators—who have a vested interest in the product being successful—or by the loudest voice in the room. Biased facilitation can appear in other forms of research too, but it's made worse in focus groups by the power of ‘group think’: once an idea is suggested to the group it can spread quickly, so the group ends up repeating back what they’ve been told.
The biased facilitators phenomenon is shown well on the TV show The Apprentice where it’s the only user research method they use in product-creation tasks (and then they often proceed to ignore what people say anyway).
‘Group think’ creates effects where people don’t always behave honestly. If one member of the group loudly declares that she dislikes something, quieter members who think the opposite may agree or stay silent to avoid conflict, or for fear of looking silly.
The group can also get sidetracked by one or two people’s opinions taking up all the time and the session can run out before everyone can have a say. You can miss out on the nuanced thoughts of some people, which you would be able to dig into when interviewing individually.
Another big problem with focus groups is that you’re placing a lot of weight on what people say rather than what they do, two things that are often quite different. This is illustrated well by the classic yellow Sony Walkman story, which is worth reading here if you haven’t heard it.
In customer interviews you should try to ask people about actual behaviour but this can be harder to do in a group setting, where people can say things to impress others or to match the group consensus.
The focus group is potentially a dangerous beast and not something I come across much today in tech product decision-making. It used to be the preserve of corporations, and occasionally a client would present me with focus group feedback as their main research.
The downsides can of course be avoided but require careful moderation and analysis afterwards, and there just aren’t many people in the digital/startup space with that experience.
This isn’t really one that requires lots of tools and software, just a method for recording and reporting on it afterwards. You can do remote focus groups in online chat software like Slack—I’ve done this for new idea development.
The same rules apply online as in person. You tend to need to marshal users more to keep them on topic, but it gives less vocal people a chance to be heard and you have the benefit of a written transcript to analyse at the end.
A focus group itself should last 1-2 hours to keep it, er, focussed. Set up and analysis requires a few days.