Competitor Analysis

What you can learn

A UX competitor analysis involves assessing competitor sites to see how they design for their users, who are potentially facing similar needs to yours. Competitors can be direct rivals operating in the same sector, or they can simply share features: a high-end jewellery brand competes with other high-end jewellers, but it might also offer customisation options similar to those of premium technology products.

If you’re new to working with a client, a competitor analysis is good for giving you the market context the company is operating in. It can also show you what users will expect if they’ve used similar sites before. If you’re working in-house you might already be very clear on who the competitors and influences are, but a proper analysis allows you to build up a deeper body of research to refer back to in future projects.

A decent analysis will help you gain an objective overview rather than get fixated on specific features. You should first be very clear on the issues your site has and the problems you're looking to solve. You can then go to the competitors to understand *how* they have tackled these problems and can assess how well their solutions would help your users.

You might want to solve your users' problem in a completely original way but this can mean users will need to work hard to understand it. People learn patterns through browsing several sites: a smart application of existing approaches will help users more intuitively know what to do.

How to do it

First you need to decide what it is you want to find out about your competitors. This is defined by the challenges you’re dealing with in the project and what you want to improve—you should have learned about these from interviews, user testing, or visitor recordings. An example of a challenge might be getting users to sign up.

It might be that you can look at the same competitor sites for solutions to every challenge, in which case you should aim for at least 6 to study. If your project involves several quite disparate areas then you should look at the most relevant sites for each challenge—aim for 3-4 for each one.

When assessing how sites work you should screengrab or record your journeys, not forgetting to do it for both mobile and desktop, as they will be quite different. This way you have a record of what you’ve seen.

When you've gathered your raw materials, choose a document format you prefer for recording your findings: it’s probably going to be a text doc or presentation. Then create your report by going through each of your categories and writing notes backed up with screengrab evidence of how others are solving that problem. Cover which approaches most impressed you as a user, and which approaches you think should be avoided.

Finally, I like to summarise my recommendations for the most effective features I think my project could incorporate. This should provide plenty of inspiration so you can start designing solutions.

Watch out for

Copying one

You should never study only one competitor because a) this is ripping someone off and b) you're missing the opportunity to learn a lot more. It’s easy for companies to get fixated on a market leader and want to copy them to achieve success, but this focusses on yesterday’s solutions rather than today’s user problems.

Matching everyone

Clients and stakeholders can often feel the need to follow the pack and say things like: “We should have feature x, everyone else has it so it must be good”. Maybe this is true, but it’s possible everyone else just copied the market leader without thinking (see above).

First understand the needs your users have, then assess the possible solutions and determine which best solves your problem. If your users regularly use other sites then that is a strong reason to consider similar functionality, so they don’t have to learn something new.

Not considering competitors

It’s easy to fall into the trap of thinking “we don’t want to copy anyone, we’re unique”. Working on a completely new concept is rare—there’s usually someone out there doing something similar even if it’s not directly competing. Even if you intend to stand out, an analysis of others can at least help a company position itself and be clear on how it differs.

Irrelevant sites

Don't study websites just because they are big players or because you like them. For example, just because Apple is one of the richest companies in the world doesn't mean it's the right company to look at for your project. Your users could be completely different and have very different motivations. Make sure there is a solid reason for each website that you look at.

Example tools (and cost)

The two main tools you'll need are something to screengrab with and something to record your findings in. My current screengrabbing extension of choice is FireShot for Chrome (I've used various others in the past but they all seem to stop working eventually). The good thing about Chrome is that you can also easily spoof mobile devices and screenshot those.

For reporting findings, Google Slides (free) works well as it allows sharing with others so they can comment. You could also use a more visual approach like InVision (free and from $13/month) and add your notes to that.

How long does it take?

To do a complete competitor analysis and report on it normally takes 1-2 working days.

Last updated on 9 July 2019

Save time with this

UX Competitor Analysis Report Template

$5

(+VAT in EU)

For Keynote, Powerpoint & Google Slides

A template for a lightweight UX competitor analysis. 12 pages including report introduction; contents list; competitor site list; section introduction; sheets for key findings within each section, with space for screenshots; and client recommendations.

Guide to Audience Data for UX Design

Audience Data

What you can learn

Audience data is quantitative data about the users of your website. The best-known and most easily-accessible repository of this can be found in Google Analytics (GA).

Due to their massive reach, Google can put together some pretty accurate information on who makes up your audience. They do this through a mixture of inferences and real data about users.

The inferences come from knowing what people are searching for and clicking on, while the real data is personal details they have from users who are logged into Google services (like GMail) while browsing. It's anonymised so you can't tell who the individual users are (which is fortunate if you care about privacy).

GA’s audience data gives you some basic demographic information, such as age, gender, and location, which helps you build a picture of who your users actually are. You can then use this data to segment your audience or decide who you interview or user test with. You can build on it further by surveying your users.

How to do it

For the GA approach you’ll first need to turn this option on, to say you’d like to collect this information about your users. Once it’s being collected, you can fire up GA and head to the Audience section, where you’ll find reports covering your users’ age, gender, location, and interests.

You can then either summarise this data to give an overview of who your users are (with something like a user statement) or combine it with user interviews to create personas. Personas also help you view your audience as three or four types of people, rather than thinking of them as a single entity.

Another fruitful area to combine into your personas is the acquisition channel—does one audience tend to reach you from social media, while another from Google searches? This can help shape how you speak to people on those different channels.

Watch out for

A decent sample size

On GA, in the top right of the age, gender, and interests sections, it tells you what percentage of your users it has the data for. The larger that number, the closer it is to the whole of your audience and thus to reality. Small samples on low-traffic websites can mean you’re getting a skewed representation of your audience; as a rule of thumb I ignore it if the number shows less than 20%.

A long enough time period

Don't look at too short a time period, or that can also twist your data. I like to look at the last three months' worth of data when assessing my audience, so it balances out any random fluctuations in traffic. Many sites see the make-up of their audience change at different times of the year, around big events like Christmas.

Check up

It's worth checking this data every few months to see if it has shifted (once per quarter is about right). Then update the summary or outline personas you have of your audience and circulate it around your team. You should flag any notable changes.

Avoid assumptions

Just because you now know more about who your users are, be careful not to make big assumptions about them, e.g. "Most of our users are young so they can figure out complex functionality". You won't know that until you get more detailed knowledge by user testing or interviewing them.

Example tools (and cost)

As I've focussed on here, Google Analytics (free) offers this data for your site—though you'll need to turn on the Demographics option to capture all of it. In addition, online marketing platforms like Google AdWords and Facebook Ads give you very detailed audience information about who clicks on your adverts.

How long does it take?

It takes only an hour or two to study the data, and record a summary user statement.

Last updated on 9 July 2019

Save time with this

Behaviour Canvas + Audience Canvas

$2

(+VAT in EU)

Printable PDF templates

Two A4 printable canvases to help you make sense of all the research and data you gather on your website. Understand the core of how users behave on each page of your site and keep focussed on who your users are.

Guide to Net Promoter Score for UX Design

Net Promoter Score

What you can learn

Net Promoter Score, or NPS, is a popular method for measuring customer satisfaction with your product or service. It is used across many industries, allowing comparison between very different businesses. It has become hugely popular in recent years and you’ve probably found yourself answering the question for several services.

At its heart is a simple one-question survey asking 'How likely is it that you would recommend [brand] to a friend or colleague?'. The user is given a scale of 0-10 to answer. A score of 0-6 is considered negative (a detractor), 7-8 is neutral (a passive), and 9-10 is positive (a promoter). A simple formula of percentage of promoters minus percentage of detractors is applied to the results to give a total score between -100 (very bad) and 100 (excellent).
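To make the arithmetic concrete, here's a minimal Python sketch of the calculation (the function name and sample scores are purely illustrative):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey answers.

    Promoters score 9-10, detractors 0-6; the score is the percentage
    of promoters minus the percentage of detractors (-100 to 100).
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 5]))  # (5 - 2) / 10 * 100 = 30.0
```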

The reason this score is seen as a good metric for understanding whether your customers like your service is that they are being asked whether they would put their reputation on the line and promote you to their nearest and dearest. Potentially it’s a good indication of how you compare to competitors (if you can get that data), and tracked over time it can tell you whether things are improving or getting worse for your customers.

However it is a bit of a strange system and does have its flaws (covered in detail below). On its own it doesn't tell you much, but like most quantitative metrics it can give cause to investigate further if things change. To be truly effective it should be part of a customer satisfaction survey that also gathers more detail on the reasons for the score.

How to do it

There are a few ways that companies tend to ask this question of their users.

To use NPS data to inform the design process, you can keep track of the scores in a spreadsheet with columns for the written feedback to sit alongside. The score just acts as a rough positive/negative signal; the written feedback fields are where to pay real attention.

Individual pieces of feedback aren't much use, but as the spreadsheet grows you can categorise the feedback and look for patterns. For example, label entries ‘struggles with search filters', ‘wants bigger product images', or ‘stuck on sign up'. Keep count of how often these issues appear and focus attention on those that cause the most problems.
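As a sketch of that tallying step, assuming you've stored each response as a (score, category) pair (the labels and data here are hypothetical):

```python
from collections import Counter

# Each response: (score, category assigned to the written feedback);
# None means the user left no written comment.
feedback = [
    (3, "struggles with search filters"),
    (6, "stuck on sign up"),
    (9, None),
    (4, "struggles with search filters"),
    (7, "wants bigger product images"),
]

# Count how often each issue appears, most common first, to decide
# where design attention is most needed.
issue_counts = Counter(label for _, label in feedback if label)
for label, count in issue_counts.most_common():
    print(f"{count}x {label}")
```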

Combining this with live chat and general customer feedback helps give a sense of scale to issues on a website. They would of course need investigating further with user testing to truly understand why people are struggling.

Watch out for

There are many problems with an over-reliance on NPS, and even more reasons to be cautious are covered in this article.

Example tools (and cost)

You can use your normal email provider to send out the question to users, or use Google Forms (free), Wufoo (free & from £12/mo), or SurveyMonkey (free & from £26/mo) to do something more comprehensive.

There are also purpose-made plug-ins such as Delighted (free and paid) or Promoter.io (from $50/mo).

How long does it take?

Setting up the method of gathering will probably take about half a day. Checking the results and relevant feedback should be a job that takes about 30 minutes per week.

Last updated on 9 July 2019

Guide to Guerrilla User Testing for UX Design

Guerrilla User Testing

What you can learn

User testing involves watching people use your site or app to see what difficulties they have. Guerrilla user tests can take a few different forms (covered below) but what they have in common is that they are relatively unplanned and quick to conduct. They're also usually done in-person rather than remotely.

Guerrilla tests are good for checking a work-in-progress in the form of a clickable/tappable prototype. Of course you could test finished websites or apps but there are other methods for this, which can better reflect the user’s actual experience.

The main purpose of these tests is for sense-checking your work as you go along and for testing out ideas on people who are coming to the project afresh. You'll discover usability issues with your work and whether people understand what they're interacting with.

It's probably the most useful evidence method for ‘works-in-progress’ as it doesn't require a completed product to gather evidence on. You can even test with sketches or paper prototypes. It's missing some of the scientific rigour of other user testing methods but it certainly beats sitting around and theorising or guessing at what users are going to do.

How to do it

First off, a bit of preparation: think about what you're testing and decide the tasks you want people to carry out on your design (two or three tasks is about right for this type of test). If you're just showing a single screen then decide what your primary question to them will be (and make sure it isn't a leading one).

It’s worth noting down in a simple script what you want to say so you're consistent with each person. Also make sure you have a way of recording the feedback: write it down immediately after or—even better—make a video recording.

The main challenge for this type of test is finding relevant people and getting them to try out your design or prototype. As with most types of user testing, five users is a good number to aim for but if it's quick to get a couple more then do so.

There are a few options for finding relevant people to test with.

After testing you should always spend a bit of time going over your results and writing them up in some form. You might think you can remember what the issues are but it's surprisingly easy to forget one or two a few days later.

Sort the usability issues into a rough priority order of severity, and also include the things that users liked about the site. You can put your test results in a lightweight testing report so you have a record for yourself and others to refer to in future.

Watch out for

Don’t test within your team

Testing with other people who work on your project or in your team is fairly useless, as they will have such a clear idea of what you're doing that they won't ask the ‘obvious questions' or test your assumptions, which is precisely what you want a test to do.

Keep it short

Don't develop long tests with multiple tasks to go over. People's attention will wander pretty quickly and if you’re testing a prototype it will probably lack the realistic detail to sustain a long test.

You should be aiming to take about 5-10 minutes of their time. If you've got a big prototype then focus on the things you're most unsure about or run multiple guerrilla tests.

Pair up

It can be hard to do it all on your own (especially in public) so you might need someone else to help. They can note-take or record while you engage with the user and build a bit of rapport. If there’s just one of you having to do both tasks then you can miss important things (and look like you’re ignoring the user).

Write up

A test that isn't recorded may as well have not happened, as you’ll have no evidence to point to if people challenge your findings. By getting it written down and having video clips you can share your findings with others. Definitely don't write a long report though or you're defeating the objective of speed.

Example tools (and cost)

You’ll need something to test on (your laptop, tablet, or phone) and you’ll need something to record with (another phone, QuickTime screen recording). Obviously these cost money but you’re likely to have access to them to design with so it shouldn’t cost any extra.

How long does it take?

It can be as quick as an hour or two to run five tests but I recommend spending half a day to properly record and analyse the results.

Last updated on 9 July 2019

Save time with this

User Testing Report Template

$5

(+VAT in EU)

For Keynote, Powerpoint & Google Slides

A template for creating lightweight user testing reports. 11 pages including test outline; test participant details; executive summary; bugs list; and sheets for key findings (including usability problems and positives).

Guide to Client Knowledge for UX Design

Client Knowledge

What you can learn

Unless it's a self-motivated project, all design work has either an external client or an internal stakeholder commissioning or owning it (for the purposes of this guide I'll use the term client for both). As a designer it can be easy to be dismissive of them and think you know more about what needs doing. However, if they are good at their job, the chances are they know more about their product and users than you do.

A good client is a valuable source of information who can save you time and energy on a design project. You should utilise their knowledge early in a project to understand the problems and issues they hear about most from their users.

You can learn from them the demographics of their users and their common behaviours, which you can double-check with audience data. On top of this they'll be able to tell you who their competitors are, so you can carry out competitor analysis.

How to do it

I find the best time to get a download of client knowledge is right at the start of a project rather than piecemeal throughout. It's important to make sure you and the client are on the same page and it's also important to dig into what the problem is that they really want solving.

It's common for a client starting point to be a request for a certain solution when after a bit of exploration it turns out they actually need something else. Just like interviewing users it's worth questioning until you get to the real problem.

To find real issues, I make sure my client discussions focus mainly on two areas:

  1. What problems they have learned about from their users/customers (get stats and stories);
  2. Who their users are and what they are trying to do with the site or app.

The discussion should be problem and user focused at this stage, not about diving into solutions. What you take from them should be statements about current behaviour and issues (e.g. 'Users mostly complain about feature x') or questions that need further research (e.g. 'What is the conversion rate of page y?').

During the meeting I make quick notes (often on post-its) of every relevant nugget of knowledge and after the meeting I type this up into a Google Doc to share around with the client and their team. This states the main problems we're looking to solve with the new design and gives people another chance to chime in and clarify. We should then be agreed on what the problems are, what needs more investigation, and what we're looking to design.

The above outlines the main time I use client knowledge as a piece of research evidence. Of course they are also there throughout the design process to input into designs but it is important to keep everyone focused on the originally agreed issues that need solving.

Watch out for

Changes of mind

Halfway through a project it's possible that a client will decide there’s a bigger issue to tackle and will move the goalposts. This is why you need agreement and sign-off at the beginning on the real problem you're looking to solve: get it written down.

Talking to the wrong people

Make sure the key decision-maker for your project is in the room at the beginning, otherwise everything you discuss could be scrapped if they haven't had their say. If it's tough to organise a group, you might need to run a separate meeting to get their input.

Early solutions

"We just want you to design one of these"—be wary of clients who come fully armed with a solution. They might be right but some clients see everything as a chance to design a brand new shiny thing, when in fact their users might just require a small fix to the existing product. Try and understand why they think this solution is so necessary and get to the underlying problem.

Too many ideas

Be careful of client meetings that ramble and cover every idea they've ever had. Try and keep the session focused on one primary objective and don't get drawn into solving hypothetical future situations.

Lack of agreement

It's possible that by asking lots of questions you might expose that the client team aren't in agreement about what is wrong with their product and what needs solving. It can be worth stepping back or even pausing the project while they discuss further and perhaps do research of their own. It’s far better to wait than to do work that doesn’t get used.

Example tools (and cost)

To gather client knowledge you'll need something to take notes with (notebook, post-it notes, laptop, or tablet), and it's worth recording the meeting on your computer or phone too. Then you'll need something for sharing these notes around; I like Google Docs (free).

How long does it take?

Keep each client meeting focussed and take no more than an hour. Writing up and sharing for input should take about half a day.

Last updated on 17 July 2019

Save time with this

UX Design Project Proposal Template

$5

(+VAT in EU)

For Keynote, Powerpoint & Google Slides

A template to create an effective proposal to land new UX design projects. 10 pages including scope page; about page; section for the phases of work; price options; and client testimonials.

Guide to Expert Audits for UX Design

Expert Audits

What you can learn

An expert audit involves getting an expert in a particular field to assess and report on how your website or app is working based on their experience and/or a set of criteria. For the purposes of this guide we’re talking about experts in the fields of UX/usability, product, branding, copy, or conversion rate optimisation.

Someone who knows their stuff can tell you how your product stacks up against competitors, best practice, and user expectations. They should be able to tell you what data is important to watch, and maybe even give you an idea of what conversion rates look like for others in the field.

Whenever you bring an outside expert onto your projects you gain something very important: a fresh pair of eyes who can see things that you may have become blind to, and ask the obvious (and possibly awkward) questions.

Once they've found problems they can also save you a lot of time by helping you prioritise which issues need most effort. They should also be able to offer you suggestions for ways to improve your product and give names of people or software that can help further.

How to do it

The most important part is finding your expert. You should look for someone who specialises in the area you work in—if you're an ecommerce site you'll want an ecommerce expert, if you're a financial app you'll want someone who knows finance. They'll be able to bring experience of what works in that sector and will understand what users would look for in your product.

Someone who comes recommended is always a good idea, so ask around your community. For example, if you're a startup, try your investors or other companies they’ve invested in; if you’re a business that operates locally, ask similar-sized businesses in your area.

Failing that, search online and look out for someone who can write or talk about their area of expertise. Do they keep a regular blog? Have they written books on the subject? Do they talk at conferences? Or teach what they know? These are good signs they will be able to explain things clearly.

Once you've found your person, talk to them about their experience and explain your business to them. If they seem switched on and have a few good references then agree a fee for the work. The amount will vary depending on the size of your site (or the section of it being audited), but this shouldn't be charged by the hour or day, or they are incentivised to drag things out.

When the audit is complete they should be able to supply some kind of report (it doesn’t have to be long, look for actionable content). It’s also a good idea to get them to present this to you in a session where you can ask plenty of questions and get the most out of their knowledge.

This isn't something that you should need to use often. A good expert audit should leave you with plenty of things (6-12 months' worth) to go off and design/put into development.

Watch out for

Over-promising

Experts who promise incredible results might sound impressive but anyone with decent experience should be pragmatic about what they can achieve and how change is dependent on the client’s actions afterwards. They should be able to talk you through the nuance of what they look for rather than speaking in vague terms.

A lack of interest

Be worried if they don't ask lots of questions when you first talk with them. They should be using these early opportunities to learn a lot about the business and they should be genuinely inquisitive to learn from the client side. If they aren’t then it might be worth looking elsewhere as they’re probably going to give you a generic report that misses some of the context of your website.

No takeaways

Request that they share their findings in an easily accessible form with you. I like online documents that are easy to refer to. Don’t just let them present to you but get them to share their presentation and any supporting findings or documentation, so you can use it afterwards.

Lack of explanation

Experts who just tell you to do something (often because it’s best practice) but can’t explain why aren’t much help. It suggests they don’t really know the subject area too well. They should be able to back up their assertions with some evidence.

Example tools (and cost)

In this case they should bring the tools, though they might want access to any quantitative data you've been gathering.

Cost is going to vary massively, but ask yourself how much it is worth to find areas for improvement and potentially a big revenue gain. This ‘worth’ will differ depending on whether you are a tiny company or a massive one.

How long does it take?

It's going to depend on just how deep they go, but expect a standard expert audit turnaround time to be a week or two.

Last updated on 9 July 2019

Save time with this

Ecommerce Website Conversion Checklist

$3

(+VAT in EU)

Printable PDF template

A shortcut to help you work out where UX and conversion issues are on any ecommerce website. Covers the full flow from landing pages to checkout.

Guide to Heatmaps for UX Design

Heatmaps

What you can learn

There are three main types of heatmap, each showing where users spend their attention on a web page. They are called heatmaps because they are colour-coded to show which areas are getting user attention (generally dark red for most, light blue for least). Each type will tell you different things:

Click heatmaps

Click heatmaps visualise where users click or tap on a web page. They may also give you a number showing what percentage of users visiting that page engaged with a given link.

Some older types of click heatmap only record clicks on interactive elements that trigger an action, so if users click an image that doesn't do anything that click would not be tracked. It’s more informative to see everywhere that users are clicking, as it’s very helpful to know if they are clicking things that look like links but aren’t. Perhaps there’s an opportunity to make something happen when users engage.

On the other hand, if a link you regard as important is only receiving a few clicks, then you have to question if it is well designed. It can tell you if what you consider to be the most important link on the page is being seen that way by your users.

Scroll heatmaps

These tell you how far your users are moving down your web pages (either by scrolling or swiping). They display horizontal bands showing the percentage of users that reached each part of the page.

They can give you an indication of what content is going down well with users and what content is being skipped over. Most pages will simply show that the further down a page you go, the fewer people stay. This is normal: you'd expect a fairly even drop-off rate as the scroll continues, and in my experience most pages show this standard pattern.

Where scroll heatmaps are most useful is when they show a sudden drop in the percentage of users at a point near the top or the middle of your page. This means that a combination of content and design has caused users to stop scrolling and could be the sign of a 'false floor’, where the design makes it look as if the page has finished.

It can also mean a link is directing people elsewhere and so they are exiting the page rather than continuing. Either way it’s often reason for further investigation through user testing to understand why this is happening.

Mouse movement heatmaps

This type of heatmap shows which part of the page users are most hovering their mouse over to indicate which elements on the page are getting most interest. As touch-screen devices can’t (yet) detect where fingers are hovering above the screen this is not data you can use for phones or tablets.

Mouse movement is a good proxy for eye-tracking as research has shown the user’s attention tends to be where the cursor is, so you can learn what content users are reading and it can save on potentially expensive eye-tracking studies. This offers more precision than a scroll heatmap so you can see exactly what areas in a block of content users are being drawn to.

How to do it

You'll need to put a tag on your site for the heatmap software to track your users' visits to your pages. There are a few options for this, explained in the tools section below.

Once you've left it to gather some data (usually for a few weeks to get something meaningful) then you can check in on your pages and look for stories the patterns might tell you. If you have a lot of pages then focus on ones where conversion rate is lower than you'd wish or check pages you've just launched to see how users are reacting to them.

Heatmap data can help you think about whether your designs are correctly focussed. If you have important information but it's at the bottom of the page and only a small number of people are seeing it then you'll probably want to move it.

In my experience most heatmaps don't tend to change much unless you change the design, so it's not something you need to be constantly checking. They can be useful to revisit if management/stakeholders want quant data to understand the size of any problems: use your heatmaps to back up findings from user tests.

Watch out for

Not just scrolling

Don't read too much into scroll behaviour on its own: it only gives you part of the picture and, like all quant data, only gives you the 'what' rather than the 'why'. Just because users aren't reaching a part of a page doesn't mean they aren't going on to have a successful journey.

More than clicks

Clicks are arguably the most important interaction that users carry out on your site, as they show high engagement and a desire to progress further. However they don't tell the whole picture on their own. Did the user take ages to find that link to click? Did the page actually match their expectations? Like a lot of web tracking data, in isolation it's just a clue to find out more.

Enough people

Like all quant data, sample size is important here. If you're only tracking a few users then one user's eager clicking of every link available on the site can warp your metrics. Ideally make sure you have at least 100 users in your heatmap sample size.

Consider devices

If your website is responsive (as it should be!) then this needs taking into account. Links and page sections can move position or disappear altogether on certain devices, so make sure you look at desktop, tablet, and mobile data separately. Also if you can segment by traffic source this can reveal differences: search traffic might be looking for very different things to direct traffic.

Example tools (and cost)

There are several pieces of software out there that offer a suite of heatmap tools together (and often include visitor recordings too). Some of the popular ones are Hotjar (from free), Crazy Egg (from $9/mo), and Mouseflow (from $29/mo).

How long does it take?

Once the software is set up, it only takes about an hour to assess each heatmap type for each device on a page.

Last updated on 9 July 2019

Save time with this

Website Analytics Research Template

$5

(+VAT in EU)

For Keynote, Powerpoint & Google Slides

A template for creating a lightweight report on a website's analytics. 15 pages including sections for summarising the important findings from quantitative data, heatmaps, and visitor recordings.

Guide to Live Chat Transcripts for UX Design

Live Chat Transcripts

What you can learn

Live chat is the little messenger window that sits in the bottom corner of a website, particularly popular on ecommerce and online service sites. It allows the user to chat directly to customer service teams and ask questions about things they may not understand.

It is much like a phone helpline, but it can be turned on and off by the company at will (and when no-one is operating it from the company side it usually becomes an email message box). It offers a useful insight into the problems that real website users and customers have, as you can often pinpoint the place where they got stuck and turned to help (although not always, see what to watch out for).

You should be able to spot if things like shipping costs or sign-up instructions are unclear and are preventing some users from converting. Just the fact that they are looking for help rather than completing the task by themselves is a good indicator that something can be improved.

By looking at what users say on live chat you can also get a sense of whether they understand broader things, like what the company actually offers, or if they have found themselves on a site that isn't suitable for them. This can help you identify whether your marketing efforts are working to bring in the right kind of users.

How to do it

You'll first need to get the live chat function set up on your site. Luckily there are lots of third party services to choose from, which require you to just put a snippet of code in the pages that you want the live chat to appear on. If you have a big site you don't need to put it everywhere; focus on landing pages or key conversion pages.

The company will then need someone to staff the live chat. If you are a small startup this could be your job but at a company with a customer service team, it should be them. It's best if it is someone who knows the product well and is used to answering customer queries so they can promptly respond without having to constantly find out what they should say!

This job tends not to be as intense as answering help lines as users only ask a question or two and can be quite slow in their responses. From what I’ve seen customer service teams can usually handle three or four users at a time.

It's not something you have to commit to for a long time, as live chats can easily be turned off. You might only have it on for a few hours per day or you might want to only gather feedback for a week and then assess it before running another week a few months later. It's a flexible tool.

You can then use it as an evidence source by analysing the transcripts later on. Going through written feedback can be time-consuming but if you dedicate a bit of time every week it shouldn’t be too hard. It’s a good idea to do a first pass to weed out any chats that are irrelevant or don't go anywhere (which can be quite common) and then a second one to categorise the feedback you get by sentiment, much like with other unplanned feedback.

With this document you can keep track of the most common issues that users have and build a record of which areas of your site are causing the most problems. The transcripts may immediately tell you what needs fixing, or they could be a starting point for gathering more evidence. Not all users will be able to identify why they are having a problem, but if you see repeated live chats being triggered on a certain page it suggests something there isn't working as well as it could be.

Watch out for

Time wasters

When you give users a window into which they can type anything, you’re going to get some odd comments from people who have no intention of using your site: everything from 'what is this site?' to 'what are you wearing?'. Hence it's worth filtering out the chaff before your analysis.

Behaviour change

You can also find lazy users who don't want to work anything out themselves and use the chat to just ask for someone to find products for them. The presence of the chat window means they don’t behave as they normally would. These are probably ones to ignore but if you're getting a lot of them it could tell you that your search isn't intuitive or that it could be worth investing in a customer service phone line.

Timing matters

You should think about how and when your live chat appears to users. Be careful of having it automatically pop up and hassle everyone as soon as they arrive on the site; this will cause many to immediately close it before realising what it is. It's better to have it there in a minimal state for the user to choose to interact with, perhaps only popping it up when someone has spent a long time on a particular page.

Users with solutions

As ever when taking feedback directly from users you should focus on their problems rather than whatever solutions they may think they need. Only by gathering a few different sources of feedback will you be able to find the right fix for everyone.

Example tools (and cost)

There is a whole host of tools offering live chat, from the expensive, like Bold Chat (from $599/year), which offers video chat and other features, to the simpler and more startup-friendly, like Olark (from $15/mo), Zopim (from $11.20/mo), and even tawk.to (from free).

How long does it take?

Set-up should be a very quick dev task. You should then gather feedback for at least a week before dedicating half a day to sorting through it.

Last updated on 9 July 2019

Guide to Conversion Funnels for UX Design

Conversion Funnels

What you can learn

A conversion funnel shows the rate that users complete each step of a user journey to reach an overall goal. It is an important part of understanding how a website is performing and should be one of the core elements of measuring user interactions with your site. It is known as a funnel because it tends to start with a large number of users at the top, tapering to a smaller number at the bottom (though your aim is to get it to look less tapered).

It will show you over time whether users are doing what you want them to do. This usually means reaching a goal that is important to your business, like signing up via a form, downloading some content, or making a purchase.

It will also show you where they are having difficulties on the way to reaching that goal. It may give you details of where users are going instead of your intended next step in the funnel.

Depending on the software you can set it up to measure how many users are going onto different pages/URLs or you can measure different events that have been triggered, such as button clicks/taps.

How to do it

Before getting into the temptations of picking your tool, you should define the user journey you want to track. This can be just a case of sitting down with a pen and paper and working out the ideal user journey you want someone to go through to reach your business goal.

If you are in the very early days of a project this might mean you are deciding the shape of your entire product at this stage. If you have a site that is already up and running, you probably have a clear idea of the steps a user goes through. Either way, this journey will form your funnel.

To make sure you're not including unnecessary stages, it's a good idea to start at the goal itself and work backwards, defining the fewest steps required to reach it. This represents the ideal journey of a user, sometimes known as the ‘happy path’. You can have several of these per site/product for each different goal you want users to accomplish.

Then it's a case of picking your weapon in terms of software, which will be dependent on what you're looking to track (see below). You install the code tag for this on your website so it is present on every page, which should be a quick dev task. Once you've checked this is up and running properly you can then set up your funnel to collect your data.

In several pieces of software the funnel will only gather data from the day it is set up, so it's a good idea to get it up and running as soon as you know what you want to track. You'll want to gather data for a few weeks to get a sense of what is 'normal' on your site (a.k.a. your baseline).

Once you have data you can look at improving your user flow by starting redesign efforts on the steps that have the lowest conversion rate.
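As a rough illustration of that last step, here's a minimal Python sketch that computes step-to-step conversion rates from funnel counts (the step names and numbers are invented for the example):

```python
# Hypothetical counts of users reaching each step of a checkout funnel,
# e.g. exported from your analytics tool.
funnel = [
    ("Product page", 30000),
    ("Basket", 6000),
    ("Checkout", 2400),
    ("Purchase", 1500),
]

# Conversion rate from each step to the next; the weakest step is
# usually the best candidate for redesign effort.
for (prev_name, prev_users), (name, users) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {100 * users / prev_users:.1f}%")

print(f"Overall: {100 * funnel[-1][1] / funnel[0][1]:.1f}%")
```

In this made-up funnel the Product page to Basket step converts at only 20%, the lowest rate, so that is where redesign effort would start.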

Watch out for

What are you tracking?

Almost all software tracks user journeys and funnels like this in a different way (some look at sessions, others at users, some at events and there are other variants besides). Thus it is quite common to have different funnels giving you different numbers for your conversion rate. You should learn how different software tracks users and what is most important to you and then stick to one.

Get a rate

For designers, conversion rates are better to follow than the absolute numbers completing your goal. This is because in an ideal world, as traffic goes up and down on your site, a well-functioning design should still be converting at a consistent rate.

Check the source

It is pretty common for different types of traffic to warp your conversion rates. If your rate suddenly drops, a good first port of call is to check with your marketing team whether they have been buying in traffic, or gaining social traffic that behaves differently (often this is less likely to convert).

Is it the season?

Be aware of seasonality, as pretty much all businesses are affected by it, ecommerce ones in particular. There are times of the year when people are less likely to buy, and that can be hard to know if you are a startup just setting out.

Once you've got a year's worth of data, it can be a good idea to review it to see if there are any patterns you should keep an eye on in future. You can then compare against future years.

A starting point

As with all quantitative data it is just going to tell you what is happening on your website but it is never enough information to make design decisions. You're going to need to use other pieces of evidence like session recordings and user tests to learn why users are behaving that way, before you are in a position to make the right changes.

Example tools (and cost)

There are many pieces of software that offer conversion funnels. The classic Google Analytics (free & paid) is best for measuring URL visits at different steps and is often a good starting point for web projects.

Mixpanel (free & from $150/mo) measures user events that you specify, like clicks and taps, making it better for apps and non-URL based funnels. Hotjar (free & from £29/mo) also offers funnel tracking functionality as well as the likes of Kissmetrics (from $120/mo) and many more.

How long does it take?

Setting up your funnel should only be an hour's work—these tools will all have help pages/videos to guide you. After that I find checking the data on a weekly basis works well.

Last updated on 9 July 2019

Save time with this

Website Analytics Research Template

$5

(+VAT in EU)

For Keynote, Powerpoint & Google Slides

A template for creating a lightweight report on a website's analytics. 15 pages including sections for summarising the important findings from quantitative data, heatmaps, and visitor recordings.

Guide to Friends & Family Opinions for UX Design

Friends & Family Opinions

What you can learn

This is a method of evidence gathering that I've included more as a warning. It’s a popular method that you’re going to come across when you work on a product; it just isn't a very good one.

It's also the feedback that many designers fear: "I've just shown this to my husband/wife/mother/son and they think this could be improved by doing x". Where x often involves re-working the whole project but the client values this opinion so much that they insist on it, trumping any rational, carefully gathered evidence that you might present.

It's not just something that comes from design-illiterate clients though. I've been in meetings where well-informed management have suggested changes based on ideas from someone in their family or an old friend. Sometimes they might even be right and at its best it could be an outside opinion that inspires great ideas. However they could be missing a vital bit of context that means it isn’t much help.

Importantly, this is not a method you can repeat reliably. It's a lottery that you can’t bank on: you might get something great but it might lead you nowhere.

How to do it

Of course we all ask our partner, friends, or housemates for quick feedback from time to time. However in general, the opinions of friends and family shouldn’t be a part of your formal evidence-gathering process. It's the laziest and weakest form of research and there are plenty of other methods for evidence-gathering out there (check out the rest I've written about here).

To be honest, family will often give opinions to you whether you want them or not. Alternatively, if you do push them into giving an opinion they'll probably just say something positive to shut you up and avoid hurting your feelings. Neither of these things is very helpful.

If you do come across someone else using these opinions in a meeting (usually when you’re least expecting it) I recommend saying something neutral like "that idea has potential, I'll look into it" or "I'll be sure to incorporate that feedback into the rest of our research". If you can, try to gather this kind of feedback early in the design process, during the research phase, and make it clear that late feedback and changes will make the project take much longer.

Watch out for

Having explained why you should generally ignore this kind of evidence, there are a few times you can pay more attention to a family/friend opinion that comes your way.

Paying customers

Feedback from friends and family who have actually experienced the product as a normal user or customer should be taken on board like any other customer complaints or suggestions. If they aren't a customer then their issues possibly aren’t real, and are nowhere near as valuable as those of someone who wants your product/service and has been willing to pay for it.

Your target audience

If they're exactly the kind of people you're aiming at with your product then their feedback can be worth incorporating with other customer feedback. Though not quite as valuable as a paying customer's thoughts, if they match your target audience it’s useful to know whether things appeal to them.

Bugs

If the friend or family member has spotted something that is broken and you can recreate this error then you'd better fix it. It doesn’t matter where you find out about bugs from: their feedback is as good as anyone else's.

Investors

Whilst you should always be designing for end users not investors, if they have a fair chunk of money in the company, it can be worth considering what they say to keep them onside. This is especially true if it’s something small: save your battles for the big decisions.

Example tools (and cost)

There are no specialist tools you need here and the opinions are all (too) free. Ideally get them to demonstrate any problems they think exist, as you might be able to find workarounds.

How long does it take?

If you’re going to request this feedback anyway try to keep their thoughts very short and focussed on things you can action.

Last updated on 9 July 2019

Guide to A/B Tests for UX Design

A/B Tests

What you can learn

A/B tests are often seen as the ultimate method for an evidence-based and data-driven approach to designing websites and apps. It’s not always the right solution though, as I'll explain.

The theory behind them is that you take a new website design (version B) and serve it up to some of your users whilst showing the rest of your users the original design (version A). The difference can be anything from a new version of a button to a complete page redesign. You then measure to see which version provides a better rate of conversion, and the winner is put live on the site for all of your users.

You can also measure secondary goals and other interactions on the website to see if it has had an effect on more than just your main conversion rate. Something might not increase page conversions but might improve another desirable metric. You can also do multi-variate testing where you test out more than just the two options.

In principle this type of testing allows you to measure the success of your designs in the real world and with your actual users. In practice it is somewhat more complex than that (see the 'watch out for' section) and isn't something that should be undertaken lightly. Doing so risks getting inaccurate results and making the wrong decisions for your product, so it's well worth getting a professional data analyst to do the work.

How to do it

You're going to need to install some code from your chosen tool and, as with most other things like this, it's a pretty straightforward task of copying and pasting. Once installed, you can use the software to set up your A/B tests.

Set the goals

Define the hypothesis for your test. What are you changing and what do you think it will do? What is the primary metric you are looking to change? Are there any secondary metrics you’d be happy with changing?

Work out how much traffic you're going to need to get a result, and thus how long you need to run the test for. There are calculators to help you do this. This is important as many websites won’t actually have enough for this (see below) and could discover that A/B testing is an impractical choice.

Run it

Check the test works on a few browsers and is being shown to the right subset of users. You often don’t want everyone to see a new variation, e.g. a lot of the time it makes sense to show changes to new users and not change the experience for returning ones.

Set the test running and try to leave it alone for the duration. It's worth checking every so often to be sure it hasn't been a disaster and doesn't need stopping early. Otherwise let it go, and don't be fooled into thinking you've got a result until the required number of people have gone through the test (the number your calculator indicated).

Go live

When you get a result, roll out the winner, in the exact form it was tested. Sometimes this will mean sticking to the existing design. Quite often there will be no meaningful difference between the two versions, so in theory it's your choice as to which you go forward with.

Watch out for

Enough traffic

This is the biggest problem for a lot of startups and small sites and it's not as simple as knowing how much traffic you get to the website overall. Even with 100,000 users a month you may not have enough traffic to run the test you want in a reasonable time. Let me explain through an example:

Let's say you want to get more users to reach checkout from your product page, so you redesign a section of it. The current conversion rate of that page is 5% and to consider the redesign a success you want that to increase by 10%, to 5.5%. With a statistical significance of 90% (which isn't amazing; 95% is more commonly used) you need 30,000 individual visitors to go through your test per variation to be sure whether it's 10% better.

In total, then, you'll need 60,000 users to go through your A/B test. If your product page only gets about a third of your site's total users (of 100,000), you're going to need to run that test for two months before you have a result.
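If you want to sanity-check numbers like these yourself, here is a minimal sketch of the standard two-proportion sample-size formula using only Python's standard library. Be aware that the answer depends heavily on the statistical power you assume (something the example above doesn't state), so different calculators will give different figures:

```python
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.10, power=0.80):
    """Approximate visitors needed per variation to detect a change in
    conversion rate from p1 to p2 with a two-sided z-test.

    alpha=0.10 matches the 90% significance level in the example above;
    power=0.80 is a common default, but it is an assumption here.
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for significance
    z_power = nd.inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p1 - p2) ** 2)
    return round(n)

# 5% -> 5.5% needs roughly 24,600 visitors per variation with these
# assumptions; calculators with different settings land nearer 30,000.
print(sample_size_per_variation(0.05, 0.055))
```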

Two months is a long time for a lot of companies and they would probably be better off gathering several other forms of evidence (such as visitor recordings, guerrilla user testing, conversion funnels etc) in that time, which will give lots of areas for improvement.

Confirmation only

The biggest problem with A/B testing is that people use it at the wrong time. Too often they have already redesigned their site and built it and then are just testing to see how much better it is than the current version.

A/B tests often get used at the very end of a project, when a company has already put the time and money in and is not interested in knowing whether the new version is actually worse. They just want a number to boast about how much better the new one is.

Statistical knowledge

Ultimately, properly running A/B tests requires a good knowledge of statistics and experience of having done it before. There are lots of concepts to understand, like sample sizes, statistical significance, statistical power, and one/two-tailed tests, to know whether you're doing the right things.

Taking a punt and doing it on your own almost guarantees that you'll make mistakes. I know, I've been there. Some software can be very reassuring and make you think you're getting great results but when you come to launch them you're left with something that doesn't work.

Get help

There are many other things to watch out for in A/B testing, which are solved when you get an A/B testing pro to help you out.

Example tools (and cost)

There are a lot of tools out there now for running A/B tests at many different price points. I used to use Optimizely but that has since moved towards being an enterprise solution and is not so budget friendly. I’d recommend digging into an article like this to find the right tool for you.

How long does it take?

Once your designs are ready, setup should be a matter of an hour or two. Running a test can take a long time (often several weeks).


Resources

Last updated on 9 July 2019

Visitor Recordings

What you can learn

Visitor recordings (or session recordings) capture the movements, clicks, scrolls, and other interactions of a real user's visit to your website, saved as a video you can play back later. It's akin to watching a remote user testing video without the audio.

However, this isn’t a user test that you have set up: it’s a user going about their business, having arrived on your site through their own choice (and presumably interested in what you are offering).

It is quite simply a window onto how your real users behave when they visit, enabling you to see how many of them reach your goals, along with the paths through pages that they take to get there. Impressively, you can even see what they enter in form fields (not including passwords and credit card numbers) and how many attempts it takes them to get it right.

If you're wondering how it's done, it's not actually a screencast, just a recording of clicks, movements, and keystroke data overlaid on a snapshot of your website. Also, not every single user gets recorded—for example, those using private browsing won't get tracked.

How to do it

You’ll need to choose some software that can record visits to use this method (see below). Installing the software is a case of picking your tool and tagging up your pages with their lines of JavaScript.

Once installed, you can then tell the software that you want to record and specify any details (perhaps you only want to see user journeys that visit a certain URL). You then leave it to gather the data, and within a few days you should have some sessions to take a look at.

Even if you don't have a lot of traffic it won't be long before you have a few hundred sessions to watch, which can be a bit intimidating. You can either be very diligent and check daily to watch the latest videos and keep on top of them, or you can wait and watch a bundle at a time (my preference).

It’s a good idea to filter through them to look at only a few with certain characteristics (especially if you've got a lot to get through). For example, try watching all the sessions that make it to your checkout and see what the common factors are. Or maybe look at all the ones that land on a particular page and try to work out what is causing them to bounce or continue.

When looking through the videos, the aim is to build a picture of the common user behaviours you witness. When you see something interesting happen (like a user clicking an element), make a note of it and then tally each subsequent time you see it. After watching about 50 videos I usually have a set of common actions for a page that gives a strong idea of what users want from it.
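
A spreadsheet works fine for this tallying, but if you prefer, here's a tiny sketch of the same idea in Python—the behaviour labels are hypothetical ones you'd define as you watch.

```python
from collections import Counter

# Hypothetical notes taken while watching recordings of one page,
# one short label per notable behaviour spotted
notes = [
    "clicked hero image expecting a link",
    "scrolled straight past the video",
    "opened the size guide",
    "clicked hero image expecting a link",
    "scrolled straight past the video",
]

tally = Counter(notes)
for behaviour, count in tally.most_common():
    print(f"{count:>3}  {behaviour}")  # most common behaviours first
```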

Watch out for

Careful with explanations

These recorded sessions can lack context and whilst in some cases it might be obvious why a user is getting stuck (if they are struggling with a form perhaps), you won't always know what they are looking for or what they are thinking, so be careful about attributing causes.

Some behaviour might need further investigating with a user test to understand the why. For instance you'll often see users repeatedly jumping between the same two pages, which suggests they're looking for a piece of information, but it won’t tell you what that information is.

Short sessions

Quite a few of the sessions may contain no useful information, especially the ones with a single pageview: if users land, scroll, and leave, you won't know what it was they couldn't find or didn't like. It's usually worth disregarding the very short sessions.

More than clicks

When assessing visitor recordings you should focus on behaviours that you can’t tell from quant analytics. Don’t just look at pages people click on, but consider how long it takes them to find links, and what parts of the page they seem to engage with.

What you don’t see

One thing to remember with this kind of evidence is to record what users don’t do. When watching for actions that users take, it's easy to overlook that, say, no users played a video (and thus you may not need that video at all).

Imperfect recordings

These recordings are good but can have issues. Occasionally modal windows or burger menus block the rest of the recording by staying overlaid on the video and not clearing when the user has moved on. And some drop-down menus or hover effects won't appear at all.

Also sometimes if the CSS changes and you come back to watch a video it can look wrong, so it's best to watch them fairly soon after they have been recorded.

Example tools (and cost)

My preferred choice for session recording is Hotjar, which offers this ability for free along with heatmaps, with paid plans at a very reasonable price (from €29/mo). Inspectlet have a very similar set of tools and are equally competitive on pricing (free and from $39/mo).

I’ve also used FullStory, Mouseflow, and Lucky Orange. They’re all pretty similar so take your pick!

How long does it take?

Watching a chunk of about 50 recordings takes 2-3 hours.


Resources

Last updated on 17 July 2019



Page Data

What you can learn

For the purposes of this guide, 'page data' refers to the metrics describing user visits to any individual website page. A few of these numbers are covered by your conversion funnels (users, sessions, and the calculated conversion rate) but with page data we can find metrics that give more detail than just whether a user was present or not.

Whilst a conversion funnel should represent your primary metrics, these numbers can form your secondary metrics. If you make a change that doesn’t improve conversion but it does lower bounce rate, you’ve seen a beneficial secondary effect.

These secondary metrics are worth studying to build a better picture of user behaviour on your website and can help you define what to look for in research such as user testing. For example, if you find a key information page has a high exit rate, it should be a task in your next user test to try and understand why.

How to do it

To learn what is happening on your web pages I recommend installing Google Analytics, which is by far the most popular tool for tracking this kind of data. Once installed, the following are a few pieces of key page performance data to consider:

Unique page views

Defined as how many separate sessions of browsing a user has had on your page. A user visiting your page multiple times in a session would only record one unique page view—a session of browsing is reset after 30 minutes of inactivity.

Each session represents a period of intent for a user to achieve something on your site, and isn’t the same as an individual user (individual users are something Google Analytics can only guess at, so it doesn’t display them here).

Page views

Defined as how many times a page (i.e. a URL loading) has been viewed. If this is a lot higher than your number of unique page views, then you'll know users are looking at that page many times per visit. This could suggest that the content on the page is so great they keep returning to it, or that they can't work out where to go next.

Average time on page

Defined as the average time a page view lasted. This is often used as a measure of engagement with a web page but it will depend on the type of page as to whether you want this to be long or short.

If it's a long blog article or 'about us' page you'll be hoping for users to spend several minutes on it, whereas if it's a checkout page, you'll be wanting people to whizz through in seconds. If it's the other way around then users aren't being intrigued by your content in the former and they're probably getting stuck working out how to enter payment details in the latter.

Bounce rate

Defined as the percentage of sessions that saw someone land on this page and then leave without visiting another page on your site. This is almost always seen as a ‘negative’ metric that you want to reduce and is most applicable to landing pages. If bounce rate is high on a homepage or landing page then your entrance experience to your site is turning people off, which is a strong sign that you should change something.

Exit rate

Defined as the percentage of sessions that saw someone leave your site on this page. Not to be confused with bounce rate, this is a bit more ambiguous, as the user could have visited several other pages before their exit and they may have found everything they needed. After all every journey has to end somewhere.

Obviously you won’t mind if the exit rate is high on pages that appear after a goal (like a post-purchase page). If it's high on a critical page in the middle of your flow, it's worthy of further investigation.
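
The difference between these two metrics trips people up, so here's a minimal sketch of how each would be computed from raw session paths. The sessions are invented for illustration; Google Analytics does all of this for you.

```python
# Each session is the ordered list of pages one visitor viewed
sessions = [
    ["/home"],                            # lands on /home and leaves: a bounce
    ["/home", "/product", "/checkout"],   # exits on /checkout
    ["/product", "/home"],                # exits on /home (not a bounce there)
    ["/home", "/product"],                # exits on /product
]

def bounce_rate(page):
    """% of sessions that landed on `page` and viewed nothing else."""
    landings = [s for s in sessions if s[0] == page]
    bounces = [s for s in landings if len(s) == 1]
    return 100 * len(bounces) / len(landings)

def exit_rate(page):
    """% of the page's views that were the last view of a session."""
    views = sum(s.count(page) for s in sessions)
    exits = sum(1 for s in sessions if s[-1] == page)
    return 100 * exits / views

print(f"{bounce_rate('/home'):.0f}%")  # 33%: three landings, one bounce
print(f"{exit_rate('/home'):.0f}%")    # 50%: four views, two were exits
```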

Events

This is tracking that you manually set up for non-URL-based interactions; it tells you whether or not an event has been triggered (such as clicking a button or page element). Whilst it is a binary metric, you can attach metadata to each event to give you more detail, such as the name and type of button if there are multiple on a page.

Watch out for

Knowing what is 'good'

It can be hard to find benchmark metrics for what represents a ‘good’ number of users or bounce rate, so take it with a pinch of salt when someone makes a blanket declaration that you should be targeting a certain figure. It’s more reliable to use this data to judge your pages in relation to each other, or against themselves over time. Use it to help you prioritise which pages need fixing first and to spot outliers and problems.

Segment

Looking at the raw metrics is a fine starting point for discovering website issues but to get more actionable data you need to segment your results. Try segmenting by device or by traffic source to see if users behave differently depending on where they’ve come from and how they’re viewing the page.

Careful with groupings

Be careful if you’re using regular expressions to track groups of pages via the Google Analytics API and looking at the totals. Due to the way users and sessions are counted by URL there may be some duplication in there because people who visited several pages may be counted multiple times. Use it as an indicative measure rather than a precise one.

It's what not why

Don’t make assumptions about what a piece of page data in isolation might mean. As outlined above, other than bounce rate, most metrics could be positive or negative depending on the context.

My standard disclaimer with quantitative data applies: it doesn’t tell you ‘why’ something is occurring. Always investigate further with qualitative evidence such as heatmaps, session recordings, and remote user tests, to build a more complete picture.

Example tools (and cost)

As mentioned above, I'd recommend Google Analytics for gathering this data: it's free and hugely popular around the world (so you'll be able to compare your stats across different sites and find plenty of help guides). If you're tracking something that isn't based on page views, such as a native app, then a lot of the above metrics won’t apply.

How long does it take?

Once your tracking is set up, checking the data for a page takes only minutes. If you regularly track the same few stats then I recommend pulling them into a dashboard.


Resources

Last updated on 9 July 2019



Surveys

What you can learn

Surveys promise you the opportunity to gather the thoughts of lots of users without too much effort: you can potentially reach thousands of people with just a single form. You can ask them almost anything (though some types of question are better than others, as explained below).

They also offer the opportunity to quantitatively assess behaviours, by asking how many people do certain things, which often appeals to the analytical folk (managers). Be very careful with this: a survey works better as a qualitative tool and a starting point for further investigation than as a quantitative one delivering absolute truths. Unless you deeply understand the subject, it's best used for asking open-ended questions and discovering the issues and thoughts your users are having.

There are lots of types of survey out there, including short website feedback pop-ups. For the purposes of this guide I will cover the one-off type that you might run to research a subject or potential design project.

One of the things surveys are often used for is Net Promoter Score, which I cover separately.

How to do it

There are plenty of tools that make creating online surveys easy and give you all the different field types you might need (I cover some below). Distribution shouldn't be a problem either: you can send it out to an email list, share it on social media, or put a link on a site or forum. Of course, the users you choose to distribute to will affect your results, but the actual practicalities of running a survey are fairly straightforward.

Preparation

Most of the work is actually in the planning and preparation. You need to know what you want to ask and to determine whether a survey is the right approach to take. If you're after detailed behavioural understanding then perhaps an interview or user testing is a better bet. A survey can be a good starting point for research, giving you ideas of the issues to investigate further with interviews.

When writing the questions, the type determines the form: keep them specific for multiple-choice answers, or open-ended where you want the user's free text. Open-ended questions are more useful for getting to the heart of real issues, as the user isn't limited in what they can say.

If all of your questions are multiple choice, then it's a lot harder to find out what you don't know. However if you want to survey a large number of people then using multiple choice questions will make life easier when it comes to analysing the results.

Unless you're paying your participants well, keep the survey on the shorter side (fewer than 10 questions) to maximise your response rate. Just because you have the opportunity to ask people anything doesn't mean that you should—try to keep questions on your particular subject of interest. This will help users stay focussed and go into more depth.

Finally, giving users the opportunity to answer anonymously can help them feel confident about opening up so they might tell you things they wouldn’t if the answers were attributable to them.

Analysis

When it comes to how to analyse your results, it's going to depend on the size of your survey. If it's a small one (fewer than 50 respondents), you can read all of the answers and potentially act on them too.

If you have open-ended questions, it’s helpful to group responses by sentiment: go through each answer and try to categorise them. For example if you're asking people about their problems with a site then you should be able to group them into things like 'navigation', 'search', 'payment' etc. This should help you order the key things that need to be solved with any new design, and you can delve back into the written answers to get quotes and more detailed requirements.
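
If you have hundreds of free-text answers, a first pass of keyword matching can speed up that grouping before you read them properly. Here's a rough sketch—the categories and keywords are hypothetical and you'd tune them to your own survey:

```python
from collections import Counter

# Hypothetical keyword buckets for a question about problems with a site
CATEGORIES = {
    "navigation": ("menu", "find", "navigate", "lost"),
    "search": ("search", "filter", "results"),
    "payment": ("pay", "card", "checkout", "billing"),
}

def categorise(answer):
    """Assign a free-text answer to the first category whose keywords match."""
    text = answer.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "other"  # uncategorised answers still need reading by hand

answers = [
    "I couldn't find the returns policy in the menu",
    "Checkout kept rejecting my card",
    "Search results didn't match what I typed",
]

print(Counter(categorise(a) for a in answers).most_common())
```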

If it’s a big survey (100+ respondents) then you’ll have to focus on doing quantitative analysis of the results. As well as seeing which answers performed better than others, you can dig deeper and segment your results to see which types of user were more likely to answer which way, and find patterns in the data.

Watch out for

Don’t quantify the qualitative

As tempting as it may be, try not to turn naturally qualitative questions into quantitative ones. For example, avoid questions that ask how much people 'like' something (often on a scale of 'strongly like' to 'strongly dislike'), as it's all so subjective it can be pretty meaningless. You could get back a survey with people saying they love your site but they still may not be buying your product and you wouldn't know why. Erika Hall writes well about this here.

You can of course quantify naturally quantitative data with a survey, so asking how old people are, how much they earn, or whether they prefer x over y is good material for charts and graphs.

Don’t ask for predictions

Just like when interviewing, try not to ask users to predict future behaviour. Don't ask 'how many times will you go to the gym in the next month?' because you'll just get back their ideal answer or one to impress you, whereas the reality is likely to be different. For more solid results ask about actual past behaviour instead ('How many times did you go to the gym last month?').

The who matters

When declaring any survey results, make sure you explain who you surveyed, especially if they're not representative of your actual user base. Be careful of taking your results out of context and declaring that 'all users think this'.

Incentivise

These days people are pretty over-surveyed and have inboxes packed with requests for feedback. To stand out you should offer some kind of benefit to people for completing the survey, otherwise they’re just not going to do it.

Do be careful about the level of reward however. If you offer too good a prize then you're likely to get people rushing through to complete it and not caring about what they write.

Example tools (and cost)

There are lots of tools for making forms and surveys out there. I've used several including Google Forms (free), which is fairly basic but collects your results in a spreadsheet for analysis. Others include Typeform (free and from £28/mo), which is arguably the best-looking form website out there, and Wufoo (free and from £12/mo), which I've used to create fairly complex forms with their conditional rules. Finally SurveyMonkey (free and from £34/mo) has some great analysis tools.

When it comes to surveying, the tool is generally less important than the content—as long as it's usable, people don’t really care how the survey looks.

How long does it take?

Writing a good survey isn't the quickest of tasks—it will depend on length but expect to spend half a day at least. Getting results can take up to a week.


Resources

Last updated on 9 July 2019

Remote User Testing

What you can learn

User testing is arguably the most useful evidence gathering method of all. In my experience, the number of ideas for site improvements that come out of a session of user testing surpasses any other method.

If you've followed the process of gathering quantitative data first and you know you have conversion issues on your site, this can tell you why those issues have been happening. You'll be able to watch users go through your flow and (providing your test is well designed) you'll see where they get stuck, and hear them tell you why they don't like something or can't find things.

You can gain this knowledge from approaches such as lab user testing or even guerilla user testing. However I find remote testing has a few big advantages.

There’s no excuse not to set up a quick unmoderated remote test with a few users ahead of each redesign project or as a regular monthly thing.

How to do it

There are three main methods of remote user testing: 1. facilitated & moderated by you; 2. facilitated (and possibly moderated) by someone else; 3. facilitated by you but unmoderated. Each of them works a bit differently, which I’ll explain here.

Facilitated & moderated by you

The facilitated & moderated by you approach obviously involves the most work for you but can potentially cost nothing. You'll need to find the users, organise a time to have a video call, record the call, and then write up notes. I will ask clients to suggest users for me to contact, and will email them to book in a time for a call using the handy Calendly.

The call itself runs over Skype so they can share their screen with me if they're on desktop; if they're on mobile I'll use a tool like Validately to get access to their screen. I then share a link to a prototype or website and can see and hear them as they navigate it. The call can be recorded with screen recording software (QuickTime is handy for this) and immediately afterwards I write up my main observations.

Facilitated by a third party

The facilitated by someone else approach means hiring a company to set up and run your user test, which may or may not be moderated by them as well. Moderation is generally useful when you're testing a prototype or early version of something that requires a bit of explaining or isn't fully working.

Either way, your role is to specify what you want to test and liaise with them as they develop the test. They'll then run and analyse it so you get a report at the end with the findings.

Facilitated by you and unmoderated

The unmoderated option consists of you setting up the test and putting it out to a panel of users who are ready to go. You then get back the videos of the users navigating the site for you to analyse and draw insights from. If it's a real live website I think unmoderated is the best way to go as this is closer to the reality of how users actually browse the web.

You will need to develop some skills in putting together a decent test and you’ll need the patience to watch videos of people going through your site. As painful as this can be at times, as a UX designer or product manager, there are few better ways to understand what your users face.

Watch out for

An important part of writing a user test is to make sure you're not putting leading instructions in there. Like leading questions when surveying, you don't want to be pushing the users to do certain things or you'll never learn what they would naturally do. Keep tasks simple by saying things like 'show how you would search' rather than 'click the search button in the top right and fill out your dates and location'.

Some people simplify the whole test by only setting users one task like 'show how you would buy a product'. The danger here is that users whizz through the process and you don't get to see them interact with all parts of your site, hence why I prefer a bit of guidance with a task per step of the flow I’m testing.

Make sure you recruit accurately for your tests. You'll want people who match your actual users (you can use your audience data to discover this). It’s very rare that a website is designed to appeal to absolutely everybody so you want users who are going to provide authentic feedback.

Five users is usually fine for each test—any more and you just tend to see repeated behaviours—but make sure you have five per major device category, as people can behave very differently on them. For example, I most commonly test with five on desktop and five on mobile.

When it comes to analysing your own tests, try to stick to recording observed behaviours. Users might say that they don't like a feature (especially if it is new) only to be perfectly competent at using it. Quotes are useful to put in reports to explain behaviours but shouldn't be used if they don't reflect what actually happened.

Aim to watch all your tests through and annotate them first so you have a good sense of events, before summarising the repeated insights and critical issues in a lightweight report.

Example tools (and cost)

For your own moderated desktop tests, a decent free option is just good old Skype along with recording on QuickTime.

When it comes to unmoderated testing platforms that recruit for you, there are several pay-as-you-go options with different features to suit all budgets; I've used a few of these.

If you want someone else to recruit and facilitate and you have a large budget, look at bespoke but pricey services such as UserTesting, WhatUsersDo, and UserZoom.

How long does it take?

Assuming five users, unmoderated testing can be done in a matter of a few days. If you're moderating it yourself then the extra organising tends to mean it takes about a week.


Resources

Last updated on 30 August 2019



Articles & Videos

What you can learn

A decent portion of any designer's knowledge will probably come from blogs, videos, and content created by others. This is no bad thing as each person is limited in what they can work on and by sharing learnings we can all benefit from each other's experiences.

Many companies, from startups to big corporations, are good at writing up what they've discovered in the process of researching, designing, developing, and launching their products. Often these write-ups can be useful for preventing you from making mistakes that others have already made. However you do have to exercise caution, as not all content is created equal and you can apply this learned knowledge inappropriately.

There are obviously several types of content but I’m referring to those that describe a recommended design, a successful experiment, or offer guidelines that you could cite as evidence for making a design decision.

How to do it

This is one method where there’s no specific process. You can build an RSS feed; create a list of trusted sources on Twitter; sign up to a selection of mailing lists; subscribe to podcasts or YouTube channels; even subscribe to magazines (very old school). Whatever your method, by setting up a regular delivery of content you save yourself having to go hunting for it and instead it finds its way to you.

I'm a big fan of the serendipity of Twitter and the chance to get content from publications you wouldn't otherwise look at in your stream. The content on there is also very current but the downside is it tends to suffer from a lot of noise.

Watch out for

When it comes to the content itself, there are some things to think about before deciding whether to apply the learnings to your product or company. You don't want to go telling everyone in your company you should do something and citing an article as evidence if it doesn't apply to your situation.

Check their proof

How solid is their evidence for what they are recommending? When blogs make big claims about how they 'increased conversion by 50% with one simple change' it's always worth asking how they measured that before rushing off to apply their findings. You may not get the full raw data due to privacy concerns but you should be able to get a sense.

Were their results from a focus group of 5 people in their office, or from an A/B test with tens of thousands of users? If the former, take it with a heavy pinch of salt; if the latter, you might decide it's worth looking into. It doesn't have to be based on large amounts of quantitative data either: if something has been learned through several rounds of user testing, that is also high-quality research.

Business similarity

Is the business recommending this change in a completely different sector, do they have a different target audience, or are they at a different stage of growth to yours? If so then what they describe may not apply to your product at all. A video from a startup aimed at millennials that describes the perfect mobile navigation may not apply to enterprise software for the financial sector.

Check their motives

Plenty of companies share what they've learned simply as a way of giving back to the community, but some articles you come across are really companies trying to push their software or service—in which case their software usually turns out to be the perfect tool for the job or the hero of their story. It's usually fairly obvious but can be done subtly using paid writers on third-party sites.

Context matters

Generally be very wary of content that makes broad sweeping statements about one design being better than another. I've lost count of the number of people who have asked me what the best colour for a button is because they read that "green increases conversions". Of course there is no such thing as a 'best colour' for conversions as it is going to depend so much on where it is positioned and the colours used around it.

Ultimately there are so many variables that it's hard to carry across statements about what would work on your site. But this doesn't mean you shouldn't keep reading, watching, listening, and getting inspiration for ideas.

Example tools (and cost)

One tool that is highly useful for getting through lots of articles is Pocket. Install the plug-in on your browser and the app on your phone; then when you come across something interesting you can easily save it to a list on the app to digest in a quiet moment.

How long does it take?

Reading an article doesn't tend to take long – this is something you can dip into at any time.


Resources

The following websites feature generally well-researched articles, useful for UX decisions:

Last updated on 9 July 2019

Design Testing

What you can learn

First of all, a definition: what is design testing? I use it to mean running quick tests on designs that are still in progress and before they’re linked together as a prototype. This usually means showing screens individually rather than as a sequence or user flow, as you might with guerrilla user testing. This can be done by printing them out on paper or it can involve sharing the designs online.

Design testing offers a chance to gather user feedback early in the process and shape your design decisions with evidence before committing to building anything. It allows you to quickly test out a couple of options and solve arguments if it’s not clear what design would be best or your team can't agree on a way forward.

It's a lightweight method and won't give you lots and lots of insight but it is suitable for answering targeted questions and helping you course-correct. If you ask the right questions of a design you can save a lot of time in the long run.

How to do it

I'm a big fan of UsabilityHub for carrying out this method (see tools), and for that reason I'll use their test types to cover the different ways to do design testing.

To use UsabilityHub you need to upload your exported design and then create a simple test around it with a few tasks before putting it out to testers. If you're doing this offline yourself then you can just show people the design and ask the questions verbally.

In all cases I'd recommend testing either a whole page design or a section of a page that would be visible within a viewport. Don’t test just individual elements like buttons without the context of the page around them.

Each of UsabilityHub's test types suits answering a different kind of question. You can also combine several in a single test if you want to check a few things, like first impressions (five second test) and whether people find your main button (click test).

When you get your results back, it’s worth doing a bit of analysis yourself—I find the ‘word clouds’ provided by default on UsabilityHub aren’t that useful. I export the results into a spreadsheet and do a bit of sentiment analysis. For example, if I’ve asked what people think of a certain design, I’ll classify the results as positive, negative, or neutral. This helps compare if I run the same test with another design.

Watch out for

The main challenges in running good design tests are giving users simple instructions and asking good questions. These are brief tests, and to get useful data out of them you need to be specific, as you don't have the time of a full remote user test to cover a lot. With that in mind, here are my tips for good test writing:

Give a bit of context

Before each test you get to set the scene for the user with an initial introduction. When showing them a webpage or app screen tell them what they would have just done to reach it (e.g. "searched for a bank loan", "shopping for jewellery").

One sentence is usually enough; don't go overboard with lots of detail about what they could be doing, as they’ll never manage to digest it all and keep it in their head when going through the test.

Don't ask leading questions

People can see design testing as a chance to 'prove' that their idea is right and have some data behind it before showing the solution to management. One way you can bias the results in your favour is to write leading questions that suggest a solution or leave the user only with yes/no answers.

Instead you should write questions that allow the user to express their thoughts about a design ("what do you think...", "how useful is...") and let them give honest answers. If you skew the results, your design might ‘win’ in the short run, but if you come to release it and it fails, you’ll have a much bigger problem to deal with.

Don't ask too many in-depth questions

These are short, simple tests and in the case of five second tests, the user barely gets a chance to see the design, so don't go overboard and ask too much. When you go into a design test you should have a single thing you’re looking to find out plus perhaps a related follow-up. Do the work to establish what this is and then stick to it in the test.

Example tools (and cost)

As mentioned above, I find UsabilityHub the perfect tool for testing at this stage, as the tests are super quick to set up and quick to get responses to. It's also free if you have your own audience to distribute it to (perhaps a mailing list). To use their panel, it costs $1 per random user or $3 per demographic-targeted user. I normally specify the demographics of my testers and run tests past 25 people, which gives plenty of food for thought.

There's a similar newer tool, called Better Design, which allows you to set up quick tests in the form of polls for other users to vote on—fully free at the moment but more limited.

How long does it take?

Setting up a test and getting back results from 25 users takes about half a day.


Resources

Last updated on 9 July 2019

Customer Interviews

What you can learn

Whether over the phone or in-person, taking the time to interview your customers or potential users is a very useful method to truly understand their needs. The idea of just talking to people is a simple thing and can easily be overlooked. Very often it is the most powerful way to find a truth about their behaviour (backed up with stories) that you otherwise may not consider.

This behavioural truth is typically known as an insight. A good insight can be transformational in shaping how you present your offering or develop your service for users.

It's a chance to move away from your perception of the world and your company's internal views on how things are. Interviews give you the opportunity to see the reality of your customers' lives and thought processes. Whilst the process isn't rocket science, good interviews do take a bit of time to organise.

How to do it

There are a few stages to organising successful interviews:

1. Define the scope

It starts with deciding what you want to get out of the process as you can't cover every subject and you need to keep your conversation on track. Define what the focus will be and the areas of behaviour you're looking to understand. It's usually a particular process or thoughts about a product that you want to discuss.

2. Write a guide

Once the direction has been decided you should capture what you want to cover in a discussion guide. Essentially this is the list of questions that you want to ask your interviewees and the topics you want them to talk about.

Collaborate with others in your team to put this together and use it to get sign-off on the scope of the research. It will form the basis of your interviews but you don't have to follow it to the letter.

3. Recruit

You'll also need to recruit your interviewees. You'll either want to find existing customers of your site or people you see as your target audience. For the former you should have some contact details of people who have purchased from you or who have enquired, whom you can reach out to—though make sure you have permission to contact them.

For potential users you might have to put out adverts to find them or get help from a user recruitment company, and you'll need to offer an appropriate monetary reward for their time. You'll probably need to contact more than you actually speak to, so expect to spend a bit of time on this. I'd aim for about 10 people in a set of interviews but if you've got the budget and time you can go for more.

4. Talk the talk

When it comes to doing the interview, it can be done either in-person or over the phone/Skype. Start the chat off with some small talk and try to build a bit of rapport before launching into the questions. Then once you're in the conversation you should mostly stay quiet and encourage them to speak, as it's their experiences you're interested in.

If you're on your own you should record the interview so you can play back and review it later. Where possible I prefer to have two people do the interview: one can focus their attention on the interviewee and hold the conversation while the other can capture notes of interesting points.

5. Analyse

After the interview is done you should spend a bit of time reflecting and analysing what was said. This can just take the form of the two interviewers chatting and comparing thoughts, before noting down key points of feedback. It's best to do this straight after the interview or things can be forgotten.

If you then want to disseminate this knowledge more widely, the key findings from all your interviewees can then be written up into a report.

Also don't forget to analyse the process of interviewing itself: what worked well, and what didn't? Are there any questions you could tweak or drop altogether? A lot of this you'll learn through doing, and it's hard to get perfect first time.

Watch out for

Recruit widely

Try and recruit a range of types of people that use your site, don't just settle for the easiest ones to find. Sometimes the best insights are to be found in the extremes and the users that do odd/unlikely things. You also don’t want to just have the same interview over and over again.

Avoid leading questions

Be careful to avoid our old friend 'leading questions' (just like when running design testing and surveys), which can mean the interviews just confirm what you already think. This is a great opportunity to go in with an open mind and see what surprising ideas come up (not that you have to use them all).

Nudge

Some interviewees can dry up or give short answers. It's your job to encourage them out of their shell and make them feel comfortable to speak further. Try simple things like saying 'tell me more about that', 'what do you mean by...' and of course, 'why?'. Often asking why several times gets you to the true reason behind a person's behaviour and that magical insight, so don't just settle for their well-rehearsed initial answer.

Focus on problems

Be aware that people are bad at explaining what they would like and at predicting the future. Don't get them to tell you what they want; instead get them to give you concrete examples of actual past behaviours. It's your job afterwards to identify a proper solution that will help meet their goals and yours.

You don't need big insights

Insights may not be all that revelatory when they appear so don't only be on the look out for something earth-shattering. They can be quite simple but if they contain a kernel of truth about how users actually behave/think they can shape the direction of your project. Something as basic as 'customers will only buy if there's free shipping' can cause a huge shift in the way you sell.

Don't overlook the admin

A big part of creating a good interview experience for your interviewee is being organised with your dates, times, and email notifications. This will help things run smoothly, so don't ignore it—here's an example of a poor interview experience created when no attention was paid to this.

Example tools (and cost)

Most of the tools for interviewing aren't going to be particularly fancy and are things you should feel comfortable using. It's a good idea to write your discussion guide on a collaborative writing tool like Google Docs.

Calendly is a very useful tool for allowing interviewees to book in times in your diary for the actual chat. The main tool for interviewing is just a way of recording it—these days your phone will do a decent job—and notes can obviously be taken on a device, or paper and post-its.

The online service User Interviews looks really useful, and will recruit people for you from $30 per person.

How long does it take?

Writing and setting up the interviews is the time-consuming bit and can take a few weeks. Each interview should be 30-60 minutes.


Resources

Last updated on 17 July 2019



User Feedback

What you can learn

This method of evidence is about using the unplanned user feedback a company might receive. These pieces of feedback usually come from sources like enquiry forms, emails, social media, phone calls, or in-person conversations. Unlike live chat messages or the feedback you seek out in surveys, it can come in at any time.

With this kind of user feedback you often learn the things that truly bother people, as they have taken the time to seek you out and complain or suggest something. They haven't just waited until asked in a survey, where they may have come up with something for the sake of it.

It is important to have an approach for collecting this evidence as it is the kind of thing that can be easily missed or lost if there is no system for it.

How to do it

Setting up a system to record this feedback will usually involve a bit of manual work. The majority of your feedback will come into customer service teams or from customer-facing staff, either via email or phone. You will need them to log every time a customer complains about something or makes a suggestion for improvement.

This can be done via CRM software if you have the budget, or it can simply be a shared spreadsheet where they enter a summary of the issue and an identifier for the user that raised it. The feedback this log gathers should be more long-term and focussed on improving features. Customer service staff need to be able to distinguish when something is just plain broken—as that means a bug ticket needs raising with the development team.

Once you've got a solid system for logging ad-hoc feedback, someone (usually a product manager) will need to check it regularly and classify the different issues to make spotting similar ones easier. Either use tags in a CRM or add an extra column in the spreadsheet specifying the category of each issue, for example 'password reset emails slow', ‘requests wishlist functionality', 'bigger upload capacity' etc.

When you group issues like this you can keep a running total of the most requested or complained about features. This leaderboard can be a part of your suite of evidence for deciding the priority order for areas of your product to redesign. By checking it every week or so you can see if there are sudden spikes in certain issues or if there’s a leading problem that needs fixing more urgently.
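
As a sketch of what that running total might look like if your log lives in a spreadsheet export, here's one way to tally issues per week so spikes stand out—the log entries are invented for illustration:

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical export of the feedback log: (date raised, issue category)
feedback_log = [
    (date(2019, 7, 1), "password reset emails slow"),
    (date(2019, 7, 2), "requests wishlist functionality"),
    (date(2019, 7, 8), "password reset emails slow"),
    (date(2019, 7, 9), "password reset emails slow"),
    (date(2019, 7, 10), "bigger upload capacity"),
]

# Tally each category per ISO week so sudden jumps stand out
weekly = defaultdict(Counter)
for raised_on, category in feedback_log:
    weekly[raised_on.isocalendar()[1]][category] += 1

for week, counts in sorted(weekly.items()):
    print(f"week {week}: {counts.most_common()}")
# A category jumping from one mention a week to several deserves attention
```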

Watch out for

Negativity bias

When dealing with this kind of feedback there will always be a bias towards the negative. For humans, frustrations and bad experiences stand out more than smooth, easy ones, and as online user experiences improve, people become more accustomed to everything working well first time.

Just be aware that you're mostly going to see problems highlighted in this feedback so be careful not to throw out everything that is good when redesigning to solve the issues.

Fear of change

Generally, people don't like change. A common time for a raft of complaints is if you release a big redesign of a major feature or part of your website. If it's something lots of people use every day then don't be surprised to see confused and sometimes angry feedback when you first launch. This is to be expected as they adjust.

However if you get an avalanche of negative feedback or users are still making the same complaints weeks after launch then you might have a real problem. At this point you should try to understand why the complaints are coming in (user testing can help here) so you can fix it.

Level of feedback

Not all complaints are created equal. Someone quickly sending a moaning Tweet is not as meaningful as someone taking the time to phone up and explain their problem. You might want to weight your feedback to reflect the source it came from.

Interpreting feedback

Be careful not to misunderstand the feedback. It can be hard to truly understand what someone means if they've written a few sentences. People can have all sorts of odd ways of phrasing online behaviour and probably won’t know the correct technical terms.

Flag any feedback you're not 100% sure about and try and clear it up with the person that gathered it. Even better, if you have contact details for the user that raised it, then get in touch with them to get clarity.

Look for problems

Customers and users don't always know what they need so focus more on their problems than suggestions for new features. They may think they need a big all-singing, all-dancing feature but a simple tweak may be just as good. It's your job to define the solution, don't just implement what they ask for.

Example tools (and cost)

There are lots of different CRMs out there, and they’re generally not that cheap to implement. Some focus more around sales but usually incorporate customer service too and the most well-known are: Salesforce, Zoho, and Sugar.

The most basic free shared log you can use would be a Google Spreadsheet (a tool I've mentioned a few times in this guide and something I’ve seen many startups be built on!).

How long does it take?

Checking and classifying about 50 pieces of feedback a week should take an hour or two.


Resources

Last updated on 9 July 2019

Analytics Dashboards

What you can learn

A dashboard is a way of tracking your choice of quantitative data about your website. It is something I’ve found most useful for long-term projects or when working in-house with a company for an extended period of time.

It means you can define the important metrics for your project just once and then get continual updates. You won't have to manually go searching for the data in analytics software each time you want to check it. This dashboard can be your high-level view of a website, which enables you to easily spot anomalies in performance.

It can become something that is easy to share so everyone in a team or company is on the same page. This can be especially useful for people who may not have access to your analytics software or may not have the time or capability to go rooting around in it for the data they need.

How to do it

The first thing to do is define what is important for you to see in your dashboard. Don't go creating it without a plan or you can easily end up tracking things for the sake of it.

The most important thing to track is your key metric or goal, the one that determines success for your website or part of a website. For some that could be sales, for others sign-ups, but either way it's likely to be something related to making the business money.

The next thing to have on your dashboard is conversion rates for the steps in the user flow to reach your goal. This could be the same as any funnels you've set up elsewhere but your dashboard can be more or less granular.

The other data to consider including on your dashboard are your important secondary or engagement metrics. These are ones that tell you a bit more about how a page is performing and include things like bounce rate, time on page, and in-page events.

These secondary metrics can be for just one or two key pages in the flow, depending on what you have learned is a useful indicator. It might be the case that different metrics matter for different steps in your flow, for example bounce rate will be important for landing pages while specific button clicks would matter more for form pages.

Exactly how you set up your dashboard will depend on the tool you use. I won't go into that here but I cover a few options for doing so below.

Once it is set up it’s just a case of checking back regularly and building up the data over time. One of the most useful features of a good dashboard is to easily be able to compare how a key metric has changed over weeks or months.

Watch out for

Keep it high level

Don’t try to track too much detail with your dashboard. If you put in every stat you can get hold of for your website you might as well just use the standard interface for a web analytics package. The point is to show you key information in one view.

Tweak it

Setting up a dashboard can take a bit of tweaking to get it performing correctly. Make sure the data that appears in your dashboard tallies with what's in your analytics tools, and you’re not pulling in the wrong things.

Timings and seasonality

If you check your dashboard and spot anomalies or downturns in metrics, be sure that you are comparing like with like. If you see a conversion rate drop, check that the period you are comparing it against is the same length of time as the period in question. You may also need to check it is at the same time of year, as seasonality can be a big factor in conversions for many sites.
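
As a trivial sketch of comparing like with like, the helper below sums a metric over two equal-length periods before calculating the change—the daily figures are invented:

```python
from datetime import date, timedelta

def period_total(daily_counts, start, days):
    """Sum a daily metric over `days` consecutive days from `start`."""
    return sum(daily_counts.get(start + timedelta(n), 0) for n in range(days))

# Hypothetical daily conversion counts keyed by date
daily_conversions = {date(2019, 7, 1) + timedelta(n): 40 - n for n in range(14)}

this_week = period_total(daily_conversions, date(2019, 7, 8), 7)
last_week = period_total(daily_conversions, date(2019, 7, 1), 7)  # same length
print(f"{100 * (this_week - last_week) / last_week:+.1f}% week on week")
# For seasonal businesses, also compare against the same weeks last year
```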

Investigate further

If your key conversion metrics really do drop then this should be the start of an investigation rather than a time to panic and pull the plug. Look at page data to see what is happening on specific pages or browsers, view heatmaps, and check visitor recordings to see behaviour.

In fact it can kick off a chain of different sorts of evidence-gathering as I explain through the framework in my redesign course. Ultimately it's a case of playing detective to get to the bottom of your issues.

Example tools (and cost)

To create a board for data visualisation, try Geckoboard (from $25/mo), which offers the best-looking online service I've seen, or the highly customisable Tableau software (from $999).

To make one of your own for free you can use Google Analytics' own functionality under the ‘Customisation’ section, or the recently introduced Google Data Studio.

A tool I've used before is a plug-in such as Supermetrics (free and from $49/mo) to pull the data into Excel or a Google Sheet and manipulate it there. This allows you to choose the exact resolution you want to see and means you can pull in lots of historic data.

How long does it take?

Setting up the dashboard should be a task that you do once for about half a day, and then check back weekly.


Resources

Last updated on 9 July 2019

Personas

What you can learn

A persona is a profile that describes a type of user you have. A business will typically have several, each describing the characteristics of a different type of person. They are more of a second-hand form of evidence, because they are usually distilled from a piece of user research like customer interviews.

However it is common to be brought onto a project and to be given personas as the output of previous user research. At this point they can become your main form of evidence about an audience. This often happens if you are working on a budget and you don't have the time to do a new piece of research.

If they are put together well they can tell you a lot about the people you should be designing for, and should be combined with your other sources of evidence like audience analytics data. On the other hand, if they're missing key pieces of information or they are getting old, you’ll need to use them carefully.

Sometimes they might not be called personas, but could instead be described as 'segments' or ‘customer groups'. Either way, they are usually pretty similar summaries of the types of people who use your product or service.

How to do it

In this piece I won't go too much into how to create personas as there are plenty of articles out there that explain this (see below). Instead I'm more interested in how to use them as a piece of evidence for gaining insight and helping you design.

First you should check whether your personas contain enough information: they should include the person's demographics (age, location, etc); their behaviours (particularly around technology); their motivations/goals; and their fears. It's important to have those fears and motivations in there for you to really understand why people want your product or service. A good piece of user research should have probed this.

It’s good to critique them. Go through the information provided and see how it tallies with any other evidence you have—take notes or highlight sections. For example, demographic information should match what you can see on web analytics, and motivations should chime with survey responses. Where something seems wildly different it's worth flagging it and questioning it with your client or others on your team.

There should be more than one persona, so make sure to study them all and understand the differences. Whilst a few products are laser-focussed for only one type of user, in reality most will appeal to a few different types of people.

One of the advantages of personas is they remind you that your audience consists of people with varying motivations. This helps you avoid the trap of designing for a non-existent perfect customer, or trying to design for 'everyone', which can mean you end up creating something for no-one.

Once you have understood the personas you should be able to create some key goals for your designs to meet to satisfy these user types.

Watch out for

Know the limitations

Whilst I do promote 'outline personas' as a quick way of summarising your audience analytics data, they aren't a replacement for full personas. They give you a sense of who your audience are and help you understand the different groups out there. They can be very helpful for knowing who to recruit for research or user tests and can tie groups of users to online behaviour. But without some qualitative research you'll always miss the important understanding of why people do what they do.

Avoid over-specifying

Be careful of excess demographic information in personas. It's useful to place people in your mind with an age, location, and job, but much beyond this and you run the risk of drawing irrelevant conclusions.

Unless it's important to your product, ask yourself: does it matter that your users have three dogs, speak Swedish, and exercise at 5am? As humans we naturally make stories out of these details and create connections, even when a person's pets have nothing to do with them choosing a financial service (for example).

Careful of persona saturation

Watch out for being given too many personas or customer segments. If the research was too broad, a client can end up with ten or more personas, which makes it very hard to avoid designing generically and piling in features to try to satisfy everybody.

Also, over time a company may have amassed several rounds of personas. You should try to understand which are the current, relevant ones and trim the set down to no more than four or five that summarise the majority of your users.

Remember it’s second hand

Finally, do remember that if you are given personas to work with, they are second-hand and rarely replace actually talking to or watching users. If you can, try to speak to the people who created them and probe them on the details. It's usually worth doing some of your own first-hand research alongside, to complement or disprove them.

Example tools (and cost)

Personas take no specialist software to create. Whilst I'm sure specific tools exist, they can just be text documents or simply formatted presentations, exported as PDFs.

How long does it take?

The process of reading through and understanding personas should be quite a quick one—an hour or two.

Social Media

What you can learn

There are a lot of social media platforms available for reaching your audience and conversing with your users: Facebook, Twitter, Instagram, Pinterest, and LinkedIn being a few of the big examples. Just like any online medium for communicating with people, the data these platforms generate offers you opportunities to learn more about your users.

Social media can give you a lot of quantitative data in the form of followers, likes, reactions, retweets, etc. A lot of companies will have one or more people watching these numbers to determine the success of and reaction to content and campaigns.

It can also provide a place to gather more qualitative feedback from individuals in the shape of comments, praise, and complaints. In both the qualitative and quantitative cases it's easy to lose the signal amongst the noise.

How to do it

Just as there are lots of social media platforms there are also many tools for measuring impact. If you’re putting social content out daily it’s worth signing up to one of these in order to track your social performance (see tools below). How you measure success on social media should be tightly related to what you are using it for.

However, as a UX designer you'll find that marketing metrics alone don't do much to help you understand your audience. In terms of gathering evidence of behaviour, social media is more useful as another source of unfiltered user opinion and qualitative feedback.

Are people complaining about features or bugs on your website? Are people reacting (positively or negatively) to the titles of articles you post? Are people praising a service or sharing frustrations with it? As with all other user feedback, you should have a system to record the recurring themes and consider them when redesigning.

It can also be a good starting place to discover real-world problems people might be having that you otherwise weren’t aware of, as you can be sure people will moan about them online. For example if you’re looking to improve airport parking, then a search for that term along with words like ‘annoyed’ or ‘stressed’ will throw up problems. You can then take these specifics and dig into them further by surveying or interviewing more people.
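If you can export mentions to a spreadsheet (most of the tools below offer some form of CSV export), even a rough keyword tally can surface recurring frustrations. Here's a sketch, assuming a CSV with a 'text' column; the filename and keyword list are placeholders to adapt, not a real API.

```python
import csv
from collections import Counter

# Placeholder keywords; tune these to your own product and audience.
FRUSTRATION_WORDS = ["annoyed", "stressed", "broken", "refund", "can't"]

def count_frustrations(path):
    """Tally frustration keywords across a CSV export of social mentions."""
    hits = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = (row.get("text") or "").lower()
            for word in FRUSTRATION_WORDS:
                if word in text:
                    hits[word] += 1
    return hits

# e.g. count_frustrations("airport_parking_mentions.csv").most_common()
```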

Watch out for

Vanity metrics

Be careful not to track vanity metrics, defined as numbers that don't help you improve your product. If you're trying to promote an article then the meaningful action is clicks through to read it. It doesn't matter how many likes and retweets you get if no one actually sees the content.

Useless activity

Social media is full of noise and it's easy to think things are going well just because there is a lot of activity. It's easy to convince yourself that having loads of followers is a big success, but followers are far from the same thing as active users or paying customers. It's a better idea to tie this to your website analytics and track how many of those followers go on to visit your site.
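To make the contrast concrete, here's a toy calculation with made-up numbers showing how a post can look like a hit on vanity metrics while barely moving the numbers that matter:

```python
# Invented figures for a single promoted post.
post = {"likes": 540, "retweets": 120, "link_clicks": 38, "site_signups": 4}

engagements = post["likes"] + post["retweets"]
print(f"Engagements:   {engagements}")  # 660 -- the vanity number
print(f"Click-through: {post['link_clicks'] / engagements:.1%}")  # ~5.8% actually read it
print(f"Signed up:     {post['site_signups']}")  # the number that matters
```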

Actions v words

It's also worth keeping in mind that what people say on social media is not necessarily the same as what they actually do, so take feedback here with a pinch of salt. It's such an easy, low-friction way to communicate that people can say things just to fill their timelines.

In addition the default on social media is for people to move to the extremes (like saying they’re super happy or very angry) and you tend to get much more negative emotions than you would from people on a phone call. If a user is complaining, try to move them to a more nuanced channel for feedback, where you can get to the bottom of their issue.

Example tools (and cost)

There are a lot of options for tracking social performance and I’ve dabbled with a couple of them. Buffer (for decent analytics it starts at $99/mo) helps you learn what time of day your posts work and shows quant reaction data for each one. Hootsuite (from £19.99/mo) allows you to manage a dashboard of different social feeds and see analytics and reports on performance.

For finding comments you can always just use the search tools on the social sites themselves, which vary in quality. If you use Facebook and Twitter ads you gain access to a lot of demographic data on users who engage, which can help you learn more about your audience.

How long does it take?

Just checking social accounts for feedback and comments can be done in less than an hour a week.

Field Research

What you can learn

A lot of this guide looks at technology-enabled methods for gathering evidence but sometimes—no matter how many analytics tools you have—you need to go to the source and get some raw data. This means actually observing people and their behaviour ‘in the wild’.

This is particularly true if you are looking to design an online service that interacts with people's daily lives. If you're designing an app which helps people make healthier decisions when out shopping in supermarkets, you'd better go out and get a good understanding of how people actually shop first.

You can survey and ask people all you want, but if you're to truly understand what they actually do (rather than what they say they do), you should consider accompanying them when doing tasks. You can learn where they get stuck, why they have problems, how they find workarounds, which things they love, what they ignore, and more.

I’m calling this field research although you might also find this kind of thing being called ethnography.

How to do it

Recruit

The first challenge is going to be finding appropriate people to study. If you have existing customers you might be able to reach out to them and ask if they're willing to participate in research. If not you can always try putting out adverts in places like Gumtree or Craigslist. Try to make sure any non-customers match your audience.

You should pay people for their time, so come up with an appropriate sum for the duration of the task. If the people you need are just too specific or hard to find, then it might be worth using a recruitment company.

How many people to recruit depends on how complex the product you're researching is (the more there is to observe, the more people you'll want) and how much budget you have. You need enough variety in your sample to reach the point where the same insights start to repeat across sessions. 10-12 people is a decent target, but just one or two is still better than none.
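One rough way to judge when you've seen enough is to tag the insights from each session and watch how many new ones each extra session adds. A sketch with invented tags:

```python
# Invented insight tags from four imaginary shopping sessions.
sessions = [
    {"compares prices", "ignores labels"},
    {"compares prices", "asks staff"},
    {"ignores labels", "asks staff"},
    {"compares prices"},
]

seen = set()
for i, tags in enumerate(sessions, start=1):
    new = tags - seen
    seen |= tags
    print(f"Session {i}: {len(new)} new insight(s): {sorted(new)}")
# When later sessions stop adding new insights, you're near saturation.
```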

Plan

The next thing to be clear on is what you want to watch people doing. Is it a very specific task (like going to a doctor)? Or is it a full process, from writing a shopping list to unpacking the contents of their shop into their kitchen? Once you've defined what you're interested in, don't try to structure it too much. Your job is simply to observe and record what happens.

Watch

You then need to go out into the field and watch what they do. Make sure you have their permission to record the events. You might want to film the whole thing or just snap clips of key moments. Of course you can have a notepad with you to jot down questions or incidents you want to dig into.

If it’s a task that they do a lot then asking questions as they go and getting them to think aloud is a good way to understand why they’re doing things. If you want to see them use a new product then try not to interrupt and influence their learning process but ask any questions at the end.

Report

After the task is over you need to take a bit of time to record what happened. It’s worth setting up a report template with things like person description, stories, quotes, insights, highs, and lows. Fill this in for each participant, ideally with another person you researched with, so you can check you agree on what you saw.
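Here's one way that template might look as a simple structure you copy for each participant; the fields just follow the suggestions above and are easy to adapt:

```python
import copy

REPORT_TEMPLATE = {
    "participant": "",  # short description of the person
    "stories": [],      # notable things that happened
    "quotes": [],       # verbatim comments worth keeping
    "insights": [],     # what you learned
    "highs": [],        # moments that worked well
    "lows": [],         # pain points and struggles
}

report = copy.deepcopy(REPORT_TEMPLATE)  # fresh copy per participant
report["participant"] = "Weekly shopper, smartphone in hand throughout"
report["lows"].append("Couldn't find the products on her list")
```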

Watch out for

Keep out of the way

The big thing to watch out for in field studies is making sure you don't insert yourself into the research too much. Don't ask the participants lots of questions as you go along, help them too much when they're struggling with something, or get in the way while recording. You could end up biasing the results and missing opportunities to learn how they truly tackle problems.

Save extra questions

If you do have lots of questions you want to dig into further, save them until the end of the observed period and ask them in a more formal interview. Good questions are the 'whys' of what they did, especially where the reason isn't obvious from simply observing.

Allow tangents

Don’t let perfect be the enemy of good in this kind of evidence gathering. If the task goes a bit off-course or the participant ends up doing something you didn’t expect it can still be a chance to learn. By its nature this kind of work can be messy but you can still spot very real insights.

Watch, don't conclude

You should also try to be in ‘observe’ mode during the research and note down everything without bias. Then take the time to reflect and analyse later, rather than jumping to immediate conclusions.

Thanks to Paddy Long for his help putting this post together.

Example tools (and cost)

The tools for this kind of research are your classic 'reporter' tools: notebook and pen, camera, dictaphone, and practical clothing. A smartphone can contain all of these tools, but it's still a manual job to capture the events.

You might not be able to video the whole thing but it’s at least worth taking photos of key moments to help remind yourself how things happened later.

How long does it take?

This is going to completely vary depending on who and what you're looking to observe but budget a few weeks to run the whole thing.

Focus Groups

What you can learn

This method is a more traditional marketing approach to gathering evidence. You bring customers or people from your target audience into a room and get their opinions on existing or new products. It promises the chance to easily understand what your customers want in one session, and gives you a chance to probe them on the areas you are interested in.

Ideally a focus group will provide you with feedback and reasoning, so you can go away and address the concerns. They may even give you quotes that describe how much they like your product, which you can use to drive marketing.

However when used poorly a focus group can become the justification for sweeping assumptions and overconfidence, based on a few throwaway comments. Focus groups can be a good place to start your research and help direct it, but they shouldn’t represent the only evidence you find.

How to do it

This is another method where recruiting the right participants matters. You want people who are either your actual audience or who match them.

If you already have personas defined then it’s worth trying to get representatives from each of those groupings and not just populate your group with one type. If everyone is too similar then you’ll potentially only hear a chorus of identical feedback.

Just like when interviewing or user testing, it’s important to write some kind of script or discussion guide, which captures the questions you want to ask. You don’t need to stick rigidly to it—part of the benefit of focus groups is in letting the group evolve the discussion to areas you hadn’t thought of—but it’s there as a structure to fall back on.
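A discussion guide doesn't need special software; even jotting it down as data keeps you honest about timings. A rough sketch with placeholder topics and questions:

```python
# Sections as (name, target minutes, fallback questions); all placeholders.
GUIDE = [
    ("Warm-up",         10, ["How do you currently shop for X?"]),
    ("Current product", 30, ["What works well?", "What frustrates you?"]),
    ("New concepts",    40, ["What would you expect this to do?"]),
    ("Wrap-up",         10, ["Anything we haven't asked about?"]),
]

total = sum(minutes for _, minutes, _ in GUIDE)
for section, minutes, questions in GUIDE:
    print(f"{section} ({minutes} min), fallback questions: {questions}")
print(f"Total: {total} minutes")  # 90 here, within a sensible session length
```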

Also like interviews or user tests it’s a good idea to have a couple of people facilitate: one can talk and engage people while another takes notes and records. If you’ve not run any focus groups yourself before it can be worth getting an independent agency with a good track record to do so (they’ll help you avoid the mistakes outlined below).

The feedback you get from a focus group shouldn’t represent the end of your evidence-gathering, as it’s easy for them to come to skewed conclusions. It is better to take the outcomes (particularly any insightful comments) and use these as starting points for further research.

You want to test whether the findings hold true by looking for corroborating evidence in other forms, through things like surveys and analytics data.

Watch out for

Facilitator bias

One reason to be wary of focus groups is that the results can be so easily manipulated, either by biased facilitators who have a vested interest in the product being successful, or by the loudest voice in the room. The problem of biased facilitators can appear in other forms of research too, but it's made worse in focus groups by the power of 'group think': once an idea is suggested to the group it can spread quickly, so the group ends up repeating back what they've been told.

This phenomenon is shown well on the TV show The Apprentice, where focus groups are the only user research method the teams use in product-creation tasks (and then they often proceed to ignore what people say anyway).

Group think

‘Group think’ creates effects where people don’t always behave honestly. If one member of the group loudly declares that she dislikes something, quieter members who think the opposite may agree or stay silent to avoid conflict, or for fear of looking silly.

The group can also get sidetracked by one or two people's opinions taking up all the time, and the session can run out before everyone has had a say. You can miss out on the nuanced thoughts of some people, which you would be able to dig into when interviewing individually.

Too many words

Another big problem with focus groups is that you're placing a lot of weight on what people say rather than what they do, two things that are often quite different. This is illustrated well by the classic story of Sony's yellow Walkman focus group, which is worth looking up if you haven't heard it.

In customer interviews you should try to ask people about actual behaviour but this can be harder to do in a group setting, where people can say things to impress others or to match the group consensus.

Lack of experience

The focus group is potentially a dangerous beast and not something I come across much today in tech product decision-making. It used to be the preserve of big corporations, though occasionally a client would present focus group feedback to me as their main piece of research.

The downsides can of course be avoided but require careful moderation and analysis afterwards, and there just aren’t many people in the digital/startup space with that experience.

Example tools (and cost)

This isn’t really one that requires lots of tools and software, just a method for recording and reporting on it afterwards. You can do remote focus groups in online chat software like Slack—I’ve done this for new idea development.

The same rules apply online as in person. You tend to need to marshal users more to keep them on topic, but it gives less vocal people a chance to be heard and you have the benefit of a written transcript to analyse at the end.
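That written transcript also lets you check how balanced the session was. A rough sketch, using a few invented lines in a 'name: comment' format:

```python
from collections import Counter

# Invented transcript lines for illustration.
transcript = [
    "Asha: I'd never use the search, I just browse the categories.",
    "Ben: Same here, browsing is easier.",
    "Asha: And the filters are really confusing to me.",
    "Chloe: I actually use search all the time.",
]

words_by_speaker = Counter()
for line in transcript:
    speaker, _, comment = line.partition(":")
    words_by_speaker[speaker.strip()] += len(comment.split())

for speaker, words in words_by_speaker.most_common():
    print(f"{speaker}: {words} words")
# A heavily skewed count hints that quieter voices were drowned out.
```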

How long does it take?

A focus group should last 1-2 hours to keep it, er, focussed. Analysis requires another day.
