
Design Testing

What you can learn

First of all, a definition: what is design testing? I use it to mean running quick tests on designs that are still in progress and before they’re linked together as a prototype. This usually means showing screens individually rather than as a sequence or user flow, as you might with guerrilla user testing. This can be done by printing them out on paper or it can involve sharing the designs online.

Design testing offers a chance to gather user feedback early in the process and shape your design decisions with evidence before committing to building anything. It lets you quickly test a couple of options and settle arguments when it's not clear which design would be best or your team can't agree on a way forward.

It's a lightweight method and won't give you deep insight, but it is well suited to answering targeted questions and helping you course-correct. If you ask the right questions of a design, you can save a lot of time in the long run.

How to do it

I'm a big fan of UsabilityHub for carrying out this method (see tools), and for that reason I'll use their test types to cover the different ways to do design testing.

To use UsabilityHub you need to upload your exported design and then create a simple test around it with a few tasks before putting it out to testers. If you're doing this offline yourself then you can just show people the design and ask the questions verbally.

In all cases I'd recommend testing either a whole page design or a section of a page that would be visible within a viewport. Don’t test just individual elements like buttons without the context of the page around them.

Here are the types of test you can create with UsabilityHub and what I recommend using each for. You can combine these together in a single test if you want to test a few things, like first impressions (five second test) and whether people find your main button (click test).

When you get your results back, it’s worth doing a bit of analysis yourself—I find the ‘word clouds’ provided by default on UsabilityHub aren’t that useful. I export the results into a spreadsheet and do a bit of sentiment analysis. For example, if I’ve asked what people think of a certain design, I’ll classify the results as positive, negative, or neutral. This helps compare if I run the same test with another design.
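That tally-and-compare step can be sketched in a few lines. This is a minimal illustration, not part of UsabilityHub itself: it assumes you've already read the free-text answers and hand-labelled each one as positive, neutral, or negative in your spreadsheet.

```python
from collections import Counter

def summarise_sentiment(labels):
    """Turn a list of hand-assigned sentiment labels into percentages."""
    counts = Counter(labels)
    total = len(labels)
    return {s: round(100 * counts.get(s, 0) / total)
            for s in ("positive", "neutral", "negative")}

# Hypothetical labels from two runs of the same question on two designs
design_a = ["positive", "positive", "neutral", "negative", "positive"]
design_b = ["neutral", "negative", "negative", "positive", "neutral"]

print(summarise_sentiment(design_a))  # {'positive': 60, 'neutral': 20, 'negative': 20}
print(summarise_sentiment(design_b))  # {'positive': 20, 'neutral': 40, 'negative': 40}
```

Putting both designs' numbers side by side like this makes the comparison obvious in a way a word cloud never will.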

Watch out for

The main challenges in running good design tests are giving users simple instructions and asking good questions. These are brief tests, and to get useful data out of them you need to be specific; you don't have the time a full remote user test gives you to cover a lot of ground. With that in mind, here are my tips for good task writing:

Give a bit of context

Before each test you get to set the scene for the user with an initial introduction. When showing them a webpage or app screen tell them what they would have just done to reach it (e.g. "searched for a bank loan", "shopping for jewellery").

One sentence is usually enough. Don't go overboard with lots of detail about what they could be doing, as they'll never manage to digest it all and keep it in their head when going through the test.

Don't ask leading questions

People can see design testing as a chance to 'prove' that their idea is right and have some data behind it before showing the solution to management. One way you can bias the results in your favour is to write leading questions that suggest a solution or leave the user only with yes/no answers.

Instead you should write questions that allow the user to express their thoughts about a design ("what do you think...", "how useful is...") and let them give honest answers. If you skew the results your design might 'win' in the short run, but if you come to release it and it fails, you'll have a much bigger problem to deal with.

Don't ask too many in-depth questions

These are short, simple tests and in the case of five second tests, the user barely gets a chance to see the design, so don't go overboard and ask too much. When you go into a design test you should have a single thing you’re looking to find out plus perhaps a related follow-up. Do the work to establish what this is and then stick to it in the test.

Tools (and cost)

As mentioned above I find UsabilityHub is the perfect tool for testing at this stage, as the tests are super-quick to set up and quick to get responses to. It's also free if you have your own audience to distribute it to (perhaps on a mailing list). To use their panel, it costs $1 per random user or $3 per demographic-targeted user. I normally specify the demographics of my testers and run them past 25 people, which gives plenty of food for thought.
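The arithmetic behind a typical order is simple enough to sketch. This uses the per-tester prices quoted above, which may of course change over time:

```python
# Estimated cost of a UsabilityHub panel order at the prices quoted above:
# $1 per random tester, $3 per demographically targeted tester.
def panel_cost(testers, targeted=True):
    rate = 3 if targeted else 1
    return testers * rate

print(panel_cost(25))          # my usual run: 25 targeted testers -> $75
print(panel_cost(25, targeted=False))  # same run with random testers -> $25
```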

I'm not sure there are any alternatives other than creating your own design surveys. You can add images to most survey tools and get opinions from a group of users that way.

How long does it take?

Setting up a test and getting back results from 25 users takes about half a day.

How often should you use it?





Last updated on 8 April 2020
