If you have a well-developed wireframe, prototype or existing product, you can use usability testing to gather valuable data on the system’s user experience quality as well as identify any specific usability failings. Whether the tests are moderated or unmoderated, when it comes to writing task questions for the tests, there are rules that should be followed to ensure effective testing and reliable data.
Note that these rules relate specifically to usability testing – how easy a system is to use to complete tasks, not general user testing which can have much broader lines of questioning.
Before you write your tasks, identify what you want to evaluate. Focus on specific features or short workflows. If there are multiple ways a user could complete a task, for example by ‘searching’ or by ‘filtering’, be specific in the task about which feature you want the tester to use.
Avoid the temptation to chain too many workflows together. For example, if you were writing a usability test for Google image search, you would specify one task for ‘search by image using an uploaded image file’ and another task for ‘filtering the image search results by time’.
Not only will this keep tasks clear for the tester, it will give you more feature-specific (and therefore more actionable) data points.
Keep the text length of the task to a minimum. Most tasks can be communicated in two sentences or fewer. One sentence can set the general aim of the task or help the user understand where they are within the system; the other must communicate the specific requirements of the task as concisely as possible. Using the Google image search example, one task might be:
“Your image search has returned a list of matching results. Filter the results to show only images from the past 24 hours.”
Avoid unnecessarily complex words and phrases; you want to be sure you’re testing the product, not the testers’ language comprehension.
Use industry standard terminology
Especially when testing a system you are familiar with, there is a temptation to work the idiosyncratic words, phrases and terms specific to that system into the tasks. For example, a system may have a feature labelled as “Refine by attributes”. The industry standard terminology for this would be a ‘Filter’, so use the word ‘filter’ in your task.
You may even be tempted to avoid words and terms that accurately describe the feature or action for fear of making the task too easy. Fear not. If you find yourself writing a task using standard terminology and the corresponding action is staring users in the face on the page, that’s a sign of a good user experience, not a poor question.
When writing multiple tasks for a test, be consistent in the terminology you use to refer to elements within the test. A surefire way to confuse testers is to start referring to a feature or element using one phrase, then use a different phrase for the same element later on. This is especially important when other people contribute to writing the tasks. Positioning yourself as an editor, read all the tasks in series and identify and amend any inconsistencies in terms and tone.
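If your task questions live in a spreadsheet or script, part of that editing pass can be automated. The sketch below is one illustrative way to flag tasks that mix synonyms for the same feature; the task texts and the synonym mapping are hypothetical examples, not part of any real test plan.

```python
import re
from collections import defaultdict

# Hypothetical task texts for a usability test (illustrative only).
tasks = [
    "Search for images of bridges, then filter the results by colour.",
    "Refine the results to show only images from the past 24 hours.",
    "Filter the results to show only large images.",
]

# Words that refer to the same feature; the test should use only one of them.
synonyms = {"filter": "filter", "refine": "filter", "narrow": "filter"}

# Record which variant words appear for each feature across all tasks.
usage = defaultdict(set)
for task in tasks:
    for word in re.findall(r"[a-z']+", task.lower()):
        if word in synonyms:
            usage[synonyms[word]].add(word)

# Report any feature referred to by more than one term.
for feature, words in usage.items():
    if len(words) > 1:
        print(f"Inconsistent terms for '{feature}': {sorted(words)}")
```

Running this on the sample tasks reports that ‘filter’ and ‘refine’ are both used for the same feature, which is exactly the kind of inconsistency to amend before the test goes live.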
Avoid forgettable task elements
This is not a memory test. While users should have access to the written task questions at all times during the test (for the vast majority of testing scenarios), the task should sit comfortably in their mind before they start. This means avoiding long or complex strings, file names, project names or any other unmemorable values critical to completing the task. Where some memorisation is unavoidable, for example if you need the tester to save a file in a given folder, make the folder name easily memorable.
Don’t confuse tasks with context
Setting the background and context with a tester helps them adopt the right mindset for completing the tasks. For example, if you were testing a software product used by firefighter station commanders, you could use images and text to help them imagine being in that role in the station control room. For the sake of clarity, do this context-setting before they start the task. Context-setting text and images should be kept separate from the task questions to ensure the requirements of the tasks are clear and concise.
Test your questions
Always get a second opinion on the quality and clarity of your task questions. Better still, get a third opinion. Use a mock test session to ensure the tester understands the requirements of the tasks. Ideally this mock test would be with somebody representative of the real testers, but in a pinch use a colleague or two. Whether they can complete the task is statistically irrelevant, but do ask the tester to read each task question back to you and explain what they believe they have been asked to do.