After running many usability tests while working at a bank, I came to appreciate their importance. In the best case, a mistake in the interface simply stops a client from using your service, and you lose revenue. In the worst case, in fields where any interface mistake can do serious damage (nuclear power, aviation, banking), it can lead to catastrophic consequences. I can give an example from my own industry: a bank once lost 500 million dollars because of bad interface design.
Let's talk about how to run moderated usability testing so that such situations don't happen to your product.
What is usability testing?
Usability testing is a qualitative method of customer experience research in which users perform typical tasks while interacting with a system's interface.
Usability testing can be classified in several ways:
- By automation: automated (the respondent completes tasks in special software) or with a moderator (a UX researcher takes part in the testing);
- By the degree of the UX researcher's involvement: moderated (the researcher talks to the respondent) and unmoderated (the researcher only observes the respondent's actions);
- By the respondent's location: in-person, remote (using video-conferencing software), and field (observing the user in their real environment);
- By goal: exploratory (testing how comfortable an MVP is to use), verification (searching for usability problems in an existing service), and comparative (comparing the interfaces of rival services).
In this article we will talk about moderated remote usability testing. The text is aimed at beginner UX researchers and product owners who want to get acquainted with this instrument. It's worth mentioning that the process I describe is not well suited to entertainment products, because emotions strongly affect how they are perceived. However, the approach will suit any other kind of interface. Before the COVID-19 pandemic I ran in-person interviews, and in 2020 I switched to remote ones. In my experience, if you have enough research experience, the quality of the results does not suffer.
The main goal of usability testing is to find problems in the interface: mistakes, poor design decisions, and potential points of friction.
This approach cannot be used to gather statistical data or to test a product concept (how much demand the market will have for it); there are other instruments for those purposes. On the other hand, nothing stops you from combining methods, for example running a short interview and then testing the interface.
Usability testing can be used at almost every stage of a product's life cycle: development, introduction, growth, and maturity. The only stage where I think this instrument is not relevant is decline.
Below is a list of business indicators that usability testing can affect:
- An increase in the conversion rate to a target action (buying a product, subscribing to a service);
- A decrease in the bounce rate (the ratio of visitors who leave a page right after it loads to the total number of visitors);
- An increase in user productivity (the number of target actions per unit of time);
- A reduction in product support costs.
As a result, the churn rate decreases while LTV and ARPPU grow.
However, it is very difficult to tell how much of the change in these indicators comes from usability testing and how much from, say, marketing.
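For clarity, here is how the indicators above are typically computed; a minimal sketch with invented numbers:

```python
# Illustrative calculation of the indicators above (all numbers are invented).

visitors = 10_000    # total visitors to the page
purchases = 150      # completed the target action (bought the product)
bounced = 4_200      # left right after the page loaded

conversion_rate = purchases / visitors   # share who completed the target action
bounce_rate = bounced / visitors         # share who left immediately

# Churn rate: share of clients lost over a period.
clients_at_start = 2_000
clients_lost = 100
churn_rate = clients_lost / clients_at_start

print(f"conversion: {conversion_rate:.1%}")   # 1.5%
print(f"bounce:     {bounce_rate:.1%}")       # 42.0%
print(f"churn:      {churn_rate:.1%}")        # 5.0%
```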
How to conduct usability testing, step by step
What stakeholders are
Stakeholders are people with an interest in your project. When you identify the stakeholders, you learn their goals and how much influence they have on the project. In large companies it is common for several departments to be involved in developing the same product, and it would be a mistake to ignore the opinions of those who influence it.
How to define the goals and objectives of the research
You need to talk to the stakeholders and figure out (even if the stakeholder is your supervisor) the goal of the research: the final result the research is aimed at (for example, increasing conversion to the target action from 1% to 4%).
The objectives also need to be discussed: the specific actions aimed at achieving the goal (evaluating the interface against heuristics, measuring it with metrics such as SUM or UMUX, etc.).
It is also useful to discuss organizational matters: when the interface prototype will be ready for research, which stakeholders will attend the meetings with clients, the approximate completion date of the testing, etc.
How to define your target audience
The target audience is a group of people united by a characteristic that gives them a pain point or a need the product addresses.
The target audience and its criteria are discussed at the kick-off meeting with the stakeholders. Interfaces are usually designed at a stage when the product's target audience has already been identified, and the stakeholders can tell you what it is.
For example, in the B2B sphere the criteria for choosing a company representative might be:
- User experience (they have experience using the product, have no experience with it, or have experience with a similar product);
- Using the interface on a specific device (a smartphone running iOS or Android);
- Role in the company (for example, manager, accountant, or analyst);
- Working in a company of a certain scale (small, medium, or large business).
The final portrait of the target audience might look like this: a finance director at a large logistics company who uses the bank's «Leasing» product.
We can often distinguish several segments of the target audience, for example a company's accountant and its director, who both use the online bank's interface. In this case you need to test the interface with the first respondent group and then with the second, because these client groups may perceive the interface differently and have different needs.
Heuristic interface analysis
Heuristic analysis is an expert review method in which an expert checks whether the interface follows specific principles called heuristics.
I won't cover this approach in detail here because it deserves an article of its own.
When people speak about this kind of analysis, they usually recall Jakob Nielsen's heuristics. Of course, you can use other approaches to expert interface review (Bruce «Tog» Tognazzini's principles of interaction design, Ben Shneiderman's Eight Golden Rules of interface design, and others).
Heuristic analysis helps you catch the most obvious interface problems and fix them before testing with users.
Forming hypotheses
A hypothesis is an assumption that matters to the business and needs to be confirmed or refuted.
Where hypotheses can come from:
- Try to guess how a client will act in the interface, or ask one or two colleagues to do so;
- Analyze the interface against heuristics;
- From the product's quantitative metrics, for example CSI or NPS;
- From complaints and reviews (for an existing product).
A well-formed hypothesis has the following characteristics:
- It is binary (it can be confirmed or refuted);
- It contains a specific description of the user's expected actions.
As a starting point, I can recommend a hypothesis template that suits most usability testing objectives:
(Who) (in what situation) (will do something) in order to (goal)
For example: the chief accountant (who), while in the online bank (where), will open the «Credit» page (will do something) in order to submit a loan application (goal).
The hypotheses become the basis for the interface testing script. While writing the script, you will be able to plan questions that can confirm or refute each hypothesis.
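To keep hypotheses uniform while testing, the template can be captured as a simple record. This is only an illustrative sketch; the field names are my own, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    who: str                          # the respondent's role
    situation: str                    # where / in what context
    action: str                       # the expected action
    goal: str                         # what the user is trying to achieve
    confirmed: Optional[bool] = None  # filled in after the testing

    def statement(self) -> str:
        """Render the hypothesis in the (who)(situation)(action)(goal) form."""
        return f"{self.who}, {self.situation}, {self.action} in order to {self.goal}"

# The example from the text:
h = Hypothesis(
    who="The chief accountant",
    situation="while in the online bank",
    action="will open the «Credit» page",
    goal="submit a loan application",
)
print(h.statement())
```

Keeping the `confirmed` flag empty until the sessions are done makes it easy to list which hypotheses are still untested.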
Writing a testing script
To create a testing script, you need to:
- Write the opening questions asked before the testing;
- Break the client's path through the interface into logical steps;
- Write the client's task for each step;
- Add follow-up questions after each task.
While writing the opening questions, plan questions about the respondent, about the area you are researching (what products the client uses), and about their experience with the product under study. The answers will give you more information about the context in which the client uses your product. The photos below show the importance of context: notice what story you see behind the first picture and what story behind the second. They tell two different stories.
Next, divide the client's actions in the interface into separate parts, for example: logging in to the online bank, creating a document, confirming a payment. If the client can reach the final step in different ways, mention that in the script. For example, if the client needs a loan consultation, they may write to the chat or ask a question through a special form in the «Credits» section, and you will need to include both options.
Once you have the separate steps, write the text of a task for each one; the UX researcher will read this text to the client. Ideally, the client hears one task at the beginning, for example «Apply for a loan», and the researcher gives no further input until the end of the test. However, with a complicated product the client may need additional information to complete the task: what loan term to choose in the prototype, what amount to request. That is when intermediate tasks are needed.
The task text must contain no hints and must not lead the client to a specific action. An incorrectly worded task: «Make an application for loan processing». A correctly worded task: «Apply for a loan».
After the task, you can plan some additional questions:
- Questions that help confirm or refute the hypotheses;
- Questions that reveal the client's logic if they deviate from the path planned for them;
- Questions that reveal additional needs.
The testing script is agreed upon with the stakeholders to prevent conflicts within the team during the research.
Recruiting the users
First of all, agree with the stakeholders on the number of respondents needed for the research to be representative.
Representativeness means that the sample reflects the general population. In usability testing there is no single accepted number of respondents. Most people support Jakob Nielsen's view that 5 respondents per target group are enough to reveal 85% of the problems in an interface. However, Jared M. Spool, for example, argues that more than 10 people may be needed. In my practice, 10 respondents are enough for a quality test of an interface of medium complexity.
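Nielsen's 85% figure comes from the model P(n) = 1 − (1 − λ)^n, where λ is the probability that a single user reveals a given problem (he estimated λ ≈ 0.31 on average). A quick sketch of the arithmetic:

```python
# Expected share of usability problems found by n users, assuming each user
# independently reveals a given problem with probability lam.
def problems_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(n, f"{problems_found(n):.0%}")
# With lam = 0.31, five users reveal about 84% of the problems,
# close to the 85% figure quoted above.
```

The curve also shows why extra respondents bring diminishing returns: each additional user mostly re-discovers problems already seen.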
Also decide at the kick-off meeting who will be responsible for recruiting the respondents: the stakeholders, a marketing agency, or the researcher.
If the clients are recruited by the stakeholders or the researcher, you need to build a respondent base and contact the clients in it. If you have access to the contacts of your existing clients, you can reach out to them. An alternative source of respondents is specialized panels with many people ready to take part in commercial research; the list of such panels varies by country.
In the conversation it is important to convey the importance of the meeting to the respondent and share the date and time, the format of the conversation, and any required equipment or software (for a remote interview). The day before the meeting, it's a good idea to call and remind the respondent about it.
When recruiting clients through a marketing agency, it's important to fill out the brief in detail: state the goals and objectives of the research, the maximum recruiting time, the number of respondents, the list of respondent characteristics, the length of the interview, the meeting format (online or offline), permission for audio or video recording, etc. For example, a colleague once told me about a case when researchers arriving for an in-person session were not given a pass into the building where the respondent worked, because that had not been specified in the brief.
Moderated usability testing of an interface of medium complexity usually takes 45 ± 15 minutes, depending on how active the client is. In my experience, if a session runs for more than an hour, the respondent gets tired, their concentration drops, and the test results get distorted.
A meeting structure that will suit most cases:
- Getting acquainted: introduce yourself and explain how the meeting will go, to relieve the client of any fears. At this step you need to build trust with the respondent.
- General questions on the research topic: talk about the topic in general, how the client addresses their need, which products they use, and how they would characterize the existing products.
- Testing the prototype.
- Wrapping up: restate the client's wishes to make sure you understood them correctly, then thank the client for the meeting.
The researcher does three things during the testing:
- Immerses: gives tasks and encourages the user to comment on their actions while working with the prototype;
- Asks: clarifies the reasons behind the user's actions while they complete a task;
- Observes: tracks and records how the user works with the service.
Ideally, these functions are split between two people: one leads the conversation while the other records the results. But depending on the researcher's experience and the prototype's complexity, one person can handle all three.
In the previous step we divided the client's actions into separate steps. During testing, the researcher marks how well the client completed each task (completed / failed / completed with difficulty) and writes down what the difficulties were.
If the client deviates from the planned path through the interface, ask why they did so. You can also ask questions that allow you to confirm or refute the hypotheses.
Typical mistakes in moderated usability testing:
- Asking «closed» questions that can only be answered «yes» or «no». Example: «Did you understand everything on this page?» Since this is moderated usability testing, you have the opportunity to ask «open» questions and uncover the user's motivation. Example of an open question: «Why did you choose this package of services?»
- Talking more than the respondent. We are exploring the client's experience, not the researcher's. The researcher's job is to ask questions and clarify why the client did what they did.
- Asking leading questions, for example: «Do you use the product rarely because it's expensive, or because you just don't see any benefit?»
- Arguing with the client or trying to convince them. If clients make mistakes, our goal is to find out why, and then to solve the problem by changing the interface. If you convince the client that you are right, the problem won't be solved.
- Using professional vocabulary. Not all clients know the names of interface elements. For example, when you mention the «breadcrumbs» navigation pattern, some people take the word literally and get confused.
As a result of testing the interface with clients, you will have:
- A list of steps marked «completed / failed / completed with difficulty»;
- A list of difficulties the client ran into during testing;
- Hypotheses marked «confirmed / refuted»;
- The client's expectations and wishes, if they voiced any.
Testing results analysis
The ideal option is to analyze the clients' verbatim speech. The first way is to transcribe the audio by hand, but this is labor-intensive. To automate the process, you can look for transcription services; the list of suitable services depends on the speaker's language.
The second way is to speed up the work by preparing templates in advance and filling them in during the interview. This reduces processing time but lowers the report's level of detail.
Let's look at an example:
- State the problems detected during testing. A good problem description has two parts: a description of the respondents' behavior and a description of the design feature it is connected to. Example: respondents don't press the settings button because it's grey and looks inactive to them.
- Determine each problem's frequency. This helps you understand which problems to fix first.
- Determine each problem's criticality. Problems can be divided into: critical (the user cannot complete a task, or completes it incorrectly); of medium criticality (the user spends a lot of time and effort, and their satisfaction with the service drops); and of low criticality (they don't affect task success but reduce satisfaction with the service).
Everything not related to task completion (the context of product use, the client's additional wishes) can be visualized for analysis with a testing matrix. The matrix helps to group and display the observations from testing.
Across all the interviews, recurring items will stand out in the testing matrix; highlight them in your report.
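One simple way to decide what to fix first is to combine the two dimensions above, frequency and criticality, into a single score. A sketch; the weights and the example problems are my own invention, not a standard:

```python
# Rank usability problems by frequency x criticality.
# The criticality weights are an illustrative choice.
WEIGHTS = {"critical": 3, "medium": 2, "low": 1}

problems = [
    {"desc": "settings button looks inactive", "hits": 7, "criticality": "critical"},
    {"desc": "loan term field is hard to find", "hits": 4, "criticality": "medium"},
    {"desc": "confirmation text is too small",  "hits": 6, "criticality": "low"},
]

for p in problems:
    p["score"] = p["hits"] * WEIGHTS[p["criticality"]]

# Fix the highest-scoring problems first.
ranked = sorted(problems, key=lambda p: p["score"], reverse=True)
for p in ranked:
    print(p["score"], p["criticality"], p["desc"])
```

A frequent low-criticality problem can still outrank a rare medium one, so it's worth sanity-checking the ranking against your own judgment rather than following the score blindly.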
Usability testing can be supplemented with quantitative metrics. Metrics deserve a separate article, so here we will only mention the main ones.
The best-known usability metrics:
- Time on task
- Task completion (success) rate
- Error frequency
- System Usability Scale (SUS)
- Single Ease Question (SEQ)
- Usability Metric for User Experience (UMUX)
- Customer Satisfaction Score (CSAT)
Remember that classical usability testing is qualitative research, so the metrics you obtain are illustrative.
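For reference, here is how the SUS score mentioned above is computed: ten statements rated from 1 to 5, where odd (positively worded) items contribute (rating − 1), even (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch with a made-up response:

```python
# Compute a System Usability Scale (SUS) score from ten 1-5 ratings.
def sus_score(ratings) -> float:
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = 0
    for i, r in enumerate(ratings, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 sum to 0-100

# A made-up respondent who is fairly happy with the system:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

With only a handful of respondents, treat such a score as a talking point, not a statistic.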
When you need metrics:
- To prove: numbers serve as evidence for decision makers who are used to them;
- To emphasize: numbers illustrate problems more vividly;
- To show dynamics: if you measure indicators systematically, you can track how effective your interface changes are.
Benefits of improvement
It is often difficult to separate the change in income that comes from usability testing from the effect of the company's other activities, such as marketing or sales. That's why it is not always possible to determine the income and ROI precisely.
Usability testing can bring income in two ways:
- By increasing product revenue (for example, a higher purchase conversion rate);
- By reducing product costs (for example, fewer calls to the call center).
To track revenue growth, watch the change in the number of purchases, the number of paying customers, the average check, etc.
To track cost reduction, you can measure changes in customer support and training costs.
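As a rough sketch of both effects, with all numbers invented for illustration:

```python
# Illustrative estimate of the two income effects (all numbers are invented).

# Revenue side: purchase conversion grew after the interface was fixed.
visitors_per_month = 50_000
avg_check = 120.0                        # average purchase value
conv_before, conv_after = 0.010, 0.013   # conversion to purchase
extra_revenue = visitors_per_month * (conv_after - conv_before) * avg_check

# Cost side: fewer clients have to call the call center.
calls_before, calls_after = 3_000, 2_400
cost_per_call = 4.0
support_savings = (calls_before - calls_after) * cost_per_call

print(f"extra revenue per month:   {extra_revenue:,.0f}")    # 18,000
print(f"support savings per month: {support_savings:,.0f}")  # 2,400
```

Even this simple arithmetic depends on attributing the conversion change to the interface work, which, as noted above, is rarely clean.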
Deloitte has published a report saying that customer-centric companies are 60% more profitable than others. Use customer experience tools (usability testing included) to make your clients happy and your business successful.