Summary: What is a heuristic evaluation in UX? An expert review of your design that identifies potential problem areas and opportunities for enhancement. Heuristics also provide guidelines for designers to follow while designing.
One valuable tool for identifying UX problems and opportunities is the heuristic evaluation, or “expert review.” Expert reviews combine two major analysis sources:
1) Applying known guidelines, or ‘rules of thumb’ heuristics, developed by usability pioneers (Nielsen & Molich, 1990).
2) Advocating for known user behaviors, such as confusion with a UI element, drawn from spending hundreds of hours in usability testing sessions. This is also called User Advocacy.
Heuristic evaluations can be conducted as you design, to avoid mistakes like interface inconsistency, or they can be used as a quick evaluation and research tool.
See Interface Consistency: The Case For and Against Consistency
A demonstration of a heuristic evaluation
Heuristics can be used on any platform: web, web app, mobile, tangible product, IoT, VR/AR, or AI. You start by reviewing each UI element against the heuristic guidelines. At the same time, it’s important to think like your users, performing User Advocacy. If it’s an accessibility review, you perform Disability Advocacy; in a review of harm and inclusion, it’s Inclusion Advocacy.
See this User Advocacy Masterclass
Advocacy is important because you start with user context. There’s no point reviewing from your own perspective as an IT professional when the website is aimed at helping parents and carers find childcare for their children. User advocacy is built by observing users in many usability tests. That’s why usability testing is so important to ongoing UX skill development.
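To make that element-by-guideline pass concrete, here is a minimal sketch in Python of a review checklist built on Nielsen’s ten heuristics. The screen names and the note-taking structure are illustrative assumptions, not part of any standard tool:

```python
# A minimal sketch of a heuristic review checklist (illustrative, not a
# standard tool). Every screen or UI element gets checked against every
# heuristic, and an observation slot is collected for each pair.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def build_checklist(screens):
    """Pair each screen with each heuristic; the reviewer fills in observations,
    wearing the user-advocacy hat ('How would a parent searching for childcare
    experience this?')."""
    return [
        {"screen": screen, "heuristic": heuristic, "observation": None}
        for screen in screens
        for heuristic in NIELSEN_HEURISTICS
    ]

# Hypothetical screens for the childcare-finder example above.
checklist = build_checklist(["Home", "Search", "Provider profile"])
print(f"{len(checklist)} checks to perform")  # 3 screens x 10 heuristics = 30
```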
The following heuristic evaluation example demonstrates how an expert review works. Let’s use the SpaceX Crew Dragon UI, the recent major redesign of the spacecraft cockpit (flown on NASA missions) from physical buttons and knobs to touchscreens.
- First, what are the business objectives?
- Next, who are the users specifically (role-based personas vs job titles)?
- What tasks do users need to perform?
- Under what conditions (and context) do tasks get performed? Since we’re using a spaceflight example, it’s vital to refer to the NASA-TLX (Task Load Index) workload criteria, the measure NASA itself uses to assess task workload (a sketch of its scoring follows this list).
- Finally, once we identify the users (pilot? co-pilot? crew?) and their tasks, we perform a “cognitive walkthrough,” advocating for users as they move through their tasks.
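As promised above, here is a minimal sketch of how a weighted NASA-TLX workload score is computed. The six subscales and the 15-pairwise-comparison weighting come from the TLX method itself; the ratings and weights below are invented for illustration:

```python
# Minimal sketch of NASA-TLX weighted scoring. The six subscales and the
# weighting scheme are from the TLX method; the ratings and weights below
# are invented for illustration.

# The participant rates each dimension from 0 to 100.
ratings = {
    "Mental Demand": 80,
    "Physical Demand": 40,
    "Temporal Demand": 70,
    "Performance": 30,
    "Effort": 75,
    "Frustration": 55,
}

# Weights come from 15 pairwise comparisons: each dimension's weight is the
# number of times it was judged the bigger contributor to workload, so the
# weights always sum to 15.
weights = {
    "Mental Demand": 5,
    "Physical Demand": 1,
    "Temporal Demand": 3,
    "Performance": 2,
    "Effort": 3,
    "Frustration": 1,
}

assert sum(weights.values()) == 15

# Overall weighted workload: sum of rating x weight, divided by 15.
overall = sum(ratings[d] * weights[d] for d in ratings) / 15
print(f"Weighted TLX workload: {overall:.1f} / 100")  # 66.0 with these values
```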
Example review:
- With a glove on, how easy is it to hit that touch target? Is the cockpit shaking (during takeoff and landing) when pushing that button? Is the action still easy under those environmental constraints? And what is it like reading the UI upside down, in microgravity conditions?
- Are fonts clear? Are astronauts wearing headgear or helmet visors, and does that detract from readability? Is the contrast strong enough? Do the UIs use too much text, requiring concentration?
- Are major incidents and issues easy to distinguish from other statuses?
Once identified, these issues are ranked in severity from Minor to Extreme. The findings are compiled into a report and used to improve the design.
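To show what captured findings can look like before they go into the report, here is a minimal sketch. The severity scale mirrors the Minor-to-Extreme ranking above, while the field names and example findings are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import IntEnum

# Severity scale mirroring the Minor-to-Extreme ranking described above.
class Severity(IntEnum):
    MINOR = 1
    MODERATE = 2
    MAJOR = 3
    EXTREME = 4

@dataclass
class Finding:
    element: str        # the UI element under review
    heuristic: str      # the guideline it violates
    observation: str    # what the reviewer saw, in user-advocacy terms
    severity: Severity

findings = [
    Finding("Abort button", "Error prevention",
            "Hard to hit accurately with gloves during vibration", Severity.EXTREME),
    Finding("Status panel", "Visibility of system status",
            "Major incidents not visually distinct from routine statuses", Severity.MAJOR),
]

# The report leads with the most severe issues.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity.name}] {f.element}: {f.observation} ({f.heuristic})")
```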
Limitations and how to correct them
Heuristic evaluations have some limitations as a UX research tool, but these can be mitigated by understanding what the technique is, where its limits lie, and how to use it expertly.
First, they are best performed with multiple reviewers, who discover and cross-examine issues. Jakob Nielsen’s research on the technique shows that five expert reviewers find roughly 75% of the usability problems in a design. Since it is rare to get five UX experts to conduct a heuristic evaluation, the technique cannot be considered the engine of your UX research efforts; you need to supplement it with other techniques.
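For intuition on the five-reviewer figure, here is a sketch of the diminishing-returns curve Nielsen and Landauer described, where the proportion of problems found is 1 - (1 - λ)^n for n evaluators. The per-evaluator discovery rate λ below is chosen purely so that five reviewers land near the 75% cited above:

```python
# Diminishing-returns curve for adding evaluators (after Nielsen & Landauer):
# proportion of problems found = 1 - (1 - lam)^n, where lam is the share of
# problems a single evaluator finds. lam = 0.24 is an illustrative value
# picked so that five reviewers land near the 75% figure quoted above.
lam = 0.24

for n in range(1, 8):
    found = 1 - (1 - lam) ** n
    print(f"{n} reviewer(s): ~{found:.0%} of problems found")
```

Each added reviewer finds mostly problems the others already caught, which is why the curve flattens and why supplementing with usability testing matters.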
Note: if you are using the heuristic evaluation technique to improve your designs as you go along, that’s great. Just remember that user testing is required to keep you honest by getting feedback directly from your users.
Next, since heuristic evaluations do not involve users, treat that as an invitation to correct it. You can spend less time on the heuristic evaluation, say with one or two reviewers, and more time on usability testing, since users will surface issues that no expert review can.
The need to bring heuristics into modern business challenges
The most popular heuristics are the ones Jakob Nielsen created (1994). While Jakob’s heuristics are widely circulated, the articles that share them rarely mention that other heuristic sets exist. In addition to Jakob’s, there are several other sets you will want to include in your heuristic reviews for a more holistic analysis. Check this Gerhardt-Powals (1996) heuristics cheat sheet, and see the Wikipedia heuristic evaluation entry, which currently lists four sets of heuristics.
A bigger problem is that our heuristics are hardly 21st-century-friendly. There is an urgent need to bring UX heuristics into the challenges of modern business and society, specifically by emphasizing context of use, harm reduction, and sustainability. This means applying heuristics to technologies that are already mature or maturing, including mobile, AR, VR, and AI.
See Microsoft’s Guidelines for Human-AI Interaction for AI heuristics.
We cover this and more in Frank Spillers’ UX Inner Circle workshop: Conducting holistic heuristic evaluations
Conclusion
Heuristic evaluations are an evaluative tool that can provide rapid user advocacy to design projects. They also serve as guidelines you can design with, early and often. They help UX designers minimize user errors and address basic UX issues like consistency. However, since multiple reviewers are needed for ideal results, user testing, which directly involves users, offers a more powerful research method. Finally, heuristics should prioritize inclusion, sustainability, and UX for emerging tech like AI.
Learn more: Take our Heuristic Evaluations training.