Methodology Madness: or "caveat emptor" (buyer beware)
What you buy, or "buy into", influences how you think about something; the way you represent that information in your mind is what cognitive scientists call an "internal representation". Whether you buy usability services or not, I am sure that at some point you will encounter, or already have encountered, "methodology madness", perhaps without even knowing it.
What is "methodology madness"?
Methodology madness in the usability services and products arena refers to the espousing of convenient beliefs, "truths", and proclamations about the right or new way to do things. The methodology is typically proprietary or masked, and is usually part of a sales pitch, either for a report or for a "customer experience management" solution. The "right or new way" implies that the approach is more refined, more advanced, or a best practice.
Methodology madness is not new to usability consulting; in fact, it exists in many industries. In the usability industry, the problem with proprietary methodologies is that they are often inaccurate or distorted versions of the truth. The other obvious problem is that a proprietary usability methodology, technique, or piece of research serves that company's interests, with a clear commercial bias.
A fair degree of usability nonsense seems to be emerging as the industry grows, and its main motive is sales and competitive differentiation. Further, it is hard to tell what is nonsense and what is valid, as witnessed by the fact that I have met many usability consultants who believed certain methodology myths. Like many of my colleagues, I have even fallen for some of the mythology myself because it sounds so convincing.
Let's face it: it's hard to think critically about something when it's packaged in a compelling way and important details are withheld in the name of confidentiality.
To help identify the madness, let's look at just a few common methodology myths still in circulation today:
Claim 1: "Usability testing must be conducted in the user's natural setting".
Source: This one comes from a leading provider of a semi-proprietary online customer experience solution that uses panels of users in their homes, traffic-log data, and analysts to generate reports.
Problem: There is no evidence for this claim in the Human-Computer Interaction literature (the field usability comes from). While the claim sounds sensible, it dissolves when you trace it back to the technique it was borrowed from: field studies. In field studies (aka task analysis, ethnographic studies, contextual interviews), it is absolutely essential that the user's environment be observed and assessed, because the whole point is to note the interactions and influences of that environment. In usability testing, the point is to gauge whether the website or software works to expectations, which has very little to do with the user's setting, PC configuration, and so on.
Claim 2:"You need to test your website with hundreds of users".
Source: Same as above.
Problem: This belief sells statistics, not usability insights. Since most people are most comfortable with quantitative, statistical data, this claim again sounds convincing. The flaw, however, is that usability testing is a qualitative research technique: observation is the metric, not numbers. In qualitative research the rules are different, and small sample sizes, e.g. 15-40 users, are normal. Usability testing is about observing actual user behavior and capturing expectations; insight is the indicator, not statistical significance.
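To see why small samples hold up, it helps to look at the problem-discovery model widely cited in the HCI literature (Nielsen and Landauer), in which the proportion of usability problems found with n test users is roughly 1 - (1 - p)^n, where p is the chance that a single user hits a given problem. The sketch below is a minimal illustration of that model, not anyone's proprietary method; the commonly quoted average value p = 0.31 is assumed purely for the sake of the example.

```python
# Minimal sketch of the problem-discovery model cited in the HCI
# literature (Nielsen & Landauer): the share of usability problems
# found with n test users is roughly 1 - (1 - p)^n.
# p = 0.31 is the commonly quoted average per-user detection rate;
# treat both the model and this value as illustrative assumptions.

def proportion_found(n_users, p=0.31):
    """Expected share of usability problems uncovered by n_users."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 8, 15, 40):
    print(f"{n:>2} users -> ~{proportion_found(n):.0%} of problems found")
```

The curve flattens fast: a handful of users surfaces most problems, and by the 15-40 user range mentioned above you are deep into diminishing returns, which is exactly why "hundreds of users" buys statistical comfort rather than additional insight.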
Claim 3: "If it takes more than three clicks, forget it".
Source: Unknown. It probably made the rounds of dozens of Silicon Valley startups in the late, roaring '90s.
Problem: This "3 click rule" metric is e-commerce centric. Three clicks to the user destination is a metaphor for saying "don't take the user down the garden path to do something". The 3-click rule losses validity in other domains where users will naturally click 10 times to research an issue or purchase.
Claim 4: "Navigation is not important. Users don't care where they are in the website".
Source: A popular "customer experience" guru and evangelist.
Problem: This is a new one (Feb 16th, 2004), recycled from something guru Jakob Nielsen said a few years back to the effect that navigation was "overdone" on many sites. In the new version, we are told that consistency is NOT necessary and does not apply to websites. Beyond falling on the floor with laughter, the problem here is that while users don't appear to be consciously concerned with navigation, their unconscious behavior indicates otherwise. A simple fact every seasoned usability practitioner knows is that consistency increases ease of use, whatever the medium. Again, the prescription attributes its insights to "listening labs" (a reframed usability testing lab with questionable methodology of its own).
Skilled observation by professionals who understand consumer cognition can go a long way toward preventing sweeping generalizations about user behavior. For more on understanding unconscious customer behavior, see Gerald Zaltman's new book How Customers Think, which presents physiological evidence of consumer behavior from brain scans.
Claim 5: "Website usability can be measured by proprietary software, agents or algorithms".
Source: a) a now defunct company and b) a new consultancy with a similar story.
Problem: Because usability involves understanding complex, dynamic, state-dependent cognition, it is virtually impossible to model user behavior with a bot, agent, or algorithm. For example, how can a machine model semantic interpretation? It can't. I worked for a time with a company that claimed it had invented a "technique that models human perceptual, cognitive, and motor behavior, and is programmed with a set of characteristics and Web-browsing behavior that represents the way an average user sees, thinks, and moves through a Web site".
The claim is completely false and was disproved by a world authority at Xerox PARC. I even compared four automated tests against four equivalent real usability tests, and the automated algorithmic approach failed consistently and dramatically. I realized that it is impossible to model how a user makes sense of a website or interprets content, or to predict their expectations and train of thought. Yet a new usability consultancy (one that refuses to provide basic details about its methodology) has "invented" a proprietary algorithm for assessing competitive usability performance, capturing data such as scrolling, scanning, typing in data, reading text, clicking, and annoyance. Sounds too good to be true. You don't get to find out unless you become their client, the President told me...
What is the antidote to methodology madness? As the Latin term "caveat emptor" (let the buyer beware) implies, the best thing you can do is think for yourself, do your homework, and compare and contrast the information. Ask a seasoned practitioner if you are not sure.
For the usability consulting industry, the agenda ought to include the following:
1) Clarifying and providing rigorous detail about proprietary methodologies (including peer review).
2) Promoting integrity by serving prospects and clients with unbiased, non-partisan information.
3) Building and expanding upon existing agenda-free techniques and methods that serve the greater good of the community.
I personally don't believe that "new" usability methodologies should be kept proprietary under the guise of commercial protection. Best-practice research is not like a new technology or invention. Usability is about understanding user behavior, and there is nothing proprietary about human behavior.
I also don't think it serves the industry, or the pursuit of integrity for that matter, to claim that a technique is the "secret sauce". That's like one lawyer claiming to have a better methodology for practicing law than another attorney. There are people and companies who are competent and skilled, and there are those who are not.
Best Wishes,
FS