…where it was Todd’s turn to buy the beers. While Todd was elbowing his way through the crowd at the bar, David and I started the discussion without him.
Philip: I just read an article by Jeff Sauro where he posed the intriguing question: Is Usability a Science?
“Is manufacturing a science? Is marketing a science? Is engineering a science? None of these are sciences. But they all can be informed by science or ignore science. To be ‘scientific’ is to rely on observation and measurement instead of intuition and superstition.”
This got me thinking. Before I tell you where I stand, what’s your view?
Science and empiricism
David: When I think about science I think about empiricism. And usability is certainly an empirical discipline. We identify assumptions, create hypotheses to test our assumptions, and then create ‘tests’ (which could be real-world observations or lab-based studies) to validate or invalidate the assumptions.
Philip: I agree, usability is test-based, methodical and rigorous. But is that enough to make it a science? It’s like Jeff’s point that manufacturing, marketing and engineering are not obviously sciences.
At this point, Todd brings beers and crisps. We tuck into both and bring him up to speed with the discussion.
Todd: I’d say it’s science if we apply the scientific method, and that could happen in most domains if you try hard enough.
How do we characterise science?
Philip: Why don't we back up a bit and remind ourselves what science actually is? Then we can see if usability, as a discipline, meets the criteria.
The most common definition of science, and the one that springs most readily to mind, is that it is a body of knowledge to do with the physical or natural world. It’s everything we know about the way the world is, how the world works and why the world works the way it does. It’s what we read about in most science textbooks.
But science has another meaning. Science is a self-correcting process of inquiry, a method of discovery, a way of generating and answering questions with a known degree of confidence. It’s a way of generating knowledge, not just the knowledge itself.
And a third important characteristic is the ability to falsify hypotheses. As Karl Popper famously pointed out, if your hypothesis is that all swans are white, no amount of white swan sightings can prove your theory—but just one black swan sighting can disprove it. If a hypothesis is not potentially testable and falsifiable then it simply doesn’t fall within the realm of science. This is why Popper considered Freud’s theories to be unscientific, and why scientists don't spend their time trying to disprove the existence of unicorns and fairies.
Todd: But aren’t you just agreeing with my point—that it’s science if we apply the scientific method?
The steps in scientific inquiry
Philip: Yes, but the scientific method requires some specific steps to be carried out. Let’s list those steps.
Scientific inquiry begins with a hypothesis about something—often the result of prior observations or research—that we frame as a specific and testable research question. To make this easier, let’s imagine that we hypothesize that memory is improved by consuming beer.
The three of us take a drink to toast this wonderful idea.
Todd: That’s a good example because it gives us a null hypothesis that is falsifiable—namely that beer has no effect on memory. We could test the null hypothesis in a memory experiment in which we manipulate the amount of beer between, say, three groups of people. Then, based on empirical data, we either reject the null hypothesis or fail to reject it. If we find no effect we fail to reject the null hypothesis and conclude that beer, as used in our study, has no measurable effect on memory. If we do reject the null hypothesis we have reason to be confident—and using statistics we can calculate how confident—that beer does affect memory.
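Todd’s three-group experiment can be sketched in a few lines of code. The scores below are invented purely for illustration, and the method shown (a permutation test on the spread of the group means) is just one simple way of putting a confidence figure on rejecting the null hypothesis:

```python
import random
import statistics

# Hypothetical memory-test scores (words recalled) for three groups
# that drank different amounts of beer. All numbers are made up.
groups = {
    "no_beer":  [14, 12, 15, 13, 16, 14, 15],
    "one_beer": [13, 12, 14, 11, 13, 12, 14],
    "two_beer": [10, 11,  9, 12, 10, 11,  9],
}

def spread_of_means(samples):
    """Test statistic: how far apart the group means are."""
    means = [statistics.mean(s) for s in samples]
    grand = statistics.mean(means)
    return sum((m - grand) ** 2 for m in means)

observed = spread_of_means(groups.values())

# Permutation test: if beer has no effect (the null hypothesis),
# the group labels are arbitrary, so shuffling the pooled scores
# should often produce a spread as large as the observed one.
pooled = [score for s in groups.values() for score in s]
sizes = [len(s) for s in groups.values()]

random.seed(42)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    resampled, i = [], 0
    for n in sizes:
        resampled.append(pooled[i:i + n])
        i += n
    if spread_of_means(resampled) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed spread: {observed:.2f}, p = {p_value:.4f}")
```

With made-up data as cleanly separated as this, the shuffled spread almost never reaches the observed one, the p-value comes out tiny, and we would reject the null hypothesis.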
David: Finally, we then interpret our findings and explain what they mean for memory and for cognition more generally.
Philip: No, there’s one last step. We must communicate our findings by publishing or presenting them to the larger community where we may need to subsequently defend them.
Where is the Grand Theory of Usability?
Philip: If that’s a description of science, what about usability? Given those quite rigorous criteria, I’m struggling to see how some usability methods—a formative usability test for example—can be considered science. However, I can see that a well-controlled summative test that compares, say, two interfaces that differ by virtue of some interesting variable could be.
David: Hang on. What’s your concern about a formative usability test not being science? I’d argue that hypotheses can be tested with formative test methods—you can even test hypotheses with field visits. ‘We believe that our users carry out these tasks every day in their job’ is a hypothesis that you may have when putting together an assumption persona. And we can ‘test’ that hypothesis. If we find that just 3 out of 10 people in our field visit carry out those tasks, irrespective of its statistical significance, we have evidence that our hypothesis is wrong.
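David’s 3-out-of-10 example can be given a rough number. Suppose (purely for illustration) that the persona assumption amounts to ‘around 90% of users carry out these tasks daily’; a quick binomial tail calculation shows how implausible the field data make that assumption:

```python
from math import comb

# Hypothetical: the persona assumption implies most users (say 90%)
# carry out the tasks daily. In the field visit only 3 of 10 did.
p, n, seen = 0.9, 10, 3

# Probability of observing 3 or fewer 'yes' answers out of 10
# if the 90% assumption were true (binomial lower tail).
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(seen + 1))
print(f"P(3 or fewer of 10 | 90% assumption) = {prob:.6f}")
```

The probability comes out at well under one in a thousand, so even without a formal significance test the assumption looks untenable—which is David’s point: rough-and-ready evidence can still falsify a hypothesis.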
This matters, because hypothesis testing that needs to meet a statistical bar (like ‘95% confident’) takes time—much more time than product teams typically have. But we can still tell teams which way the wind is blowing using rough-and-ready methods. And the great thing about iterative design is that it has a self-correcting process built in, just as in your description of science. You try something else and quickly test it with people to see if you are on the right track. Iterative design is a bit like empiricism on amphetamines.
Philip: I don’t disagree with this. I’m just coming back to whether a hypothesis is building towards a ‘grand theory’, which is another hallmark of science. ‘We believe that our users carry out these tasks every day in their job’ seems to me not much different in its ‘scientific’ nature from ‘Given the road works by the station, I hypothesise that the number 2 bus will come round the corner before the number 7 bus.’ It has an air of triviality to it.
David: I disagree about these assumptions being trivial. Maybe it’s the example I gave, but the field of design is littered with failed products because people didn't question their assumptions (because the answer seemed ‘obvious’). I’m sure Darwin didn't start with his theory of evolution the day he set foot on HMS Beagle. He made observations and thought, ‘That’s funny, what if …?’ I’d argue he was doing science at that point, not just at the point where he had a grand theory.
Philip: Yes, you’re right of course about Darwin’s approach… he was doing science right from the beginning as he observed different species. But although he had no grand theory at the beginning, he was still building towards one. I just can't see how finding and fixing usability problems can be called science; otherwise my car mechanic is doing science too, as is my plumber, who are both using a kind of empiricism to find and fix problems. Empirical research is just one characteristic of science. Building towards, or adding to, a more general body of knowledge is also important.
Todd: Good point. So the question becomes: when I do research in usability am I contributing to a body of knowledge?
I think the answer is both ‘yes’ and ‘no’. It’s ‘no’ because I am mainly interested in finding and fixing problems in a system. The knowledge doesn't need to live beyond that.
David: But it’s also ‘yes’ because some general principles may emerge. Jakob Nielsen’s F-shaped heat map from eye-tracking web pages is a good example. He discovered that people don’t pay attention to the stuff on the right-hand side of a web page. That’s useful, practical advice for the team developing the web site. But it’s also of more general significance—it adds to a body of knowledge.
Philip: Or is it just a collection of facts? Do usability findings coalesce into an overarching theory of something? Remember, we’re not using the term ‘theory’ in the common parlance of having a hunch about something, such as ‘I have a theory that my football team will win this weekend because we’ve just signed a new player’. That’s a hunch, not a theory.
We’re using the term ‘theory’ in the scientific sense, in the way that Darwin and Newton and Einstein used the term to refer to a comprehensive and coherent explanatory model, like the Theory of Evolution, or the Theory of Gravity, or the Theory of Relativity.
But I think your F-shape example is a good one. If Nielsen’s phenomenon had been discovered in a psychology study exploring human visual attention, no one would hesitate to call it science, and the findings would map onto a larger body of knowledge that attempts to understand human vision and cognition.
So how does the field of usability stack up?
David: Although their Science Citation Index may be low, surely journals in the usability and human factors field are contributing to a bigger theory. Given that, and its use of empirical methods, I’d argue that usability is a science.
Philip: The jury is still out for me. I’m happy to acknowledge that usability does use scientific methods sometimes. But for the most part the findings are study-specific and seldom contribute to a larger body of knowledge or form a theory of anything.
In fact I’d say that usability practitioners are seldom thinking about any particular scientific theory when they design their studies. That’s not a criticism; it’s the nature of the job. So I’d say that, as a discipline, usability doesn’t really meet the standards of a true science. It’s more akin to what Jeff Sauro said in his article about marketing and engineering: the overall discipline is not a science, but it can benefit from some of the characteristics of the scientific method.
Todd: I think I could go either way on this, but—
Just then, in one of those coincidences that could only happen in a usability article, who should walk into the pub but Jeff Sauro himself. Jeff joins us at the table and David gets a round in while Todd and I summarise the discussion.
Jeff: I want to expand on Philip’s point about a ‘grand theory’. Most managers shun the idea of ‘theory’ because it sounds synonymous with ‘impractical’. Even though most designers and product developers don’t think of themselves as theory driven, they are in fact acting on some theory. When interface or product decisions are made, they are based on a theory—an implied explanatory model—that leads them to believe the design changes will result in a more usable interface or a better customer experience. The problem is that design teams rarely make these theories explicit—or worse, they act on the wrong theory. I’d argue that it’s the absence of conscious and trustworthy theories of cause and effect that makes usability seem so random, and probably makes us question the whole idea of the field being a science.
In his book, The Innovator’s Solution, Clayton Christensen writes that good theories proceed in three stages. Researchers:
- Describe the phenomenon. For example, researchers show how a poor design feature degrades “use” (this can be measured from attitudes and actions).
- Classify the phenomenon into categories. Researchers attempt to understand the repeating patterns that lead to usability problems (heuristics and guidelines are built on these).
- Create a theory to explain the results. Researchers assert how a theory can lead to cause and effect and prediction.
Our field does OK at Christensen’s first stage. But I’d argue we’re less advanced in the second stage and even further behind at the third stage involving cause and effect.
We can only trust a theory and claim what we do as 'science' when we can show that our recommendations (from the theory) lead to the desired outcomes.
Todd: I think you’ve hit the nail on the head there, Jeff.
It’s been fun kicking this question around, but, as I was about to say when Jeff walked in, I want to end with a reality check. Frankly, I don’t really think that calling usability a science or a craft is going to change the discipline one way or the other.
But I do think we can agree that, as usability practitioners, we should at least adopt a scientific way of thinking. By this I mean we should approach our work with a self-critical and naturally sceptical mindset, and that our methods, whenever circumstances and budgets allow, should employ the scientific method of investigation.
Philip: I can drink to that.
David: Me too—if I actually had a drink. Whose round is it?
About the author
Dr. Philip Hodgson (@bpusability on Twitter) holds a B.Sc., M.A., and Ph.D. in Experimental Psychology. He has over twenty years of experience as a researcher, consultant, and trainer in usability, user experience, human factors and experimental psychology. His work has influenced product and system design in the consumer, telecoms, manufacturing, packaging, public safety, web and medical domains for the North American, European, and Asian markets.