The terms 'quantitative' and 'qualitative' refer to kinds of data. The definitions of these terms are uncontroversial and can be found in any standard statistics textbook. Witte & Witte (2009), for example, present the distinction concisely, defining quantitative data as follows:
"When, among a set of observations, any single observation is a number that represents an amount or a count, then the data are quantitative."
So body weights reported by a group of students, or a collection of IQ scores, or a list of task durations in seconds, or Likert scale category responses, or magnitude rating scale responses, are quantitative data. Counts are also quantitative, so data showing size of family, or how many computers you own, are quantitative.
Witte & Witte define qualitative data as follows:
"When, among a set of observations, any single observation is a word, or a sentence, or a description, or a code that represents a category then the data are qualitative."
So 'yes-no' responses, people's ethnic backgrounds, or religious affiliations, or attitudes towards the death penalty, the presidential candidate you wish to vote for, or descriptions of events, speculations and stories, are all examples of qualitative data. Certainly, numerical codes can be assigned to qualitative responses (for example, 'yes' could be assigned 1 and 'no' could be assigned 2) but these numbers do not transform qualitative data into quantitative data.
The market researchers' fallacy
The Market Research Society also uses these terms to refer to the kinds of research methods used to collect quantitative and qualitative data. This can create confusion, because few methods used in behavioral research collect either type of data exclusively. In daily practice another tendency has emerged that creates even more confusion. Many practitioners (again, we notice this mostly in market research) informally use the term quantitative, or "quant", to refer to a study that employs a large test sample, and qualitative, or "qual", to refer to a study that employs a small test sample, with no clear threshold between the two. This latter use of the terms is simply incorrect: sample size does not determine whether data are quantitative or qualitative.
Read the definitions again, and note that the fail-safe way to distinguish between quantitative and qualitative data is to focus on the status of a single observation, or datum, rather than on an entire set of observations or data. When viewed as a whole, qualitative data can sometimes bear a striking resemblance to quantitative data. 57 'yes' responses vs. 43 'no' responses look like quantitative data, but they are not. Although these numbers are important (and essential for some statistical procedures) they do not transform the underlying qualitative data into quantitative data.
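The point that frequency counts feed statistical procedures without becoming quantitative data can be made concrete. As a minimal sketch (in Python, with a hypothetical function name not from this article), a chi-square goodness-of-fit test asks whether the 57/43 split differs reliably from a 50/50 chance split:

```python
import math

def chi_square_yes_no(yes, no):
    """Goodness-of-fit test of a yes/no count against a 50/50 split (1 df)."""
    n = yes + no
    expected = n / 2
    chi2 = (yes - expected) ** 2 / expected + (no - expected) ** 2 / expected
    # Survival function of the chi-square distribution with 1 degree of
    # freedom, via the complementary error function: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = chi_square_yes_no(57, 43)
# chi2 is 1.96 and p is about 0.16, so with n = 100 this split does not
# differ significantly from chance at the 0.05 level
```

The analysis operates on the counts, but each underlying observation remains a categorical 'yes' or 'no'.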
The case of rating scales
Rating scales present an interesting case because they are used to capture subjective opinions with numbers, and the resulting data are often assumed to be qualitative. However, rating scales are not designed to capture opinions per se, but to capture estimations of magnitude. Rating scales do not produce qualitative data, irrespective of what the end-point labels may be. Data from Likert scales and numeric (e.g. 1-10) rating scales are quantitative. These scales assume equal intervals between points. Furthermore, they represent an ordering, from less of something to more of something, where that 'something' may be ease-of-use, satisfaction, or some other construct that can be represented in an incremental manner. In short, rating scale data approximate interval data and so lend themselves to analysis by a range of statistical techniques, including ANOVAs. Qualitative data do not have these properties: they cannot be ordered along a continuum or compared in terms of magnitude (although qualitative data can still be analyzed statistically).
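To illustrate the kind of analysis that rating scale data support, here is a minimal sketch of a one-way ANOVA F statistic computed from first principles in Python. The two groups of 7-point Likert ratings and the function name are illustrative assumptions, not data from this article:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across lists of ratings."""
    observations = [x for g in groups for x in g]
    grand_mean = mean(observations)
    # Between-group variation: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: how far each rating sits from its group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(observations) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 7-point Likert ratings of two competing designs
design_a = [4, 5, 5, 6, 7]
design_b = [2, 3, 3, 4, 4]
f_stat = one_way_anova_f([design_a, design_b])
# f_stat is 12.1; it would then be compared with the critical F value
# for (1, 8) degrees of freedom to decide significance
```

Treating each rating as a magnitude on an approximately interval scale is exactly what licenses this computation of means and sums of squares.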
While quantitative studies are concerned with precise measurements, qualitative studies are concerned with verbal descriptions of people's experiences, perceptions, opinions, feelings and knowledge. Whereas a quantitative method typically requires some precise measuring instrument, the qualitative method itself is the measuring instrument. Qualitative data are less about attempting to prove something than about attempting to understand something. Quantitative and qualitative data can be, and often are, collected in the same study. If we want to know how much people weigh, we use a weighing machine and record numbers. But if we want to know how they feel about their weight, we need to ask questions, hear stories, and understand experiences. (See Patton, 2002, for a comprehensive discussion of qualitative data collection and analysis methods.)
Subjective and objective data
Another frequent source of confusion, especially in the context of qualitative and quantitative data, is the distinction between subjective and objective data. The 'rule' is that subjective data result from an individual's personal opinion or judgement, not from some external measure. Objective data, on the other hand, are 'external to the mind': they concern facts and the precise measurement of things or concepts that actually exist.
For example, when I respond to the survey question "Do you own a computer?" my answer "Yes" represents qualitative data, but my response is not subjective. That I own a computer is an indisputable fact that is not open to subjectivity. So my response is both qualitative and objective. If I am asked to give my general opinion about the price of computers, then my response "I think they are too expensive" will be both qualitative and subjective. If I am asked to report the chip speed of my computer and I reply "2 GHz" then my response is both quantitative and objective. If I respond to the question "How easy is your computer to use on a scale of 1 to 10?", my answer "seven" is quantitative, but it has resulted from my subjective opinion, so it is both quantitative and subjective.
|            | Quantitative                                                           | Qualitative                           |
|------------|------------------------------------------------------------------------|---------------------------------------|
| Objective  | "The chip speed of my computer is 2 GHz"                               | "Yes, I own a computer"               |
| Subjective | "On a scale of 1-10, my computer scores 7 in terms of its ease of use" | "I think computers are too expensive" |
Confusion often arises when people assume that 'qualitative' is synonymous with 'subjective', and that 'quantitative' is synonymous with 'objective'. As you can see in the above examples, this is not the case. Both quantitative and qualitative data can be objective or subjective.
Beware of smoke and mirrors
We could put all of this down to troublesome semantics and dismiss the matter as purely academic, but clarity of thought and understanding in this area is critically important. Misunderstanding and misusing these terms can signal a poor grasp of one's data, and may reduce the impact of study results on product design decisions. It can also result in the wrong analysis, or in no analysis at all, being conducted on data. For example, it is not uncommon for usability practitioners to collect subjective rating scale data and then fail to apply the appropriate inferential statistical analyses, often because they have mistakenly assumed they are handling qualitative data and, again erroneously, that such data cannot be subjected to statistical analysis. It is also not uncommon for practitioners to collect nominal frequency counts and then make claims and recommendations based solely on unanalyzed mean values.
Handling data in this casual way can reduce the value of a usability study, leaving an expensively staged study with a 'smoke and mirrors' outcome. Such outcomes are a waste of company money, they cause product managers to make the wrong decisions, and they can lead to costly design and manufacturing blunders. They also reduce people's confidence in what usability studies can deliver.
The discipline of usability is concerned with prediction. Usability practitioners make predictions about how people will use a website or product; about interaction elements that may be problematic; about the consequences of not fixing usability problems; and, on the basis of carefully designed competitive usability tests, about which design a sponsor might wisely pursue. Predictions must go beyond the behavior and opinions of a test sample: we care about the test sample only insofar as it is representative of our target market. But we can have a known degree of confidence in the predictive value of our data only if we have applied appropriate analyses. Failing to apply inferential statistics can be a serious oversight. Such an omission could be justified only if we did not care to generalize our results beyond the specific sample tested. This would be a very rare event, applying only if our test participants were so specialized that they constituted the entire population of target users.
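What a "known degree of confidence" looks like in practice can be sketched with a confidence interval. As a minimal example, assuming a hypothetical task-completion result (8 of 10 participants succeed) that is not from this article, the Wilson score interval is one standard way to quantify how far a sample proportion can be generalized:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (z=1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    margin = (z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))) / denom
    return centre - margin, centre + margin

# Hypothetical result: 8 of 10 test participants complete a task
low, high = wilson_interval(8, 10)
# low is about 0.49 and high about 0.94: an 80% sample success rate is
# consistent with a population completion rate from roughly 49% to 94%
```

With a sample that small, the interval is wide, which is precisely the information a product manager needs before acting on "8 out of 10 users succeeded".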
Power and understanding
Usability experts collect both qualitative data (usually during early contextual research and during formative usability testing when identifying usability problems) and quantitative data (usually during summative testing when measuring the usability of a system). In both cases they typically focus on collecting data that are objective and that result from observable user behavior. Is it better to collect one kind of data over another? Usually both are required to get a full understanding of a user's experience with a product or system. But we like what the astronomer and author Carl Sagan had to say on the matter. It borders on the poetic, and it made us think. So we'll leave the last word to him:
"If you know a thing only qualitatively, you know it no more than vaguely. If you know it quantitatively — grasping some numerical measure that distinguishes it from an infinite number of other possibilities — you are beginning to know it deeply. You comprehend some of its beauty and you gain access to its power and the understanding it provides."
Sagan, C. (1997) Billions and Billions. Ballantine Books.
Patton, M. Q. (2002) Qualitative Research and Evaluation Methods (3rd Edition). Sage Publications.
Witte, R. S. & Witte, J. S. (2009) Statistics. John Wiley & Sons.
About the author
Dr. Philip Hodgson (@bpusability on Twitter) holds a B.Sc., M.A., and Ph.D. in Experimental Psychology. He has over twenty years of experience as a researcher, consultant, and trainer in usability, user experience, human factors and experimental psychology. His work has influenced product and system design in the consumer, telecoms, manufacturing, packaging, public safety, web and medical domains for the North American, European, and Asian markets.