If you listen carefully you can hear it. The sound of marketers and product developers lamenting the shortcomings of their focus group research data, and the stifled groans of companies falling on their metaphorical swords. These noises are not going unheeded. The groundswell of opinion among marketers and researchers alike is that all is not well with focus groups, and that something needs to be done about it.
A new consensus
In his recent Slate Magazine article, "Lies, Damn Lies, and Focus Groups?", Daniel Gross strongly challenges the effectiveness and value of focus groups for informing product development and marketing. He draws attention to the widely documented mismatch between what people say about product concepts in focus groups and the way they actually behave when it comes to making purchases, a mismatch that costs companies millions of dollars in misdirected product development efforts. Gross is not alone in his views on what is unquestionably the most widely used research method on the planet. Kay Polit, principal analyst at the global management consultancy A.T. Kearney, refers to focus groups as "a faulty process". Mary Lou Quinlan, founder and CEO of Just Ask A Woman, calls focus groups "a dangerous way to get market intelligence". Dev Patnaik of design strategy firm Jump Associates likens focus groups to:
…a customer terrarium, with people behind glass—taken out of their natural surroundings and observed for scientific purposes… Focus groups are the crack cocaine of market research. You get hooked on them and you're afraid to make a move without them.
And authors Joseph Pine and James Gilmore refer to focus groups as "the great lie". In their opinion, "The guidance from focus groups can be downright dangerous."
Lies? Dangerous? A faulty process? These are not encouraging testimonials upon which to stake millions of dollars, or a company's future. But how justified are these concerns?
Some real examples
- Over-reliance on focus groups failed NBC, whose sitcom Coupling (a remake of a Brit-com, intended to replace Friends) relied for direction, as most TV pilots do, on focus group responses. NBC had to pull the show from the air after only three disastrous episodes, less than a month after launch. Beryl Vertue, a lead writer on the original British show, says:
There's a huge reliance on ratings and focus groups and far, far too little reliance on a gut instinct, and I think that's a pity. And ultimately, I think it's a mistake.
- Poor management of focus group data failed the Pontiac Aztek, which is selling below original expectations. Its styling was poorly received by focus group respondents, which should have raised concern and prompted a possible re-design, points out Kay Polit:
Ideally, GM should have stopped Aztek in its tracks when it did so poorly in clinics. They might have been able to save it if they changed a few pieces of sheet metal, but instead somebody edited the data they got and senior management was making decisions on some pretty intensive editorialization… selling the vehicle at this point is probably going to cost them more than it did to design and build it.
- Focus groups failed the Chrysler PT Cruiser even though its sales now exceed expectations. Focus group data led the Chrysler planners to believe that they had, not a mass-appeal vehicle, but a niche vehicle. They geared up accordingly, and … underestimated volume.
- Focus groups failed a company targeting products to teenage girls. MIT Professor Justine Cassell, author of a thought-provoking piece entitled "What Women Want", reports her experience working with the company. Following a series of focus groups, the company concluded that what teenage girls wanted was technologically-enhanced nail polish. This was a happy coincidence, as technologically-enhanced nail polish was precisely what the company produced! However, in Cassell's own research with 3,062 children (60% of whom were girls) in 139 countries, in which the children were invited to describe what they would like to use technology for, not a single one of them mentioned technologically-enhanced nail polish!
- Reflecting what is now a well documented lack of positive correlation between what people say and what they actually do, the Yankelovich Minute Monitor recently provided data listing the top six attributes that respondents said would most strongly influence their purchase decisions for an SUV, alongside a list of the actual decision criteria used at the point of purchase. You can guess the punch line. Not one of the six nominated attributes actually played a role in the final purchase decision.
And so on and so forth. You get the picture. That these cases are not exceptional is evidenced by the fact that a staggering 80% of new products fail within the first six months of launch in spite of most of them going to market on the back of seemingly strong market research data. Data, incidentally, that cost $1.1 billion in 2001. An 80% failure rate?! This is not a subtle clue that something is wrong. It is like turning on the light and getting an electric shock eight times out of ten!
Why focus groups fail
So why do focus groups result in such costly blunders? After all, focus groups:
- Have a long history (they were first used over 60 years ago by US government sociologists investigating the effectiveness of WWII military propaganda movies);
- Are widely used;
- Have unquestionable face validity;
- Are quick and easy to design and use;
- Seem to be obviously in direct touch with "the voice of the consumer".
Of course, like any research method, focus groups need to be used appropriately, and they need to be put in the hands of research experts. Focus groups, thoughtfully prepared and effectively conducted, can meet certain objectives very well. I have had success with focus group data on numerous occasions in the USA and in Europe; and I have seen them used well by others.
Focus groups used for the purpose of idea generation, rather than for market verification, can be particularly effective. Ideas are ideas, and all is grist to the mill. I have conducted focus groups with telecoms managers, engineers, surgeons, nurses, policemen, and firemen, and these very specific group sessions do have a genuine and explicit focus, and can provide real insights. These are typically complemented by actual in-the-field ride-alongs and work-alongs, so that behaviours discussed in the focus groups can then be experienced first hand.
But I have also experienced less useful outcomes, and less insightful data, from groups representing more general market segments, such as teenagers, home-makers, and general consumers.
There are a number of important reasons, familiar to most of us, why most focus groups do not fare as well as they could. It is easy to point to methodological design flaws or badly moderated sessions. Fingers can be pointed at poor and unrepresentative sampling, at misleading data interpretation, at badly written reports, or at recommendations that are ignored. None of these things is conducive to good research, no matter what the method. But none of them is the fundamental problem. The fundamental problem is that, in spite of what conventional wisdom tells us, it is not the voice of the consumer that matters. What matters is the mind of the consumer. The big mistake is in believing that what the mind thinks, the voice speaks.

On that thought, I am reminded of an experience in Newcastle (ironically, one of the most useful focus groups I ever ran) where the participants were all mildly drunk. They had been waiting in the bar of a hotel for the focus group to begin, and they all arrived at the room carrying a pint of beer in each hand! Now, I am not advocating this as a technique, but I did get the distinct feeling that I was circumventing their conscious awareness and accessing their genuine thoughts and beliefs!
Insight or hindsight?
There is a reason why "unarticulated needs" go unarticulated. Behavioural researchers have long known that expert behaviours (and consumers are nothing if not experts at their own daily behaviours) are all but impossible to introspect upon, and so are difficult to articulate reliably. We have known for almost 30 years (see Nisbett and Wilson's classic Psychological Review paper) that people have little, if any, reliable access to the cognitive reasoning that underlies their decision making, and that in most instances people are unaware of the factors that influence their responses. This does not mean that respondents cannot provide answers to the "Why?" questions by which most focus group moderators live and breathe. But it does mean that those answers are not made on the basis of true introspection. Instead, responses frequently reflect a priori implicit causal theories about the extent to which particular stimuli may plausibly be associated with given responses. In other words, rather than reflecting deep and veritable cognitive processing, respondents' explanations for their decisions are frequently created on the fly in order to fit the situation.
But even if (which is not the case, but let's pretend it is) respondents could reliably access their own reasoning processes, and could reliably report on their decision making so that the researcher was indeed collecting bona fide data, we cannot escape the fact that most conventional focus groups actually measure the wrong thing. They do not measure what people think when making a purchase. They measure what people think when participating in a focus group. The psychological, sociological, neurological, and even pecuniary factors bearing on a person's decision making while they are participating and responding in a focus group are not the same factors that bear on decision making when the same person makes an actual purchase. According to Harvard Business School professor Gerald Zaltman, focus group methods can tap into only about 5% of people's thought processes: the 5% that lies above the level of consciousness. But it is the 95% of cognition lying below the respondent's level of awareness, the part that is not visible to focus groups, that is largely responsible for decision making.
Beyond the voice of the consumer
So we need to start considering more effective and more reliable methods for discovering consumer needs and preferences. We need to put aside the simplistic and overt questioning that telegraphs the researcher's intent, and approach the investigation (rather like a detective might proceed, in fact) from unexpected directions. Indeed, this is how Experimental Psychologists and Cognitive Scientists work. They do not tap into the complexities of human behaviour by simply asking people "what are you thinking?" and they seldom rely on people's introspection and self-report. Instead they use indirect methods of tapping into cognition and behaviour. Consumer research can learn much from the methods and tools of the Experimental Psychologist.
The key is to bypass the direct voice of the consumer. As an industry we agree that what people say and what people do are seldom the same thing, so it remains puzzling that we keep basing major decisions on what people say, while paying far less attention to what people do. What we ultimately want to know is the consumer's actual intent, and this can be secured by methods other than expecting focus group respondents to inspect their own cognitive machinery, understand what they find there, translate it into language, and then articulate it unambiguously. It should come as no shock to learn that methods that directly exploit and capture the actual behaviour of consumers result in extremely strong predictions of … actual behaviour! Cultural or social anthropology and ethnography (in the hands of expert Anthropologists and Ethnographers), and structured methods (such as Beyer and Holtzblatt's Contextual Design), are highly effective ways of revealing unarticulated consumer needs. Surely it is no coincidence that these methods actually observe people as they engage in their daily activities. Their findings can then drive the conception, development, and marketing of real product solutions: solutions that actually solve something. Although the resource investment of this approach to consumer research is often high compared to the cost of a few focus groups, it is minimal compared to the cost of getting development and marketing decisions wrong.
Later in the development process, lab-based or in-home user tests with high-fidelity prototypes can be used to refine the understanding of needs and to validate a solution's fit to a consumer's problem. Although usability testing methods are typically employed to identify user-interaction problems, they lend themselves equally well to understanding the extent to which a product is actually useful. These behavioural approaches are all effective by virtue of obviating the need for consumer introspection and conjecture. Gerald Zaltman's ZMET technique, for example (which exploits the use of metaphor and thus bypasses explicit consumer awareness), is well grounded in established cognitive, psychological, and brain sciences, having emerged from work with the MIT Brain and Behaviour group. This method, and similar methods that employ known Experimental Psychology techniques, are essentially methods for "interviewing the brain". They are designed to tap into that hidden 95% of cognition that focus groups cannot see.
Focus group methods are ripe for a rethink. It is time to start embracing methods that deliver stronger predictive value. Until the industry consistently adopts methods that get to the core of consumer behaviour, rather than depending so heavily on obvious top-of-mind consumer opinion, billions of dollars will continue to be invested each year in throwing that light switch, only to feel the shock of market failure.
About the author
Dr. Philip Hodgson (@bpusability on Twitter) holds a B.Sc., M.A., and Ph.D. in Experimental Psychology. He has over twenty years of experience as a researcher, consultant, and trainer in usability, user experience, human factors and experimental psychology. His work has influenced product and system design in the consumer, telecoms, manufacturing, packaging, public safety, web and medical domains for the North American, European, and Asian markets.