In 1958 Theodore Sturgeon, an American writer and critic of science fiction and horror stories, argued that 90% of everything (he specifically listed science fiction, film, literature and consumer products) is crap.

I remember thinking about Sturgeon’s Law during my first day in the world of product development. I had transitioned from a university science research lab to a multinational telecoms company. As a curious form of welcome, one of my new colleagues shared with me the depressing fact that in the eight years she had worked for this company not a single product that she’d worked on had been a success. It came as a shock to the business world (though I’m sure not to my colleague) when after a few more years the company — which could trace its lineage back to Alexander Graham Bell — folded completely.

It seems that Sturgeon was onto something, because working on products that don't succeed is now the rule rather than the exception. It's generally asserted that about 90% of all new products fail in the marketplace within six months of their launch. (By the way, this figure varies between 65% and 95% depending on which book or article you read, and I’ve yet to see anyone cite actual study data. But the exact percentage is not really important — it is enough to know that most new products fail.)

This means that, purely on the balance of probability, the product, application, device, system or app you are working on right now is more likely to be a commercial failure than a success.

It might even end up in a museum, though maybe not the kind of museum you’d like it to be in. If you want to see some of the sorry 90% take a trip to the North American offices of market research giant GfK in Ann Arbor, Michigan. It is where products go to die. Among the more than 100,000 clunkers on display in this ‘Museum of Failed Products’, you may spot Clairol Touch of Yogurt Shampoo, Pepsi Breakfast Cola, Gillette For Oily Hair Only, Ben-Gay Aspirin, Colgate TV dinners and, of course, Crystal Pepsi.

The thing that should strike you is that, once upon a time, these were all active projects in major product companies, with project plans and timelines and launch targets, and with enthusiastic business and project managers, legal experts, accountants, designers and marketers meeting to debate mission-critical decisions, all of them convinced they were on to a winner.

They were wrong.

“Failure,” points out economist Paul Ormerod in his book Why Most Things Fail, “is the distinguishing feature of corporate life.”

Theodore Sturgeon would have just nodded: “Told you so.”

Cheerleaders, blind faith and ideas that won't die

It comes as a bit of a surprise, then, to know that company executives and employees, by and large, know quite well why products fail. It’s not a mystery.

For example, market research resource GreenBook lists these reasons for new product failure:

  • Marketers assess the marketing climate inadequately.
  • The wrong group was targeted.
  • A weak positioning strategy was used.
  • A less-than-optimal “configuration” of attributes and benefits was selected.
  • A questionable pricing strategy was implemented.
  • The ad campaign generated an insufficient level of awareness.
  • Cannibalization depressed corporate profits.
  • Over-optimism about the marketing plan led to an unrealistic forecast.
  • Poor implementation of the marketing plan in the real world.
  • The new product was pronounced dead and buried too soon.

Clearly, any and all of these factors can contribute to failure, but there’s something else going on too, and Isabelle Royer, Professor of Management at Jean Moulin University Lyon 3, nailed it in her Harvard Business Review article “Why Bad Projects Are So Hard to Kill”.

Analyzing the problem in two French companies, Royer uncovered, not incompetence or poor management per se, but a “fervent and widespread belief among managers in the inevitability of their project’s ultimate success.” This belief surges through an organization and gains momentum as everyone roots for the project like cheerleaders urging on a football team. This produces a ‘collective belief’ that tolerates only positive talk, positive data, and positive outcomes, and it blinds the project team to warning flags and negative feedback.

But it is precisely this kind of unquestioning belief — Royer refers to it as blind faith — that is anathema to scientists and critical thinkers. In science, nothing is taken on belief or blind faith. Everything — every idea, assertion or hypothesis — requires evidence, and every claim and every data point is challenged. In fact, in science it is a prerequisite that an idea or hypothesis must, in principle, be falsifiable, otherwise it is simply dismissed out of hand.

Science is a self-correcting method for discovering the truth about things. But first and foremost science is a way of thinking. As part of their training, scientists, irrespective of their specific discipline, acquire a set of techniques or ‘thinking tools’ that are continually sharpened through use.

This raises an interesting question. Can thinking like a scientist, rather than thinking like a cheerleader, help teams and individuals challenge dodgy product ideas, help kill off bad projects, and provide confirmation for potentially good product ideas?

I think it can.

Let’s take a look at the toolkit.

Thinking like a scientist

In his book ‘The Demon-Haunted World: Science as a Candle in the Dark’, Carl Sagan, scientist, astronomer, astrophysicist, cosmologist, author, and science communicator, presents a toolkit for critical thinking — he calls it his ‘Baloney Detection Kit’. Applying some or all of these tools is a guaranteed way to uncover errors, flawed thinking, false assertions, preposterous claims, hoaxes, frauds, flimflam, pseudoscience, deception, con tricks, scams, myths, superstitions, fantasy, fiction, mysticism, hocus pocus, outright lies and general BS. In science such thinking tools underpin the design of experiments and are used to challenge and test hypotheses, including a scientist’s own hypothesis, as well as to debunk spurious claims.

In product design and development we can use these tools to strengthen an idea’s claim for support, or to expose flawed assumptions, or to identify projects that should be shut down, and ultimately we can use them to ensure that critical ‘go/no-go’ decisions are based on verifiable evidence. If you've read anything about the Lean Startup movement, or about Lean UX in particular, some of these ideas may sound familiar.

Here is Carl Sagan’s Baloney Detection Kit, with Sagan’s own words shown in quotation marks.

1. Confirm the facts

“Wherever possible there must be independent confirmation of the facts.”

Require evidence. Don't take product mandates or design decisions at face value, or assume that people know what they are doing or that someone else will check the facts. Be a skeptic.

We’re not talking here about being negative or cynical, or being a curmudgeon or a naysayer. Thoughtful skepticism is a good thing. It’s what stops us from being gullible. Ask yourself, what’s driving this decision? What’s the evidence for believing that X, Y or Z is the case?

Evidence supporting a design or marketing decision might be in the form of results from a market research study, a usability test, or observational field research; or it might be an account manager’s record of customer requests for a certain product or feature, or a pattern of customer helpline questions. Or the decision to move in a certain direction may be a business decision supported by economic or trends data. Whatever form the evidence takes, confirm its source and its validity and reliability. I know I don't need to write this sentence but I will anyway: Just because you read a ‘fact’ in an article or on Twitter doesn’t mean it's true. If your source is the result of secondary ‘desk’ research, try to follow the references right back to the original source.

As an aside, here’s a question we might all ask ourselves: “When was the last time I made a specific design or marketing or (insert your specialty) decision based on actual verifiable evidence, rather than on my gut feel or on my, or someone else’s, opinion, or on the outcome of political debating?” Or for that matter, “When was the last time I double-checked the facts about something?”

2. Encourage debate

“Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.”

Discuss the evidence as a team. Is the evidence empirical? How was data collected? Does the research method (or the assumptions, if there was no research) stand up to scrutiny? Do independent sources of evidence converge to support the same decision? Was the evidence interpreted correctly? Do the decisions that resulted flow logically from the evidence? Note that debating the evidence is not the same thing as debating personal opinions and preferences — there must actually be some evidence on the table.

3. Remember that authorities can be wrong

“Arguments from authority carry little weight. Authorities have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that there are no authorities; at most, there are experts.”

Don’t pull rank in product design or marketing meetings. Instead, present data. Data trumps opinion, no matter whose opinion it is. If no one has data, use your position to authorize someone to do the work required to get it.

4. Develop more than one idea

“If there is something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among 'multiple working hypotheses,' has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.”

Think of all the ways one could solve the customer problem or meet the user need. Sketch them as storyboards or mock them up as low fidelity paper prototypes or cardboard models. See which ones fly best. How do you decide which idea to keep? Run experiments. Try to shoot them all down. Don’t try to prove them, try to disprove them. This is how science works. Let the data decide. Don't just ask people whether they like your concept. That’s a cop out and it does your company a disservice. Instead, design your research to pinpoint the idea that best resists all attempts to discredit it.
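
To make that concrete, here’s a minimal sketch, in Python, of what ‘try to disprove them’ can look like in a usability test. The prototypes, the task-success numbers and the 80% benchmark are all invented for illustration; the stance is what matters: each idea must survive an attempt to reject it against a pre-agreed standard.

```python
from math import comb

def binom_tail_le(k: int, n: int, p: float) -> float:
    """Exact probability of seeing k or fewer successes in n trials
    when the true success rate is p (one-sided binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Invented usability-test results: (task successes, participants) per prototype.
results = {"Prototype A": (18, 20), "Prototype B": (12, 20), "Prototype C": (9, 20)}

BENCHMARK = 0.80  # pre-agreed minimum task-success rate
ALPHA = 0.05      # threshold below which we reject a prototype

for name, (successes, n) in results.items():
    # A small tail probability means the observed successes are hard to
    # square with the prototype genuinely meeting the benchmark.
    p_value = binom_tail_le(successes, n, BENCHMARK)
    verdict = "rejected" if p_value < ALPHA else "survives, for now"
    print(f"{name}: {successes}/{n}, p = {p_value:.3f} -> {verdict}")
```

The statistics here are deliberately simple; the point is the Darwinian logic. Every prototype gets the same opportunity to fail, and the one that resists disproof earns its place.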

5. Keep an open mind

“Try not to get too attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don't, others will.”

Be open to changing direction (lean practitioners call this a pivot). You may think this is odd, but scientists rejoice when a hypothesis is proven wrong. It means they have nudged science forward, have added to the body of human knowledge and have advanced our understanding of the world. Be ready to change direction for the sake of the project. Being wrong is OK. It’s how we develop expertise.

6. Measure things

“Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course, there are truths to be sought in many qualitative issues we are obliged to confront, but finding them is more challenging.”

Quantifying things takes the ambiguity and guesswork out of decision-making. Whenever possible design your experiments to gather quantitative data, not just people’s opinions and comments. Note that Sagan is using the terms ‘quantitative’ and ‘qualitative’ to refer to the kind of data one should collect (precise numerical data vs. vague verbal data). He is not using the terms in the way market researchers currently use them to refer to a study design that has a large or a small number of respondents.
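
As a sketch of what quantifying a design decision might look like (the designs and numbers below are invented), here is a simple two-proportion z-test, using only the Python standard library, that asks whether two competing designs really differ in task-completion rate or whether the gap could plausibly be noise:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; the p-value is the two-sided tail beyond |z|.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented results: Design A completed by 34/40 participants, Design B by 23/40.
z, p = two_proportion_z(34, 40, 23, 40)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a real difference
```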

7. Check every link in the chain

“If there’s a chain of argument, every link in the chain must work, including the premise, not just most of them.”

Every part of an argument must stand up to scrutiny. Similarly, every element of an idea or product concept must work, or the weak links must be identified so they can be strengthened. In product development we can also apply this thinking to another form of chain. Over the course of a typical product development cycle, bursts of design activity are strung together like links in a chain, connected by stage gates, or checkpoints, that allow for progress and quality checks. The stage gates are an invitation to apply the thinking tools and to flag any concerns. Applied early they can help confirm or redirect a project; applied late they may still be able to save your company the embarrassment and cost of a failed launch.
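
A quick back-of-the-envelope calculation shows why a single weak link matters so much. If each of ten links in a chain of argument (or ten stages in a development cycle) is, say, independently 95% likely to be sound, the chain as a whole is far less trustworthy than any individual link:

```python
# Ten links, each independently 95% likely to hold.
link_confidence = 0.95
links = 10
print(link_confidence ** links)  # ~0.60, i.e. a 40% chance the chain breaks
```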

8. Apply Occam’s Razor

“This convenient rule of thumb urges us, when faced with two hypotheses that explain the data equally well, to choose the simpler.”

William of Ockham was a Franciscan friar, logician and philosopher who lived in the late 13th and early 14th centuries. He is known for his maxim (his metaphorical razor) that advocates cutting or shaving away unnecessary assumptions. He wrote: Numquam ponenda est pluralitas sine necessitate, which means, ‘Plurality must never be posited without necessity’. In other words: opt for the simpler and more parsimonious of available explanations or solutions.

Taking it a step further, design for simplicity. Don’t make your product — or your rationale for it — any more complicated than it needs to be. And kudos in advance to anyone quoting Occam’s Razor (in Latin) the next time your team is discussing feature creep.
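
For the quantitatively minded, information criteria such as AIC are one way to formalize the razor when you are choosing between models: among hypotheses that explain the data about equally well, prefer the one with fewer parameters. Here is a small illustrative sketch with invented data (not a recipe from Sagan or Ockham), comparing a straight line against a needlessly complex polynomial:

```python
import numpy as np

def aic_least_squares(y, y_hat, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)  # the 'truth' is a straight line

for degree in (1, 7):  # a simple hypothesis vs. a needlessly complex one
    coeffs = np.polyfit(x, y, degree)
    fitted = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic_least_squares(y, fitted, degree + 1):.1f}")
# The lower AIC usually goes to the simpler model: the razor, quantified.
```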

9. Test the hypothesis

“Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable, are not worth much.”

It’s rather unlikely, in the context of product ideation and development, that we’ll encounter truly untestable and unfalsifiable hypotheses of the kind that are sometimes postulated by armchair thinkers (e.g., the universe and everything in it including our memories came into existence just 10 seconds ago). However, we may still encounter ideas, assertions or arguments that cannot be tested or falsified for other reasons. Sometimes — especially in large corporations — the origin and rationale for embarking on a certain project can be a mystery to almost everyone on a team; sometimes the mandate has just descended from the higher echelons of the company and so goes unquestioned. Sometimes we build a new product for no other reason than that our competitor is building one. There's a sense in which these directives are untestable — but they can still be questioned. Why are the competitors building the product? What do they know that we don't? How can we make our version better than theirs?

Other times, an idea or hypothesis may be predicated on data that relate to questions we consider to be ‘untestable’, or questions that make no sense, or that respondents cannot reasonably be expected to answer: “What do you think doing laundry will be like 10 years from now?” or “How likely are you to buy this product?”

Always ensure that your ideas are testable and that your research questions can return valid answers.

10. Conduct experiments

“The reliance on carefully designed and controlled experiments is key. We will not learn much from mere contemplation.”

Although this is not strictly one of the thinking tools, Carl Sagan rounds out his Baloney Detection Kit by advocating that you carry out experiments. This is not just a general recommendation to ‘do some research’; it is a specific direction to conduct carefully designed experiments in order to decide among competing ideas, solutions, explanations or hypotheses (and is a key principle of Lean UX). This means having control conditions, eliminating sources of error, avoiding biases and, whenever possible, running double-blind experiments. In a double-blind experiment neither the test participants nor the experimenters know the hypothesis or the conditions. This means that if you were testing your new concept against competitor products, for example, no one involved in the study would know which was your product and which were the competitor products until after the data had been collected.

Note that surveys and focus groups are not experiments.
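
To show how little machinery blinding actually needs, here is a minimal sketch of a double-blind setup for the competitive test described above. The product names, codes and participant numbers are all placeholders:

```python
import random

products = ["Our concept", "Competitor 1", "Competitor 2"]  # placeholders

# Assign opaque codes so that neither participants nor moderators can
# tell which product is which during the sessions.
codes = ["Product A", "Product B", "Product C"]
random.shuffle(codes)
sealed_key = dict(zip(codes, products))  # locked away until analysis

# Randomize the order in which each participant encounters the coded products.
participants = [f"P{i:02d}" for i in range(1, 13)]
schedule = {p: random.sample(codes, k=len(codes)) for p in participants}

print(schedule["P01"])  # e.g. ['Product C', 'Product A', 'Product B']
# Only after all the data are in is sealed_key opened and the codes decoded.
```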

10% here we come!

Development teams spend a lot of time and effort debating how to build a product right, but far less time and effort debating whether they are building the right product. The situation described by Professor Royer is not uncommon. Sooner or later most projects pass a point of no return, gathering pace like a runaway train until even the team managers have no way of changing course or stopping the momentum.

But it’s worth noting that most project teams do have skeptics on board, people who may have concerns about a project’s direction. Some are naturally vocal, some just grumble in the background, and others may not have the confidence to stick their heads above the parapet because they are unsure how to critique an idea or challenge an argument.

Sagan’s Baloney Detection Kit provides just the thinking tools that are needed. Try them out next time you attend a project kick-off meeting or read a research report pertaining to your project. And don't forget to use the Baloney Detection Kit to evaluate your own ideas and arguments before presenting them to others.

We may not be able to change the outcome of Sturgeon’s Law, but by applying these thinking tools early in the design lifecycle — ideally during the ideation and concept formation phase — and conducting experiments to test early models, we can increase the probability that our product launch will land safely among the successful 10% rather than in the… well, you know where.

About the author

Philip Hodgson

Dr. Philip Hodgson (@bpusability on Twitter) has been a UX researcher for over 25 years. His work has influenced design for the US, European and Asian markets, for everything from banking software and medical devices to store displays, packaging and even baby care products. His book, Think Like a UX Researcher, was published in January 2019.


