Published in 1984 by Noriaki Kano, professor of quality management at the Tokyo University of Science in Japan, the Kano Model proposes a way of predicting customer satisfaction. Simply put, the model claims that not all features are equal – some affect satisfaction far more than others.
To discover the most important features, Kano argued, design teams need to uncover each feature’s emotional impact on customers. You do this by assigning each feature a grade.
Kano’s model maps individual features onto five grades. Each grade indicates how customers perceive a specific feature and how it relates to customer satisfaction. The five grades are:
- Attractive
- One dimensional
- Expected
- Indifferent
- Reverse
After you map your features to the grades, you’ll know which features will have the biggest emotional impact and so you can decide where to allocate your design effort.
Let’s look at each of the grades in turn.
Attractive
Also called “delighters”, these are the “wow” features – the features that put a smile on your face. These are the differentiators that make an application or device stand out.
Think about the first time you saw a touch-enabled scrolling smartphone or asked Siri for directions to the nearest Chinese restaurant. Those are delighters.
The essence of these features is that customers don’t usually expect them. They are a pleasing surprise that, once experienced, creates a desire to have and to use them – but if never seen, they are not missed.
One dimensional
These are the features where a direct, one-dimensional relationship exists between improving the quality of the feature and customer satisfaction.
Consider increased bandwidth, miles-per-gallon and free voice minutes: the “more” of the feature, the greater the customers’ satisfaction.
Expected
These are basic features which an application or service must provide if it is to stand any chance of success. When these features are not available, customers will simply look for other options. But unlike ‘one-dimensional’ features, improving ‘expected’ features does not increase satisfaction.
For example, making a call with a smartphone is a basic, must-have capability. If a new phone continually drops calls, it will disappear from the market.
Another interesting aspect to note is that with time, “attractive” features — that were considered differentiators when they first appeared — can quickly become “must have” features, like a touch screen on a mobile phone.
Indifferent
These are features that customers simply don’t care about. Customer satisfaction is unaffected by their improvement, because they are neither good nor bad.
Identifying these features is particularly important: any design effort spent improving them is effort wasted.
Reverse
Though rare, these are features that, when present, detract from customer satisfaction. These are features customers don’t want and would prefer to have removed from the app or service.
One infamous example is Clippit, the Microsoft Office assistant, which tended to irritate users with its ceaseless insistence on helping at every keystroke.
Applying the Kano Model to User Experience
There are three steps to applying the Kano model.
Step 1: Define features and user groups
Just as you would for any usability methodology, your planning, design and testing are relevant to specific user groups. So first, identify your users.
Next, identify which features to test for each group. There is simply no point in testing too many features: it is time-consuming and will exhaust your users. If you need to test a large number of features, consider testing iteratively or in separate groups.
This means that grading and insights, arrived at through the Kano model, are always in the context of the features and the user group tested.
Step 2: Get user feedback
This comprises two steps:
- Prepare your questions
- Get user feedback
Preparing the questions
The Kano model uses a survey with a very specific form of questionnaire, developed as part of the model, in which users are asked two questions about each tested feature:
- What are your feelings when the feature is included in the service/application? (Functional)
- What are your feelings when this feature is NOT included? (Dysfunctional)
This is the heart of the Kano model, and possibly the trickiest part to apply. Here’s why: contrary to what you may have been taught as a UX professional, you are asking users to imagine their feelings in a given state, rather than observing what they do.
A major issue to look out for at this point is question wording: it is very hard to phrase a survey question so that participants understand it without assistance. Poorly worded questions can invalidate the entire effort, so it is very important to test your questions in advance, with users and colleagues, to ensure they are clear.
Getting user feedback
When answering the two questions for each feature, users must select one of these predefined options:
- I like it this way
- I expect it this way
- I am neutral
- I can live with it this way
- I don’t like it this way
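The question pair and answer scale can be sketched as a small data structure. This is a minimal illustration, not part of the Kano model itself: the function name is hypothetical, and the 1–5 numbering assumes the options are numbered in the order listed above (the same numbering the grading example later uses).

```python
# Illustrative sketch of the Kano survey structure.
# The 1-5 numbering of the answer options is an assumption,
# matching the order in which the options are listed above.
ANSWER_OPTIONS = {
    1: "I like it this way",
    2: "I expect it this way",
    3: "I am neutral",
    4: "I can live with it this way",
    5: "I don't like it this way",
}

def question_pair(feature):
    """Build the functional/dysfunctional question pair for one feature."""
    return (
        f"What are your feelings when {feature} is included "
        "in the service/application?",
        f"What are your feelings when {feature} is NOT included?",
    )

functional_q, dysfunctional_q = question_pair("voice search")
```

Each user response to a feature is then simply a pair of numbers, one per question, which is what gets mapped to a grade in step 3.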
Note: I have found no “official” guidelines or best practices indicating how many users to test with. Some sources state a minimum of 12 users, while others suggest that 5-10 users can provide a good indication of a feature’s grade.
Note also that this isn’t a replacement for face-to-face testing. It is always best to observe actual reactions, for example by giving users hands-on experience of both included and absent features. This is really about triangulating data.
Step 3: Map feedback and grade features
The accumulated responses are then gathered in the Kano model’s grading map, which correlates the two responses into a final grade for each feature:
A – Attractive
E – Expected
O – One dimensional
I – Indifferent
R – Reverse
Q – Questionable (reflecting unclear results which cannot be graded).
The following example shows one user’s response to a single feature: answering “1” on the functional question and “2” on the dysfunctional question grades this feature as “A”, i.e. “Attractive”.
The table shows how you arrive at a final grade from the functional and dysfunctional ratings.
Different users may rate the same feature differently, so the grade with the most responses is this feature’s final grade.
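The two steps above can be sketched in code: look up each user’s answer pair in the grading map, then take the most common grade. The lookup table below is the standard Kano evaluation table (consistent with the article’s example, where functional = 1 and dysfunctional = 2 yields “A”); the function names are illustrative, not part of the model.

```python
from collections import Counter

# The standard Kano evaluation table. Rows are the functional answer
# (1-5), columns the dysfunctional answer (1-5), using the option
# numbering: 1 = like, 2 = expect, 3 = neutral, 4 = live with,
# 5 = dislike. Letters: A = Attractive, O = One dimensional,
# E = Expected, I = Indifferent, R = Reverse, Q = Questionable.
GRADING_MAP = [
    # dysfunctional: 1    2    3    4    5
    ["Q", "A", "A", "A", "O"],  # functional = 1 (like)
    ["R", "I", "I", "I", "E"],  # functional = 2 (expect)
    ["R", "I", "I", "I", "E"],  # functional = 3 (neutral)
    ["R", "I", "I", "I", "E"],  # functional = 4 (live with)
    ["R", "R", "R", "R", "Q"],  # functional = 5 (dislike)
]

def grade(functional, dysfunctional):
    """Grade one user's answer pair (each 1-5) for a feature."""
    return GRADING_MAP[functional - 1][dysfunctional - 1]

def final_grade(responses):
    """Aggregate many users' answer pairs: the most common grade wins."""
    counts = Counter(grade(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

# The article's example: functional = 1, dysfunctional = 2 -> "A".
assert grade(1, 2) == "A"
```

For instance, `final_grade([(1, 2), (1, 2), (3, 3)])` returns `"A"`: two users graded the feature Attractive and one Indifferent, so Attractive wins.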
This is where you will do a lot of the tedious, monotonous labor of entering data and accumulating results. To help with this, we have prepared an Excel tool: you need only enter each user’s responses, and it automatically maps and aggregates them into a final grade.
A Final Note
This has been an introduction to what we hope you will find to be a useful tool in your work. The web provides many additional resources on applying the Kano model.
But though mapping features is important and useful, it cannot — and is not intended to — replace actual usability testing and iterative design processes.
We cannot predict the future, but we can try and maximize our impact and minimize wasted effort in aiming for the best apps and services we can provide.
About the author
Ori Zmora is a User Experience expert with the Experience Design Center (XDC) at Amdocs Ltd, where he designs and consults on a variety of projects, from enterprise software to mobile applications. Ori has been working in the fields of user experience, design and development of digital interfaces and digital advertising since 2001. He holds an HFI CUA certification and is a member of the Israeli Bar Association.