These checkpoints are based on the work of Julian Meltzoff (1997).
- Is the research question clearly stated?
- Do the introduction, statement of the problem, and overview of any literature or previous reports adequately set the background for the reader, and is this material consistent with the research question?
- Is it clear why the study was conducted?
- Given the research question and the background material, are the research hypotheses and objectives appropriate and clearly stated?
- Are key terms well defined?
- Is the independent variable appropriate given the question of the study? Are the levels of the independent variable appropriate?
- Is the dependent variable appropriate for the study?
- Are the criterion measures of the dependent variable appropriate, valid, and reliable?
- Are the scoring, rating, and judging procedures valid and reliable?
- Is the measuring apparatus (if any) accurate and reliable?
- Are the controls appropriate? Can the results be affected by variables that have not been controlled? Are the controls or control groups (if used) properly selected?
- Is the research design suitable to meet the objectives of the study? Is the research design appropriate to test the hypotheses and answer the research question?
- Are the methods and procedures clearly described in sufficient detail to be understood and replicated?
- Is the presentation sequence of test stimuli (including any randomisations or counter-balancing) appropriate?
- Are the test participants properly oriented and motivated? What is their understanding of the task? Are the instructions sufficiently clear and precise?
- Are there any signs of experiment bias in the design, data collection, assessment, analysis, or reporting?
- Are the participants properly selected? Is the sample representative and unbiased? Do the procedures adhere to the guidelines for the protection and well-being of participants?
- Is the sample size appropriate? Are the appropriate procedures used to assign participants to groups, treatments, or conditions? Are suitable techniques used to establish group equivalence, such as matching, equating, or randomising?
- Does participant attrition occur, and if so, does it bias the sample?
- Are bad data properly identified and set aside (not included in the final test data set), and are instances of bad data reported and explained as such?
- Have the data been appropriately analysed, sorted, categorised, grouped, and prioritised?
- Are descriptive statistics used? Are these accurate?
- Are the inferential statistical tests appropriate? Are the assumptions for their use met? Are there any errors in the calculation or presentation of statistical results?
- Are all graphs correctly labelled (both the X and Y axes)? Are data elements on graphs properly coded and identified?
- Are tables and figures clearly labelled and accurately presented and referenced in the text? Are results and findings correctly interpreted, properly reported, given meaning and placed in context?
- Are recommendations unambiguous? Do recommendations follow clear usability, human factors, or ergonomics guidelines?
- Are recommendations supported by references to prior literature or to industry standards?
- Is the discussion section of the report reasonable in view of the data?
- Are the conclusions valid and justified by the data?
- Are the generalisations valid?
- Do references (if used) match the citations in the text?
- Have ethical standards been adhered to in all phases of the research?
- What can be done to improve or re-design the study?
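The checkpoint on assigning participants to groups, treatments, or conditions can be made concrete with a small sketch. The helper below is illustrative only (not part of the original checklist): it shuffles a participant list with a seeded random generator so the assignment is reproducible, then deals participants round-robin into near-equal groups.

```python
import random

def assign_to_groups(participants, conditions, seed=None):
    """Randomly assign participants to conditions in near-equal groups.

    Seeding the generator makes the assignment reproducible, which
    supports the replicability checkpoint above.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(shuffled):
        # Round-robin dealing keeps group sizes within one of each other.
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# Example: 12 participants assigned to three conditions of 4 each.
groups = assign_to_groups(range(1, 13), ["A", "B", "C"], seed=42)
```

A fixed seed lets a reviewer reproduce the assignment exactly; in a live study you would record the seed rather than hard-code it.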
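The checkpoint on presentation sequence (randomisation and counterbalancing) is often addressed with a Latin square, in which every condition appears exactly once in each row (participant order) and each column (serial position). A minimal sketch, assuming a simple cyclic construction rather than any particular author's design:

```python
def latin_square(conditions):
    """Build a cyclic Latin square of presentation orders.

    Row i is the condition list rotated by i, so each condition
    occupies every serial position exactly once across rows.
    """
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Example: presentation orders for four test stimuli.
square = latin_square(["A", "B", "C", "D"])
```

Note that a plain cyclic square controls for serial position but not for immediate carry-over effects; a balanced Latin square would be needed for that.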
Meltzoff, J. (1997). Critical Thinking About Research: Psychology and Related Fields. American Psychological Association.
Use these checkpoints as a working checklist when evaluating a usability study.
About the author
Dr. Philip Hodgson (@bpusability on Twitter) holds a B.Sc., M.A., and Ph.D. in Experimental Psychology. He has over twenty years of experience as a researcher, consultant, and trainer in usability, user experience, human factors and experimental psychology. His work has influenced product and system design in the consumer, telecoms, manufacturing, packaging, public safety, web and medical domains for the North American, European, and Asian markets.