Test use is ubiquitous in contemporary society; likewise, test misuse cuts across many fields, applications, and types of tests. Test misuse can be substantially reduced through training in test policy, selection, administration, scoring, interpretation, and reporting.

Responsible Test Use: Case Studies for Assessing Human Behavior, Second Edition is an interdisciplinary reference source for promoting quality assurance in testing. It consists of 85 training cases that illustrate a variety of problems related to test use. Twenty-eight of the cases are new for this edition.

Each case is drawn from real life and includes

  • a description of the incident

  • focus questions to encourage active involvement with issues raised by the case

  • an analysis of the problems posed, along with possible ways of handling the situation better

  • a listing of empirically derived dimensions of test misuse that are likely to generalize to new testing situations

Cases developed for this edition represent three test settings not covered in the previous edition: computer and Internet testing formats, forensic settings, and cross-cultural settings involving English language learners. In addition, revised cases cover counseling and training, education, employment, mental health, neuropsychology, and speech-language-hearing settings.

This book is intended for use as a supplementary textbook in undergraduate and graduate measurement courses and for trainers of specialized assessment practitioners, such as school psychologists and educational diagnosticians. It can also serve as a valuable tool for in-service training or continuing education, for licensing exam preparation, and as a resource for other individuals committed to improving assessment practices.

Table of Contents

Foreword
—Kurt F. Geisinger



I. Introduction

  1. Overview
  2. Elements and Competencies of Good Test Use

II. Case Studies

Section 1: Professional Development: Training, Responsibility, and Ethics

  • Case 1. Personality Evaluation in a Scholarship Program
  • Case 2. Assessing English Language Learners
  • Case 3. This Gun for Hire
  • Case 4. Incumbent Testing for Apprenticeship Positions
  • Case 5. Assessing Police Officers
  • Case 6. Practicing Within Areas of Competence
  • Case 7. Untimely Timelines
  • Case 8. The Untrained Interpreter
  • Case 9. Use of Computer-Prepared Test Interpretations
  • Case 10. The Star-Struck Clinician
  • Case 11. Guilty for Lack of Evidence
  • Case 12. Dealing With the Press
  • Case 13. Buyer Beware
  • Case 14. When Test Preparation Goes Too Far
  • Case 15. The Uninformed Instructors
  • Case 16. Passing the Buck
  • Case 17. The Devil Is in the Details
  • Case 18. The Slick Salesman
  • Case 19. Personnel Screening for Emotional Stability
  • Case 20. Disposition of Psychological Test Reports
  • Case 21. Proper Interpretation of Test Results Requires Training
  • Case 22. Expert Testimony Can Hurt You or Help You
  • Case 23. A Sensitive School Psychologist
  • Case 24. Dealing With Parental Concern About Ethnic Bias in Testing
  • Case 25. Compromised Test

Section 2: Test Selection

  • Case 26. Using the Wrong Cutoff Score From the Wrong Test
  • Case 27. Unguided Referral
  • Case 28. Making Up Your Own Tests
  • Case 29. Poor Choices and a Misdiagnosis
  • Case 30. How Competent Do You Really Have to Be?
  • Case 31. Misleading Public Statements
  • Case 32. The Graduate School Admissions Conundrum
  • Case 33. Insufficient Assessment
  • Case 34. What Memory Problems?
  • Case 35. I Can't Read, but I Know What's Going On

Section 3: Test Administration and Scoring

  • Case 36. Standardized Administration Procedures
  • Case 37. Intellectual Assessment of a Bilingual Student
  • Case 38. The Incompetent Examiner
  • Case 39. Using Substandard Testing Equipment
  • Case 40. Unwanted Help Screens
  • Case 41. Not Following Established Protocol
  • Case 42. Testing Individuals Who Are Blind
  • Case 43. Scoring Errors Plague High-Stakes Tests
  • Case 44. Banking on the Test
  • Case 45. A Case of Failure to Communicate
  • Case 46. College Placement by Internet
  • Case 47. Testing College-Bound Students With Physical Disabilities

Section 4: Test Interpretation: Principles, Norms, and Psychometrics

  • Case 48. Pitfalls in Comparing Scores on Old and New Editions of a Test
  • Case 49. Are Observed Score Differences Trustworthy?
  • Case 50. Special Children Need Special Tests
  • Case 51. Are You Paranoid When People Really Are Against You?
  • Case 52. It's a Man's Job
  • Case 53. High School Rankings Rankle Educators
  • Case 54. The Puzzling Chinese Personality Profiles
  • Case 55. Responsible Interpretation of Test Score Differences
  • Case 56. The Use and Misuse of Psychological Testing
  • Case 57. The Lopsided High School Admissions Test
  • Case 58. The Below-Average Sicilian Gifted Students
  • Case 59. Date Matching
  • Case 60. The Missing Personality Test
  • Case 61. Don't Believe Everything You Hear
  • Case 62. A Faculty Member in Distress
  • Case 63. Test Results Without Interpretation
  • Case 64. Confusing Norm-Referenced and Criterion-Referenced Test Scores
  • Case 65. Evaluating Children With Hearing Impairments
  • Case 66. The Questionable Cutoff Score
  • Case 67. Immigrants Lose Financially
  • Case 68. Comparing Proficiency Levels on State and National Assessments
  • Case 69. An English Language Learner's Aptitude Test Anxiety
  • Case 70. Inconsistencies Between Test Results and Behavior
  • Case 71. Being Locked Up Makes A Difference
  • Case 72. Using Out-of-Level Testing and Grade-Equivalent Scores for Instructional Planning
  • Case 73. Assessing Similar Constructs in Different Cultures
  • Case 74. Narrowing Educational and Vocational Options

Section 5: Reporting Test Results to Clients and Administrative Policy Issues

  • Case 75. Borderline Practice
  • Case 76. What Does a Percentile Rank Mean?
  • Case 77. Conducting Individual Assessments
  • Case 78. The Well-Meaning Trainer
  • Case 79. The Right Test in the Wrong Way
  • Case 80. Computer Feedback in Career Counseling
  • Case 81. A Case of Speedy Selection
  • Case 82. Selecting Doctoral Students at Best State University
  • Case 83. Inappropriate Choices Affect Referral for Treatment
  • Case 84. Using Test Results to Promote a Prestigious Private School
  • Case 85. Saying Too Much on the Basis of Too Little


  1. Contributors of Incidents and Casebook Reviewers
  2. Index of Cases Classified by Competency
  3. Index of Cases Classified by Element



About the Authors

Key to Competencies and Elements

Author Bios

Lorraine D. Eyde, PhD, is a personnel research psychologist at the U.S. Office of Personnel Management, where she has been employed since 1971. She is a diplomate in industrial–organizational psychology (ABPP) and a fellow of four divisions of the American Psychological Association (APA): Psychologists in Public Service (Division 18); Evaluation, Measurement, and Statistics (Division 5); Society for the Psychology of Women (Division 35); and Society of Counseling Psychology (Division 17).

She is licensed to practice psychology in the District of Columbia. She has been a Visiting Mellon Fellow at Tufts University and received the Distinguished Leader for Women in Psychology citation from APA's Committee on Women in Psychology. She is a charter fellow of the American Psychological Society, served on its Board of Directors, and is a member of Sigma Xi.

Dr. Eyde has more than 40 publications, including a special journal issue on computerized testing, and was an associate editor of a casebook on ethics; her publications have been cited in more than 140 sources. She has made more than 60 presentations at APA's annual convention or other conventions or conferences, including four International Congresses of Psychology. Her areas of expertise include leadership, job analysis, testing individuals with disabilities, test misuse, and ethics.

Dr. Eyde helped to organize the Joint Committee on Testing Practices and chaired the 4-year interdisciplinary research project of the Test User Qualifications Working Group and the Test User Training Working Group. She has served on APA's Membership Committee, Board of Professional Affairs, and Psychological Assessment Work Group, and she served on APA's Task Force to Revise APA's Ethical Principles in the Conduct of Research With Human Participants.

Gary J. Robertson, PhD, is a research psychologist specializing in the development of educational and psychological tests. He holds a PhD in educational and psychological measurement from Columbia University.

Over the course of his career, he has managed the development of many group and individually administered tests published by Harcourt Brace and The Psychological Corporation (now NCS Pearson); American Guidance Service, Inc. (now NCS Pearson); and Wide Range, Inc. (now Psychological Assessment Resources, Inc.). Among these are the widely used Otis-Lennon School Ability Test, Metropolitan Readiness Tests, Peabody Picture Vocabulary Test, Kaufman Assessment Battery for Children, Kaufman Test of Educational Achievement, Vineland Adaptive Behavior Scales, and the Harrington-O'Shea Career Decision Making System.

Dr. Robertson is coauthor of the Wide Range Achievement Test 4 (WRAT 4) and author of the Wide Range Achievement Test—Expanded. His professional interests are test scaling and norming, the history and development of standardized testing, intellectual assessment, assessing test user qualifications, and promoting the proper use of standardized tests and assessments.

He has served as the APA representative to the Joint Committee on Testing Practices and as a member of two of its working groups: the Test User Qualifications Working Group and the Test User Training Working Group. A fellow of APA Division 5 (Evaluation, Measurement, and Statistics), Dr. Robertson has served in various capacities on the Division 5 Executive Committee as well as on the Executive Committee of the Association for Assessment in Counseling of the American Counseling Association.

Samuel E. Krug, PhD, received both his MA and PhD in psychology from the University of Illinois, Urbana–Champaign. He is currently chairman and CEO of MetriTech, Inc., an educational testing company that works primarily with large-scale state testing programs, and president of Industrial Psychology International, a testing company that produces tests primarily for hiring and selection.

He is an adjunct professor of educational psychology at the University of Illinois. He has published more than 100 articles, books, and tests, including four editions of Psychware Sourcebook: A Reference Guide to Computer-Based Products for Assessment in Psychology, Business, and Education. His articles and books all relate to issues in applied personality assessment and educational measurement. He is a fellow of APA and the American Educational Research Association.