Simplifying Surveys


Award-Winning Article by Culverhouse Professor Could Help Simplify and Streamline Survey Research

If you have ever taken a survey, you probably know the difference: some surveys are shorter and ask about each topic only once, while others are longer and seem to ask versions of the same question in different ways, over and over. In the organizational sciences, researchers debate the value of the shorter designs. Some advocate for single questions, known as single-item measures; others advocate for longer designs in which each concept is asked about in multiple ways, using what are called multi-item measures. Many researchers argue that the benefits of single-item measures (they are faster and may yield higher completion rates) are outweighed by the drawbacks: unless the concepts being measured (called constructs) are simple and straightforward, chances are that the measures will not be valid (they will fail to measure the right thing), will not be reliable (they will fail to measure consistently), or both.

So researchers face a tricky dilemma: choose the faster and potentially less frustrating single-item measures and risk invalid or unreliable results, or choose the time-consuming multi-item measures and risk losing time, money, and research subjects to fatigue or frustration.

In the award-winning “Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology,” published in the April 2022 issue of the Journal of Business and Psychology, Culverhouse’s Dr. Russell A. Matthews and graduate student Dr. Yeong-Hyun Hong (now at University of Michigan-Dearborn), along with Dr. Laura Pineault (Wayne State University), tackled this complicated problem through five studies that tested single-item measures from a variety of standpoints. In the first, they investigated whether the measures accurately captured the definitions of the concepts being measured (yes); in the second, they examined whether potentially harder-to-read single-item measures would cause survey-takers more problems in comprehending the questions being asked (no); in the third, they examined whether survey-takers would remain generally consistent in their answers to single-item measures over time (yes); in the fourth, they tested whether single-item measures were comparable to multi-item measures in criterion validity (yes); and in the fifth, they tested whether single-item measures were less valid when assessing more complex constructs (no). They concluded with a step-by-step guide that researchers can follow when deciding to what extent to use single-item measures.

Though the researchers readily admit that single-item measures do not make sense in all research situations, they still challenge the conventional wisdom that single-item measures are less valid and reliable, and should only be used to study simple, straightforward concepts. This finding could equip more researchers and practitioners to learn valuable things faster and more cost-effectively, while still being confident that their measurement is valid and reliable.

The paper won the 2023 Wiley Award for Excellence in Survey Research, which recognizes “excellence and innovation in the design of survey research methods or techniques that will improve organizational effectiveness and performance.” One reviewer praised the authors for “questioning how we’ve always done things” and thanked them for a paper that “will really impact research going forward.” The paper also recently received an Editorial Commendation from the Editorial Board of the Journal of Business and Psychology, recognizing it as an exciting or remarkable contribution; only 13 of the 1,000+ papers submitted to JBP each year are selected for this distinction.

Matthews and his coauthors noted that, “for decades, while common in applied practice, there has been a knee-jerk reaction in the academic community that single-item measures are ‘bad’ and should not be used. As an authorship team, we jokingly likened single-item measures to the orphaned nephew, living in the cupboard under the stairs of measurement. Our research aimed to flip this script, showing that single-item measures can be reliable and valid if intentionally and rigorously developed. We are grateful to the Society for Industrial/Organizational Psychology for being recognized with the Wiley Award, the Editorial Board at the Journal of Business and Psychology, as well as the scholarly community at large for amplifying our message and endorsing its credibility. And we are particularly excited to partner with potential organizations to see these items be used in practice.”

Media Inquiries

Zach Thomas

Director of Marketing & Communications