Too sophisticated even for highly educated survey respondents? A qualitative assessment of indirect question formats

Julia Jerke, David Johann, Heiko Rauhut, Kathrin Thomas

Research output: Contribution to journal › Article

Abstract

Misreporting of sensitive characteristics in surveys is a major concern among survey methodologists and social scientists across disciplines. Indirect question formats, such as the Item Count Technique (ICT) and Randomized Response Techniques (RRT), including the Crosswise Model (CM) and the Triangular Model (TM), protect respondents’ privacy by design and are intended to elicit more truthful answers. These methods have also been praised for producing more valid estimates than direct questions. However, recent research has revealed a number of problems, such as the occurrence of false negatives, false positives, and dependencies on socioeconomic characteristics, indicating that at least some respondents may still cheat or lie when asked indirectly. This article systematically investigates (1) how well respondents comprehend and (2) to what extent they trust the ICT, CM, and TM. We conducted cognitive interviews with academics across disciplines, investigating how respondents perceive, think about, and answer questions on academic misconduct posed with these indirect methods. The results indicate that most respondents comprehend the basic instructions, but many fail to understand the logic and principles of these techniques. Furthermore, the findings suggest that comprehension and honest self-reports are unrelated, thus violating core assumptions about the effectiveness of these techniques.
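To illustrate one of the techniques the abstract names: in the Crosswise Model, respondents report only whether their answers to the sensitive question and to an innocuous question with known prevalence p are the same ("both yes or both no") or different, so no individual answer reveals the sensitive trait. A minimal sketch of the standard prevalence estimator, with simulated data and illustrative parameter values (pi = 0.15, p = 0.25) chosen here for demonstration:

```python
import random

def crosswise_estimate(lam, p):
    """Estimate sensitive-trait prevalence in the Crosswise Model.

    lam: observed share of respondents answering 'same' (both yes / both no)
    p:   known probability of 'yes' on the innocuous question (p != 0.5)

    Since lam = pi*p + (1 - pi)*(1 - p), solving for pi gives:
    """
    return (lam + p - 1) / (2 * p - 1)

def simulate_crosswise(n, pi, p, seed=1):
    """Simulate n honest respondents; return the share answering 'same'."""
    random.seed(seed)
    same = 0
    for _ in range(n):
        sensitive = random.random() < pi   # true sensitive status
        innocuous = random.random() < p    # e.g. birthday in a known-prevalence window
        if sensitive == innocuous:
            same += 1
    return same / n

lam = simulate_crosswise(100_000, pi=0.15, p=0.25)
print(round(crosswise_estimate(lam, p=0.25), 2))  # close to the true 0.15
```

The sketch assumes fully honest compliance with the instructions; the article's point is precisely that this assumption can fail when respondents do not grasp the technique's logic.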
Original language: English
Pages (from-to): 319-351
Number of pages: 33
Journal: Survey Research Methods
Volume: 13
Issue number: 3
DOIs
Publication status: Published - 2019

Keywords

  • Item Count Technique
  • Crosswise Model
  • Triangular Model
  • Cognitive Interviews
  • Academic Misconduct