Using Response Times to Test the Reliability of Political Knowledge Items in the 2015 Swiss Post-Election Survey. Marquis, L. Survey Research Methods, 15(1):79–100, April 2021.
@article{marquis_using_2021,
	title = {Using {Response} {Times} to {Test} the {Reliability} of {Political} {Knowledge} {Items} in the 2015 {Swiss} {Post}-{Election} {Survey}},
	volume = {15},
	copyright = {Copyright (c) 2021 Lionel Marquis},
	issn = {1864-3361},
	url = {https://ojs.ub.uni-konstanz.de/srm/article/view/7594},
	doi = {10.18148/srm/2021.v15i1.7594},
	abstract = {In this article, I consider the problem of “cheating” in political knowledge tests. This problem has been made more pressing by the transition of many surveys to online interviewing, opening up the possibility of looking up the correct answers on the internet. Several methods have been proposed to deal with cheating ex-ante, including self-reports of cheating, control for internet browsing, or time limits. Against this background, “response times” (RTs, i.e., the time taken by respondents to answer a survey question) suggest themselves as a post-hoc, unobtrusive means of detecting cheating. In this paper, I propose a cross-classified multilevel model for measuring individual-specific and item-specific RTs, which are then used to identify unusually long but correct answers to knowledge questions as potential cases of cheating. I apply this procedure to the postelectoral survey for the 2015 Swiss national elections. My analysis suggests that extremely slow responses to two out of four questions (i.e., naming the president of the Swiss Confederation and the number of signatures required for a federal initiative) are definitely suspicious. Finally, I propose several methods for “correcting” individual knowledge scores and examine their face-value validity.},
	language = {en},
	number = {1},
	urldate = {2021-04-09},
	journal = {Survey Research Methods},
	author = {Marquis, Lionel},
	month = apr,
	year = {2021},
	keywords = {cheating, political knowledge, response times},
	pages = {79--100},
}
