Data quality in (non-)probability online panels: What do we know?
Carina Cornesse from the German Institute for Economic Research (DIW Berlin) and the Research Institute Social Cohesion (RISC) will give a 30-minute presentation followed by 15 minutes of Q&A. The event is in a hybrid format; you are welcome to join us for lunch in the Corner room at DIGSSCORE. Food is provided on a first-come, first-served basis.
For many decades, social science researchers relied almost exclusively on probability sample surveys when aiming to draw inferences about the general population. However, probability sample surveys are expensive, and data collection is often slow. With the rise of the internet in the 21st century, it therefore became popular to conduct fast and cheap surveys via online panels, which usually rely on web-recruited nonprobability samples. In academic circles, this has reignited an old debate about the (lack of) data quality in nonprobability sample surveys. This debate is ongoing and concerns many areas of social science research and practice. Most empirical evidence so far focuses on whether nonprobability online panels can match probability sample surveys in the accuracy of univariate estimates. This research often ignores other aspects of data quality (e.g., response quality, panel retention, offline population inclusion strategies). In this talk, I will discuss the accumulated published evidence in the (non-)probability sample survey debate and zoom in on additional insights gained from a large-scale panel comparison study conducted in Germany with three waves of parallel data collection in 10 online panels.