Online surveys have become a popular way to collect data. However, response rates are low, particularly for online intercept-based surveys, where they can fall as low as 1%. This raises questions about the accuracy of inferences based on these results. Furthermore, it is difficult to compare the characteristics and behavior of responders and nonresponders because very limited information is available on nonresponders. The objective of this article is to present a unique comparison of online intercept survey responders, nonresponders, and partial responders. The sample includes 192,566 U.S.-based users who went through a research experiment during the installation process of ESET online security software. During the process, users were asked to enable or disable the detection of potentially unwanted applications. At the end, they could also opt to answer a short security-related survey. The users were split into three groups: (a) nonresponders, (b) complete responders, and (c) partial responders. There were only slight differences between the responder and nonresponder groups in their hardware (i.e., computer CPU quality and RAM size). Responders and nonresponders did, however, differ in their behavior: complete responders enabled the detection of potentially unwanted applications significantly more often than nonresponders (by 4.5% on average) and spent more time on the screen that provided details about this feature. Additional comparisons showed that complete responders were slightly younger and more educated than partial responders. We conclude that there are only slight differences between online intercept survey responders and nonresponders and that these differences manifest in computer usage-related decisions. Despite the low overall response rates, online product-related surveys can provide useful insights about the user base.
Nevertheless, companies that use online surveys should remain cautious, because user behavior may differ in specific situations.