What Prompts College Students to Participate in Online Surveys?

Online surveys are frequently used in higher education to collect students' opinions. This study investigated the factors associated with students' willingness to respond to online surveys. Using responses from 540 undergraduate and graduate students in the United States, this study conducted a factor analysis to categorize the reasons that students willingly participate in online surveys. Four factors were identified: Format, Affiliation, Content, and Contact. The regression analysis revealed that Format was significantly associated with undergraduate students' online survey participation, while Content was significantly related to graduate students' online survey participation. These findings indicate that the behavior of responding to online surveys may vary depending on participants' educational level. They also suggest a need to develop different strategies when designing online surveys for educational purposes in order to enhance response rates.


Introduction
Instructors in higher education often ask for students' opinions for a variety of purposes, for instance (a) students' opinions about the course materials (i.e., course evaluations), (b) students' current learning performance (i.e., tests), and (c) students' learning activities in class (i.e., in-class discussions). In the case of course evaluations in particular, instructors rely on students' feedback to improve the same courses in subsequent terms. The method of asking students for their feedback therefore becomes all the more important.
Today, with the development of modern technology, online techniques (e.g., the online survey) are routinely used for many practical purposes (Horne & Sandmann, 2012; Park, 2011; Pezzino, 2018). Fortunately, undergraduate and graduate students are generally tech-savvy; that is, they are familiar with online technology and environments. Experts in higher education frequently use online survey tools to investigate the educational performance of their institutions as well as of their students. These tools typically offer the familiar question formats of survey research (e.g., multiple-choice, yes/no, and short-answer questions).
However, the format of an online survey raises an issue about students' willingness to answer the questions accurately. This issue has been discussed in the social sciences for some time (e.g., Ansolabehere & Schaffner, 2014; Berinsky, Huber, & Lenz, 2012; Clifford & Jerit, 2016; Munzert & Selb, 2017). For instance, there are several situations in which people are not willing to answer accurately (Clifford & Jerit, 2016): (a) people do not want to answer some types of questions (e.g., political issues), (b) questions are too confusing to answer (e.g., personality questions), (c) people sometimes do not recognize what they are thinking (e.g., desires), and (d) people consider social norms when they are asked to answer (e.g., questions about criminal behavior). This research implies that it is necessary to check how online survey tools work in the education domain. In sum, this research's purpose is to investigate the factors that are associated with college students' willingness to respond through an online survey. In addition, the research categorizes college students into two groups (i.e., undergraduate and graduate students) to study how each educational level's response behavior differs.

Online Surveys: Strengths and Challenges
The term "online survey" is often used synonymously with "web survey" across the studies; however, little has specified how an online survey is defined. Callegaro, Manfreda, and Vehovar (2015) proposed two fundamental determinants for online surveys. One determinant is the use of some electronic network that supports the exchange of computerized survey questionnaires between the researchers and the respondents. The other determinant is the automatic delivery of computerized data from the respondents to the researchers. Once a respondent submits the answers, then the responses are automatically saved and transmitted to the researcher without requiring any additional activities (e.g., data input). In this sense, an online survey is technically broader in scope than a web survey since the latter is specified as being conducted via a web browser (Callegaro et al., 2015). However, researchers viewed web surveys as constituting most of online surveys so that they used the terms web survey and online survey interchangeably.
Online surveys are gaining in popularity in both academic research and business surveys because of their potential to collect a large amount of data efficiently (Carini, Hayek, Kuh, Kennedy, & Ouimet, 2003; Cole, 2005). Today, the Internet is widely used as a communication tool: almost 89% of U.S. adults are identified as Internet users (Pew Research Center, 2018). This trend enables researchers to reach a large population more easily than in the past. Extensive literature describes the major strengths of online surveys in terms of efficiency and convenience.
Time efficiency has been noted as the major strength of online surveys. Researchers can reach out to their potential respondents at any time without geographical barriers and acquire the data as soon as the respondents save their responses (Callegaro et al., 2015; Evans & Mathur, 2005; Ilieva, Baron, & Healey, 2002). According to an empirical study that compared turnaround speed across various survey modes, online surveys were returned about 2.8 times faster than mail surveys (Cobanoglu, Moreo, & Warde, 2001). Receiving computerized data also greatly benefits researchers by reducing the burden of inputting data manually and minimizing measurement errors in the data entry process (Callegaro et al., 2015; Evans & Mathur, 2005; Fan & Yan, 2010). This helps researchers save time in collecting and analyzing data. Online surveys are convenient for respondents as well, because they do not need to set a schedule to meet an interviewer; they can participate in the survey at any time that is convenient for them (Callegaro et al., 2015; Evans & Mathur, 2005). Especially for respondents who are comfortable with computer devices, online surveys are much faster tools to use.
Cost efficiency is another strength of online surveys. Compared to traditional survey modes, online surveys can be done with less preparation and lower administration costs (Cobanoglu et al., 2001; Cook, Heath, & Thompson, 2000; Fan & Yan, 2010; Ilieva et al., 2002; Van Selm & Jankowski, 2006). Callegaro et al. (2015) pointed out that survey costs are determined by interview time, sample size, mailing service fees, travel costs, incentives, and so on. Online surveys are known as the most cost-effective data collection method because they enable researchers to reduce variable costs. For example, Cobanoglu et al. (2001) found the total cost for online surveys was the lowest among the survey modes compared (i.e., mail and fax). Although online surveys required the highest fixed cost to establish and maintain, they incurred no further costs, whereas the other survey modes carried more expensive variable costs (i.e., printing, mailing, and coding).
Despite these strengths, online surveys have some challenges. First, sample representativeness has been an issue, since online surveys can only cover a certain population (i.e., Internet users) (Andrews, Nonnecke, & Preece, 2003; Callegaro et al., 2015; Evans & Mathur, 2005; Fan & Yan, 2010; Manfreda, Berzelak, Vehovar, Bosnjak, & Haas, 2008; Van Selm & Jankowski, 2006). Moreover, these Internet users are likely to be younger, richer, and better educated (Fan & Yan, 2010). Thus, the survey results can be biased, as they do not contain responses from non-Internet users or people who are less familiar with technology. Second, response rates remain a significant matter, and the literature reports mixed results. According to the meta-analysis by Manfreda, Berzelak, Vehovar, Bosnjak, and Haas (2008), the average response rates for online surveys were 11% lower than those for other types of surveys. This contrasts with the study by Cobanoglu et al. (2001), which reported that the response rates for web surveys were the highest among the three survey modes compared (i.e., mail, fax, and web-based). This gap might have originated from the methodologies the researchers used. In any case, enhancing response rates is a key consideration for researchers when designing online surveys (Evans & Mathur, 2005; Van Selm & Jankowski, 2006).

Factors Associated with Online Survey Participation
Fan and Yan (2010) reviewed more than 300 studies and systematically organized the factors influencing the response rates of online surveys. The researchers classified four stages in the process of an online survey: development, delivery, completion, and return. This classification was made from the researchers' perspective in order to increase response rates when conducting web-based surveys; however, their work also provides a rich set of explanations for why people participate in online surveys. According to their review of the literature, the response rate was associated with the topic of the survey, sponsorship of the survey, the length of the survey, presentation of the questionnaires (e.g., wording, ordering, visual display/layouts), personalized invitations, control of access, use of pre-notifications and reminders, and incentives. In practice, these factors are related to the response rates of other survey modes as well; however, technical issues, including the order of questionnaires, the visual layout of questionnaires, and access control, become more critical for the response rates of online surveys because those surveys are conducted via screen displays (Fan & Yan, 2010).
One reason people respond to a survey is interest in the topic. Researchers found people were more likely to participate in a survey when its topic was interesting (Groves, Presser, & Dipko, 2004; Haunberger, 2011; Zillman, Schmitz, Skopek, & Blossfeld, 2014). Galesic (2006) discovered that respondents with higher interest in the topic were less likely to drop out of a survey because they felt a lower burden to complete it.
The length of the survey exhibited a negative relationship with survey participation. For example, Galesic and Bosnjak (2009) found people were less likely to complete a survey when it required 30 minutes than when it took 10 minutes; the researchers noted that the longer a questionnaire was, the greater the burden people felt. In a study by Yan, Conrad, Tourangeau, and Couper (2011), there were fewer break-offs when the questionnaire was shorter than the respondents expected. Analyzing completion rates across 25,080 surveys, Liu and Wronski (2018) also found that survey length was negatively associated with the completion rate.
Presentation formats such as visual display, layout, and the ordering of questions influenced not only item response rates but also the quality of the answers. Maloshonok and Terentev (2016) examined how visual design features affected data quality in online surveys. Through factorial experiments, the researchers found the user interface and the size of response options helped respondents provide higher quality answers (e.g., lower percentages choosing the "don't know" options, longer comments on open-ended questions). Galesic and Bosnjak (2009) pointed out that researchers should be thoughtful when ordering questions because respondents were less willing to invest effort in answering as they felt fatigued toward the end of the survey. Therefore, the visual design of an online survey may encourage or discourage respondents' survey completion.
Personalized invitations and reminders were significant factors influencing survey participation (Manzo & Burke, 2012; Sánchez-Fernández, Muñoz-Leiva, & Montoro-Ríos, 2012; Sauermann & Roach, 2013; Van Mol, 2017). Using Spanish samples, Sánchez-Fernández, Muñoz-Leiva, and Montoro-Ríos (2012) explored the conditions that improved retention rate and response quality. The results showed the personalized invitation was significantly associated with the retention rate: personalization of the messages motivates people not only to participate in the survey but also to complete the task. They also found follow-up mailings helped respondents complete the survey. However, they noted that any additional reminders following the second message (e.g., more than three or four messages) had no significant effect on retention rate. The significance of using reminders was also reported in Van Mol's (2017) study, in which the response rate increased from 6.2% at the initial contact to 31.2% after the final reminder (the fourth contact).
In a meta-analysis of 49 studies about online surveys, Cook et al. (2000) found the number of contacts, the salience of the survey issues, incentives, and academic sponsorship were noteworthy factors affecting response rates, while survey length and control of access were not associated with response rates. Reviewing several studies in the literature, Manzo and Burke (2012) emphasized that researchers should consider material incentives, pre-notifications, personalized invitations, reminders, the access method to the survey, and survey layouts when conducting online surveys to improve response rates. Empirically, Keusch (2012) found the number of contacts and questionnaire layout significantly increased response rates and decreased break-offs.
To sum up, the previous literature has listed the following as factors affecting survey participation: length, design (layout, wording, organization), contact (personalized invitation, pre-notifications, and reminders), content (salience of topic), sponsorship, incentive, and accessibility.

College Students and Online Surveys
As mentioned earlier, an online survey involves a sampling issue in that it can only reach a population of technology-savvy Internet users. Statistically, 97% of college students use the Internet today (Pew Research Center, 2018). Manfreda et al. (2008) pointed out that the type of target population, such as students, may influence the magnitude of response rate differences. Thus, online surveys are an appealing method for generating higher response rates among young people (Van Selm & Jankowski, 2006). Carini et al. (2003) examined how college students responded differently to web-based versus paper-based surveys. The findings showed that college students who participated in the survey online gave more favorable responses across all domains of the student engagement experience. The researchers suggested that the ease of use of the Internet might lead to more favorable responses.
Previous literature implies college students are more likely to participate in surveys conducted online. However, little is known about what factors motivate college students to respond to online surveys. To conduct a quality survey online, it is necessary to investigate the response behavior of college students, who are very familiar with using technology.

Sample
The sample consisted of students enrolled in undergraduate and graduate business classes, though it was not limited to business majors, at a state university in the Mideastern region of the United States. The vast majority of college students have Internet access either at home or through a college or university account (Branigan, 1998; Carini et al., 2003; Crockett, 1999; Mosley-Matchett, 1998). The questionnaires were administered to the students in four undergraduate classes and six graduate classes. Of the 567 questionnaires collected, 27 had a large percentage of missing values and were excluded; thus, a total of 540 questionnaires were used for further analysis. Table 1 shows the demographic characteristics of the students. Overall, 293 (54.3%) of the students were undergraduates and 307 (56.9%) were female. More than half of the students had not attempted to conduct an online survey (56.5%), and most of the students had no plan to conduct one (87.2%). Of the students, 179 (33.3%) responded to less than 25% of the online surveys they received; 130 (24.2%) responded to more than 25% but less than 50%; 120 (22.3%) responded to more than 50% but less than 75%; and 108 (20.1%) responded to more than 75%. Note. The usable questionnaires consisted of 540 of the 567 collected; due to rounding, not all percentages sum to 100.00%; due to missing data, not all totals sum to 540.

Survey Development
The questionnaire comprised two sections. The first section measured reasons to participate in an online survey. The 20 items describing the reasons why students would respond to an online survey were developed based on the results of the meta-analysis by Cook et al. (2000), who found that the number of contacts, personalized contacts, and precontacts were the factors most associated with higher response rates across a total of 68 electronic surveys reported in both published and unpublished research. Students were asked to indicate the importance of the reasons why they would respond to an online survey on a five-point Likert-type scale, ranging from 1 = not at all important to 5 = extremely important. Intention to participate in an online survey was also measured with "How likely would you respond to an online survey?" with five response categories ranging from "Very Unlikely" to "Very Likely." The second section captured the demographic profile of the college students, including gender, experience with conducting an online survey, plans to conduct an online survey, and percentage of response to online surveys. Based on an earlier pilot test with 36 graduate students in a survey development class, several adjustments were made before the finalized version was administered to the undergraduate and graduate students.

Data Analysis
The data were analyzed using SPSS software (version 25.0) in multiple stages. First, descriptive statistics were computed to obtain mean scores for the 20 reasons to participate in an online survey. Second, factor analysis with orthogonal varimax rotation was conducted to group the 20 reasons and identify the underlying factors that explain their variance. Third, mean differences on the derived factors were analyzed to explore differences between undergraduate and graduate students. Last, college students' intention to participate in an online survey was regressed on the factor scores of the reasons.
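For illustration, the first three stages of this pipeline could be reproduced outside of SPSS, for example in Python with the pandas, scipy, and factor_analyzer packages. The following is a minimal sketch rather than the study's actual code: the data file name, the group column, the intention item, and the item names reason_01 through reason_20 are hypothetical placeholders.

    import pandas as pd
    from scipy import stats
    from factor_analyzer import FactorAnalyzer

    # Hypothetical layout: one row per respondent, 20 Likert items
    # (reason_01..reason_20, coded 1-5), a group label, and the
    # intention item used later in the regression stage.
    df = pd.read_csv("survey_responses.csv")
    reasons = [f"reason_{i:02d}" for i in range(1, 21)]
    complete = df.dropna(subset=reasons + ["group", "intention"])

    # Stage 1: mean importance score for each of the 20 reasons
    print(complete[reasons].mean().sort_values(ascending=False))

    # Stage 2: factor analysis with orthogonal varimax rotation;
    # method="principal" approximates the principal axis factoring
    # reported in the Results section.
    fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
    fa.fit(complete[reasons])
    scores = pd.DataFrame(
        fa.transform(complete[reasons]),
        columns=["Format", "Affiliation", "Content", "Contact"],
        index=complete.index,
    )

    # Stage 3: undergraduate vs. graduate mean differences on each factor
    for factor in scores.columns:
        ug = scores.loc[complete["group"] == "undergraduate", factor]
        gr = scores.loc[complete["group"] == "graduate", factor]
        t, p = stats.ttest_ind(ug, gr)
        print(f"{factor}: t = {t:.2f}, p = {p:.3f}")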

Results
Table 2 displays the descriptive analysis of the 20 reasons why students would respond to an online survey. All the reasons were positively stated, and they are listed in descending order of overall mean. The analysis of mean values revealed that "it took less than half an hour to complete" (4.01), "the survey was easy to fill out" (3.95), and "I wanted to help the researcher" (3.92) were more important than any other reasons considered in this survey. These can be interpreted as major reasons students participate in an online survey. Of the 20 reasons, only one, "a password was required" (1.98), had a mean value lower than 2.00 on the five-point scale used in this study; it is a less important reason for college students to participate in an online survey. Undergraduate students had higher mean values than graduate students on three reasons: "an incentive was offered" (3.85), "the research was interesting" (3.72), and "the survey was sponsored by my institution" (3.03). Graduate students had higher mean values on the remaining reasons. However, the results of the independent sample t-test showed no significant differences between undergraduate and graduate students on the reasons to participate in an online survey (p > 0.05).

Table 2. Reasons to participate in an online survey (item stem: "I would participate in an online survey if ..."), showing M (S.D.) by group, t-values, and significance. Note. 1 = not at all important and 5 = extremely important; M denotes mean; S.D. is standard deviation.
As shown in Table 3, factor analysis was conducted using principal axis factoring with varimax rotation to consolidate the 20 reasons into a set of underlying dimensions reflecting the reasons to participate in an online survey. Prior to conducting the factor analysis, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity were performed to determine the appropriateness of factor analysis. The KMO measure indicated a practical level of common variance (KMO = 0.82), as KMO values between 0.8 and 1.0 indicate the sampling is adequate (Cerny & Kaiser, 1977). The result of Bartlett's test was 2796.95 with a significance level of 0.00. These results indicated that factor analysis was appropriate.
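For illustration, these two adequacy checks can be computed with the factor_analyzer package in Python; the DataFrame complete[reasons] follows the hypothetical layout sketched in the Data Analysis section.

    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity,
        calculate_kmo,
    )

    # Bartlett's test of sphericity: tests whether the correlation
    # matrix differs from an identity matrix (it must, for factor
    # analysis to be meaningful).
    chi_square, p_value = calculate_bartlett_sphericity(complete[reasons])

    # KMO measure of sampling adequacy; values between 0.8 and 1.0
    # indicate adequate sampling (the study reports KMO = 0.82).
    kmo_per_item, kmo_total = calculate_kmo(complete[reasons])

    print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
    print(f"Overall KMO = {kmo_total:.2f}")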
The 20 reasons to participate in an online survey were reduced to four orthogonal factor dimensions. The four factors were retained on the basis of eigenvalues larger than 1.0 (Hair, Black, Babin, & Anderson, 2009). Six reasons were eliminated because their factor loadings were less than 0.40 or because they loaded on two factors. The identified factors explained 66.51% of the variance of the variables. The reliability of each factor was assessed by coefficient alpha. Reliability analyses showed that the internal consistency of each of the four factors, ranging from 0.72 to 0.85, was relatively high and considered very good, since, according to Nunnally (1978), the alpha value should be 0.70 or higher. The communality of each reason was relatively high, ranging from 0.48 to 0.86, indicating that the variance of the original values was captured fairly well by these four factors. Each factor was named based on the characteristics of its composing variables.
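Coefficient alpha can be computed directly from the item responses; the following self-contained function is a sketch, and the choice of which reason items load on the Format factor is a hypothetical placeholder.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
        # variance of the summed scale).
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical: suppose the first five reason items are those
    # loading on the Format factor.
    format_items = reasons[:5]
    print(f"Format alpha = {cronbach_alpha(complete[format_items]):.2f}")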
The first factor (α = 0.85) was labeled "Format," as it was formed by the variables convenient, easy to fill out, half an hour to complete, asked appropriately, and well organized. This factor explained 35.01% of the total variance and had an eigenvalue of 4.90. The second factor (α = 0.78) was named "Affiliation," as it was composed of the variables same academic affiliation, same professional affiliation, and sponsored by institution. This factor explained 14.60% of the total variance and had an eigenvalue of 2.04. The third factor (α = 0.72) was labeled "Content," as it was formed by the variables results, attitudes, and factual information. This factor explained 8.81% of the total variance and had an eigenvalue of 1.23. The fourth factor (α = 0.72) was named "Contact," as it was composed of the variables personalized letter, follow-up reminders, and precontacted me. This factor explained 8.09% of the total variance and had an eigenvalue of 1.13.
Overall, as shown in Table 4, Format had the highest mean of 3.81, followed by Affiliation (2.99), Content (2.72), and Contact (2.67), indicating the order of importance among the reasons for college students to participate in an online survey. The ranking was the same for undergraduate and graduate students. Format, Content, and Contact were more important reasons to participate in an online survey for graduate students than for undergraduate students; Affiliation was the only factor that was more important to undergraduate students than to graduate students. However, the results of the independent sample t-test showed no significant differences between undergraduate and graduate students on the derived factors: Format, Affiliation, Content, and Contact.
Multiple regression analysis was performed to predict intention to participate in an online survey from the four factors (see Table 5). Overall, the regression model was significant, as indicated by the overall F-statistic (p < 0.05). Two of the four factors had a significant effect on intention to participate in an online survey: Format (b = 0.121; p < 0.05) and Content (b = 0.135; p < 0.01). Affiliation (b = 0.075; p > 0.05) and Contact (b = 0.062; p > 0.05) were not significant. The standardized beta values suggest that Content has a greater impact on intention to participate in an online survey than Format. In the sub-group analyses, Format (b = 0.169; p < 0.05) had a significant effect on intention to participate in an online survey for undergraduate students, whereas Content (b = 0.163; p < 0.05) had a significant effect for graduate students.
Note. * p < .05; ** p < .01; *** p < .001.
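The regression step could likewise be sketched with statsmodels, regressing the intention item on the four factor scores computed earlier; as before, the column names are hypothetical placeholders, and this is an illustrative sketch rather than the study's actual code.

    import statsmodels.api as sm

    # Overall model: intention regressed on the four factor scores.
    y = complete["intention"]
    X = sm.add_constant(scores)  # Format, Affiliation, Content, Contact
    print(sm.OLS(y, X).fit().summary())

    # Sub-group models for undergraduate and graduate students.
    for group in ["undergraduate", "graduate"]:
        mask = complete["group"] == group
        fit = sm.OLS(y[mask], X[mask]).fit()
        print(group, fit.params.round(3), fit.pvalues.round(3))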

Discussion
Online surveys have greater strengths and potential compared to traditional survey modes. Owing to their time and cost efficiency, researchers can dramatically reduce their workload by utilizing online surveys. Instructors in higher education can gain similar benefits when managing classes (Dommeyer, Baum, Hanna, & Chapman, 2004). To take advantage of online surveys, it is crucial to know how to maximize students' response rates.
This study examined what makes students participate in online surveys. Twenty reasons for responding to online surveys were chosen based on previous studies and were categorized into four factors by exploratory factor analysis. The meaningful factors were named Format, Affiliation, Content, and Contact. Format comprises attributes of the survey mode itself, such as convenience, appropriateness, organization, and completion time. Affiliation deals with the sponsorship of the survey. Content includes variables regarding the content of the survey. Contact covers the personalized letter, pre-contacts, and follow-up reminders. The regression analysis showed that Format and Content were significantly associated with online survey participation. In sub-group analyses, Format was the significant factor for undergraduate students' participation in an online survey, while Content was the significant factor for graduate students' responses.
The results yield three main points. First, by utilizing factor analysis, this research found four meaningful factors (Format, Affiliation, Content, and Contact) that lead college students to respond to an online survey. This suggests that particular characteristics of online surveys appeal to certain participants. In higher education settings, Format and Content should be considered when asking students to participate. As survey developers, instructors can confine their surveys to their educational purpose; for instance, they can ask a few questions about a course's effectiveness. Indeed, many instructors already use an online format for teaching evaluations. Although online evaluations are preferred by students (Layne, DeCristoforo, & McGinty, 1999), previous studies found that the response rates of online evaluations were lower than those of traditional paper-based in-class evaluations (Capa-Aydin, 2016; Dommeyer et al., 2004; McAlpin et al., 2014). One possible explanation for the low response rates is a lack of understanding of what makes students answer online surveys. Even teaching evaluations, whose content may seem obvious, can be made sophisticated in format. Survey developers should be creative when crafting surveys so that students are more likely to complete them.
Second, undergraduate and graduate students showed different influential factors in their willingness to answer an online survey. This suggests that survey designers and instructors in higher education should consider how the two groups' willingness differs across the four factors. As shown in Table 5, undergraduate students' participation is driven by Format, implying that undergraduate students care about the technical components of online surveys (e.g., survey time and convenience). This finding is consistent with Carini et al.'s (2003) study, in which college students were more likely to reply to an online survey than to a paper-based survey due to its ease of use. On the other hand, graduate students' participation is driven by Content, meaning they pay more attention to what the survey is about. This difference between the two groups may be explained by their educational level and knowledge acquisition.
Third, an interesting point is that Affiliation and Contact are not important for college students' responses to an online survey. This carries a two-sided implication: (a) the possibility of utilizing institutional human resources and (b) the reduced role of instructors' authority in this survey method. On the one hand, it implies that non-experts in education (e.g., institutional staff) can join a team evaluating the institution's educational performance, since who sends the survey does not matter much to whether college students answer (Affiliation and Contact were not significant). On the other hand, it implies that instructors' ranks in higher education (e.g., assistant/associate/full professor) do not significantly influence college students' willingness; the online survey does not convey educational authority to the students.