CHAPTER 9
Survey Research

Overview

Researchers have many methods for collecting data through surveys—from mail questionnaires to personal interviews to online surveys conducted over the Internet. Social researchers should know how to select an appropriate method and how to implement it effectively.

Introduction
Topics Appropriate for Survey Research
Guidelines for Asking Questions
  Choose Appropriate Question Forms
  Make Items Clear
  Avoid Double-Barreled Questions
  Respondents Must Be Competent to Answer
  Respondents Must Be Willing to Answer
  Questions Should Be Relevant
  Short Items Are Best
  Avoid Negative Items
  Avoid Biased Items and Terms
Questionnaire Construction
  General Questionnaire Format
  Formats for Respondents
  Contingency Questions
  Matrix Questions
  Ordering Items in a Questionnaire
  Questionnaire Instructions
  Pretesting the Questionnaire
  A Composite Illustration
Self-Administered Questionnaires
  Mail Distribution and Return
  Monitoring Returns
  Follow-up Mailings
  Acceptable Response Rates
  A Case Study
Interview Surveys
  The Role of the Survey Interviewer
  General Guidelines for Survey Interviewing
  Coordination and Control
Telephone Surveys
  Computer Assisted Telephone Interviewing
  New Technologies and Survey Research
Comparison of the Different Survey Methods
Strengths and Weaknesses of Survey Research
Secondary Analysis
MAIN POINTS
KEY TERMS
REVIEW QUESTIONS AND EXERCISES
ADDITIONAL READINGS
SOCIOLOGY WEB SITE
INFOTRAC COLLEGE EDITION

Introduction

Surveys are a very old research technique. In the Old Testament, for example, we find the following:

After the plague the Lord said to Moses and to Eleazar the son of Aaron, the priest, "Take a census of all the congregation of the people of Israel, from twenty years old and upward." (Numbers 26:1-2)

Ancient Egyptian rulers conducted censuses to help them administer their domains. Jesus was born away from home because Joseph and Mary were journeying to Joseph's ancestral home for a Roman census.

A little-known survey was attempted among French workers in 1880. A German political sociologist mailed some 25,000 questionnaires to workers to determine the extent of their exploitation by employers. The rather lengthy questionnaire included items such as these:

Does your employer or his representative resort to trickery in order to defraud you of a part of your earnings?

If you are paid piece rates, is the quality of the article made a pretext for fraudulent deductions from your wages?

The survey researcher in this case was not George Gallup but Karl Marx ([1880] 1956:208). Though 25,000 questionnaires were mailed out, there is no record of any being returned.

Today, survey research is a frequently used mode of observation in the social sciences. In a typical survey, the researcher selects a sample of respondents and administers a standardized questionnaire to them. Chapter 7 discussed sampling techniques in detail. This chapter discusses how to prepare a questionnaire and describes the various options for administering it so that respondents answer your questions adequately. The chapter concludes with a short discussion of secondary analysis, the analysis of survey data collected by someone else. This use of survey results has become an important aspect of survey research in recent years, and it's especially useful for students and others with scarce research funds.

Let's begin by looking at the kinds of topics that researchers can appropriately study by using survey research.
Topics Appropriate for Survey Research

Surveys may be used for descriptive, explanatory, and exploratory purposes. They are chiefly used in studies that have individual people as the units of analysis. Although this method can be used for other units of analysis, such as groups or interactions, some individual persons must serve as respondents or informants. Thus, we could undertake a survey in which divorces were the unit of analysis, but we would need to administer the survey questionnaire to the participants in the divorces (or to some other informants).

Survey research is probably the best method available to the social researcher who is interested in collecting original data for describing a population too large to observe directly. Careful probability sampling provides a group of respondents whose characteristics may be taken to reflect those of the larger population, and carefully constructed standardized questionnaires provide data in the same form from all respondents.

Surveys are also excellent vehicles for measuring attitudes and orientations in a large population. Public opinion polls—for example, Gallup, Harris, Roper, and Yankelovich—are well-known examples of this use. Indeed, polls have become so prevalent that at times the public seems unsure what to think of them. Pollsters are criticized by those who don't think (or want to believe) that polls are accurate (candidates who are "losing" in polls often tell voters not to trust the polls). But polls are also criticized for being too accurate—for example, when exit polls on election day are used to predict a winner before the actual voting is complete.

The general attitude toward public opinion research is further complicated by scientifically unsound "surveys" that nonetheless capture people's attention because of the topics they cover and/or their "findings." A good example is the "Hite Reports" on human sexuality. While enjoying considerable attention in the popular press, Shere Hite was roundly criticized by the research community for her data-collection methods. For example, a 1987 Hite report was based on questionnaires completed by women around the country—but which women? Hite reported that she distributed some 100,000 questionnaires through various organizations, and around 4,500 were returned. Now 4,500 and 100,000 are large numbers in the context of survey sampling. However, given Hite's research methods, her 4,500 respondents didn't necessarily represent U.S. women any more than the Literary Digest's enormous 1936 sample represented the U.S. electorate when their 2 million sample ballots indicated Alf Landon would bury FDR in a landslide.

Sometimes, people use the pretense of survey research for quite different purposes. For example, you may have received a telephone call indicating you've been selected for a survey, only to find the first question was "How would you like to make thousands of dollars a week right there in your own home?" Or you may have been told you could win a prize if you could name the president whose picture is on the penny. (Tell them it's Elvis.) Unfortunately, a few unscrupulous telemarketers try to prey on the general cooperation people have given to survey researchers.

By the same token, political parties and charitable organizations have begun conducting phony "surveys." Often under the guise of collecting public opinion about some issue, callers ultimately ask respondents for a monetary contribution.
Recent political campaigns have produced another form of bogus survey, called the "push poll." Here's what the American Association for Public Opinion Research had to say in condemning this practice:

A "push poll" is a telemarketing technique in which telephone calls are used to canvass potential voters, feeding them false or misleading "information" about a candidate under the pretense of taking a poll to see how this "information" affects voter preferences. In fact, the intent is not to measure public opinion but to manipulate it—to "push" voters away from one candidate and toward the opposing candidate. Such polls defame selected candidates by spreading false or misleading information about them. The intent is to disseminate campaign propaganda under the guise of conducting a legitimate public opinion poll. (Bednarz 1996)

In short, the labels "survey" and "poll" are sometimes misused. Done properly, however, survey research can be a useful tool of social inquiry. Designing useful (and trustworthy) survey research begins with formulating good questions. Let's turn to that topic now.

Guidelines for Asking Questions

In social research, variables are often operationalized when researchers ask people questions as a way of getting data for analysis and interpretation. Sometimes the questions are asked by an interviewer; sometimes they are written down and given to respondents for completion. In either case, several general guidelines can help researchers frame and ask questions that serve as excellent operationalizations of variables while avoiding pitfalls that can result in useless or even misleading information.

Surveys include the use of a questionnaire—an instrument specifically designed to elicit information that will be useful for analysis. While some of the specific points to follow are more appropriate to structured questionnaires than to the more open-ended questionnaires used in qualitative, in-depth interviewing, the underlying logic is valuable whenever we ask people questions in order to gather data.

Choose Appropriate Question Forms

Let's begin with some of the options available to you in creating questionnaires. These options include using questions or statements and choosing open-ended or closed-ended questions.

Questions and Statements

Although the term questionnaire suggests a collection of questions, an examination of a typical questionnaire will probably reveal as many statements as questions. This is not without reason. Often, the researcher is interested in determining the extent to which respondents hold a particular attitude or perspective. If you can summarize the attitude in a fairly brief statement, you can present that statement and ask respondents whether they agree or disagree with it. As you may remember, Rensis Likert greatly formalized this procedure through the creation of the Likert scale, a format in which respondents are asked to strongly agree, agree, disagree, or strongly disagree, or perhaps strongly approve, approve, and so forth.

Both questions and statements may be used profitably. Using both in a given questionnaire gives you more flexibility in the design of items and can make the questionnaire more interesting as well.

Open-Ended and Closed-Ended Questions

In asking questions, researchers have two options. They may ask open-ended questions, in which case the respondent is asked to provide his or her own answer to the question.
For example, the respondent may be asked, "What do you feel is the most important issue facing the United States today?" and be provided with a space to write in the answer (or be asked to report it verbally to an interviewer). As we'll see in Chapter 10, in-depth, qualitative interviewing relies almost exclusively on open-ended questions. However, they are also used in survey research.

In the case of closed-ended questions, the respondent is asked to select an answer from among a list provided by the researcher. Closed-ended questions are very popular in survey research because they provide a greater uniformity of responses and are more easily processed. Open-ended responses must be coded before they can be processed for computer analysis, as will be discussed in Chapter 14. This coding process often requires that the researcher interpret the meaning of responses, opening the possibility of misunderstanding and researcher bias. There is also a danger that some respondents will give answers that are essentially irrelevant to the researcher's intent. Closed-ended responses, on the other hand, can often be transferred directly into a computer format.

The chief shortcoming of closed-ended questions lies in the researcher's structuring of responses. When the relevant answers to a given question are relatively clear, there should be no problem. In other cases, however, the researcher's structuring of responses may overlook some important responses. In asking about "the most important issue facing the United States," for example, his or her checklist of issues might omit certain issues that respondents would have said were important.

The construction of closed-ended questions should be guided by two structural requirements. First, the response categories provided should be exhaustive: They should include all the possible responses that might be expected. Often, researchers ensure this by adding a category such as "Other (Please specify: ____)." Second, the answer categories must be mutually exclusive: The respondent should not feel compelled to select more than one. (In some cases, you may wish to solicit multiple answers, but these may create difficulties in data processing and analysis later on.) To ensure that your categories are mutually exclusive, carefully consider each combination of categories, asking yourself whether a person could reasonably choose more than one answer. In addition, it's useful to add an instruction to the question asking the respondent to select the one best answer, but this technique is not a satisfactory substitute for a carefully constructed set of responses.

Make Items Clear

It should go without saying that questionnaire items should be clear and unambiguous, but the broad proliferation of unclear and ambiguous questions in surveys makes the point worth emphasizing. Often we can become so deeply involved in the topic under examination that opinions and perspectives are clear to us but not to our respondents—many of whom have paid little or no attention to the topic. Or, if we have only a superficial understanding of the topic, we may fail to specify the intent of a question sufficiently. The question "What do you think about the proposed peace plan?" may evoke in the respondent a counter-question: "Which proposed peace plan?"
Questionnaire items should be precise so that the respondent knows exactly what the researcher is asking. The possibilities for misunderstanding are endless, and no researcher is immune (Polivka and Rothgeb 1993).

One of the most established research projects in the United States is the Census Bureau's ongoing "Current Population Survey," or CPS, which measures, among other critical data, the nation's unemployment rate. A part of the measurement of employment patterns focuses on a respondent's activities during "last week," by which the Census Bureau means Sunday through Saturday. Studies undertaken to determine the accuracy of the survey found that more than half the respondents took "last week" to include only Monday through Friday. By the same token, whereas the Census Bureau defines "working full-time" as 35 or more hours a week, the same evaluation studies showed that some respondents used the more traditional definition of 40 hours per week. As a consequence, the wording of these questions in the CPS was modified in 1994 to specify the Census Bureau's definitions.

Similarly, the use of the term Native American to mean American Indian often produces an overrepresentation of that ethnic group in surveys. Clearly, many respondents understand the term to mean "born in the United States."

Avoid Double-Barreled Questions

Frequently, researchers ask respondents for a single answer to a question that actually has multiple parts. That seems to happen most often when the researcher has personally identified with a complex question. For example, you might ask respondents to agree or disagree with the statement "The United States should abandon its space program and spend the money on domestic programs." Although many people would unequivocally agree with the statement and others would unequivocally disagree, still others would be unable to answer. Some would want to abandon the space program and give the money back to the taxpayers. Others would want to continue the space program but also put more money into domestic programs. These latter respondents could neither agree nor disagree without misleading you.

As a general rule, whenever the word and appears in a question or questionnaire statement, check whether you're asking a double-barreled question. See the box entitled "Double-Barreled and Beyond" for some imaginative variations on this theme.

Respondents Must Be Competent to Answer

In asking respondents to provide information, you should continually ask yourself whether they can do so reliably. In a study of child rearing, you might ask respondents to report the age at which they first talked back to their parents. Quite aside from the problem of defining talking back to parents, it's doubtful that most respondents would remember with any degree of accuracy.

As another example, student government leaders occasionally ask their constituents to indicate how students' fees ought to be spent. Typically, respondents are asked to indicate the percentage of available funds that should be devoted to a long list of activities. Without a fairly good knowledge of the nature of those activities and the costs involved in them, the respondents cannot provide meaningful answers. Administrative costs, for example, will receive little support although they may be essential to the program as a whole.

One group of researchers examining the driving experience of teenagers insisted on asking an open-ended question concerning the number of miles driven since receiving a license.
Although consultants argued that few drivers would be able to estimate such information with any accuracy, the question was asked nonetheless. In response, some teenagers reported driving hundreds of thousands of miles.

Double-Barreled and Beyond

Even established, professional researchers have sometimes created double-barreled questions and worse. Consider this question, asked of U.S. citizens in April 1986, at a time when the country's relationship with Libya was at an especially low point. Some observers suggested the United States might end up in a shooting war with the small North African nation. The Harris Poll sought to find out what U.S. public opinion was.

If Libya now increases its terrorist acts against the U.S. and we keep inflicting more damage on Libya, then inevitably it will all end in the U.S. going to war and finally invading that country which would be wrong.

Respondents were given the opportunity of answering "Agree," "Disagree," or "Not sure." Notice the elements contained in the complex statement:

1. Will Libya increase its terrorist acts against the U.S.?
2. Will the U.S. inflict more damage on Libya?
3. Will the U.S. inevitably or otherwise go to war against Libya?
4. Would the U.S. invade Libya?
5. Would that be right or wrong?

These several elements offer the possibility of numerous points of view—far more than the three alternatives offered respondents to the survey. Even if we were to assume hypothetically that Libya would "increase its terrorist attacks" and the United States would "keep inflicting more damage" in return, you might have any one of at least seven distinct expectations about the outcome:

                                                 U.S. will not   War is probable      War is
                                                 go to war       but not inevitable   inevitable
U.S. will not invade Libya                            1                 2                 3
U.S. will invade Libya but it would be wrong                            4                 5
U.S. will invade Libya and it would be right                            6                 7

The examination of prognoses about the Libyan situation is not the only example of double-barreled questions sneaking into public opinion research. Here are some questions the Harris Poll asked in an attempt to gauge U.S. public opinion about then Soviet General Secretary Gorbachev:

He looks like the kind of Russian leader who will recognize that both the Soviets and the Americans can destroy each other with nuclear missiles so it is better to come to verifiable arms control agreements.

He seems to be more modern, enlightened, and attractive, which is a good sign for the peace of the world.

Even though he looks much more modern and attractive, it would be a mistake to think he will be much different from other Russian leaders.

How many elements can you identify in each of the questions? How many possible opinions could people have in each case? What does a simple "agree" or "disagree" really mean in such cases?

Source: Reported in World Opinion Update, October 1985 and May 1986, respectively.

Respondents Must Be Willing to Answer

Often, we would like to learn things from people that they are unwilling to share with us. For example, Yanjie Bian indicates that it has often been difficult to get candid answers from people in China.

[Here] people are generally careful about what they say on nonprivate occasions in order to survive under authoritarianism.
During the Cultural Revolution between 1966 and 1976, for example, because of the radical political agenda and political intensity throughout the country, it was almost impossible to use survey techniques to collect valid and reliable data inside China about the Chinese people's life experiences, characteristics, and attitudes towards the Communist regime. (1994:19-20)

Sometimes, U.S. respondents may say they're undecided when, in fact, they have an opinion but think they're in a minority. Under that condition, they may be reluctant to tell a stranger (the interviewer) what that opinion is. Given this problem, the Gallup Organization, for example, has used a "secret ballot" format, which simulates actual election conditions, in that the "voter" enjoys complete anonymity. In an analysis of the Gallup Poll election data from 1944 to 1988, Andrew Smith and G. F. Bishop (1992) have found that this technique substantially reduced the percentage of respondents who said they were undecided about how they would vote.

This problem is not limited to survey research, however. Richard Mitchell (1991:100) faced a similar problem in his field research among U.S. survivalists:

Survivalists, for example, are ambivalent about concealing their identities and inclinations. They realize that secrecy protects them from the ridicule of a disbelieving majority, but enforced separatism diminishes opportunities for recruitment and information exchange. . . . "Secretive" survivalists eschew telephones, launder their mail through letter exchanges, use nicknames and aliases, and carefully conceal their addresses from strangers. Yet once I was invited to group meetings, I found them cooperative respondents.

Questions Should Be Relevant

Similarly, questions asked in a questionnaire should be relevant to most respondents. When attitudes are requested on a topic that few respondents have thought about or really care about, the results are not likely to be useful. Of course, because the respondents may express attitudes even though they have never given any thought to the issue, you run the risk of being misled.

This point is illustrated occasionally when researchers ask for responses relating to fictitious people and issues. In one political poll I conducted, I asked respondents whether they were familiar with each of 15 political figures in the community. As a methodological exercise, I made up a name: Tom Sakumoto. In response, 9 percent of the respondents said they were familiar with him. Of those respondents familiar with him, about half reported seeing him on television and reading about him in the newspapers.

When you obtain responses to fictitious issues, you can disregard those responses. But when the issue is real, you may have no way of telling which responses genuinely reflect attitudes and which reflect meaningless answers to an irrelevant question. Ideally, we would like respondents to simply report that they don't know, have no opinion, or are undecided in those instances where that is the case. Unfortunately, however, they often make up answers.

Short Items Are Best

In the interests of being unambiguous and precise and of pointing to the relevance of an issue, researchers tend to create long and complicated items. That should be avoided. Respondents are often unwilling to study an item in order to understand it. The respondent should be able to read an
item quickly, understand its intent, and select or provide an answer without difficulty. In general, assume that respondents will read items quickly and give quick answers. Accordingly, provide clear, short items that will not be misinterpreted under those conditions.

Avoid Negative Items

The appearance of a negation in a questionnaire item paves the way for easy misinterpretation. Asked to agree or disagree with the statement "The United States should not recognize Cuba," a sizable portion of the respondents will read over the word not and answer on that basis. Thus, some will agree with the statement when they're in favor of recognition, and others will agree when they oppose it. And you may never know which are which.

Similar considerations apply to other "negative" words. In a study of support for civil liberties, for example, respondents were asked whether they felt "the following kinds of people should be prohibited from teaching in public schools" and were presented with a list including such items as a communist, a Ku Klux Klansman, and so forth. The response categories "yes" and "no" were given beside each entry. A comparison of the responses to this item with other items reflecting support for civil liberties strongly suggested that many respondents gave the answer "yes" to indicate willingness for such a person to teach, rather than to indicate that such a person should be prohibited from teaching. (A later study in the series giving as answer categories "permit" and "prohibit" produced much clearer results.)

Avoid Biased Items and Terms

Recall from our discussion of conceptualization and operationalization in Chapter 5 that there are no ultimately true meanings for any of the concepts we typically study in social science. Prejudice has no ultimately correct definition; whether a given person is prejudiced depends on our definition of that term. The same general principle applies to the responses we get from people completing a questionnaire.

The meaning of someone's response to a question depends in large part on its wording. This is true of every question and answer. Some questions seem to encourage particular responses more than do other questions. In the context of questionnaires, bias refers to any property of questions that encourages respondents to answer in a particular way.

Most researchers recognize the likely effect of a question that begins, "Don't you agree with the President of the United States that . . ." and no reputable researcher would use such an item. Unhappily, the biasing effect of items and terms is far subtler than this example suggests. The mere identification of an attitude or position with a prestigious person or agency can bias responses. The item "Do you agree or disagree with the recent Supreme Court decision that . . ." would have a similar effect. Such wording may not produce consensus or even a majority in support of the position identified with the prestigious person or agency, but it will likely increase the level of support over what would have been obtained without such identification.

Sometimes the impact of different forms of question wording is relatively subtle. For example, when Kenneth Rasinski (1989) analyzed the results of several General Social Survey studies of attitudes toward government spending, he found that the way programs were identified had an impact on the amount of public support they received.
Here are some comparisons:

More Support                          Less Support
"Assistance to the poor"              "Welfare"
"Halting rising crime rate"           "Law enforcement"
"Dealing with drug addiction"         "Drug rehabilitation"
"Solving problems of big cities"      "Assistance to big cities"
"Improving conditions of blacks"      "Assistance to blacks"
"Protecting social security"          "Social security"

In 1986, for example, 62.8 percent of the respondents said too little money was being spent on "assistance to the poor," while in a matched survey that year, only 23.1 percent said we were spending too little on "welfare."

In this context, be wary of what researchers call the social desirability of questions and answers. Whenever we ask people for information, they answer through a filter of what will make them look good. This is especially true if they're interviewed face-to-face. Thus, for example, a particular man may feel that things would be a lot better if women were kept in the kitchen, not allowed to vote, forced to be quiet in public, and so forth. Asked whether he supports equal rights for women, however, he may want to avoid looking like a chauvinist. Recognizing that his views are out of step with current thinking, he may choose to say "yes."

The best way to guard against this problem is to imagine how you would feel giving each of the answers you intend to offer to respondents. If you would feel embarrassed, perverted, inhumane, stupid, irresponsible, or otherwise socially disadvantaged by any particular response, give serious thought to how willing others will be to give those answers.

The biasing effect of particular wording is often difficult to anticipate. In both surveys and experiments, it is sometimes useful to ask respondents to consider hypothetical situations and say how they think they would behave. Because those situations often involve other people, the names used can affect responses. For example, researchers have long known that male names for the hypothetical people may produce different responses than do female names. Research by Joseph Kasof (1993) points to the importance of what the specific names are: whether they generally evoke positive or negative images in terms of attractiveness, age, intelligence, and so forth. Kasof's review of past research suggests there has been a tendency to use more positively valued names for men than for women.

As in all other research, carefully examine the purpose of your inquiry and construct items that will be most useful to it. You should never be misled into thinking there are ultimately "right" and "wrong" ways of asking the questions. When in doubt about the best question to ask, moreover, remember that you should ask more than one.

These, then, are some general guidelines for writing questions to elicit data for analysis and interpretation. Next we look at how to construct questionnaires.

Questionnaire Construction

Questionnaires are used in connection with many modes of observation in social research. Although structured questionnaires are essential to and most directly associated with survey research, they are also widely used in experiments, field research, and other data-collection activities. For this reason, questionnaire construction can be an important practical skill for researchers. As we discuss the established techniques for constructing questionnaires, let's begin with some issues of questionnaire format.
General Questionnaire Format

The format of a questionnaire is just as important as the nature and wording of the questions asked. An improperly laid out questionnaire can lead respondents to miss questions, confuse them about the nature of the data desired, and even lead them to throw the questionnaire away.

As a general rule, a questionnaire should be spread out and uncluttered. Inexperienced researchers tend to fear that their questionnaire will look too long; as a result, they squeeze several questions onto a single line, abbreviate questions, and try to use as few pages as possible. These efforts are ill-advised and even dangerous. Putting more than one question on a line will cause some respondents to miss the second question altogether. Some respondents will misinterpret abbreviated questions. More generally, respondents who find they have spent considerable time on the first page of what seemed a short questionnaire will be more demoralized than respondents who quickly complete the first several pages of what initially seemed a rather long form. Moreover, the latter will have made fewer errors and will not have been forced to reread confusing, abbreviated questions. Nor will they have been forced to write a long answer in a tiny space.

The desirability of spreading out questions in the questionnaire cannot be overemphasized. Squeezed-together questionnaires are disastrous, whether completed by the respondents themselves or administered by trained interviewers. And the processing of such questionnaires is another nightmare; I'll have more to say about that in Chapter 14.

Formats for Respondents

In one of the most common types of questionnaire items, the respondent is expected to check one response from a series. For this purpose my experience has been that boxes adequately spaced apart are the best format. Modern word processing makes the use of boxes a practical technique these days; setting boxes in type can also be accomplished easily and neatly. You can approximate boxes by using brackets: [ ], but if you're creating a questionnaire on a computer, you should take the few extra minutes to use genuine boxes that will give your questionnaire a more professional look.

Rather than providing boxes to be checked, you might print a code number beside each response and ask the respondent to circle the appropriate number (see Figure 9-1). This method has the added advantage of specifying the code number to be entered later in the processing stage (see Chapter 14). If numbers are to be circled, however, you should provide clear and prominent instructions to the respondent, because many will be tempted to cross out the appropriate number, which makes data processing even more difficult. (Note that the technique can be used more safely when interviewers administer the questionnaires, since the interviewers themselves record the responses.)

FIGURE 9-1
Circling the Answer
[A single item whose response categories, coded 1. Yes, 2. No, 3. Don't know, are printed with code numbers; the respondent has circled the appropriate number.]

Contingency Questions

Quite often in questionnaires, certain questions will be relevant to some of the respondents and irrelevant to others. In a study of birth control methods, for instance, you would probably not want to ask men if they take birth control pills. This sort of situation often arises when researchers wish to ask a series of questions about a certain topic.
You may want to ask whether your respondents belong to a particular organization and, if so, how often they attend meetings, whether they have held office in the organization, and so forth. Or, you might want to ask whether respondents have heard anything about a certain political issue and then learn the attitudes of those who have heard of it.

Each subsequent question in series such as these is called a contingency question: Whether it is to be asked and answered is contingent on responses to the first question in the series. The proper use of contingency questions can facilitate the respondents' task in completing the questionnaire, because they are not faced with trying to answer questions irrelevant to them.

There are several formats for contingency questions. The one shown in Figure 9-2 is probably the clearest and most effective. Note two key elements in this format. First, the contingency question is isolated from the other questions by being set off to the side and enclosed in a box. Second, an arrow connects the contingency question to the answer on which it is contingent. In the illustration, only those respondents answering yes are expected to answer the contingency question. The rest of the respondents should simply skip it.

FIGURE 9-2
Contingency Question Format

23. Have you ever smoked marijuana?
    [ ] Yes
    [ ] No
        If yes: About how many times have you smoked marijuana?
            [ ] Once
            [ ] 2 to 5 times
            [ ] 6 to 10 times
            [ ] 11 to 20 times
            [ ] More than 20 times

Note that the questions shown in Figure 9-2 could have been dealt with in a single question. The question might have read, "How many times, if any, have you smoked marijuana?" The response categories, then, might have read: "Never," "Once," "2 to 5 times," and so forth. This single question would apply to all respondents, and each would find an appropriate answer category. Such a question, however, might put some pressure on respondents to report having smoked marijuana, because the main question asks how many times they have smoked it, even though it allows for those exceptional cases who have never smoked marijuana even once. (The emphases used in the previous sentence give a fair indication of how respondents might read the question.) The contingency question format illustrated in Figure 9-2 should reduce the subtle pressure on respondents to report having smoked marijuana.

Used properly, even rather complex sets of contingency questions can be constructed without confusing the respondent. Figure 9-3 illustrates a more complicated example.

FIGURE 9-3
Contingency Table

Have you ever been abducted by aliens?
[ ] Yes
[ ] No
    If yes: Did they let you steer the ship?
    [ ] Yes
    [ ] No
        If yes: How fast did you go?
        [ ] Warp speed
        [ ] Weenie speed

Sometimes a set of contingency questions is long enough to extend over several pages. Suppose you're studying political activities of college students, and you wish to ask a large number of questions of those students who have voted in a national, state, or local election. You could separate out the relevant respondents with an initial question such as "Have you ever voted in a national, state, or local election?" but it would be confusing to place the contingency questions in a box stretching over several pages. It would make more sense to enter instructions in parentheses after each answer telling respondents to answer or skip the
contingency questions. Figure 9-4 provides an illustration of this method.

FIGURE 9-4
Instructions to Skip

13. Have you ever voted in a national, state, or local election?
    [ ] Yes (Please answer questions 14-25.)
    [ ] No  (Please skip questions 14-25. Go directly to question 26 on page 8.)

In addition to these instructions, it's worthwhile to place an instruction at the top of each page containing only the contingency questions. For example, you might say, "This page is only for respondents who have voted in a national, state, or local election." Clear instructions such as these spare respondents the frustration of reading and puzzling over questions that are irrelevant to them and increase the likelihood of responses from those for whom the questions are relevant.

Matrix Questions

Quite often, you'll want to ask several questions that have the same set of answer categories. This is typically the case whenever the Likert response categories are used. In such cases, it is often possible to construct a matrix of items and answers as illustrated in Figure 9-5.

FIGURE 9-5
Matrix Question Format

17. Beside each of the statements presented below, please indicate whether you Strongly Agree (SA), Agree (A), Disagree (D), Strongly Disagree (SD), or are Undecided (U).

                                         SA    A     D     SD    U
a. What this country needs...           [ ]   [ ]   [ ]   [ ]   [ ]
b. The police should be...              [ ]   [ ]   [ ]   [ ]   [ ]
c. During riots, looters should be
   shot on sight...                     [ ]   [ ]   [ ]   [ ]   [ ]
etc.

This format offers several advantages over other formats. First, it uses space efficiently. Second, respondents will probably find it faster to complete a set of questions presented in this fashion. In addition, this format may increase the comparability of responses given to different questions for the respondent as well as for the researcher. Because respondents can quickly review their answers to earlier items in the set, they might choose between, say, "strongly agree" and "agree" on a given statement by comparing the strength of their agreement with their earlier responses in the set.

There are some dangers inherent in using this format, however. Its advantages may encourage you to structure an item so that the responses fit into the matrix format when a different, more idiosyncratic set of responses might be more appropriate. Also, the matrix question format can foster a response-set among some respondents: They may develop a pattern of, say, agreeing with all the statements. This would be especially likely if the set of statements began with several that indicated a particular orientation (for example, a liberal political perspective) with only a few later ones representing the opposite orientation. Respondents might assume that all the statements represented the same orientation and, reading quickly, misread some of them, thereby giving the wrong answers. This problem can be reduced somewhat by alternating statements representing different orientations and by making all statements short and clear.

Ordering Items in a Questionnaire

The order in which questionnaire items are presented can also affect responses. First, the appearance of one question can affect the answers given to later ones. For example, if several questions have been asked about the dangers of terrorism to the United States and then a question asks respondents to volunteer (open-ended) what they believe to represent dangers to the United States, terrorism will receive more citations than would otherwise be the case. In this situation, it is preferable to ask the open-ended question first.
Similarly, if respondents are asked to assess their overall religiosity ("How important is your religion to you in general?"), their responses to later questions concerning specific aspects of religiosity will be aimed at consistency with the prior assessment. The converse is true as well. If respondents are first asked specific questions about different aspects of their religiosity, their subsequent overall assessment will reflect the earlier answers.

The impact of item order is not uniform. When J. Edwin Benton and John Daly (1991) conducted a local government survey, they found that the less educated respondents were more influenced by the order of questionnaire items than were those with more education.

Some researchers attempt to overcome this effect by randomizing the order of items. This effort is usually futile. In the first place, a randomized set of items will probably strike respondents as chaotic and worthless. The random order also makes it more difficult for respondents to answer, because they must continually switch their attention from one topic to another. Finally, even a randomized ordering of items will have the effect discussed previously—except that you'll have no control over the effect.

The safest solution is sensitivity to the problem. Although you cannot avoid the effect of item order, try to estimate what that effect will be so that you can interpret results meaningfully. If the order of items seems especially important in a given study, you might construct more than one version of the questionnaire with different orderings of the items. You will then be able to determine the effects by comparing responses to the various versions. At the very least, you should pretest your questionnaire in the different forms. (We'll discuss pretesting in a moment.)

The desired ordering of items differs between interviews and self-administered questionnaires. In the latter, it's usually best to begin the questionnaire with the most interesting set of items. The potential respondents who glance casually over the first few items should want to answer them. Perhaps the items will ask for attitudes they're aching to express. At the same time, however, the initial items should not be threatening. (It might be a bad idea to begin with items about sexual behavior or drug use.) Requests for duller, demographic data (age, gender, and the like) should generally be placed at the end of a self-administered questionnaire. Placing these items at the beginning, as many inexperienced researchers are tempted to do, gives the questionnaire the initial appearance of a routine form, and the person receiving it may not be motivated to complete it.

Just the opposite is generally true for interview surveys. When the potential respondent's door first opens, the interviewer must begin gaining rapport quickly. After a short introduction to the study, the interviewer can best begin by enumerating the members of the household, getting demographic data about each. Such items are easily answered and generally nonthreatening. Once the initial rapport has been established, the interviewer can then move into the area of attitudes and more sensitive matters. An interview that began with the question "Do you believe in witchcraft?" would probably end rather quickly.

Questionnaire Instructions

Every questionnaire, whether it is to be completed by respondents or administered by interviewers, should contain clear instructions and introductory comments where appropriate.
It's useful to begin every self-administered questionnaire with basic instructions for completing it. Although many people these days have experience with forms and questionnaires, begin by telling them exactly what you want: that they are to indicate their answers to certain questions by placing a check mark or an X in the box beside the appropriate answer or by writing in their answer when asked to do so. If many open-ended questions are used, respondents should be given some guidelines about whether brief or lengthy answers are expected. If you wish to encourage your respondents to elaborate on their responses to closed-ended questions, that should be noted.

If a questionnaire has subsections—political attitudes, religious attitudes, background data—introduce each with a short statement concerning its content and purpose. For example, "In this section, we would like to know what people consider the most important community problems." Demographic items at the end of a self-administered questionnaire might be introduced thus: "Finally, we would like to know just a little about you so we can see how different types of people feel about the issues we have been examining."

Short introductions such as these help the respondent make sense of the questionnaire. They make the questionnaire seem less chaotic, especially when it taps a variety of data. And they help put the respondent in the proper frame of mind for answering the questions.

Some questions may require special instructions to facilitate proper answering. This is especially true if a given question varies from the general instructions pertaining to the whole questionnaire. Some specific examples will illustrate this situation.

Despite attempts to provide mutually exclusive answers in closed-ended questions, often more than one answer will apply for respondents. If you want a single answer, you should make this perfectly clear in the question. An example would be "From the list below, please check the primary reason for your decision to attend college." Often the main question can be followed by a parenthetical note: "Please check the one best answer." If, on the other hand, you want the respondent to check as many answers as apply, you should make this clear.

When a set of answer categories are to be rank-ordered by the respondent, the instructions should indicate this, and a different type of answer format should be used (for example, blanks instead of boxes). These instructions should indicate how many answers are to be ranked (for example: all; only the first and second; only the first and last; the most important and least important). These instructions should also spell out the order of ranking (for example, "Place a 1 beside the most important item, a 2 beside the next most important, and so forth"). Rank-ordering of responses is often difficult for respondents, however, because they may have to read and reread the list several times, so this technique should only be used in those situations where no other method will produce the desired result.

In multiple-part matrix questions, it's useful to give special instructions unless the same format is used throughout the questionnaire. Sometimes respondents will be expected to check one answer in each column of the matrix; in other questionnaires they'll be expected to check one answer in each row. Whenever the questionnaire contains both formats, it's useful to add an instruction clarifying which is expected in each case.
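When a questionnaire is administered by computer, as in the computer-assisted telephone interviewing discussed later in this chapter or in a web survey, instructions like these become validation rules that the software can enforce automatically. The following Python sketch is purely illustrative: the item wording, categories, and rules are hypothetical, and any real survey package provides its own facilities for such checks.

# A minimal sketch of enforcing questionnaire instructions in software.
# The items, categories, and rules below are hypothetical illustrations.

def check_one_best_answer(checked, categories):
    # Enforce "Please check the one best answer."
    if len(checked) != 1 or checked[0] not in categories:
        return "Please select exactly one of the listed answers."
    return None  # the response is acceptable

def check_ranking(ranks, n_to_rank):
    # Enforce "Place a 1 beside the most important item, a 2 beside the
    # next most important, and so forth" for the top n_to_rank items.
    if sorted(ranks.values()) != list(range(1, n_to_rank + 1)):
        return f"Please use each rank from 1 to {n_to_rank} exactly once."
    return None  # the response is acceptable

# Usage: one (hypothetical) reason checked for attending college, and the
# top three of several spending priorities ranked.
for error in (
    check_one_best_answer(["Career preparation"],
                          ["Career preparation", "Parental expectations", "Other"]),
    check_ranking({"education": 1, "health": 2, "environment": 3}, 3),
):
    print("OK" if error is None else error)

The same logic extends to contingency questions: rather than printing skip instructions, a program can simply withhold items that an earlier answer has ruled out.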
Pretesting the Questionnaire

No matter how carefully researchers design a data-collection instrument such as a questionnaire, there is always the possibility—indeed the certainty—of error. They will always make some mistake: an ambiguous question, one that people cannot answer, or some other violation of the rules just discussed.

The surest protection against such errors is to pretest the questionnaire in full or in part. Give the questionnaire to the ten people in your bowling league, for example. It's not usually essential that the pretest subjects comprise a representative sample, although you should use people to whom the questionnaire is at least relevant.

By and large, it's better to ask people to complete the questionnaire than to read through it looking for errors. All too often, a question seems to make sense on a first reading, but it proves to be impossible to answer.

FIGURE 9-6
A Sample Questionnaire
[Questions 10 through 18 of a self-administered questionnaire on attitudes toward government. Item 10 asks respondents to circle a number from 1 ("strongly in favor of") to 5 ("strongly against") for each of eight possible government actions on the economy, such as control of wages by legislation, control of prices by legislation, cuts in government spending, and government financing of projects to create new jobs. Item 11 asks whether spending should range from "spend much more" (1) to "spend much less" (5), with "can't choose" coded 8, in eight areas: the environment, health, the police and law enforcement, education, the military and defense, retirement benefits, unemployment benefits, and culture and the arts. Items 12 through 16 are single-choice items on whether inflation or unemployment should get highest priority and on whether labor unions, business and industry, and the federal government have too much or too little power. Item 17 asks what the government's role should be in electric power, the steel industry, and banking and insurance (own it; control prices and profits but not own it; neither). Item 18 asks whether it should be the government's responsibility to provide a job for everyone who wants one, keep prices under control, provide health care for the sick, and provide a decent standard of living for the old. Precoding numbers (28/ through 55/) appear in the right margin beside each item.]
Stanley Presser and Johnny Blair (1994) describe several different pretesting strategies and report on the effectiveness of each. They also provide data on the cost of the various methods.

There are many more tips and guidelines for questionnaire construction, but covering them all would take a book in itself. Now I'll complete this discussion with an illustration of a real questionnaire, showing how some of these comments find substance in practice.

Before turning to the illustration, however, I want to mention a critical aspect of questionnaire design that I discuss in Chapter 14: precoding. Because the information collected by questionnaires is typically transformed into some type of computer format, it's usually appropriate to include data-processing instructions on the questionnaire itself. These instructions indicate where specific pieces of information will be stored in the machine-readable data files. In Chapter 15, I'll discuss the nature of such storage and point out appropriate questionnaire notations.
As a preview, however, notice that the illustration in Figure 9-6 has been precoded with the mysterious numbers that appear near questions and answer categories.

A Composite Illustration

Figure 9-6 is part of a questionnaire used by the University of Chicago's National Opinion Research Center in its General Social Survey. The questionnaire deals with people's attitudes toward the government and is designed to be self-administered.

Self-Administered Questionnaires

So far we've discussed how to formulate questions and how to design effective questionnaires. As important as these tasks are, the labor will be wasted unless the questionnaire produces useful data—which means that respondents actually complete the questionnaire. We turn now to the major methods for getting responses to questionnaires.

I've referred several times in this chapter to interviews versus self-administered questionnaires. Actually, there are three main methods of administering survey questionnaires to a sample of respondents: self-administered questionnaires, in which respondents are asked to complete the questionnaire themselves; surveys administered by interviewers in face-to-face encounters; and surveys conducted by telephone. This section and the next two discuss each of these methods in turn.

The most common form of self-administered questionnaire is the mail survey. However, there are several other techniques that are often used as well. At times, it may be appropriate to administer a questionnaire to a group of respondents gathered at the same place at the same time. A survey of students taking introductory psychology might be conducted in this manner during class. High school students might be surveyed during homeroom period.

Some recent experimentation has been conducted with regard to the home delivery of questionnaires. A research worker delivers the questionnaire to the home of sample respondents and explains the study. Then the questionnaire is left for the respondent to complete, and the researcher picks it up later.

Home delivery and the mail can also be used in combination. Questionnaires are mailed to families, and then research workers visit homes to pick up the questionnaires and check them for completeness. Just the opposite technique is to have questionnaires hand delivered by research workers with a request that the respondents mail the completed questionnaires to the research office.

On the whole, when a research worker either delivers the questionnaire, picks it up, or both, the completion rate seems higher than for straightforward mail surveys. Additional experimentation with this technique is likely to point to other ways to improve completion rates while reducing costs. The remainder of this section, however, is devoted specifically to the mail survey, which is still the typical form of self-administered questionnaire.

Mail Distribution and Return

The basic method for collecting data through the mail has been to send a questionnaire accompanied by a letter of explanation and a self-addressed, stamped envelope for returning the questionnaire. The respondent is expected to complete the questionnaire, put it in the envelope, and return it. If, by any chance, you've received such a questionnaire and failed to return it, it would be valuable to recall the reasons you had for not returning it and keep them in mind any time you plan to send questionnaires to others.

A common reason for not returning questionnaires is that it's too much trouble.
To overcome this problem, researchers have developed several ways to make returning them easier. For instance, a self-mailing questionnaire requires no return envelope: When the questionnaire is folded a particular way, the return address appears on the outside. The respondent therefore doesn't have to worry about losing the envelope.

More elaborate designs are available also. The university student questionnaire described later in this chapter was bound in a booklet with a special, two-panel back cover. Once the questionnaire was completed, the respondent needed only to fold out the extra panel, wrap it around the booklet, and seal the whole thing with the adhesive strip running along the edge of the panel. The foldout panel contained my return address and postage. When I repeated the study a couple of years later, I improved on the design. Both the front and back covers had foldout panels: one for sending the questionnaire out and the other for getting it back—thus avoiding the use of envelopes altogether.

The point here is that anything you can do to make the job of completing and returning the questionnaire easier will improve your study. Imagine receiving a questionnaire that made no provisions for its return to the researcher. Suppose you had to (1) find an envelope, (2) write the address on it, (3) figure out how much postage it required, and (4) put the stamps on it. How likely is it that you would return the questionnaire?

A few brief comments on postal options are in order. You have options for mailing questionnaires out and for getting them returned. On outgoing mail, your choices are essentially between first-class postage and bulk rate. First class is more certain, but bulk rate is far cheaper. (Check your local post office for rates and procedures.) On return mail, your choice is between postage stamps and business-reply permits. Here, the cost differential is more complicated. If you use stamps, you pay for them whether people return their questionnaires or not. With the business-reply permit, you pay only for those that are used, but you pay an additional surcharge of about a nickel. This means that stamps are cheaper if a lot of questionnaires are returned, but business-reply permits are cheaper if fewer are returned (and you won't know in advance how many will be returned).

There are many other considerations involved in choosing among the several postal options. Some researchers, for example, feel that the use of postage stamps communicates more "humanness" and sincerity than bulk rate and business-reply permits. Others worry that respondents will steam off the stamps and use them for some purpose other than returning the questionnaires. Because both bulk rate and business-reply permits require establishing accounts at the post office, you'll probably find stamps much easier in small surveys.

Monitoring Returns

The mailing of questionnaires sets up a new research question that may prove valuable to a study. Researchers shouldn't sit back idly as questionnaires are returned; instead, they should undertake a careful recording of the varying rates of return among respondents.

An invaluable tool in this activity is a return rate graph. The day on which questionnaires were mailed is labeled Day 1 on the graph, and every day thereafter the number of returned questionnaires is logged on the graph. It's usually best to compile two graphs. One shows the number returned each day—rising, then dropping. The second reports the cumulative number or percentage.
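To make the two graphs concrete, here is a minimal sketch in Python; the daily counts are invented for illustration, not data from any actual survey.

```python
# Minimal sketch of the two return-rate graphs described above.
# The daily counts are invented for illustration.
import matplotlib.pyplot as plt

daily_returns = [0, 12, 31, 44, 38, 25, 16, 9, 6, 4, 3, 2]  # Day 1, 2, ...
days = range(1, len(daily_returns) + 1)

# Build the cumulative series from the daily counts.
cumulative = []
total = 0
for n in daily_returns:
    total += n
    cumulative.append(total)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.bar(days, daily_returns)
ax1.set(title="Returned each day", xlabel="Day", ylabel="Questionnaires")
ax2.plot(days, cumulative, marker="o")
ax2.set(title="Cumulative returns", xlabel="Day", ylabel="Questionnaires")
plt.tight_layout()
plt.show()
```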
In part, this activity provides the researchers with gratification, as they get to draw a picture of their successful data collection. More important, however, it is their guide to how the data collection is going. If follow-up mailings are planned, the graph provides a clue about when such mailings should be launched. (The dates of subsequent mailings should be noted on the graph.)

As completed questionnaires are returned, each should be opened, scanned, and assigned an identification number. These numbers should be assigned serially as the questionnaires are returned, even if other identification (ID) numbers have already been assigned. Two examples should illustrate the important advantages of this procedure.

Let's assume you're studying attitudes toward a political figure. In the middle of the data collection, the media break the story that the politician is having extramarital affairs. By knowing the date of that public disclosure and the dates when questionnaires were received, you'll be in a position to determine the effects of the disclosure. (Recall the discussion in Chapter 8 of history in connection with experiments.)

In a less sensational way, serialized ID numbers can be valuable in estimating nonresponse biases in the survey. Barring more direct tests of bias, you may wish to assume that those who failed to answer the questionnaire will be more like respondents who delayed answering than like those who answered right away. An analysis of questionnaires received at different points in the data collection might then be used for estimates of sampling bias. For example, if the grade point averages (GPAs) reported by student respondents decrease steadily through the data collection, with those replying right away having higher GPAs and those replying later having lower GPAs, you might tentatively conclude that those who failed to answer at all have lower GPAs yet. Although it would not be advisable to make statistical estimates of bias in this fashion, you could take advantage of approximate estimates based on the patterns you've observed, as in the sketch below.
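Here is a minimal sketch of such a return-wave comparison; the GPA figures and wave groupings are invented for illustration.

```python
# Minimal sketch of checking a variable across return waves, as in the
# GPA example above. All numbers are invented for illustration.
from statistics import mean

# GPAs grouped by when the questionnaire arrived (serial ID order).
waves = {
    "first week":  [3.6, 3.4, 3.7, 3.5, 3.3],
    "second week": [3.2, 3.1, 3.3, 2.9],
    "third week":  [2.8, 2.7, 3.0],
}

for wave, gpas in waves.items():
    print(f"{wave}: mean GPA = {mean(gpas):.2f} (n={len(gpas)})")

# A steady decline across waves would suggest, only tentatively,
# that nonrespondents have lower GPAs still.
```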
If respondents have been identified for purposes of follow-up mailing, then preparations for those mailings should be made as the questionnaires are returned. The case study later in this section discusses this process in greater detail.

Follow-up Mailings

Follow-up mailings may be administered in several ways. In the simplest, nonrespondents are simply sent a letter of additional encouragement to participate. A better method, however, is to send a new copy of the survey questionnaire with the follow-up letter. If potential respondents have not returned their questionnaires after two or three weeks, the questionnaires have probably been lost or misplaced. Receiving a follow-up letter might encourage them to look for the original questionnaire, but if they can't find it easily, the letter may go for naught.

The methodological literature strongly suggests that follow-up mailings provide an effective method for increasing return rates in mail surveys. In general, the longer a potential respondent delays replying, the less likely he or she is to do so at all. Properly timed follow-up mailings, then, provide additional stimuli to respond.

The effects of follow-up mailings will be seen in the response rate curves recorded during data collection. The initial mailings will be followed by a rise and subsequent subsiding of returns; the follow-up mailings will spur a resurgence of returns; and more follow-ups will do the same. In practice, three mailings (an original and two follow-ups) seem the most efficient.

The timing of follow-up mailings is also important. Here the methodological literature offers less precise guides, but it has been my experience that two or three weeks is a reasonable space between mailings. (This period might be increased by a few days if the mailing time—out and in—is more than two or three days.)

When researchers conduct several surveys of the same population over time, they can develop more-specific guidelines. The Survey Research Office at the University of Hawaii conducts frequent student surveys and has been able to refine the mailing and remailing procedure considerably. Indeed, they have found a consistent pattern of returns that appears to transcend differences of survey content, quality of instrument, and so forth. Within two weeks of the first mailing, approximately 40 percent of the questionnaires are returned; within two weeks of the first follow-up, an additional 20 percent are received; and within two weeks of the final follow-up, an additional 10 percent are received. (These response rates reflect the sending of additional questionnaires, not just letters.) Your results may vary, but this illustration should indicate the value of carefully tabulating return rates for every survey conducted.
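Taken at face value, the Hawaii pattern implies an overall return rate of about 70 percent, which can be used for back-of-the-envelope planning. A minimal sketch, with an invented target:

```python
# Back-of-the-envelope mailing-size estimate using the return pattern
# reported above (40% + 20% + 10% across three mailings). The target
# number of completed questionnaires is invented for illustration.
rates = [0.40, 0.20, 0.10]      # returns after each of three mailings
expected_total = sum(rates)     # 0.70, i.e., a 70 percent return rate

target_completed = 1000
initial_sample = round(target_completed / expected_total)

print(f"Expected overall return rate: {expected_total:.0%}")
print(f"To end with ~{target_completed} questionnaires, "
      f"mail to about {initial_sample} people.")
# Expected overall return rate: 70%; mail to about 1429 people.
```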
If the individuals in the survey sample are not identified on the questionnaires, it may not be possible to remail only to nonrespondents. In such a case, send your follow-up mailing to all members of the sample, thanking those who may have already participated and encouraging those who have not to do so. (The case study reported later describes another method you can use in an anonymous mail survey.)

Acceptable Response Rates

A question that new survey researchers frequently ask concerns the percentage return rate, or the response rate, that should be achieved in a mail survey. The body of inferential statistics used in connection with survey analysis assumes that all members of the initial sample complete and return their questionnaires. Because this almost never happens, response bias becomes a concern, with the researcher testing (and hoping) for the possibility that the respondents look essentially like a random sample of the initial sample, and thus a somewhat smaller random sample of the total population. (For more detailed discussions of response bias, you might want to read Donald [1960] and Brownlee [1975].)

Nevertheless, overall response rate is one guide to the representativeness of the sample respondents. If a high response rate is achieved, there is less chance of significant response bias than with a low rate. Conversely, a low response rate is a danger signal, because the nonrespondents are likely to differ from the respondents in ways other than just their willingness to participate in your survey. Richard Bolstein (1991), for example, found that those who did not respond to a preelection political poll were less likely to vote than those who did participate. Estimating the turnout rate from the survey respondents, then, would have overestimated the number who would show up at the polls.

But what is a high or low response rate? A quick review of the survey literature will uncover a wide range of response rates. Each of these may be accompanied by a statement like "This is regarded as a relatively high response rate for a survey of this type." (A U.S. senator made this statement regarding a poll of constituents that achieved a 4 percent return rate.) Even so, it's possible to state some rules of thumb about return rates. I believe that a response rate of 50 percent is adequate for analysis and reporting. A response rate of 60 percent is good; a response rate of 70 percent is very good. Bear in mind, however, that these are only rough guides; they have no statistical basis, and a demonstrated lack of response bias is far more important than a high response rate. If you want to pursue this matter further, Delbert Miller (1991:145-55) has reviewed several specific surveys to offer a better sense of the variability of response rates.

As you can imagine, one of the more persistent discussions among survey researchers concerns ways of increasing response rates. You'll recall that this was a chief concern in the earlier discussion of options for mailing out and receiving questionnaires. Survey researchers have developed many ingenious techniques addressing this problem. Some have experimented with novel formats. Others have tried paying respondents to participate. The problem with paying, of course, is that it's expensive to make meaningfully high payments to hundreds or thousands of respondents, but some imaginative alternatives have been used. Some researchers have said, "We want to get your two-cents' worth on some issues, and we're willing to pay"—enclosing two pennies. Another enclosed a quarter, suggesting that the respondent make some little child happy. Still others have enclosed paper money.

Don Dillman (1978) provides an excellent review of the various techniques that survey researchers have used to increase return rates on mail surveys, and he evaluates the impact of each. More important, Dillman stresses the necessity of paying attention to all aspects of the study—what he calls the "Total Design Method"—rather than one or two special gimmicks. More recently, Francis Yammarino, Steven Skinner, and Terry Childers (1991) have undertaken an in-depth analysis of the response rates achieved in many studies using different techniques. Their findings are too complex to summarize easily, but you might find some guidance there for effective survey design.

A Case Study

The steps involved in the administration of a mail survey are many and can best be appreciated in a walk-through of an actual study. Accordingly, this section concludes with a detailed description of how the student survey we discussed in Chapter 7 as an illustration of systematic sampling was administered. This study did not represent the theoretical ideal for such studies, but in that regard it serves present purposes all the better.

The study was conducted by the students in my graduate seminar in survey research methods. As you may recall, 1,100 students were selected from the university registration tape through a stratified, systematic sampling procedure. For each student selected, six self-adhesive mailing labels were printed by the computer.

By the time we were ready to distribute the questionnaires, it became apparent that our meager research funds wouldn't cover several mailings to the entire sample of 1,100 students (questionnaire printing costs were higher than anticipated). As a result, we chose a systematic two-thirds sample of the mailing labels, yielding a subsample of 733 students.
Earlier, we had decided to keep the survey anonymous in the hope of encouraging more candid responses to some sensitive questions. (Later surveys of the same issues among the same population indicated this anonymity was unnecessary.) Thus, the questionnaires would carry no identification of students on them. At the same time, we hoped to reduce the follow-up mailing costs by mailing only to nonrespondents.

To achieve both of these aims, a special postcard method was devised. Each student was mailed a questionnaire that carried no identifying marks, plus a postcard addressed to the research office—with one of the student's mailing labels affixed to the reverse side of the card. The introductory letter asked the student to complete and return the questionnaire—assuring anonymity—and to return the postcard simultaneously. Receiving the postcard would tell us—without indicating which questionnaire it was—that the student had returned his or her questionnaire. This procedure would then facilitate follow-up mailings.

The 32-page questionnaire was printed in booklet form. The three-panel cover described earlier in this chapter permitted the questionnaire to be returned without an additional envelope.

A letter introducing the study and its purposes was printed on the front cover of the booklet. It explained why the study was being conducted (to learn how students feel about a variety of issues), how students had been selected for the study, the importance of each student's responding, and the mechanics of returning the questionnaire.

Students were assured that their responses to the survey were anonymous, and the postcard method was explained. A statement followed about the auspices under which the study was being conducted, and a telephone number was provided for those who might want more information about the study. (About five students called for information.)

By printing the introductory letter on the questionnaire, we avoided the necessity of enclosing a separate letter in the outgoing envelope, thereby simplifying the task of assembling mailing pieces.

The materials for the initial mailing were assembled as follows: (1) One mailing label for each student was stuck on a postcard. (2) Another label was stuck on an outgoing manila envelope. (3) One postcard and one questionnaire were placed in each envelope—with a glance to ensure that the name on the postcard and on the envelope were the same in each case.

The distribution of the survey questionnaires had been set up for a bulk rate mailing. Once the questionnaires had been stuffed into envelopes, they were grouped by zip code, tied in bundles, and delivered to the post office.

Shortly after the initial mailing, questionnaires and postcards began arriving at the research office. Questionnaires were opened, scanned, and assigned identification numbers as described earlier in this chapter. For every postcard received, a search was made for that student's remaining labels, and they were destroyed.

After two or three weeks, the remaining mailing labels were used to organize a follow-up mailing. This time a special, separate letter of appeal was included in the mailing piece. The new letter indicated that many students had returned their questionnaires already, and it was very important for all others to do so as well.

The follow-up mailing stimulated a resurgence of returns, as expected, and the same logging procedures were continued. The returned postcards told us which additional mailing labels to destroy.
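The bookkeeping behind this postcard method can be sketched in a few lines. In the actual study it was done with physical mailing labels; the student IDs below are invented for illustration.

```python
# Minimal sketch of the postcard method described above: questionnaires
# stay anonymous, while returned postcards tell us whom NOT to remail.
# Student IDs are invented; the real study used physical mailing labels.

sample = {"S001", "S002", "S003", "S004", "S005"}  # everyone mailed
awaiting_followup = set(sample)                     # labels still on hand

def postcard_received(student_id: str) -> None:
    """Destroy the student's remaining labels: no follow-up needed."""
    awaiting_followup.discard(student_id)

for student in ("S002", "S004"):   # postcards arriving in the mail
    postcard_received(student)

print("Send follow-up mailing to:", sorted(awaiting_followup))
# Send follow-up mailing to: ['S001', 'S003', 'S005']
```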
Unfortunately, time and financial pressures made it impossible to undertake a third mailing, as had been initially planned, but the two mailings resulted in an overall return rate of 62 percent.

This illustration should give you a fairly good sense of what's involved in the execution of mailed self-administered questionnaires. Let's turn now to the second principal method of conducting surveys, in-person interviews.

Interview Surveys

The interview is an alternative method of collecting survey data. Rather than asking respondents to read questionnaires and enter their own answers, researchers send interviewers to ask the questions orally and record respondents' answers. Interviewing is typically done in a face-to-face encounter, but telephone interviewing, discussed in the next section, follows most of the same guidelines.

Most interview surveys require more than one interviewer, although you might undertake a small-scale interview survey yourself. Portions of this section will discuss methods for training and supervising a staff of interviewers assisting you with a survey.

This section deals specifically with survey interviewing. Chapter 10 discusses the less structured, in-depth interviews often conducted in qualitative field research.

The Role of the Survey Interviewer

There are several advantages to having a questionnaire administered by an interviewer rather than a respondent. To begin with, interview surveys typically attain higher response rates than do mail surveys. A properly designed and executed interview survey ought to achieve a completion rate of at least 80 to 85 percent. (Federally funded surveys often require one of these response rates.) Respondents seem more reluctant to turn down an interviewer standing on their doorstep than to throw away a mailed questionnaire.

The presence of an interviewer also generally decreases the number of "don't knows" and "no answers." If minimizing such responses is important to the study, the interviewer can be instructed to probe for answers ("If you had to pick one of the answers, which do you think would come closest to your feelings?").

Interviewers can also serve as a guard against confusing questionnaire items. If the respondent clearly misunderstands the intent of a question or indicates that he or she does not understand, the interviewer can clarify matters, thereby obtaining relevant responses. (As we'll discuss shortly, such clarifications must be strictly controlled through formal specifications.)

Finally, the interviewer can observe respondents as well as ask questions. For example, the interviewer can note the respondent's race if this is considered too delicate a question to ask. Similar observations can be made regarding the quality of the dwelling, the presence of various possessions, the respondent's ability to speak English, the respondent's general reactions to the study, and so forth. In one survey of students, respondents were given a short, self-administered questionnaire to complete—concerning sexual attitudes and behavior—during the course of the interview. While a student completed the questionnaire, the interviewer made detailed notes regarding the dress and grooming of the respondent.

This procedure raises an ethical issue. Some researchers have objected that such practices violate the spirit of the agreement by which the respondent has allowed the interview. Although ethical issues seldom are clear-cut in social research, it's important to be sensitive to them.
We'll examine ethical issues in detail in Chapter 18.

Survey research is of necessity based on an unrealistic stimulus-response theory of cognition and behavior. Researchers must assume that a questionnaire item will mean the same thing to every respondent, and every given response must mean the same when given by different respondents. Although this is an impossible goal, survey questions are drafted to approximate the ideal as closely as possible.

The interviewer must also fit into this ideal situation. The interviewer's presence should not affect a respondent's perception of a question or the answer given. In other words, the interviewer should be a neutral medium through which questions and answers are transmitted. As such, different interviewers should obtain exactly the same responses from a given respondent. (Recall our earlier discussions of reliability.)

This neutrality has a special importance in area samples. To save time and money, a given interviewer is typically assigned to complete all the interviews in a particular geographical area—a city block or a group of nearby blocks. If the interviewer does anything to affect the responses obtained, the bias thus interjected might be interpreted as a characteristic of that area.

Let's suppose that a survey is being done to determine attitudes toward low-cost housing in order to help in the selection of a site for a new government-sponsored development. An interviewer assigned to a given neighborhood might—through word or gesture—communicate his or her own distaste for low-cost housing developments. Respondents might therefore tend to give responses in general agreement with the interviewer's own position. The results of the survey would indicate that the neighborhood in question strongly resists construction of the development in its area when in fact its apparent resistance simply reflects the interviewer's attitudes.

General Guidelines for Survey Interviewing

The manner in which interviews ought to be conducted will vary somewhat by survey population and will be affected to some degree by the nature of the survey content. Nevertheless, some general guidelines apply to most interviewing situations.

Appearance and Demeanor As a rule, interviewers should dress in a fashion similar to that of the people they'll be interviewing. A richly dressed interviewer will probably have difficulty getting good cooperation and responses from poorer respondents; a poorly dressed interviewer will have similar difficulties with richer respondents. To the extent that the interviewer's dress and grooming differ from those of the respondents, it should be in the direction of cleanliness and neatness in modest apparel. If cleanliness is not next to godliness, it appears to be next to neutrality. Although middle-class neatness and cleanliness may not be accepted by all sectors of U.S. society, they remain the primary norm and are the most likely to be acceptable to the largest number of respondents.

Dress and grooming are typically regarded as signs of a person's attitudes and orientations. At the time this is being written, torn jeans, green hair, and razor-blade earrings may communicate—correctly or incorrectly—that the interviewer is politically radical, sexually permissive, favorable to drug use, and so forth. Any of these impressions could bias responses or affect the willingness of people to be interviewed.

In demeanor, interviewers should be pleasant if nothing else.
Because they'll be prying into a respondent's personal life and attitudes, they must communicate a genuine interest in getting to know the respondent without appearing to spy. They must be relaxed and friendly without being too casual or clinging. Good interviewers also have the ability to determine very quickly the kind of person the respondent will feel most comfortable with, the kind of person the respondent would most enjoy talking to. Clearly, the interview will be more successful if the interviewer can become the kind of person the respondent is comfortable with. Further, because respondents are asked to volunteer a portion of their time and to divulge personal information, they deserve the most enjoyable experience the researcher and interviewer can provide.

Familiarity with Questionnaire If an interviewer is unfamiliar with the questionnaire, the study suffers and an unfair burden is placed on the respondent. The interview is likely to take more time than necessary and be unpleasant. Moreover, the interviewer cannot acquire familiarity by skimming through the questionnaire two or three times. He or she must study it carefully, question by question, and must practice reading it aloud.

Ultimately, the interviewer must be able to read the questionnaire items to respondents without error, without stumbling over words and phrases. A good model is the actor reading lines in a play or movie. The lines must be read as though they constituted a natural conversation, but that conversation must follow exactly the language set down in the questionnaire.

By the same token, the interviewer must be familiar with the specifications prepared in conjunction with the questionnaire. Inevitably some questions will not exactly fit a given respondent's situation, and the interviewer must determine how the question should be interpreted in that situation. The specifications provided to the interviewer should give adequate guidance in such cases, but the interviewer must know the organization and contents of the specifications well enough to refer to them efficiently. It would be better for the interviewer to leave a given question unanswered than to spend five minutes searching through the specifications for clarification or trying to interpret the relevant instructions.

Following Question Wording Exactly The first part of this chapter discussed the significance of question wording for the responses obtained. A slight change in the wording of a given question may lead a respondent to answer "yes" rather than "no." It follows that interviewers must be instructed to follow the wording of questions exactly. Otherwise all the effort that the developers have put into carefully phrasing the questionnaire items to obtain the information they need and to ensure that respondents interpret items precisely as intended will be wasted.

Recording Responses Exactly Whenever the questionnaire contains open-ended questions, those soliciting the respondent's own answer, the interviewer must record that answer exactly as given. No attempt should be made to summarize, paraphrase, or correct bad grammar. This exactness is especially important because the interviewer will not know how the responses are to be coded. Indeed, the researchers themselves may not know the coding until they've read a hundred or so responses. For example, the questionnaire might ask respondents how they feel about the traffic situation in their community.
One respondent might answer that there are too many cars on the roads and that something should be done to limit their numbers. Another might say that more roads are needed. If the interviewer recorded these two responses with the same summary—"congested traffic"—the researchers would not be able to take advantage of the important differences in the original responses.

Sometimes, verbal responses are too inarticulate or ambiguous to permit interpretation. However, the interviewer may be able to understand the intent of the response through the respondent's gestures or tone. In such a situation, the interviewer should still record the exact verbal response but also add marginal comments giving both the interpretation and the reasons for arriving at it.

More generally, researchers can use any marginal comments explaining aspects of the response not conveyed in the verbal recording, such as the respondent's apparent anger, embarrassment, uncertainty in answering, and so forth. In each case, however, the exact verbal response should also be recorded.

Probing for Responses Sometimes respondents in an interview will give an inappropriate or incomplete answer. In such cases, a probe, or request for an elaboration, can be useful. For example, a closed-ended question may present an attitudinal statement and ask the respondent to strongly agree, agree somewhat, disagree somewhat, or strongly disagree. The respondent, however, may reply: "I think that's true." The interviewer should follow this reply with: "Would you say you strongly agree or agree somewhat?" If necessary, interviewers can explain that they must check one or the other of the categories provided. If the respondent adamantly refuses to choose, the interviewer should write in the exact response given by the respondent.

Probes are more frequently required in eliciting responses to open-ended questions. For example, in response to a question about traffic conditions, the respondent might simply reply, "Pretty bad." The interviewer could obtain an elaboration on this response through a variety of probes. Sometimes the best probe is silence; if the interviewer sits quietly with pencil poised, the respondent will probably fill the pause with additional comments. (This technique is used effectively by newspaper reporters.) Appropriate verbal probes might be "How is that?" or "In what ways?" Perhaps the most generally useful probe is "Anything else?"

Often, interviewers need to probe for answers that will be sufficiently informative for analytical purposes. In every case, however, such probes must be completely neutral; they must not in any way affect the nature of the subsequent response. Whenever you anticipate that a given question may require probing for appropriate responses, you should provide one or more useful probes next to the question in the questionnaire. This practice has two important advantages. First, you'll have more time to devise the best, most neutral probes. Second, all interviewers will use the same probes whenever they're needed. Thus, even if the probe isn't perfectly neutral, all respondents will be presented with the same stimulus. This is the same logical guideline discussed for question wording. Although a question should not be loaded or biased, it's essential that every respondent be presented with the same question, even if it's biased.

Coordination and Control

Most interview surveys require the assistance of several interviewers.
In large-scale surveys, interviewers are hired and paid for their work. Student researchers might find themselves recruiting friends to help them interview. Whenever more than one interviewer is involved in a survey, their efforts must be carefully controlled. This control has two aspects: training interviewers and supervising them after they begin work.

The interviewers' training session should begin with a description of what the study is all about. Even though the interviewers may be involved only in the data-collection phase of the project, it will be useful to them to understand what will be done with the interviews they conduct and what purpose will be served. Morale and motivation are usually lower when interviewers don't know what's going on.

The training on how to interview should begin with a discussion of general guidelines and procedures, such as those discussed earlier in this section. Then the whole group should go through the questionnaire together—question by question. Don't simply ask if anyone has any questions about the first page of the questionnaire. Read the first question aloud, explain the purpose of the question, and then entertain any questions or comments the interviewers may have. Once all their questions and comments have been handled, go on to the next question in the questionnaire.

It's always a good idea to prepare specifications to accompany an interview questionnaire. Specifications are explanatory and clarifying comments about handling difficult or confusing situations that may occur with regard to particular questions in the questionnaire. When drafting the questionnaire, try to think of all the problem cases that might arise—the bizarre circumstances that might make a question difficult to answer. The survey specifications should provide detailed guidelines on how to handle such situations. For example, even as simple a matter as age might present problems. Suppose a respondent says he or she will be 25 next week. The interviewer might not be sure whether to take the respondent's current age or the nearest one. The specifications for that question should explain what should be done. (Probably, you would specify that the age as of the last birthday should be recorded in all cases.)

If you've prepared a set of specifications, review them with the interviewers when you go over the individual questions in the questionnaire. Make sure your interviewers fully understand the specifications and the reasons for them as well as the questions themselves.

This portion of the interviewer training is likely to generate many troublesome questions from your interviewers. They'll ask, "What should I do if . . . ?" In such cases, avoid giving a quick, offhand answer. If you have specifications, show how the solution to the problem could be determined from the specifications. If you do not have specifications, show how the preferred handling of the situation fits within the general logic of the question and the purpose of the study. Giving unexplained answers to such questions will only confuse the interviewers and cause them to take their work less seriously. If you don't know the answer to such a question when it is asked, admit it and ask for some time to decide on the best answer. Then think out the situation carefully and be sure to give all the interviewers your answer, explaining your reasons.

Once you've gone through the whole questionnaire, conduct one or two demonstration interviews in front of everyone.
Preferably, you should interview someone other than one of the interviewers. Realize that your interview will be a model for those you're training, so make it good. It would be best, moreover, if the demonstration interview were done as realistically as possible. Do not pause during the demonstration to point out how you've handled a complicated situation: Handle it, and then explain later. It is irrelevant if the person you're interviewing gives real answers or takes on some hypothetical identity for the purpose, as long as the answers are consistent.

After the demonstration interviews, pair off your interviewers and have them practice on each other. When they've completed the questionnaire, have them reverse roles and do it again. Interviewing is the best training for interviewing. As your interviewers practice on each other, wander around, listening in on the practice so you'll know how well they're doing. Once the practice is completed, the whole group should discuss their experiences and ask any other questions they may have.

The final stage of the training for interviewers should involve some "real" interviews. Have them conduct some interviews under the actual conditions that will pertain to the final survey. You may want to assign them people to interview, or perhaps they may be allowed to pick people themselves. Do not have them practice on people you've selected in your sample, however. After each interviewer has completed three to five interviews, have him or her check back with you. Look over the completed questionnaires for any evidence of misunderstanding. Again, answer any questions that the interviewers may have. Once you're convinced that a given interviewer knows what to do, assign some actual interviews, using the sample you've selected for the study.

It's essential to continue supervising the work of interviewers over the course of the study. You should check in with them after they conduct no more than 20 or 30 interviews. You might assign 20 interviews, have the interviewer bring back those questionnaires when they're completed, look them over, and assign another 20 or so. Although this may seem overly cautious, you must continually protect yourself against misunderstandings that may not be evident early in the study.

If you're the only interviewer in your study, these comments may not seem relevant. However, it would be wise, for example, to prepare specifications for potentially troublesome questions in your questionnaire. Otherwise, you run the risk of making ad hoc decisions during the course of the study that you'll later regret or forget. Also, the emphasis on practice applies equally to the one-person project and to the complex funded survey with a large interviewing staff.

Telephone Surveys

For years telephone surveys had a rather bad reputation among professional researchers. Telephone surveys are limited by definition to people who have telephones. Years ago, this method produced a substantial social-class bias by excluding poor people from the surveys. This was vividly demonstrated by the Literary Digest fiasco of 1936. Recall that, even though voters were contacted by mail, the sample was partially selected from telephone subscribers, who were hardly typical in a nation just recovering from the Great Depression.

By 1993, however, the Census Bureau (1996a: Table 1224) estimated that 93.4 percent of all housing units had telephones, so the earlier form of class bias has been substantially reduced.
A related sampling problem involved unlisted numbers. A survey sample selected from the pages of a local telephone directory would totally omit all those people—typically richer—who requested that their numbers not be published. This potential bias has been erased through a technique that has advanced telephone sampling substantially: random-digit dialing.

Telephone surveys have many advantages that underlie the growing popularity of this method. Probably the greatest advantages are money and time, in that order. In a face-to-face, household interview, you may drive several miles to a respondent's home, find no one there, return to the research office, and drive back the next day—possibly finding no one there again. It's cheaper and quicker to let your fingers make the trips.

Interviewing by telephone, you can dress any way you please without affecting the answers respondents give. And sometimes respondents will be more honest in giving socially disapproved answers if they don't have to look you in the eye. Similarly, it may be possible to probe into more sensitive areas, though this isn't necessarily the case. People are, to some extent, more suspicious when they can't see the person asking them questions—perhaps a consequence of "surveys" aimed at selling magazine subscriptions and time-share condominiums.

Interviewers can communicate a lot about themselves over the phone, however, even though they can't be seen. For example, researchers worry about the impact of an interviewer's name (particularly if ethnicity is relevant to the study) and debate the ethics of having all interviewers use bland "stage names" such as Smith or Jones. (Female interviewers sometimes ask permission to do this, to avoid subsequent harassment from men they interview.)

Telephone surveys can allow greater control over data collection if several interviewers are engaged in the project. If all the interviewers are calling from the research office, they can get clarification from the person in charge whenever problems occur, as they inevitably do. Alone in the boondocks, an interviewer may have to wing it between weekly visits with the interviewing supervisor.

Finally, another important factor involved in the growing use of telephone surveys has to do with personal safety. Don Dillman (1978:4) describes the situation this way:

    Interviewers must be able to operate comfortably in a climate in which strangers are viewed with distrust and must successfully counter respondents' objections to being interviewed. Increasingly, interviewers must be willing to work at night to contact residents in many households. In some cases, this necessitates providing protection for interviewers working in areas of a city in which a definite threat to the safety of individuals exists.

Concerns for safety, thus, work two ways to hamper face-to-face interviews. Potential respondents may refuse to be interviewed, fearing the stranger-interviewer. And the interviewers themselves may incur some risks. All this is made even worse by the possibility of the researchers being sued for huge sums if anything goes wrong.

There are problems involved in telephone interviewing, however. As I've already mentioned, the method is hampered by the proliferation of bogus "surveys" that are actually sales campaigns disguised as research.
If you have any questions about such a call you receive, by the way, ask the interviewer directly whether you've been selected for a survey only or whether a sales "opportunity" is involved. It's also a good idea, if you have any doubts, to get the interviewer's name, phone number, and company. Hang up if the caller refuses to provide any of these.

For the researcher, the ease with which people can hang up is another shortcoming of telephone surveys. Once you've been let inside someone's home for an interview, the respondent is unlikely to order you out of the house in midinterview. It's much easier to terminate a telephone interview abruptly, saying something like, "Whoops! Someone's at the door. I gotta go." or "OMIGOD! The pigs are eating my Volvo!" (That sort of thing is much harder to fake when the interviewer is sitting in your living room.)

Another potential problem for telephone interviewing is the prevalence of answering machines.

Voice Capture™
by James E. Dannemiller, SMS Research, Honolulu

The development of various CATI techniques has been a boon to survey and marketing research, though mostly it has supported the collection, coding, and analysis of "data as usual." The Voice Capture™ technique developed by Survey Systems, however, offers quite unusual possibilities, which we are only beginning to explore.

In the course of a CATI-based telephone interview, the interviewer can trigger the computer to begin digitally recording the conversation with the respondent. Having determined that the respondent has recently changed his or her favorite TV news show, for example, the interviewer can ask, "Why did you change?" and begin recording the verbatim response. (Early in the interview, the interviewer has asked permission to record parts of the interview.) Later on, coders can play back the responses and code them—much as they would do with the interviewer's typescript of the responses. This offers an easier and more accurate way of accomplishing a conventional task.

But that's a tame use of the new capability. It's also possible to incorporate such oral data as parts of a cross-tabulation during analysis. We may create a table of gender by age by reasons for switching TV news shows. Thus, we can hear, in turn, the responses of the young men, young women, middle-aged men, and so forth. In one such study we found the younger and older men tending to watch one TV news show, while the middle-aged men watched something else. Listening to the responses of the middle-aged men, one after another, we heard a common comment: "Well, now that I'm older . . ." This kind of aside might have been lost in the notes hastily typed by interviewers, but such comments stood out dramatically in the oral data. The middle-aged men seemed to be telling us they felt "maturity" required them to watch a particular show, while more years under their belts let them drift back to what they liked in the first place.

These kinds of data are especially compelling to clients, particularly in customer satisfaction studies. Rather than summarize what we feel a client's customers like and don't like, we can let the respondents speak directly to the client in their own words. It's like a focus group on demand. Going one step further, we have found that letting line employees (bank tellers, for example) listen to the responses has more impact than having their supervisors tell them what they are doing right or wrong.
As exciting as these experiences are, I have the strong feeling that we have scarcely begun to tap into the possibilities for such unconventional forms of data.

A study conducted by Walker Research (1988) found that half of the owners of answering machines acknowledged using their machines to "screen" calls at least some of the time. Research by Tuckel and Feinberg (1991), however, showed that answering machines had not yet had a significant effect on the ability of telephone researchers to contact prospective respondents. Nevertheless, the researchers concluded that as answering machines continued to proliferate, "the sociodemographic characteristics of owners will change." This fact made it likely that "different behavior patterns associated with the utilization of the answering machine" could emerge (1991:216).

Computer Assisted Telephone Interviewing (CATI)

In Chapter 14, we'll be looking at some of the ways computers have influenced the conduct of social research—particularly data processing and analysis. Computers are also changing the nature of telephone interviewing. One innovation is computer-assisted telephone interviewing (CATI). This method is increasingly used by academic, government, and commercial survey researchers. Though there are variations in practice, here's what CATI can look like.

Imagine an interviewer wearing a telephone headset, sitting in front of a computer terminal and its video screen. The central computer has been programmed to select a telephone number at random and dials it. (Random-digit dialing avoids the problem of unlisted telephone numbers.) On the video screen is an introduction ("Hello, my name is . . .") and the first question to be asked ("Could you tell me how many people live at this address?").

When the respondent answers the phone, the interviewer says hello, introduces the study, and asks the first question displayed on the screen. When the respondent answers the question, the interviewer types that answer into the computer terminal—either the verbatim response to an open-ended question or the code category for the appropriate answer to a closed-ended question. The answer is immediately stored in the computer. The second question appears on the video screen, is asked, and the answer is entered into the computer. Thus, the interview continues.

In addition to the obvious advantages in terms of data collection, CATI automatically prepares the data for analysis; in fact, the researcher can begin analyzing the data before the interviewing is complete, thereby gaining an advanced view of how the analysis will turn out.
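A minimal sketch of this flow follows, combining a crude random-digit dialer with a question-by-question loop. The questions, the area code, and the CSV storage are invented illustrations, not any actual CATI system.

```python
# Minimal sketch of a CATI-style loop: pick a random phone number, walk
# the interviewer through the questions, and store the answers as they
# are typed in. Questions, prefix, and storage are invented illustrations.
import csv
import random

def random_digit_number(area_code: str = "808") -> str:
    """Random-digit dialing: unlisted numbers are as likely as listed ones."""
    return area_code + "".join(random.choice("0123456789") for _ in range(7))

QUESTIONS = [
    ("hh_size", "Could you tell me how many people live at this address?"),
    ("news_show", "Which TV news show do you watch most often?"),
]

def conduct_interview() -> dict:
    answers = {"phone": random_digit_number()}
    print(f"Dialing {answers['phone']} ...")
    for code, text in QUESTIONS:
        answers[code] = input(text + " ")   # interviewer types the response
    return answers

if __name__ == "__main__":
    completed = conduct_interview()
    # Append the coded answers to a data file, ready for analysis.
    with open("interviews.csv", "a", newline="") as f:
        csv.DictWriter(f, fieldnames=completed.keys()).writerow(completed)
```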
Still another innovation that computer technology makes possible is described in the box entitled "Voice Capture™."

New Technologies and Survey Research

As we have already seen in the case of computer-assisted telephone interviewing (CATI), many of the new technologies affecting people's lives also open new possibilities for survey research. For example, recent innovations in self-administered questionnaires make use of the computer. Among the techniques that are being tested are these (Nicholls, Baker, and Martin in press):

CAPI (computer-assisted personal interviewing): Similar to CATI but used in face-to-face interviews rather than over the phone.

CASI (computer-assisted self-interviewing): A research worker brings a computer to the respondent's home, and the respondent reads questions on the computer screen and enters his or her own answers.

CSAQ (computerized self-administered questionnaire): The respondent receives the questionnaire via floppy disk, bulletin board, or other means and runs the software, which asks questions and accepts the respondent's answers. The respondent then returns the data file.

TDE (touchtone data entry): The respondent initiates the process by calling a number at the research organization. This prompts a series of computerized questions, which the respondent answers by pressing keys on the telephone keypad.

VR (voice recognition): Instead of asking the respondent to use the telephone keypad, as in TDE, this system accepts spoken responses.

Nicholls et al. report that such techniques are more efficient than conventional techniques, and they do not appear to result in a reduction of data quality.

Jeffery Walker (1994) has explored the possibility of conducting surveys by fax machine. Questionnaires are faxed to respondents, who are asked to fax their answers back. Of course, such surveys can only represent that part of the population that has fax machines. Walker reports that fax surveys don't achieve as high a response rate as do face-to-face interviews, but, because of the perceived urgency, they do produce higher response rates than do mail or telephone surveys. In one test case, all those who had ignored a mail questionnaire were sent a fax follow-up, and 83 percent responded.

I've already noted that, as a consumer of social research, you should be wary of "surveys" whose apparent purpose is to raise money for the sponsor. This practice has already invaded the realm of fax surveys, evidenced by a fax entitled "Should Hand Guns Be Outlawed?" Two fax numbers were provided for expressing either a "Yes" or "No" opinion. The smaller print noted, "Calls to these numbers cost $2.95 per minute, a small price for greater democracy. Calls take approx. 1 or 2 minutes." You can imagine where the $2.95 went.

The new technology of survey research includes the use of the Internet and the World Wide Web—two of the most far-reaching developments of the late twentieth century. Appendix B examines ways of using the Web for literature searches and related aspects of the research process. Some researchers feel that the Internet can also be used to conduct meaningful survey research.

An immediate objection that many social researchers make to online surveys concerns representativeness: Will the people who can be surveyed online be representative of meaningful populations, such as all U.S. adults, all voters, and so on? This is the criticism raised with regard to surveys via fax and, earlier, with regard to telephone surveys.

Camilo Wilson (1999), founder of Cogix (www.cogix.com), points out that some populations are ideally suited to online surveys: specifically, those who visit a particular Web site. For example, Wilson indicates that market research for online companies should be conducted online, and his firm has developed software, ViewsFlash, for precisely that purpose. Although Web site surveys could easily collect data from all who visit a particular site, Wilson suggests that survey sampling techniques can provide sufficient consumer data without irritating thousands or millions of potential customers.

But how about general population surveys? As I write this, a debate is brewing within the survey research community. Humphrey Taylor and George Terhanian (1999:20) prompted part of the debate with an article, "Heady Days Are Here Again."
Acknowledging the need for caution, they urged that online polling be given a fair hearing:

    One test of the credibility of any new data collection method hinges on its ability to reliably and accurately forecast voting behavior. For this reason, last fall we attempted to estimate the 1998 election outcomes for governor and US Senate in 14 states on four separate occasions using Internet surveys.

The researchers compared their results with 52 telephone polls that addressed the same races. Online polling correctly picked 21 of the 22 winners, or 95 percent. However, simply picking the winner is not a sufficient test of effectiveness: How close did the polls come to the actual percentages received by the various candidates? Taylor and Terhanian report their online polls missed the actual vote by an average of 6.8 percentage points. The 52 telephone polls missed the same votes by an average of 6.2 percentage points.

Warren Mitofsky (1999) is a critic of online polling. In addition to disagreeing with the way Taylor and Terhanian calculated the ranges of error just reported, he has called for a sounder theoretical basis on which to ground the new technique. One key to online polling is the proper assessment and use of weights for different kinds of respondents—as was discussed in the context of quota sampling in Chapter 7. Taylor and Terhanian are aware of the criticisms of quota sampling, but their initial experiences with online polling suggest to them that the technique should be pursued. Indeed, they conclude by saying, "This is an unstoppable train, and it is accelerating. Those who don't get on board run the risk of being left far behind" (1999:23).

The cautions urged in relation to online surveys today are similar to those urged in relation to telephone surveys in the first edition of this book, in 1975. Whether online surveys will gain the respect and extensive use enjoyed by telephone surveys today remains to be seen. Students who consider using this technique should do so in full recognition of its potential shortcomings.

Comparison of the Different Survey Methods

Now that we've seen several ways to collect survey data, let's take a moment to compare them directly.

Self-administered questionnaires are generally cheaper and quicker than face-to-face interview surveys. These considerations are likely to be important for an unfunded student wishing to undertake a survey for a term paper or thesis. Moreover, if you use the self-administered mail format, it costs no more to conduct a national survey than a local one of the same sample size. In contrast, a national interview survey (either face-to-face or by telephone) would cost far more than a local one. Also, mail surveys typically require a small staff: One person can conduct a reasonable mail survey alone, although you shouldn't underestimate the work involved. Further, respondents are sometimes reluctant to report controversial or deviant attitudes or behaviors in interviews but are willing to respond to an anonymous self-administered questionnaire.

Interview surveys also offer many advantages. For example, they generally produce fewer incomplete questionnaires. Although respondents may skip questions in a self-administered questionnaire, interviewers are trained not to do so. In CATI surveys, the computer offers a further check on this. Interview surveys, moreover, have typically achieved higher completion rates than have self-administered questionnaires.
Although self-administered questionnaires may be more effective for sensitive issues, interview surveys are definitely more effective for complicated ones. Prime examples include the enumeration of household members and the determination of whether a given address corresponds to more than one housing unit. Although the concept of housing unit has been refined and standardized by the Bureau of the Census and interviewers can be trained to deal with the concept, it's extremely difficult to communicate in a self-administered questionnaire. This advantage of interview surveys pertains generally to all complicated contingency questions.

With interviews, you can conduct a survey based on a sample of addresses or phone numbers rather than on names. An interviewer can arrive at an assigned address or call the assigned number, introduce the survey, and even—following instructions—choose the appropriate person at that address to respond to the survey. In contrast, self-administered questionnaires addressed to "occupant" receive a notoriously low response.

Finally, as we've seen, interviewers questioning respondents face-to-face can make important observations aside from responses to questions asked in the interview. In a household interview, they may note the characteristics of the neighborhood, the dwelling unit, and so forth. They may also note characteristics of the respondents or the quality of their interaction with the respondents—whether the respondent had difficulty communicating, was hostile, seemed to be lying, and so on.

The chief advantages of telephone surveys over those conducted face-to-face center primarily on time and money. Telephone interviews are much cheaper and can be mounted and executed quickly. Also, interviewers are safer when interviewing in high-crime areas. Moreover, the impact of the interviewers on responses is somewhat lessened when they can't be seen by the respondents. As only one indicator of the popularity of telephone interviewing, when Johnny Blair and his colleagues (1995) compiled a bibliography on sample designs for telephone interviews, they listed over 200 items.

Online surveys have many of the strengths and weaknesses of mail surveys. Once the available software has been further developed, they are likely to be substantially cheaper. An important weakness, however, lies in the difficulty of assuring that respondents to an online survey will be representative of some more general population.

Clearly, each survey method has its place in social research. Ultimately, you must balance the advantages and disadvantages of the different methods in relation to your research needs and your resources.

Strengths and Weaknesses of Survey Research

Regardless of the specific method used, surveys—like other modes of observation in social research—have special strengths and weaknesses. You should keep these in mind when determining whether a survey is appropriate for your research goals.

Surveys are particularly useful in describing the characteristics of a large population. A carefully selected probability sample in combination with a standardized questionnaire offers the possibility of making refined descriptive assertions about a student body, a city, a nation, or any other large population. Surveys determine unemployment rates, voting intentions, and the like with uncanny accuracy.
Strengths and Weaknesses of Survey Research

Regardless of the specific method used, surveys, like other modes of observation in social research, have special strengths and weaknesses. You should keep these in mind when determining whether a survey is appropriate for your research goals.

Surveys are particularly useful in describing the characteristics of a large population. A carefully selected probability sample in combination with a standardized questionnaire offers the possibility of making refined descriptive assertions about a student body, a city, a nation, or any other large population. Surveys determine unemployment rates, voting intentions, and the like with uncanny accuracy. Although the examination of official documents (such as marriage, birth, or death records) can provide equal accuracy for a few topics, no other method of observation can provide this general capability.

Surveys, especially self-administered ones, make large samples feasible. Surveys of 2,000 respondents are not unusual. A large number of cases is very important for both descriptive and explanatory analyses, especially wherever several variables are to be analyzed simultaneously.

In one sense, surveys are flexible. Many questions may be asked on a given topic, giving you considerable flexibility in your analyses. Whereas an experimental design may require you to commit yourself in advance to a particular operational definition of a concept, surveys let you develop operational definitions from actual observations.

Finally, standardized questionnaires have an important strength in regard to measurement generally. Earlier chapters have discussed the ambiguous nature of most concepts: They have no ultimately real meanings. One person's religiosity is quite different from another's. Although you must be able to define concepts in those ways most relevant to your research goals, you may not find it easy to apply the same definitions uniformly to all subjects. The survey researcher is bound to this requirement by having to ask exactly the same questions of all subjects and having to impute the same intent to all respondents giving a particular response.

Survey research also has several weaknesses. First, the requirement of standardization often seems to result in the fitting of round pegs into square holes. Standardized questionnaire items often represent the least common denominator in assessing people's attitudes, orientations, circumstances, and experiences. By designing questions that will be at least minimally appropriate to all respondents, you may miss what is most appropriate to many respondents. In this sense, surveys often appear superficial in their coverage of complex topics. Although this problem can be partly offset by sophisticated analyses, it is inherent in survey research.

Similarly, survey research can seldom deal with the context of social life. Although questionnaires can provide information in this area, the survey researcher rarely develops the feel for the total life situation in which respondents are thinking and acting that, say, the participant observer can (see Chapter 10).

In many ways, surveys are inflexible. Studies involving direct observation can be modified as field conditions warrant, but surveys typically require that an initial study design remain unchanged throughout. As a field researcher, for example, you can become aware of an important new variable operating in the phenomenon you're studying and begin making careful observations of it. The survey researcher would probably be unaware of the new variable's importance and could do nothing about it in any event.

Finally, surveys are subject to the artificiality mentioned earlier in connection with experiments. Finding out that a person gives conservative answers to a questionnaire does not necessarily mean the person is conservative; finding out that a person gives prejudiced answers to a questionnaire does not necessarily mean the person is prejudiced. This shortcoming is especially salient in the realm of action. Surveys cannot measure social action; they can only collect self-reports of recalled past action or of prospective or hypothetical action. The problem of artificiality has two aspects.
First, the topic of study may not be amenable to measurement through questionnaires. Second, the act of studying that topic (an attitude, for example) may affect it. A survey respondent may have given no thought to whether the governor should be impeached until asked for his or her opinion by an interviewer. He or she may, at that point, form an opinion on the matter.

Survey research is generally weak on validity and strong on reliability. In comparison with field research, for example, the artificiality of the survey format puts a strain on validity. As an illustration, people's opinions on issues seldom take the form of strongly agreeing, agreeing, disagreeing, or strongly disagreeing with a specific statement. Their survey responses in such cases must be regarded as approximate indicators of what the researchers had in mind when they framed the questions. This comment, however, needs to be held in the context of earlier discussions of the ambiguity of validity itself. To say something is a valid or an invalid measure assumes the existence of a "real" definition of what's being measured, and many scholars now reject that assumption.

Reliability is a clearer matter. By presenting all subjects with a standardized stimulus, survey research goes a long way toward eliminating unreliability in observations made by the researcher. Moreover, careful wording of the questions can also significantly reduce the subject's own unreliability.

As with all methods of observation, a full awareness of the inherent or probable weaknesses of survey research can partially resolve them in some cases. Ultimately, though, researchers are on the safest ground when they can employ several research methods in studying a given topic.

Secondary Analysis

As a mode of observation, survey research involves the following steps: (1) questionnaire construction, (2) sample selection, and (3) data collection, through either interviewing or self-administered questionnaires. As you've gathered, surveys are usually major undertakings. It's not unusual for a large-scale survey to take several months or even more than a year to progress from conceptualization to data in hand. (Smaller-scale surveys can, of course, be done more quickly.) Through a method called secondary analysis, however, researchers can pursue their particular social research interests, analyzing survey data from, say, a national sample of 2,000 respondents, while avoiding the enormous expenditure of time and money such a survey entails.

Secondary analysis is a form of research in which the data collected and processed by one researcher are reanalyzed, often for a different purpose, by another. Beginning in the 1960s, survey researchers became aware of the potential value that lay in archiving survey data for analysis by scholars who had nothing to do with the survey design and data collection. Even when one researcher had conducted a survey and analyzed the data, those same data could be further analyzed by others who had slightly different interests. Thus, if you were interested in the relationship between political views and attitudes toward gender equality, you could examine that research question through the analysis of any data set that happened to contain questions relating to those two variables. (A sketch of such an analysis appears at the end of this passage.)

The initial data archives were very much like book libraries, with a couple of differences. First, instead of books, the data archives contained data sets: first as punched cards, then as magnetic tapes. Today they're typically contained on computer disks, CD-ROMs, or online servers. Second, whereas you're expected to return books to a conventional library, you can keep the data obtained from a data archive.
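To show how little machinery a simple secondary analysis can require, here is a minimal Python sketch using the pandas library. The file name and column names are assumptions about a hypothetical GSS-style extract downloaded from an archive; in practice you would take the actual variable names and missing-data codes from the data set's codebook.

# A minimal sketch of secondary analysis: cross-tabulating political views
# against an attitude toward gender equality. File and column names are
# hypothetical; consult the codebook of the data set you actually obtain.
import pandas as pd

df = pd.read_csv("gss_extract.csv")  # hypothetical archive extract

# Keep only the two variables of interest, dropping missing responses.
sub = df[["polviews", "gender_equality"]].dropna()

# Percentage within each political-views category so that attitudes can
# be compared across liberals, moderates, and conservatives.
table = pd.crosstab(sub["polviews"], sub["gender_equality"], normalize="index") * 100
print(table.round(1))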
The best-known current example of secondary analysis is the General Social Survey (GSS). Every year or two, the federal government commissions the National Opinion Research Center (NORC) at the University of Chicago to conduct a major national survey to collect data on a large number of social science variables. These surveys are conducted precisely for the purpose of making data available to scholars at little or no cost. You can learn more about the GSS at http://www.icpsr.umich.edu/gss/.

Numerous other resources are available for identifying and acquiring survey data for secondary analysis. The Roper Center for Public Opinion Research (http://www.ropercenter.uconn.edu/) at the University of Connecticut is one excellent resource. The center also publishes the journal Public Perspective on public opinion polling. Polling the Nations (http://www.pollingthenations.com/) is an online repository for thousands of polls conducted in the United States and 70 other nations. A paid subscription allows users to obtain specific data results from studies they specify, rather than obtaining whole studies. Outside the United States, the Netherlands Institute for Scientific Information Services at http://www.niwi.knaw.nl/cgi-bin/nph-star_search.pl allows users to track down European studies that contain variables of interest. You might also try the Central Archive for Social Science Research at the University of Cologne in Germany (http://www.za.uni-koeln.de/index-e.htm). (Appendix B contains numerous Web sites useful to social science researchers, and you'll find many other data sources there.)

The advantages of secondary analysis are obvious and enormous: It's cheaper and faster than doing original surveys, and, depending on who did the original survey, you may benefit from the work of topflight professionals. There are disadvantages, however. The key problem involves the recurrent question of validity. When one researcher collects data for one particular purpose, you have no assurance that those data will be appropriate for your research interests. Typically, you'll find that the original researcher asked a question that "comes close" to measuring what you're interested in, but you'll wish the question had been asked just a little differently, or that another, related question had also been asked. Your question, then, is whether the question that was asked provides a valid measure of the variable you want to analyze.

Nevertheless, secondary analysis can be immensely useful. Moreover, it illustrates once again the range of possibilities available in finding the answers to questions about social life. Although no single method unlocks all puzzles, there is no limit to the ways you can find out about things. And when you zero in on an issue from several independent directions, you gain that much more expertise.

MAIN POINTS

• Survey research, a popular social research method, is the administration of questionnaires to a sample of respondents selected from some population.

• Survey research is especially appropriate for making descriptive studies of large populations; survey data may be used for explanatory purposes as well.
• Questionnaires provide a method of collecting data by (1) asking people questions or (2) asking them to agree or disagree with statements representing different points of view. Questions may be open-ended (respondents supply their own answers) or closed-ended (they select from a list of provided answers).

• Items in a questionnaire should observe several guidelines: (1) The items must be clear and precise; (2) the items should ask only about one thing (i.e., double-barreled questions should be avoided); (3) respondents must be competent to answer the item; (4) respondents must be willing to answer the item; (5) questions should be relevant to the respondent; (6) items should ordinarily be short; (7) negative terms should be avoided so as not to confuse respondents; (8) the items should be worded to avoid biasing responses.

• The format of a questionnaire can influence the quality of data collected.

• A clear format for contingency questions is necessary to ensure that the respondents answer all the questions intended for them. The matrix question is an efficient format for presenting several items sharing the same response categories.

• The order of items in a questionnaire can influence the responses given.

• Clear instructions are important for getting appropriate responses in a questionnaire.

• Questionnaires should be pretested before being administered to the study sample.

• Questionnaires may be administered in three basic ways: through self-administered questionnaires, face-to-face interviews, or telephone surveys.

• It's generally advisable to plan follow-up mailings in the case of self-administered questionnaires, sending new questionnaires to those respondents who fail to respond to the initial appeal. Properly monitoring questionnaire returns will provide a good guide to when a follow-up mailing is appropriate.

• The essential characteristic of interviewers is that they be neutral; their presence in the data-collection process must not have any effect on the responses given to questionnaire items.

• Interviewers must be carefully trained to be familiar with the questionnaire, to follow the question wording and question order exactly, and to record responses exactly as they are given.

• Interviewers can use probes to elicit an elaboration on an incomplete or ambiguous response. Probes should be neutral. Ideally, all interviewers should use the same probes.

• Telephone surveys can be cheaper and more efficient than face-to-face interviews, and they can permit greater control over data collection. The development of computer-assisted telephone interviewing (CATI) techniques is especially promising.

• New technologies offer additional opportunities for social researchers. They include various kinds of computer-assisted data collection and analysis as well as the chance to conduct surveys by fax or over the Internet. The latter two methods, however, must be used with caution because respondents may not be representative of the intended population.

• The advantages of a self-administered questionnaire over an interview survey are economy, speed, lack of interviewer bias, and the possibility of anonymity and privacy to encourage candid responses on sensitive issues.

• The advantages of an interview survey over a self-administered questionnaire are fewer incomplete questionnaires and fewer misunderstood questions, generally higher return rates, and greater flexibility in terms of sampling and special observations.
• The principal advantages of telephone surveys over face-to-face interviews are the savings in cost and time. Telephone interviewers are also safer than in-person interviewers, and they may have a smaller effect on the interview itself.

• Online surveys have many of the strengths and weaknesses of mail surveys. Although they are cheaper to conduct, it can be difficult to ensure that the respondents represent a more general population.

• Survey research in general offers advantages in terms of economy, the amount of data that can be collected, and the chance to sample a large population. The standardization of the data collected represents another special strength of survey research.

• Survey research has the weaknesses of being somewhat artificial, potentially superficial, and relatively inflexible. It's difficult to use surveys to gain a full sense of social processes in their natural settings. In general, survey research is comparatively weak on validity and strong on reliability.

• Secondary analysis provides social researchers with an important option for "collecting" data cheaply and easily but at a potential cost in validity.

KEY TERMS

questionnaire
respondent
open-ended questions
closed-ended questions
bias
contingency question
response rate
interview
probe
secondary analysis

REVIEW QUESTIONS AND EXERCISES

1. For each of the following open-ended questions, construct a closed-ended question that could be used in a questionnaire.
a. What was your family's total income last year?
b. How do you feel about the space shuttle program?
c. How important is religion in your life?
d. What was your main reason for attending college?
e. What do you feel is the biggest problem facing your community?

2. Construct a set of contingency questions for use in a self-administered questionnaire that would solicit the following information:
a. Is the respondent employed?
b. If unemployed, is the respondent looking for work?
c. If the unemployed respondent is not looking for work, is he or she retired, a student, or a homemaker?
d. If the respondent is looking for work, how long has he or she been looking?

3. Find a questionnaire printed in a magazine or newspaper (for a reader survey, for example). Consider at least five of the questions in it and critique each one either positively or negatively.

4. Look at your appearance right now. Identify aspects of your appearance that might create a problem if you were interviewing a general cross section of the public.

5. Locate a survey being conducted on the Web. Briefly describe the survey and discuss its strengths and weaknesses.

ADDITIONAL READINGS

Babbie, Earl. 1990. Survey Research Methods. Belmont, CA: Wadsworth. A comprehensive overview of survey methods. (You thought I'd say it was lousy?) This textbook, although overlapping the present one somewhat, covers aspects of survey techniques omitted here.

Bradburn, Norman M., and Seymour Sudman. 1988. Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass. These veteran survey researchers answer questions about their craft that the general public commonly asks.

Dillman, Don A. 1978. Mail and Telephone Surveys: The Total Design Method. New York: Wiley. An excellent review of the methodological literature on mail and telephone surveys. Dillman makes many good suggestions for improving response rates.

Elder, Glen H., Jr., Eliza K. Pavalko, and Elizabeth C. Clipp. 1993. Working with Archival Data: Studying Lives. Newbury Park, CA: Sage.
This book discusses the possibilities and techniques for using existing data archives in the United States, especially those providing longitudinal data.

Feick, Lawrence F. 1989. "Latent Class Analysis of Survey Questions That Include Don't Know Responses." Public Opinion Quarterly 53:525-47. "Don't know" can mean a variety of things, as this analysis indicates.

Fowler, Floyd J., Jr. 1995. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage. A comprehensive discussion of questionnaire construction, including a number of suggestions for pretesting questions. This book discusses the logic of obtaining information through survey questions and gives numerous guidelines for being effective. It also offers several examples of questions you might use.

Groves, Robert M. 1990. "Theories and Methods of Telephone Surveys." Pp. 221-40 in Annual Review of Sociology (vol. 16), edited by W. Richard Scott and Judith Blake. Palo Alto, CA: Annual Reviews. An attempt to place telephone surveys in the context of sociological and psychological theories and to address the various kinds of errors common to this research method.

Miller, Delbert. 1991. Handbook of Research Design and Social Measurement. Newbury Park, CA: Sage. A powerful reference work. This book, especially Part 6, cites and describes a wide variety of operational measures used in earlier social research. In several cases, the questionnaire formats used are presented. Though the quality of these illustrations is uneven, they provide excellent examples of possible variations.

Sheatsley, Paul F. 1983. "Questionnaire Construction and Item Writing." Pp. 195-230 in Handbook of Survey Research, edited by Peter H. Rossi, James D. Wright, and Andy B. Anderson. New York: Academic Press. An excellent examination of the topic by an expert in the field.

Smith, Eric R. A. N., and Peverill Squire. 1990. "The Effects of Prestige Names in Question Wording." Public Opinion Quarterly 54:97-116. Not only do prestigious names affect the overall responses given to survey questionnaires, they also affect such things as the correlation between education and the number of "don't know" answers.

Swafford, Michael. 1992. "Soviet Survey Research: The 1970's vs. the 1990's." AAPOR News 19 (3): 3-4. The author contrasts the general repression of survey research during his first visit in 1973-74 with the renewed use of the method in more recent times. He notes, for example, that the Soviet government commissioned a national survey to determine public opinion on the possible reunification of Germany.

Tourangeau, Roger, Kenneth A. Rasinski, Norman Bradburn, and Roy D'Andrade. 1989. "Carryover Effects in Attitude Surveys." Public Opinion Quarterly 53:495-524. The authors asked six target questions in a telephone survey of 1,100 respondents, varying the questions immediately preceding the target questions. They found substantial differences.

Williams, Robin M., Jr. 1989. "The American Soldier: An Assessment, Several Wars Later." Public Opinion Quarterly 53:155-74. One of the classic studies in the history of survey research is reviewed by one of its authors.
SOCIOLOGY WEB SITE

See the Wadsworth Sociology Resource Center, Virtual Society, for additional links, Internet exercises by chapter, quizzes by chapter, and Microcase-related materials: http://www.sociology.wadsworth.com

INFOTRAC COLLEGE EDITION

SEARCH WORD SUMMARY

Go to the Wadsworth Sociology Resource Center, Virtual Society, to find a list of search words for each chapter. Using the search words, go to InfoTrac College Edition, an online library of over 900 journals where you can do online research and find readings related to your studies. To aid in your search and to gain useful tips, see the Student Guide to InfoTrac College Edition on the Virtual Society Web site: http://www.sociology.wadsworth.com