Ethics, Deception, and ‘Those Milgram Experiments’

C. D. HERRERA

Journal of Applied Philosophy, Vol. 18, No. 3, 2001

Critics who allege that deception in psychology experiments is unjustified frequently cite Stanley Milgram’s ‘obedience experiments’ as evidence. These critics say that arguments for justification tend to downplay the risks involved and overstate the benefits from such research. Milgram, they add, committed both sins. Critics are right to point out that research oversight is often susceptible to self-serving abuse. But stating a priori how beneficial a given experiment will be is a tall order for psychologists, or anyone else. At the same time, critics themselves have difficulty in showing what is wrong with deception, and how subjects in these experiments suffer. Hence, it becomes unclear what the psychologists, including Milgram, are prone to downplay. There is also room to wonder how the Milgram studies can illuminate the debate over deception. Although Milgram probably exaggerated the scientific significance of his own work, critics who exaggerate its moral and historical significance do little to clarify the status of deception.

Rethinking the Benefits of ‘Justified’ Deception

What are we to make of that unique practice associated with some psychology experiments, the intentional deception of the research subjects? Psychologists argue that they are not using malicious or garden-variety deception, but deception of the ‘justified’ kind. They are quick to assure critics that these subjects will endure minimal risks, if any, while participating. Indeed, some give the impression that there is too much fuss over deception:

    many of the ethical sermons being preached to social scientists seem to assume that those participating in research projects would never encounter given discomforts if they did not participate in the research . . . deceptive information is presented at every turn, particularly in advertising and political speeches . . . If a salesman deliberately deceives a prospective customer, he makes no attempt, after the sale, to reveal this deception. If social scientists were not so honest, subjects would not be aware of the deception and, hence, not so upset about their treatment. [1]

Psychologists, at least the few who resort to deception, claim further that they must conceal some details of a proposed study from prospective subjects. A fully informed subject will be a ‘reactive’ one, the thinking goes. It is hard enough to observe natural behaviour in a campus laboratory; why add to the challenge by letting subjects know what the psychologists are up to? Whole areas of human behaviour would supposedly be off limits to research if psychologists had to be completely open and honest when they seek volunteers.

Critics remain unconvinced by this appeal to research needs, and by the claims about deception being innocuous. For some, the trouble starts with the way the psychologists give an accounting of their work. Although specific procedures vary across countries and institutions, as a general rule it falls to something like an Institutional Review Board to evaluate the psychologist’s promise of benefit over risk in these experiments.
Psychologists offer a risk-benefit projection to the Review Board, and if they can win the Board members over, they then try to convince prospective subjects with roughly the same projection. It is probably true that this arrangement forces psychologists to pull off a bit of a public-relations victory. If they cannot convince the Review Board to accept their picture of risks and benefits, the process comes to a halt, and researcher never meets subject. Once they pass review, the psychologists still have to get subjects to accept the risk-benefit package, or institutional approval becomes superfluous. It stands to reason that if at any stage in this process the benefits seem meagre, the project will end before it begins.

Some Boards have a hand in determining whether the psychologist receives funds to sustain the work. This is a concern in the US, for instance, where grant monies can be tied to ethical review, and where Federal Regulations can stipulate that an ethics committee review all human-subject research before any individual research project receives Federal funds [2]. In this scenario, judgments about morality and methods are tied to the public dole. The practical effect is to give psychologists (and other researchers) a financial incentive to meet a moral guideline. All of this might work to a point, but even where finances are not at issue, one recent critic is probably right to accuse some psychologists of exaggerating. The psychologists, he argues, are liable to exaggerate the expected benefits of their work, and to minimize any talk of risks, when they pitch their work to Review Boards [3]. Critics grow especially impatient when ‘assessments by social scientists of the importance of their own research amount to no more than assertion’ [4].

Clearing the Way for Researcher Modesty

On one hand, hype seems particularly out of place in human research. The system of oversight clearly fails if researchers can talk up the benefits of their research as a way to gain access to human volunteers. Nor does it help matters when the same psychologists who talk of justifying deception offer claims of benefit that are hard to take seriously. One psychologist would have critics believe that subjects in a deceptive experiment benefit by receiving such things as a ‘balanced and interesting summary of relevant knowledge at the time of the participation,’ ‘a handout that is carefully edited, clear, simple, and devoid of professional jargon,’ or ‘a cheerful and friendly offer to discuss any of the material’ [5]. This sounds either misleading or naive. Subjects could readily receive ‘benefits’ like these without deception taking place. It is not even clear why anyone would have to serve in an experiment, deceptive or not, to receive them. This matters because, in the research marketplace, subjects and Review Boards are wise to guard against this kind of false advertising from psychologists. The trade-off of risks and benefits only makes sense if subjects and society receive something that offsets the deception and that would be unavailable without it.

On the other hand, in trying to protect subjects we should not place the hurdles of clarity and utility so high that no psychologist can clear them. We can reasonably ask for restraint from critics when assessing benefit claims, even those that seem far-fetched initially.
It is also unfair to blame all psychologists who deceive for not being able to offer accurate, objective accounts of the benefits that will come from their work. They are advertising something yet to occur, after all. Psychologists tweak human subjects to see what happens. This restricts them to a forward-looking model of assessment, which ensures that they will not have unassailable projections of benefits. Judgments of relevance or utility vary according to a number of factors, few of which are known before the research. This follows from the way that values in society shift back and forth, not always in concert with the values expressed in research programmes. There has lately been increased interest in exploring behaviour related to drug use and ‘casual’ sex. It may be years before the value of these studies becomes apparent, or before we realize that we were wrong to reject a given study. The problem is not that psychologists are unable to read the future, or that deception hopelessly complicates things. Deception attracts scrutiny, but it can have little relation to the problem of predicting benefits. Rather, the utilitarian, risk-benefit model provides for only tentative agreement on the most basic of questions, including which descriptions of present or future conditions should prevail [6]. No matter who defines ‘benefit,’ and whether or not deception occurs, there will remain an appearance of arbitrary selection until the dust settles and all can assess the research with hindsight.

Experimental psychologists are also up against a pervasive, negative bias. Their methods are viewed by some as flawed from the outset. Critics warn, for example, that the null-hypothesis tests and other statistical tools that psychologists occasionally employ in their experimental designs can ‘confirm’ hypotheses that were spurious to begin with [7]. Where this kind of confirmation occurs, there will in principle be no benefit from the experiment. Critics also question the social context of experimentation itself, and the validity that laboratory findings have in the real world [8]. These complaints are not new; doubts about method are as old as experimental social psychology. Whatever their merits, these concerns cannot help but leave an impression on members of the Review Board.

Put simply, psychologists who deceive are in a bind. They cannot afford to sound as though they are inflating their projection of benefits. But neither can they allow their research to appear trivial, or the project won’t seem to cover the costs of deception. Both science and its moral assessment proceed on good faith. If we are going to ask psychologists for meaningful projections about their results, they deserve a method of assessment that does not penalize modesty. We can’t, in other words, offer apparent rewards for false advertising and then punish psychologists who take the bait. Not only that, given the usual uncertainty about where an experiment will lead, critics must avoid the moral hubris of claiming to know, sometimes on the basis of few details, which research is important enough to balance the use of deception. We should also bear in mind that where psychologists can anticipate a negative bias against deception, they may have still more reason to inflate their accounts of their own work.

The Question of Risk

This bias becomes a problem in the review process when psychologists turn to describing the risks in their studies as well. Psychologists confront a common view that
their deception is more harmful than they let on. Clarke warns that ‘there is no attempt to formally quantify the benefits of any particular piece of social science research and weigh these against the costs to the participants’ [9]. Codes of Ethics do prohibit experiments that involve significant risks, Clarke notes, but he finds ‘the term “significant” so vague [that] there is considerable scope for psychologists to ignore this restriction and keep their cost-benefit analyses straightforward’ [10].

Once more, the critic is onto something. We should all be wary of letting the fox guard the henhouse, and that may be what happens when we let psychologists come up with their own versions of just how significant the risks are. Subjects especially should be able to count on a fair description of the hazards, if any, that await them in the lab.

For their part, psychologists are certainly aware of the drawbacks of self-policing, and of the perils that lie in underestimating risk. They may have done a poor job of responding to critics. But we should not suggest that psychologists fail to recognize the moral issues that deception raises, or that it is typical for them to exploit loose language about risk. Some psychologists have taken an active role in trying to find out what risks deception involves. At a time when deception was of little concern to anyone inside or outside experimental psychology, E. Vinacke called for empirical, objective evidence of the effects that deception might have on subjects [11]. A proliferation of studies has since provided scant evidence that deception bothers subjects [12]. This places psychologists in another bind. If empirical evidence shows no significant, harmful effects from deception, what language can psychologists use that will not make them appear to be hiding something?

In the interest of good faith, just as we call upon psychologists to specify the benefits that deception provides, critics should meet some measure of clarity when talking about risks. This would give those who use deception less reason to engage in window dressing. It is not easy to say what, specifically, is wrong with deception, and attempts in that direction should take centre stage in this debate. Speculation has its place, but should lead the way to empirical tests. Critics have claimed that deception undermines respect for honesty and institutions [13]. They have also warned that deception would earn psychologists a negative reputation. Yet charges like these are so vague that there is no real way to validate them. Perhaps the continued use of deception exacerbates the trend towards cynicism and deception in all facets of life, from marketing to politics to science. Undergraduates, the subjects of choice in psychology experiments, have no doubt lost a great deal of respect for institutions and the principle of honesty over the last half-century. It is routine for these students to ‘volunteer’ for experiments, some of which involve deception, as part of their introductory psychology courses. But if a link exists between student attitudes and participation in these experiments, critics should produce evidence for it. The available evidence shows that subjects give only passing concern to deception [14]. With decades of research that appears to show little or no harm from deception, what are psychologists to be more specific about when projecting risks?
Some critics perhaps think that if the numbers are not on their side, first-person accounts might carry the day. One former graduate student in psychology describes the stress he felt while deceiving subjects [15]. After recounting the rigours of the lab, he asks why the teaching and practice of psychology should involve such things. A former experimental psychologist describes the ‘deception researcher’s personal dilemma’ this way:

    either one successfully dissociates the carefully crafted manipulativeness that characterizes the relationship with research subjects from relationships with people outside the laboratory, or one does not. In the first case, we should worry about the impact of the inauthentic relationship on the subject, and about the researcher’s learning to systematically shut off ethically central aspects of his or her personality, as for example, learning to lie with a completely straight face and a clear conscience. [16]

Stress and moral misgivings should never be taken lightly. But research oversight is meant to protect subjects, not researchers. In that light, how would we integrate testimonials like this into the debate over deception? What we really seem to need are in-depth, detailed testimonials from subjects, something that a review process might require of psychologists during the debriefing phase of the research. Lacking that, we cannot fairly work references to ‘carefully crafted manipulativeness’ in the ‘relationship with research subjects’ into the case against deception, even if these do come from researchers close to the action. This is especially so given that sentiments like those quoted seem based on a misunderstanding about how subjects truly feel about the deception. The rhetoric of a researcher being forced to ‘systematically shut off ethically central aspects’ of his personality loses its effect when we consider that subjects in general do not see deception as wrong. Unless purported appeals to ‘a clear conscience’ are grounded in reality, they tell us more about the critic than about the underlying issues.

Concern for Autonomy

Some commentators charge that deceptive experiments violate the subject’s autonomy, and that deception runs counter to the ‘doctrine of informed consent’ [17]. Critics associate this doctrine with ‘the biomedical sciences’ [18]. There, we are told, deception of any form is wrong. Commentators who make such arguments frequently appeal to the Kantian notion that no one should be used solely as a means to another’s end. They rightly point out the value in autonomy-based arguments, which reinforce the idea that we cannot reduce all potential risks from deception to empirical effects:

    Which elements are recorded in the cost-benefit ledger as benefits or costs for the persons involved and show the contract to be an overall profit or loss? . . . If we are to enumerate tentatively the costs and benefits of the experiment, intuition and nonscientific experience, introspection and speculation must supplement the scanty hard empirical data that the scientific community has agreed to consider more valuable. [19]

Most importantly, these concerns about autonomy and respect highlight an underlying tension in this debate. Codes of Ethics urge psychologists to respect the dignity of subjects, to preserve the subject’s autonomy, and so on
[20]. The same Codes typically allow for deception where the benefits are sufficient and the risks are minimal. How are researchers to reconcile what at least sounds like a contradictory position? [21] Can psychologists cater to autonomy while deceiving?

There is reason to think that if we carefully define deception, and specify the terms of its use, autonomy needn’t be threatened. Even in cases where autonomy is violated, there is still a need to show that the violation amounted to some form of harm to the subject. To take the extreme case, we can agree that when psychologists deceive subjects with no advance warning they are using those subjects as means to an end that the subjects cannot share. But it does not follow that this represents abuse, any more than less-than-complete disclosure at the time of consent has to be antithetical to autonomy. In more typical cases, psychology experiments might involve selective disclosure of information, ideally using terms of participation that the subjects have helped construct. In medical research, subjects in a clinical trial might participate knowing that researchers are purposely withholding some information from them. In the same way, psychologists could tell subjects in a psychology experiment that they are deliberately withholding or even misrepresenting details about the study. If a subject understands that she is consenting on the basis of incomplete information, her autonomy isn’t violated. This type of consent is not unlike the consent that a patient gives to the use of anaesthesia during surgery. Both seem at first to give up a measure of personal control, for the sake of an end that they adopt. Far from demanding that subjects relinquish autonomy, this kind of consent would allow them to express their individual conception of it [22]. Of course, Review Boards would still have to make the second-hand judgment about whether subjects are told enough to decide rationally. The key is that once subjects waive their right to complete information, deception per se is no longer involved.

A critic might maintain that this kind of voluntary transfer of autonomy is inherently wrong. But that position would refuse to let subjects set their own limits on risk-taking. Such a position, coming from Kantians, is an oddly paternalistic one [23]. It would protect subjects from risks that they are willing to take, sometimes depriving them of benefits.

Hesitation to allow researchers the option of selective deception and withholding can stem from a fear that psychology will thus deviate from the doctrine of informed consent. This fear is misplaced, because there is no universal ‘Doctrine’ to speak of. There are general principles (e.g., justice, non-maleficence) that apply to all human research. The Nuremberg Code takes what is practically a non-deception stance in its application of these general principles: it calls for full and informed consent before an experiment begins. But the major Codes devised since Nuremberg, including the Helsinki Code, take a more liberal stance on what researchers can withhold from subjects [24]. Not surprisingly, the rationale for this relaxed standard is the familiar trade-off between prospective risks and benefits: the more minimal the risks appear, the more justified the deception [25]. There is reason enough to doubt that psychologists should be held to a standard derived from a medical-research Code.
We could justify variations in the way that concern for autonomy and informed consent finds application in human studies partly on the likelihood that subjects in a clinical trial stand to gain, and to risk harm, in a very different way from subjects in a psychology experiment. A blanket application of informed consent over all areas of human research blurs important differences, and unjustly restrains psychologists. The most compelling reason to adopt separate standards is that if subjects in psychology aren’t harmed now, we won’t protect them by increasing the restriction on the use of deception. That is, we won’t make psychology experiments morally equivalent to medical research simply by invoking a non-deception clause. I don’t suggest that all values in the research context fit readily into categories of risk or benefit, or that we have to define harm in empirical terms. There is still much to learn about the effects of deception, and about research participation in general. Having conceded this, however, it remains to be seen why, in our concern over deception, we would want to move towards a universal standard of informed consent when there is so far no universal form of human study. The history of medical research includes some very serious abuses of deception, incomplete disclosure, and coerced participation. But applying a medical-ethics code to psychology research seems a poor response to these abuses.

What of those Milgram Experiments?

This gets at another common feature of anti-deception arguments. At some point in many critiques of deception, commentators refer to a well-known series of psychology experiments, the studies of obedience that Stanley Milgram conducted in the early 1960s [26]. When Clarke, for instance, claims that subjects could be harmed by learning unpleasant things about themselves in an experiment that involved deception, he reminds us of this slice of the history of psychology. ‘Many participants in the Milgram obedience studies,’ he claims,

    found out something unexpected about themselves; that they were more prone to obey authority figures than they might have supposed. While there may sometimes be long-term benefits to individuals to be derived from gaining this information about themselves, such self-discoveries can often be harmful rather than beneficial. [27]

Diana Baumrind was the first to criticize Milgram, and she too charged that his research exposed subjects to an unwelcome side of themselves [28]. She described this process as ‘inflicted insight.’ This line of criticism has from the beginning appeared to have a momentum of its own. The continuing references to Milgram’s self-described studies of ‘destructive obedience’ provide even more support for the negative bias against deception. It is in some quarters taken for granted that this harm to subjects’ self-esteem occurred during Milgram’s research, and that the potential for similar harm precludes justification of deception today.

Among critics, Milgram has achieved mythical, though eminently useful, status. As with most myths, there is a complex web of fact and supposition to sort through. For example, Milgram did lead his subjects to believe that they were physically harming each other, and he did employ what was meant to look like a device that generated electrical shocks. But the device was a phony, and no one was really shocked.
This means that the only possible harm had to come to the subjects, who wrongly believed that they were shocking people. Here the case against Milgram begins to fall apart. Milgram claimed that his subjects suffered nothing beyond the ordinary stress that they might have encountered outside the laboratory. To hear Milgram tell it,

    most subjects felt positively toward the experiment . . . , four-fifths of the subjects felt that more experiments of this sort should be carried out, and 74% indicated that they had learned something of personal importance. . . . At no point were subjects exposed to danger and at no point did they run the risk of injurious effects resulting from participation. [29]

Milgram also reported that a handful of subjects volunteered for future service. Far from having self-knowledge imposed or inflicted upon them, the subjects seem largely to have found the whole affair worthwhile.

Now, we can suppose that the subjects Milgram polled weren’t to be trusted. For the sake of argument, imagine that they really were harmed, and either didn’t realize it or couldn’t come to terms with admitting it. The problem is, we still can’t argue that the deception in Milgram’s design caused their suffering. The several subjects who reported feelings of shame thought that they had injured others. Ironically, however, this causal link is precisely what they were deceived about. Milgram’s assistants were simply acting as if they were being shocked. Since the assistants only pretended to be shocked, for all of the subjects’ alleged obedience to authority, they injured no one. Subjects may well have suffered from the realization that they would have hurt people, had the apparatus been genuine. But there is no proof that the deception had any role in whether they would have made that decision. If anything, one would expect that the deception Milgram provided, including the sounds of people in pain, would have prevented the subjects from following orders, not enticed them.

It is worth noting that Milgram accused his critics of being disturbed by his experimental results. There may be some truth in this, as it is easy to make deception the scapegoat for research findings that are unsettling [30]. Milgram’s work may have inflicted an insight upon society, and we react by blaming the messenger for the perceived bad news. Milgram might also have provided insight into the power of the experimental situation, and into our inability to weigh competing moral judgments about risk and benefit in that situation. In any event, when interpreting these experiments we cannot overlook the difference between alleging that an experiment is immoral and showing that the deception made it so.

I am no cheerleader for deception, and do not suggest that the Milgram studies are immune from criticism. If nothing else, we might fault Milgram for providing an insight that he had no right to explore, much less share. Milgram could also have shown more concern for the welfare of his subjects before, during, or after the research. But there is no clear connection between any harm and the deception itself. That is, among the wrongs that Milgram committed, the fact that he relied on deception appears almost secondary. The deception was designed into the experiment in such a way that it could not have led anyone into doing something that he or she, presumably, would not have done otherwise.
Also, if we are going to dwell on the prospect that subjects might learn too much, we may be forced into asserting that any infliction of momentary, negative thoughts constitutes abuse. By that reasoning, classroom pencil-and-paper tests can inflict an unwanted insight on test-takers. Non-deceptive experiments that delve into intelligence or intellectual talent can leave subjects with doubts about themselves. Under the right (or wrong) conditions, a mirror can inflict the type of harm that some critics associate with deception.

Then there is the historical question. Should ongoing discussions about research ethics draw so heavily upon experiments that were exceptional in their own time, and would be so today? Can studies conducted nearly 40 years ago further the goal of assessing the morality of the average experiment today? The Milgram studies could serve this purpose if we better understood what they meant in their own time, and what they should mean to us today. Clarke speaks of a ‘lack of evidence of broad acceptance for deceptive practices in social science research. If there were such acceptance,’ he hints, ‘then it seems unlikely that . . . the Milgram experiment[s] would have raised such controversy’ [31]. Yet it is easy to overstate the degree of consensus that exists on anything related to the obedience studies, including the deception. And what real ‘controversy’ did the Milgram studies cause?

No one disputes the familiarity of what I have heard people refer to as ‘those Milgram experiments.’ Unfortunately, a caricature of the Milgram studies has made its way into the world’s cultural imagination; Milgram may be the only social psychologist to earn the status of pop icon. Plays and novels are based on the Milgram myth [32]. In one episode of the popular American television cartoon The Simpsons, the family visits a psychotherapist [33]. He diagnoses the family members as suffering from pent-up hostility. He then encourages the Simpsons to release this hostility by administering shocks to one another while strapped into chairs arranged in a circle. (The Simpsons quickly shock each other with enough intensity to make the lights in the building blink.) College texts in ethics, political science, and of course psychology excerpt from Milgram’s narrative of his work, Obedience to Authority. Students are thus introduced to the Milgram myth as commonly as they are to the ‘banality of evil’ slogan [34]. This specious familiarity is regrettable, since it distracts attention from the need to understand these studies in moral and methodological terms [35]. Perusing the literature since Milgram first documented an obedience study in 1963, one cannot but notice how little has in fact been written on deception. In the field of bioethics too, the parenthetical citations of Milgram greatly outnumber the careful studies. All this goes to show that popular exposure is no substitute for in-depth analysis. And while there is nothing wrong with the continued interest in Milgram’s work, parading this episode from the history of human studies whenever we argue over deception will not provide lasting results. On the contrary, the reflexive use of references to Milgram is comparable to invoking Nazi eugenics programmes whenever the issue of human cloning arises. It startles people into silence, but rarely informs anyone.
Closing Thoughts

The easy way that commentators invoke the Milgram myth illustrates an interesting facet of this debate over deception. Critics and defenders of deception alike argue along very conventional, almost partisan, lines [36]. Advocates, we have seen, portray deception as a technical or design necessity. From this position, they can brand critics anti-progressives. Some defenders would shift the burden of proof, and insist that ‘those who would urge social psychologists to abandon deception must invent and promote other alternative techniques that permit the efficient and systematic study of social behavior’ [37]. This is an irresponsible evasion, and does not advance the discussion. Clarke and others are correct when they remind us how little incentive psychologists have to look for alternatives to deception [38].

At the same time, critics typically argue that deception is harmful, but fail to provide evidence. Some look to the moral high ground of dignity and autonomy, claiming that the true harm from deception is a harm in principle, not one to be ascertained empirically. Even if the subjects in experiments like Milgram’s suffer no lasting harm, critics claim, they can be harmed in ways that we cannot weigh on the utilitarian scale of risks and benefits. These critics are on the right track, as we have seen. Neither utilitarianism nor empirical studies can capture all of the potential drawbacks of deception. But the most abstract argument must end in the practical realm of research assessment. It is there that we must explain why subjects, by their own accounts unharmed, should be given less of a say than an appeal to principle or theory.

To break from narrow patterns of debate, we might look for new ways to argue about what most seem to agree are the primary moral issues. Take the empirical evidence concerning the effects of deception. Gathering more data on these effects is a poor idea unless we can clarify what role these data, along with the data we already have, are to play in the argument. By the same token, talk of enforcing a stricter informed-consent routine will only be productive if we can elaborate on the need for such a standard. Finally, while research ethics cannot afford to forget its history, references to history are only as useful as they are grounded in fact and logic.

C. D. Herrera, Philosophy Department, Montclair State University, Upper Montclair, NJ 07043, USA. Herrerach@mail.montclair.edu

NOTES

[1] P. D. Reynolds (1972) On the protection of human subjects and social sciences, International Social Science Journal, 24, 693–719, p. 699.
[2] See, e.g., the review procedure outlined in United States Federal Register (1991), v. 56, pp. 28013–18.
[3] S. Clarke (1999) Justifying deception in social science research, Journal of Applied Philosophy, 16, 151–166.
[4] Clarke, op. cit., p. 154.
[5] J. E. Sieber (1992) Planning Ethically Responsible Research: a Guide for Students and Institutional Review Boards (London, Sage Publications), p. 101.
[6] See B. N. D (1994) Appreciating a situation, Journal of Social Philosophy, 25, 139–167; and F. Schick (1990) Under which description? in A. Sen & B. Williams (eds.) Utilitarianism and Beyond (New York, Cambridge University Press), pp. 251–261.
[7] See, for example, the critiques by P. Meehl (1967) Theory-testing in psychology and physics: a methodological paradox, Philosophy of Science, 34, 103–115; and J.
C (1994) The earth is round (p < .05), American Psychologist, 49, 997–1003. [8] Such positions merit attention, though it is likely that deception is only part of the problem that these authors describe. J. E. D (1985) American Freedom and Social Science (New York, Columbia University Press); and P. K (1988) Psychology Exposed: Or the Emperor’s New Clothes (London, Routledge). [9] Clarke, op. cit., p. 154. [10] Clarke, op. cit., p. 152. [11] W. E. V (1954) Deceiving experimental subjects, American Psychologist, 9, 155. [12] J. H. K (1997) Illusions of Reality: A History of Deception in Social Psychology (Albany, New York, State University of New York Press). [13] See M. S. E (1977) Ethical problems in social psychological experimentation in the laboratory, Canadian Psychological Review, 18, 233–241; H. C. K (1967) Human use of human subjects: the problem of deception in social psychology experiments, Psychological Bulletin, 67, 1–11; and D. P. S (1969) The human subject in psychological research, Psychological Bulletin, 72, 214– 228. [14] Cf. C. B. F and D. F (1994) College students weigh the costs and benefits of deceptive research, American Psychologist, 49, 1–11; J. E. S, R. I, and B. R (1995) Ethics, Deception, and ‘Those Milgram Experiments’ 255 © Society for Applied Philosophy, 2001 Deception methods in psychology: have they changed in 25 years?, Ethics and Behavior 5, 67–85; and C. P. S and S. P. B (1982) Why are human subjects less concerned about ethically problematic research than human subjects committees?, Journal of Applied Social Psychology, 12, 209–221. See also B. H. S and J. G (1996) Informed consent: psychological and empirical issues, in B. Stanley, J. Sieber, and G. Melton (eds.) Research Ethics: A Psychological Approach (Lincoln, University of Nebraska Press), pp. 105–28. [15] A. O (1991) A confederate’s perspective on deception, Ethics and Behavior, 1, 14–31. [16] T. H. M (1980) Learning to deceive: the education of a social psychologist, Hastings Center Report, 10, 11–14, p. 14. See also S. M. J (1971) A letter from S to E, in J. Jung (ed.) The Experimenter’s Dilemma (New York, Harper & Row), pp. 86–88. [17] Clarke, op. cit., p. 157. [18] Clarke, op. cit., p. 156. [19] H. S (1982) Ethical Problems in Psychological Research (New York, Academic Press), p. 49. [20] See, for instance, the Code of the American Psychological Association (1992) Ethical principles of psychologists and code of conduct, American Psychologist, 47, 1597–1611. [21] For an interesting analysis of this and other apparent inconsistencies, see W. T. B (1975) The American Psychological Association’s Code of Ethics for research involving human participants: an appraisal, Southern Journal of Philosophy, 13, 407–419; and E. M (1986) Does the moral philosophy of the Belmont Report rest on a mistake?, IRB: A Review of Human Subject Research, 8, 5–6. [22] In their recent analysis of the Milgram studies, Pidgen and Gillet make this point, and show how the question of autonomous participation is in some sense distinct from the moral status of the experiment. C. R. P and G. R. G (1999) Milgram, method, and morality, Journal of Applied Philosophy, 13, 234–250. [23] Here I have in mind critiques like those by S. B (1995) Shading the truth in seeking informed consent for research purposes, Kennedy Institute of Ethics Journal, 5, 1–17; and D. 
W (1996) Deception in medical and behavioral research: is it ever acceptable?, Milibank Quarterly, 74, 87–114. [24] R. M. V From Nuremberg through the 1990s: the priority of autonomy, in H. Y. Vanderpool (ed.) The Ethics of Research Ivolving Human Subjects (Frederick, MD, University Publishing Group), pp. 45– 58. [25] This phrasing is preferable to asserting that whenever the risks are minimal the deception is justified. It is better to think of justification as a variable, not an all-or-nothing, quality. Much research poses few risks, but for a number of reasons deception in this research might be unjustified. [26] S. M (1963) Behavioral study of obedience, Journal of Abnormal Psychology, 67, 371–378; S. M (1964) Group pressure and action against a person, Journal of Abnormal and Social Psychology, 69, 2, 137–143. [27] Clarke, op. cit., p. 154. [28] D. B (1964) Some thoughts on ethics of research: after reading Milgram’s ‘Behavioral Study of Obedience’, American Psychologist, 19, 420–423; D. (1985) Research using intentional deception: ethical issues revisited, American Psychologist, 40, 165–174. [29] S. M (1964) Issues in the study of obedience: A reply to Baumrind, American Psychologist, 19, 848–852, p. 849. [30] S. M (1964) Issues in the study of obedience: A reply to Baumrind, American Psychologist, 19, 848–852; S. M (1977) Subject reaction: the neglected factor in the ethics of experimentation, Hastings Center Report, 7, 19–23. [31] Clarke, op. cit., p. 160. [32] A. G. M, (1986) The Obedience Studies: A Case Study of Controversy in Social Science (New York, Praeger). [33] For those interested, this is episode number 7G04, ‘There’s No Disgrace like Home,’ which premiered on 28 January 1990. [34] S. M (1974) Obedience to Authority (New York, Harper & Row). Texts frequently emphasize photos taken during Milgram’s experiments, which show the subjects enduring stress, often with the experimenter urging them on. [35] A recent anthology devoted to the experiments is T. B (ed.) (1999) Obedience to Authority: Current Perspectives on the Milgram Paradigm (Mahwah, NJ, Laurence Erlbaum). To their credit, the authors of the essays in this work do not attempt to draw comparisons between Milgram’s experiments and current research. 256 C. D. Herrera © Society for Applied Philosophy, 2001 [36] There are exceptions, of course, but these too tend to stay within the conventional lines of the debate. Cf. R. R (1994) Science and ethics in conducting, analyzing, and reporting psychological research, Psychological Science, 5, 127–134; and C. P. S (1981) How (un)acceptable is research involving deception?, IRB, 3, 1–4. [37] A. E. G and I. F (1982) Twenty years of deception in social psychology, Personality and Social Psychology Bulletin, 8, 402–408, p. 407. This reasoning is analogous to my saying that critics of abortion ought to be willing to adopt children of the unwanted pregnancies. [38] For a somewhat dated view of this problem, see D. M (1974) If you won’t deceive, what can you do?, in N. Armistead (ed.) Reconstructing Social Psychology (Baltimore, Penguin Education), pp. 72–85.