Journal of Applied Philosophy, Vol. 16, No. 2, 1999

Justifying Deception in Social Science Research

STEVE CLARKE

ABSTRACT The use of deceptive techniques is common in social science research. It is argued that the use of such techniques is incompatible with the standard of informed consent, which is widely employed in the ethical evaluation of research involving human subjects. A number of proposals to justify the use of deception in social science research, in the face of its apparent incompatibility with the standard of informed consent, are examined and found to be inadequate. An alternative method of justification is outlined, which enables some deceived participants in social science research to rationally and autonomously choose to participate in that research. The alternative method of justification appeals to the idea of indirect consent, which is introduced. It is argued that research subjects who receive reliable testimony regarding research procedures can sometimes be placed in a position to rationally and autonomously consent indirectly to participation in experiments and studies, even if these involve significant deceptions.

1. Introduction

Over the past 50 years one standard has come to dominate discussion and legislation of ethical issues involved in research on human subjects: the standard of informed consent. Most, if not all, significant codes of research practice in the Western world demand that researchers obtain the consent of their human research subjects before research involving those subjects can proceed, and further demand that this consent be based on the subjects' having access to relevant information about the research and properly understanding that information.

In social science research the demand for informed consent raises an apparent dilemma. Much social science research involves the deliberate deception of human subjects. A research subject who is deceived in the context of an experiment or a study does not fully understand the nature of the research that she is participating in, and cannot therefore be said to be properly informed about that research. Prima facie, it appears that informed consent cannot be given by a subject who has been deceived about an important aspect of an experiment or a study. So, it seems that we must either abandon the demand for strict adherence to informed consent standards, or abandon the use of deceptive practices in social science research.

Most social scientists appear to accept that strict adherence to informed consent standards and the use of deception in experiments or in other forms of research on human subjects are incompatible. However, they have not allowed this perceived incompatibility to stand in the way of their research. Social science researchers will typically allow informed consent standards to be overridden if, in their judgment, the importance of the research justifies this, and if the potential harms to research subjects are not sufficiently severe. Perhaps the most influential statement of this position is encapsulated in the 'Ethical Principles of Psychologists and Code of Conduct' of the American Psychological Association [1].
This standard is not only endorsed by the American Psychological Association (hereafter 'APA'); it is also effectively replicated in the professional conduct codes of psychologists in many other countries [2]. Consulting this code we learn that American psychologists have the following responsibilities in regard to informed consent:

Standard 6.11 Informed Consent
(b) Using language that is reasonably understandable to participants, psychologists inform participants of the nature of the research; they inform participants that they are free to participate or to decline to participate or to withdraw from the research; they explain the foreseeable consequences of declining or withdrawing; they inform participants of significant factors that may be expected to influence their willingness to participate (such as risks, discomfort, adverse effects or limitations on confidentiality, except as provided in Standard 6.15, Deception in Research); and they explain other aspects about which the prospective participants inquire.

Under heading 6.15, 'Deception in Research', we are told the following:

Standard 6.15 Deception in Research
(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study's prospective scientific, educational, or applied value and that equally effective alternative procedures that do not use deception are not feasible.
(b) Psychologists never deceive research participants about significant aspects that would affect their willingness to participate, such as physical risks, discomfort or unpleasant emotional experiences.

So, while American psychologists recognise that it is desirable to obtain the informed consent of their research subjects, they are willing to allow informed consent standards to be overridden if they cannot see another effective way to conduct what they take to be valuable research, and if the costs to their subjects are not deemed to be 'significant'. In other words, the APA encourages its members to conduct a form of cost-benefit analysis to justify deception, weighing the benefits to science against the costs to the individual [3]. Of course, this will not be a straightforward cost-benefit analysis for psychologists who have a clear understanding of what constitute 'significant aspects' that would affect the willingness of potential research subjects to participate, and who conscientiously adhere to the APA code. For them, some research can never be conducted, no matter how prospectively valuable its results. However, because the term 'significant' is so vague, there is considerable scope for psychologists to ignore this restriction and keep their cost-benefit analyses straightforward.

2. The Pervasiveness of Deception

The psychologist's laboratory is where the use of deceptive techniques has raised the most controversy. The most notorious instances are the Milgram obedience studies [4]. In the Milgram obedience studies a subject is deceived into believing that she is administering a learning test to another experimental subject, and that this involves the use of electric shocks as a punishment for wrong answers. In actual fact the experiment is an elaborate hoax: what is being examined is the subject's propensity to obey an authority figure and her willingness to inflict severe pain on another person.
Another well known and controversial series of laboratory experiments involving deception is that of Bramel, in which male subjects were falsely informed that they had exhibited indications of sexual arousal upon seeing photographs of 'handsome men in states of undress' [5]. Experiments involving similar deceptions of female as well as male subjects have also been performed [6].

A range of non-laboratory social science research procedures also involve deception. Here are two types of example:

(a) Emergency bystander studies. In these studies a researcher engages an assistant to fake an emergency, such as a heart attack, in order to observe the reactions of members of the public to the emergency situation. Some such studies have been conducted in controlled laboratory situations. Most often, however, they have been conducted in public settings.

(b) Group infiltrations [7]. In this form of research a social scientist becomes a member of a group in order to surreptitiously study its activities. Typically, such groups are marginal political or religious organisations within a larger society, which have an interest in keeping their affairs secret from the general public.

In all of these examples the motivation for deception is methodological [8]. If it were public knowledge that an emergency was faked then it would be unlikely that members of the public would respond in the same way as they would in a real emergency. Their responses would almost inevitably lack the same sense of urgency. If a group being infiltrated knew that its new member was actually a social scientist attempting to study them, then it is hard to believe that its members would behave as they otherwise would have done. Similar methodological justifications can be made on behalf of the use of deception in Milgram's obedience studies and in Bramel's studies.

It is sometimes said that deceptive techniques are on the wane in social science research. However, this view is not borne out by empirical evidence. Although social scientists may be more aware of ethical considerations when planning research than they were in the 1960s, when Milgram and Bramel first published their major studies, the rate of deception in social science research shows no clear evidence of decline. A 1982 survey of over 1000 research articles published in four leading social psychology journals over a twenty-year period (1959-79) found that 58% of the studies discussed involved some form of deception. The authors interpret their data as indicating a dramatic increase in the prevalence of deception in social psychology in the 1960s, and no subsequent decrease from these high levels in the 1970s [9]. More recent studies have also failed to indicate a clear reduction [10]. Deceptive techniques, ranging from the violation of the promise of anonymity and the use of unacknowledged concealed observers, to the misrepresentation of research purposes and false statements about the researcher's identity, are endemic in a wide variety of the social sciences, and multiple forms of deception within a single experiment or study are not uncommon.

3. Cost-Benefit Analysis

Social science researchers who employ deceptive techniques typically endorse consequentialist arguments to justify their use of deception. However, I have been unable to locate any attempt to formally quantify the benefits of a particular piece of social science research and weigh these against the costs to its participants.
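Were such a quantification attempted, the rule implicit in the APA code would presumably take something like the following form. The notation is mine; this is a reconstruction, not anything the code itself states. Deception in a study S is justified if and only if:

(i) V(S) > Σi Hi(S), where V(S) is the prospective scientific, educational or applied value of S, and Hi(S) is the expected cost to subject i;
(ii) there is no feasible non-deceptive study S* whose value V(S*) approximates V(S); and
(iii) for every subject i, Hi(S) falls below the threshold h at which harms become 'significant'.

Setting the rule out schematically makes the difficulty plain: neither the APA code nor actual research practice supplies a measure for V or Hi, or a value for the threshold h.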
Usually, a combination of some or all of the following three argumentative strategies is employed by social scientists in favour of the conclusion that precise, formal cost-benefit analyses for their favoured research projects are unnecessary, because benefits vastly outweigh costs:

[a] talking up the value of particular research projects and of social science research in general;
[b] downplaying the harms associated with deception;
[c] instituting research protocols intended to minimise potential harms.

We will now consider the three argumentative strategies in turn and assess their effectiveness.

[a] Talking up the value of social science research. Many arguments in favour of the benefits of social science research start from the assumption that all knowledge is valuable. Often this line of reasoning is enhanced with the further presumption that knowledge gained in the social sciences is of particular value, because it is knowledge about ourselves and its dissemination will lead to greater self-understanding, which is held to be inherently valuable. The easy assumption that self-understanding is in itself valuable is nowadays challenged by post-modernists, who see this presumption as an unjustified piece of Enlightenment thinking [11]. On a more practical level it is easy to see that some self-understanding can have harmful consequences. Many participants in the Milgram obedience studies found out something unexpected about themselves: that they were more prone to obey authority figures than they might have supposed. While there may sometimes be long-term benefits to individuals to be derived from gaining this information about themselves, such self-discoveries can often be harmful rather than beneficial. Subjects who make unexpected and unwelcome discoveries about themselves can suffer lowered self-esteem and other negative feelings.

In addition to the benefits said to flow to individuals from gains in self-understanding, particular researchers will argue for potential beneficial consequences for society as a whole as a result of their research. Milgram made such a case for the obedience studies, suggesting that a society which was more aware of the disposition of its members to obey authority figures could be better motivated to develop ways of ensuring that leaders did not abuse their authority [12]. In many cases, assessments by social scientists of the importance of their own research amount to no more than assertion. Given the understandable propensity of people to estimate the importance of their own chosen activities more highly than the importance of the activities of others, it is easy to see why those outside the social sciences have often been inclined to dismiss such assertions.

[b] Downplaying the harms associated with deception. Some defenders of the status quo in academic research reject, out of hand, the possibility that harms serious enough to be worth considering might result from social science research conducted at reputable institutions. Alan Elms is representative of a common position amongst social science researchers when he asserts that 'the principal danger to the typical subject is boredom' [13].
He also claims, on the basis of interviews with participants in the Milgram obedience studies, that 'the remarkable thing about the Milgram subjects was not that they suffered great persisting harm, but that they suffered so little, given the intensity of their emotional reactions during the experiment itself' [14]. Elms concedes that occasionally individuals may suffer long-term distress, as a result of the effects of unexpected revelations about themselves, but he counters this concession with the following comparison: 'a psychologically fragile individual's reactions to a carefully managed research participation are unlikely to be any worse than to an emotionally involving movie, a fire-and-brimstone sermon, or a disappointing job interview' [15].

Elms may be right about all of this. However, he has not said enough to entitle him to dismiss consideration of harms resulting from deceptive practices in academic research. Even if it is true that the majority of participants in the Milgram experiment suffered no long-term harms, they surely did suffer short-term psychological harms, as anyone observing footage of the experiment should be able to confirm. Short-term harms are still harms and need to be considered in cost-benefit analyses of experiments. Elms may also be right to hold that the few who do suffer long-term harms from self-revelatory experiments are quite likely to be 'highly strung' and might suffer equally in circumstances where the average person typically would not. But this observation is not relevant to cost-benefit analysis. Experimenters have a responsibility to consider the suffering of experimental subjects regardless of their propensity to suffer in other situations. For consequentialist-style analyses it is overall harm that is to be minimised, not harm discounted according to subjects' propensity to be harmed in other circumstances.

Dismissive assertions in a similar vein to the quotes from Elms are sometimes also made by social scientists when asked to consider potential harms in social science field studies. However, in field studies deceptions can lead to various harms over which researchers have very little control. Consider group infiltration. Presumably the intended research output of a study involving infiltration of a marginal group is publication in the public domain. If so, then members of the group can be harmed in at least two ways. First, they will typically suffer from feelings of betrayal when they realise that a person whom they took to be a group member, and may well have trusted, was in fact a researcher studying them. Second, whatever benefits they derived from being secretive — and presumably there were some such benefits, or why else would they have gone to the trouble of being secretive? — will almost certainly be lost as their activities become public. A marginal religious or political group, living within an intolerant society, may have very good reasons for being secretive, and its members could suffer greatly from having an account of their activities made public.

[c] Instituting research protocols intended to minimise potential harms. A number of strategies are pursued by social scientists to minimise potential harms in research which does involve deception.
The most important of these is 'debriefing', which is now compulsory for those who adhere to the APA standard on deception in research [16], as the following clause makes clear:

Standard 6.15 Deception in Research
(c) Any other deception that is an integral feature of the design and conduct of an experiment must be explained to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the research.

Available evidence suggests that although debriefing can be effective in easing the discomfort caused during a study or experiment involving deception, it is insufficient to fully reverse the negative feelings experienced by those research subjects who are prone to having negative feelings about themselves as a result of unexpected revelations about themselves in experiments [17]. Also, the benefits of debriefing can be lost in the not uncommon situations where experimental subjects do not distinguish clearly between the experiment itself and the debriefing [18], and in situations where the revelation of the use of deception in an experiment results in a research subject ceasing to trust the researcher.

Another strategy which has been suggested to reduce the harms associated with deception is to ensure that research subjects understand that they have the option of withdrawing from an experiment or study at any stage. Elms refers to situations where this is realised as situations of 'ongoing informed consent' [19]. Elms' advocacy of the option to withdraw, and his use of it in defence of deception, appears to be based on a conflation of that option with what is sometimes referred to as 'informed participation'. Sometimes it is unrealistic to expect that the information required to enable informed consent can be processed by a research subject at one sitting. In such situations, consent needs to be obtained before an experiment or study is commenced, and then updated as the research continues and the subject comes to understand the experiment or study, as well as her own reactions to it, in greater detail. Hence the phrase 'informed participation'. Reassuring research subjects that they may discontinue an experiment at any stage may well be comforting to them; however, consent gained during an experiment or study while a deception is taking place is not based on a proper disclosure of relevant information about the experiment or study, and is therefore neither informed participation nor informed consent.

Undoubtedly the two strategies discussed for minimising harms can succeed in reducing some of the harms caused to subjects in controlled laboratory research. However, as we have seen, there is a lack of evidence to suggest that they fully alleviate all harms associated with laboratory research. Furthermore, they are largely impractical in field research.

Consequentialist arguments in favour of the use of deceptive techniques in social science research are typically arguments to the effect that benefits, which are held to be very large, swamp costs, which are held to be very minor and which can be effectively minimised anyway. I have examined some of the more common arguments put forward for these claims and have found them to be less than convincing. In any case, there is something very dissatisfying and odd about the way in which proponents of deceptive techniques in social science research attempt to balance costs and benefits.
The benefits weighed accrue mostly to social science researchers and to science in general, whereas the costs accrue almost exclusively to the subjects deceived. This is a striking situation. In the biomedical sciences, the other main area of research on human subjects, it is nowadays considered unacceptable to violate standards of informed consent simply on the ground that great benefits to science are at stake which substantially outweigh harms to research subjects.

Intuitively, it should be apparent that the form of cost-benefit analysis being discussed is not a form that the concept of informed consent was developed to encourage. We could very well imagine that a potential participant in an experiment might appreciate that the benefits to science of that experiment could substantially outweigh the harms to her, and might nevertheless choose not to consent to participation in that experiment. On the cost-benefit analysis model of justification of deception in social science research, which has been discussed, social scientists can be entitled to deceive such a person even though that person would not have consented to the procedure had she been in a position to make an informed decision regarding participation. We will now examine the doctrine of informed consent more closely and see where the standard social scientist's treatment of the topic has gone astray.

4. The Doctrine of Informed Consent

Modern moral and legal doctrines of informed consent have an ancestry in the reaction against the use of non-consenting subjects in experiments in German concentration camps during the Second World War, a reaction which is encapsulated in the Nuremberg Code and the Helsinki Declaration [20]. The Nuremberg Code, developed specifically in an attempt to prevent the repetition of such occurrences, states that:

The voluntary consent of the human subject is absolutely essential. This means that the person involved should have the legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the subject matter involved as to enable him to make an understanding and enlightened decision. (Nuremberg Code, Rule 1)

Through a process of testing in courtrooms and articulation in ethics committees at hospitals and universities, and through academic research, the modern doctrine of informed consent has gradually emerged. The most concise articulation of this doctrine is Faden and Beauchamp's definition:

Action X is an informed consent by person P to intervention I if and only if:
1. P receives a thorough disclosure regarding I
2. P comprehends the disclosure
3. P acts voluntarily in performing X
4. P is competent to perform X
5. P consents to I [21].
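In schematic form, the definition is a single biconditional (the predicate labels are mine, and merely compress Faden and Beauchamp's five conditions):

IC(P, I, X) iff Disclosure(P, I) & Comprehension(P, I) & Voluntariness(P, X) & Competence(P, X) & Consent(P, I).

The conjunct to note is the first: the argument of Section Seven will be that a decision which fails the thorough-disclosure condition can nevertheless be rational and autonomous.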
Faden and Beauchamp understand their account of informed consent to be grounded squarely in the view that we should respect the autonomy of others. As they put it: 'informed consent is rooted in concerns about protecting and enabling autonomous or self-determining choice by patients and subjects' [22]. Contemporary users of such language stand in direct lineage from the authors of the Nuremberg Code, whose stress on the importance of free power of choice indicates an overriding concern to ensure that human subjects are able to act autonomously.

The emphasis on respect for autonomy as a core element of contemporary moral theory is strongly associated with the Kantian tradition. For Kantians, persons have unconditional worth and ought to be treated as autonomous ends and never merely as means. Only a participant who is in a position to rationally give her consent to research participation, a subject who is able to act as a self-legislating participant in an experiment or other form of research, can properly be said to be treated as an autonomous end in that research. Austere Kantians, and others for whom respect for autonomy is an overriding virtue, will be unmoved by any consequentialist arguments in favour of overriding informed consent standards, no matter how compelling. More moderate Kantians may be willing to accept the violation of the autonomy of particular individuals in circumstances where the benefits to the community of doing so overwhelmingly outweigh the harms. However, such concessions are unlikely to be relevant here because, as we have already seen, it would be very difficult to make such a case for any particular piece of social science research involving deception.

Consequentialism is usually distinguished sharply from the Kantian tradition; indeed this distinction is sometimes held to be the cardinal divide between moral theories. Given that the arguments in favour of deception in social science research are consequentialist ones, it is important to realise that there are also countervailing reasons against overriding individual autonomy which are relevant to consequentialist calculations. The major consequentialist argument in favour of instituting inviolable standards of informed consent starts from the very plausible assumption that individuals are better placed than other individuals and social institutions to decide how to maximise the satisfaction of their own preferences. They are also better motivated to do so. If that's right, then a society which acts so as to protect individual autonomy will be one in which the satisfaction of individual preferences is maximised. In addition to the protection of individual autonomy, there are various other beneficial consequences which follow from maintaining high standards of informed consent. These include the avoidance of fraud, the encouragement of self-scrutiny by professionals and the promotion of rational decisions [23].

I have already argued that cost-benefit analyses of the form encouraged by the APA do not succeed in clearly favouring the use of any particular deception in social science research. But even if you were a thoroughgoing consequentialist and you disagreed with my assessment of the situation, the consequentialist considerations which I have now introduced, in favour of respecting autonomy, ought to make you pause to reconsider. And, of course, Kantians, whose moral inclinations are in favour of respecting autonomy anyway, will mostly be unmoved by consequentialist arguments in either direction.
Perhaps one reason why the doctrine of informed consent has continued to dominate contemporary human research ethics is that it appeals to both consequentialist and Kantian considerations through its expression of the value of individual autonomy. In general, it is a good thing that public policy rests on a broad base of intellectual underpinnings and commands widespread community support. The doctrine of informed consent expresses the sentiments of Western societies in which respect for the value of autonomy is paramount. As a matter of public policy, then, there is strong reason for us to reject consequentialist treatments of deception in social science research which do not adequately address considerations of autonomy.

5. Autonomy and Deception

Some defenders of deceptive practices in social science research appear to recognise a general requirement to respect the autonomy of human subjects, but hold that social science researchers are entitled to override that requirement, in virtue of certain features of social science research. This line of argument is often developed by pointing to differences between the biomedical sciences, the context in which the doctrine of informed consent has largely been developed, and the social sciences. Two specific differences between the social and the biomedical sciences, which are sometimes highlighted in attempts to argue that adherence to strict informed consent standards should not be required of social science researchers, are the following [24].

First, it is sometimes claimed that there is a difference in the relative power relations between doctors and their patients, and social scientists and their research subjects, a difference that renders disclosure standards which are appropriate in biomedicine inappropriate in social science research [25]. If you held the view that a situation of equalised power relations was sufficient to enable rational autonomous agency, then this consideration might tell against the enforcement of strict informed consent requirements in the social sciences. However, such a view does not stand up to much examination. Levelling out power relations may help promote individual autonomy, but a situation of level power relations will not always enable rational autonomous agency. A research subject who does not have sufficient information to make an informed decision about his or her participation in an experiment or other form of research cannot be said to be an informed self-legislator, regardless of how disempowered the researcher is, relative to the subject.

Second, it is sometimes pointed out that there are group issues to be considered in the social sciences in addition to issues of direct concern to individuals. Recall the case of group infiltration discussed earlier. I considered harms to individual members of the group, but additionally there are potential harms to the group as a whole which can result from public exposure of a group's activities. D'Agostino is one writer who considers the biomedical model of ethical evaluation to be inappropriate, because of its emphasis on informed consent and because it fails to take group issues into account [26].
While he is right that informed consent is a consideration appropriate to individuals rather than groups, the admission that ethical thinking about the social sciences ought to take account of group interests does nothing to render informed consent standards irrelevant to the treatment of individuals in social science research. If we consider both individual and group interests we will still be considering individual interests, and this will involve considering the autonomy of individuals.

A communitarian position which legitimated the overriding of the autonomy of individuals, when it conflicted with the interests of the group, might be able to provide a basis for downplaying the importance of informed consent in social science research; perhaps by promoting the argument that individual interests in enforcing informed consent standards are outweighed by the needs of society to conduct research involving deception. However, modern Western societies, as communitarians often complain, are societies in which people understand themselves in an individualistic and not a communitarian mode. In other societies it might be agreed that there is an entitlement of the group to override informed consent, for the sake of social science research, but we Westerners are not currently members of societies where such a view would command widespread assent.

A particular way of developing a communitarian argument for the overriding of informed consent standards in social science research is to argue that social scientists have some sort of implicit power to override individual autonomy, a 'licence to deceive' as part of their job description [27]. Just as the Australian police are licensed, in certain circumstances, to override my autonomy and force me to submit to a blood alcohol test, in the interests of the community, the social scientist is held to have an implicit authority to conduct certain types of research in the interest of the community, even if research subjects do not give their informed consent to that research. Now I agree that it might be possible for there to be a communitarian society where social scientists had such an authority, but the suggestion that modern Western societies are such societies is not very credible. Although it is not implausible to believe that some professions come to acquire implicit powers, in doing so they presumably gain the broad acceptance of the general community that they should have such powers. There is a lack of evidence of broad acceptance for deceptive practices in social science research. If there were such acceptance then it seems unlikely that experiments such as the Milgram experiment would have raised such ethical controversy. Also, it seems that when we grant the power to override individual autonomy to a particular profession, we typically insist that its practitioners be trained to use that power responsibly. Yet social scientists are not specifically trained to use their alleged implicit power to override individual autonomy responsibly. The claim of implicit licensing is very convenient for social scientists to make, but does not stand up to much scrutiny.

6. Substitutes for Informed Consent

Other defenders of deception in social science research concede that social scientists should aim to meet generally accepted informed consent standards, and attempt to find adequate substitutes for informed consent which are compatible with deceptive practices.
One such proposed substitute is 'after the fact consent' [28]. Research subjects are not in a position to be informed about the details of an experiment or other form of research which involves deception before the research commences. However, they can be given a measure of autonomy if their subsequent consent is required to allow use of the results of the research after it has been completed and debriefing has taken place, or so the proposal has it. While it may give research subjects who are unhappy about being deceived some satisfaction to withhold data about themselves derived during research, this power is not an adequate substitute for informed consent. A research subject who has participated in an experiment or a study that she would not have participated in, had she been informed about what it involved, has had her autonomy violated, and has not given an equivalent to informed consent to that experiment or study, regardless of what happens after it has been completed [29].

A second proposed substitute is 'anticipated consent' [30]. Although it appears that I cannot give informed consent to being deceived, another person or persons who had similar preferences to mine could be asked to make a decision on my behalf about my participation in the proposed experiment or study involving deception. If we thought that my being respected as an autonomous agent just amounted to having my preferences satisfied, and if we believed that another person could be completely informed about my preferences, then we could perhaps accept that such a decision amounted to informed consent. But not many people would accept all of this. Autonomy is usually understood to involve my being the author of my own actions. If that's right then anticipated consent can never be an adequate substitute for informed consent. In any case, any practical application of anticipated consent will inevitably involve representatives who are less than fully informed about the preferences of the particular individuals involved in an experiment or study. One way in which the application of anticipated consent has been proposed is to have randomly chosen members of the general class of people who are to be studied act as 'peer consultants', advising as to whether or not a deception within an experiment or study would be acceptable to them and their peers [31]. There will almost inevitably be people who are atypical members of the group being studied, who do not have preferences which peer consultants or others might be able to anticipate. These atypical group members may suffer as a result of experiments or studies involving deception, in ways that their peers will not (recall the case of the highly strung people discussed in Section Three). Anticipated consent appears to be an impractical way to address the concerns of such people.

7. Deceptions Justified

I have examined various proposals which are aimed at reconciling informed consent standards with deceptive practices in the social sciences and found all of them to be inadequate. The first sort of proposal, examined in Section Three, was based on an assumption that informed consent standards could legitimately be overridden on consequentialist grounds. Attempts to establish those consequentialist grounds were unconvincing, and in any case they failed to properly address the issue of autonomy which is at the heart of the doctrine of informed consent.
A second set of proposals, examined in Section Five, attempted to show why exceptions to standards of informed consent should be made for the social sciences. It was argued that these were unacceptable because they failed to pick out a relevant feature of the social sciences that would entitle social scientists to the required exceptions. A third sort of proposal, examined in Section Six, was to look for a substitute for informed consent in the social sciences. As we saw, the proposed substitutes were inadequate.

A large part of the reason why attempts to justify deceptive practices in the social sciences have been unsuccessful is that their proponents have failed to look at the broader context in which the standard of informed consent is promoted. Informed consent is primarily promoted in order to ensure that people who might not otherwise be in a position to make rational autonomous decisions can do so. If we can place the deceived subject in a position to make a rational autonomous decision about participation in an experiment or study, then we are in a position to dispense with formal informed consent requirements, since the major motive for insisting on them will have been satisfied.

To see how rational autonomous decisions can be made in the absence of formal informed consent requirements, consider the case of an ordinary person who decides to try a new food product available at her local convenience store. The ordinary person either does not know what the constituents of the product are, or, if the product's chemical components are listed on its packet, she probably does not understand what effects these can have on her. Despite her ignorance about the product she can rationally decide that the product is safe to eat on the basis of the compliance of the product's manufacturers with the Australian Department of Health's regulations (or in America the Food and Drug Administration's [hereafter 'FDA'] regulations). The person who decides, in this way, that the product is safe can make a rational decision to eat the product, based on consideration of the reliability of the testimony of others. She decides that the Department of Health or the FDA is sufficiently reputable to trust about food safety issues, on the basis of (perhaps very limited) information about its current membership and its past performance. She is not informed about the product itself, but about the reputation of others who are informed about the product. She has not given the exact equivalent of informed consent regarding the product, at least not on Faden and Beauchamp's definition of informed consent, because she has not received a thorough disclosure of information about the product relevant to her decision. Nevertheless she can rationally and autonomously decide to use that product.

I take it that it is uncontroversial that a decision based on testimony, rather than direct evidence, can be a rational decision [32]. As Hume notes, 'there is no species of reasoning more common, more useful, and even necessary to human life, than that which is derived from the testimony of men . . .' [33]. But can a decision based on testimony really be an autonomous decision? There is a sense in which a decision based on testimony fails to be autonomous, which is that it fails to be a decision made independently of others. I depend on others when I base a decision on testimony, and I am therefore not epistemically autonomous. However, this sense of autonomy is not the relevant one.
As we saw in Section Four, the relevant sense of autonomy is the sense of being placed in a position to make rational decisions for myself, to be a self-legislator. I can be a self-legislator when I base my decision on the testimony of others, provided that I am freely able to make rational decisions about the epistemic weight to give their testimony. Testimony can be used to enable rational autonomous decisions in certain contexts.

Why do we insist on formal informed consent standards in medical contexts, and not accept that the testimony of doctors is sufficient grounds for the decision of patients to consent to operations? In large part this is because we are not convinced that doctors have their patients' interests fully at heart when they advise patients. The FDA and the Australian Department of Health exist primarily to protect consumers, and have acquired reputations for successfully and reliably doing so [34]. Individual doctors and medical researchers may have their patients' interests fully at heart and may be judged to be trustworthy; however, other doctors and medical researchers may have interests which motivate their advice to patients, apart from a concern with those patients' welfare, and they may not be well placed to know exactly what their patients' interests are in any case. The same is true of social science researchers. A social science researcher may have an interest in persuading potential subjects to consent to participate in research, and this motivation can lead them to discount the importance of the interests of those potential subjects, of which they may not be sufficiently aware anyway.

Suppose now that I have a trusted relative who understands me well and has my interests at heart. Call her Aunt Mabel. If I am contemplating participation in a social science experiment or study, then I can call on Aunt Mabel to help me decide whether or not to participate, without the nature of the deception in the experiment or study being revealed to me. Aunt Mabel can receive a thorough disclosure of the nature of the deception within the experiment or study from the social scientist, and, knowing what I am like, as she does, can consider the potential benefits to me of participation as well as the likelihood of my being harmed. If Aunt Mabel then advises me that she considers the experiment or study, on balance, beneficial for me to participate in, then it can be rational for me to choose to participate in the experiment or study on the basis of that advice. I have not been thoroughly informed about the experiment or study itself, so I have not formally given my informed consent to participation. Nevertheless I have rationally and autonomously decided to participate in the experiment or study. My decision is not based on a thorough disclosure of information about the experiment or study. However, it is based on sufficient information to be rational consent. Call this form of consent indirect consent.

The Nuremberg Code is commonly thought of as the progenitor of the modern conception of informed consent. However, the Nuremberg Code was not simply an early statement of the doctrine. Rather, it outlined a more general requirement. The Nuremberg Code does not demand that human subjects receive a disclosure of all relevant information, only that they be placed in a position where they have 'sufficient knowledge and comprehension of the subject matter involved as to enable . . . an understanding and enlightened decision' (Rule 1).
Indirect consent is a way of realising this goal without the disclosure of all relevant information. The Nuremberg Code had it right. What is important is not that people are directly informed about every piece of information which is relevant to their decisions, a thought that excessive focus on the modern doctrine of informed consent (as exemplified by Faden and Beauchamp's definition) has perhaps encouraged, but that our institutions and practices are set up so as to enable rational autonomous decisions based on sufficient information.

Unfortunately not everyone has an Aunt Mabel available, whom they can trust to act as an intermediary and inform them as to whether or not a social science experiment or study involving deception is safe to participate in. Perhaps, however, there are institutional equivalents to Aunt Mabel which we can adapt or set up. The major difference between the institutions which I envisage and institutions such as university ethics committees, as they are currently constituted, is that the institutions I envisage will act as providers of testimony, so as to enable individuals rationally to choose to participate in experiments and studies involving deception. Currently, institutions such as university ethics committees, which examine social science research proposals involving deception, typically act as substitutes for individual research subjects, diminishing their autonomy when they anticipate their consent, or when they decide that particular experiments and studies should be conducted regardless of consent. By acting as sources of testimony, such institutions can enable individuals to make rational decisions based on indirect consent. Instead of diminishing the autonomy of individuals they can act so as to enhance autonomy [35].

There is a disanalogy between indirectly consenting to participate in a social science experiment or study on the basis of the testimony of an institution and indirectly consenting on the basis of the testimony of a trusted individual, such as Aunt Mabel [36]. Because Aunt Mabel knows me well she is in a position to (at least roughly) track my thoughts. She is in a position to decide whether or not, for example, learning potential information about myself, during the course of an experiment or study, is likely to be harmful to me. The fact that I know that she knows me well is a crucial part of the reason why I can rationally accept her judgment. Institutions such as university ethics committees would not be in a position to know such explicit information about individuals. They cannot realistically hope to track my thinking. Therefore, it would seem that they are less reliable at providing me with useful advice than Aunt Mabel, even if they are otherwise reputable bodies.

Despite this disanalogy, I believe that institutional providers of testimony can provide sufficient information to enable rational indirect consent in some cases. What I envisage is a situation where the institution is able to determine which types of people will and will not suffer as a result of participation in a social science experiment or study involving deception.
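The structure I have in mind can be put schematically (the notation is mine, and merely compresses the proposal). For an experiment or study E, the institution publishes a recommendation r(E, t), taking the value 'suitable' or 'unsuitable', for each psychological type t that it is able to distinguish. A potential subject who (i) rationally judges the institution to be a reliable testifier, (ii) recognises her own type t*, and (iii) learns that r(E, t*) = suitable, is thereby placed in a position to give rational, autonomous indirect consent to E, even though no disclosure of the deception within E has been made to her.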
Suppose, for example, that the Milgram experiment is being considered. It may be determined that the average, psychologically robust person will benefit from the self-knowledge acquired as a result of the experiment, and that this benefit will outweigh harms experienced as a result of participation in the experiment. It may, however, be determined that people with certain types of personality will experience harms that outweigh benefits. An institution cannot provide advice tailored to me particularly, in the way that Aunt Mabel can, but it can provide conditional information which I can use. It can recommend that a particular experiment or study is or is not suitable for certain types of people to participate in. If information about psychological types is sufficiently fine-grained to be suitable for me, and if I can recognise my psychological type, then I can use this information to help make a rational autonomous decision regarding participation in that experiment or study. In some cases this will not be a practical possibility, because of the current limits of our understanding of human psychology. However, in many cases information fine-grained enough to be useful to most individuals need not be very fine-grained at all. It may be that some experiments or studies involving deception are suitable for everyone to participate in, and it may be that others are unsuitable only for a very narrow band of personality types.

Deceptive techniques can be ethically employed in social science experiments and studies in situations where appropriate intermediaries are available, and in situations where an appropriate institutional framework is put into place to enable indirect consent. This method will not work for non-laboratory deception. There is no realistic prospect of obtaining the indirect consent of members of the public to deceptive research practices outside the laboratory. If the autonomy of research subjects is to be respected, then such research should either be reproduced in the laboratory or not conducted [37].

Steve Clarke, School of Philosophy, La Trobe University, Bundoora VIC 3083, Australia.

NOTES

[1] American Psychological Association (1992) Ethical Principles of Psychologists and Code of Conduct, American Psychologist, 47, pp. 1597-1611.
[2] Kimmel surveys research ethics codes for psychologists in 11 different countries and geographical regions. According to him, many of these are modelled directly on the APA code. A. J. Kimmel (1996) Ethical Issues in Behavioural Research (Cambridge MA, Blackwell), pp. 325-346.
[3] I am simplifying matters here. There are costs to researchers involved in conducting experiments as well as some potential benefits to research subjects. Research subjects who are paid for their time can benefit further.
[4] S. Milgram (1974) Obedience to Authority (New York, Harper and Row).
[5] D. Bramel (1962) A dissonance theory approach to defensive projection, Journal of Abnormal and Social Psychology, 64, pp. 121-9.
[6] A. E. Bergin (1962) The effect of dissonant persuasive communications upon changes in a self-referring attitude, Journal of Personality, 30, pp. 423-36.
[7] Group infiltrations are typically performed for the purpose of passive observation rather than active experimentation, and for this reason are rather different from the other forms of research discussed.
[8] An influential methodological case for deception in social science research was made in H. A. Murray (1938) Explorations in Personality (New York, Oxford University Press).
[9] A. E. Gross and I. Fleming (1982) Twenty years of deception in social psychology, Personality and Social Psychology Bulletin, 8, pp. 402-8.
[10] Kimmel op. cit., pp. 75-82, includes a survey of literature on the frequency of deception in recent social science experiments involving human research subjects.
[11] A possible way of justifying the assumption would be by demonstrating that increases in self-knowledge are all beneficial because they all enhance autonomy. Pigden and Gillett assert just this in a defence of Milgram: C. R. Pigden and G. R. Gillett (1996) Milgram, method and morality, Journal of Applied Philosophy, 13, pp. 233-250. However, it is far from obvious that all increases in self-knowledge will enhance autonomy. Increases in self-knowledge which have the effect of eroding one's self-confidence can decrease one's competence as a self-governor. This topic deserves further investigation.
[12] S. Milgram (1977) The Individual in a Social World (Reading MA, Addison-Wesley), p. 14.
[13] A. C. Elms (1982) Keeping deception honest, in T. L. Beauchamp, R. R. Faden, R. J. Wallace and L. Walters, eds., Ethical Issues in Social Science Research (Baltimore, The Johns Hopkins University Press), p. 237.
[14] Elms op. cit.
[15] Elms op. cit.
[16] American Psychological Association op. cit.
[17] See E. Walster, E. Berscheid, D. Abrahams and V. Aronson (1967) Effectiveness of debriefing following deception experiments, Journal of Personality and Social Psychology, 6, pp. 371-80.
[18] F. Tesch (1977) Debriefing research participants: though this be method there is a madness to it, Journal of Personality and Social Psychology, 35, pp. 217-24.
[19] Elms op. cit., p. 241.
[20] See R. R. Faden and T. L. Beauchamp (1986) A History and Theory of Informed Consent (New York, Oxford University Press).
[21] Faden and Beauchamp op. cit., p. 275.
[22] Faden and Beauchamp op. cit., p. 235. A succinct discussion of the concept of autonomy can be found in T. L. Beauchamp and J. F. Childress (1983) Principles of Biomedical Ethics (Second Edition) (New York, Oxford University Press), pp. 59-61.
[23] These beneficial consequences and some others are listed in A. Capron (1974) Informed consent in catastrophic disease and treatment, University of Pennsylvania Law Review, 123, pp. 364-76.
[24] These and other differences between the social and biomedical sciences are discussed in R. Macklin (1982) The problem of adequate disclosure in social science research, in T. L. Beauchamp, R. R. Faden, R. J. Wallace and L. Walters, eds., op. cit., pp. 193-218.
[25] M. Wax (1977) Fieldwork and research subjects: who needs protection?, Hastings Center Report, 7, pp. 29-32.
[26] See F. D'Agostino (1995) The ethics of social science research, Journal of Applied Philosophy, 12, pp. 65-76. Group issues will be of relevance to biomedicine as well, particularly in cases where a medicine or medical technique is newly introduced into a culture.
[27] The idea of a 'licence to deceive' is suggested in G. Dworkin (1982) Must subjects be objects?, in T. L. Beauchamp, R. R. Faden, R. J. Wallace and L. Walters, eds., op. cit., pp. 246-254. A possible objection to it would be to point out that it is, in a sense, self-defeating, in that if it is widely known that social scientists are trying to deceive us then we will be alert to their deceptions and less likely to be deceived. However, the problem of erosion of trust in social scientists may effectively be inevitable in situations where social science research involving deception is widespread. Milgram (1974) op. cit., anticipating such a problem, deliberately chose to conduct his initial obedience studies away from universities, places where research subjects would be likely to come into contact with other research subjects and report the fact and nature of deception to them.
[28] Discussed in D. Baumrind (1978) Nature and definition of informed consent in research involving deception, in National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, DHEW Publication no. (OS) 78-0014 (Washington, D.C., Government Printing Office), Appendix, vol. 2, pp. 23-12.
[29] After the fact consent will closely approximate to informed consent in cases of research involving only passive observation (rather than active experimentation), as the overriding way in which consent could be violated in such cases is through the subsequent unauthorised use of information gained in the research.
[30] See E. Diener and R. Crandall (1978) Ethics in Social and Behavioural Research (Chicago, Chicago University Press), p. 46.
[31] Discussed in Baumrind, op. cit.
[32] For a discussion of the philosophical implications of accepting testimony, and a defence of its epistemic value, see C. A. J. Coady (1992) Testimony (Oxford, Oxford University Press).
[33] D. Hume (1957) An Enquiry Concerning Human Understanding (New York, Oxford University Press), s. 88.
[34] Whether these institutions deserve such reputations is an issue which I will not address.
[35] Of course, the testimony of such institutions will be assessed by potential research subjects on the basis of the usual tests of the reliability of testimony. In the case of university ethics committees the issue of independence from the interests of researchers will loom large. In general, any institution charged with the responsibility to provide testimony regarding the harms or benefits to be derived from participation in a social science experiment should be as independent as possible. In the context of medical experimentation it has been convincingly argued, by Cocking and Oakley, that advisers can manipulate human subjects by framing their advice in such a way as to appeal to non-rational preferences [D. Cocking and J. Oakley (1994) Medical experimentation, informed consent and using people, Bioethics, 8, pp. 293-311]. Manipulation, and related ethical issues regarding the relationship between institutions and the individuals who depend on their advice, are serious problems which are beyond the scope of this paper.
[36] Thanks to Fred D'Agostino for clarifying this difference.
[37] Thanks to Dean Cocking, Fred D'Agostino, Brian Ellis and Philip Pettit for helpful comments, as well as to audiences at the University of Melbourne Department of History and Philosophy of Science Staff Seminar and the Monash University Department of Philosophy Staff Seminar.