
Misleading research participants: moral aspects of deception in psychological research


Ron L.P. Berghmans

Netherlands Journal of Psychology, 63 (March 2007), pp. 14-20

Keywords: psychology; research; deception; ethics

In psychological research with human subjects, it is not uncommon practice to use deceptive techniques. Deception is considered necessary when accurately informing participants about the goal of the research could bias their responses, thereby impairing the validity of the resulting data. Thus, the practice of deception can be situated in the area of tension between on the one hand the duty of researchers and the research community to treat research participants with respect, and on the other hand scientific and methodological standards which are decisive for the scientific value of the research.

In this paper, firstly different forms and topics of deception are described. Then the morally problematic character of deception is assessed and two different ethical approaches to the practice of deception are distinguished. In the final part of the paper an ethically justifiable way of dealing with deception in psychological research is proposed.


In psychological research with human subjects, it is not uncommon practice to use deceptive techniques (Sieber, Iannuzzo & Rodriguez, 1995). Deception is considered necessary when accurately informing participants about the goal of the research could bias their responses, thereby impairing the validity of the data.

There may be sound methodological reasons for using deception to probe for the truth about human attitudes and beliefs and their effects on behaviour. Honestly informing participants can present an obstacle to research and precludes certain kinds of research altogether (Wendler, 1996). However, when deception is used, a conflict between the means and ends of scientific investigation ensues: the end of discovering the truth is pursued by the means of deliberate untruth (Miller, Wendler & Swartzman, 2005).

Thus, deception can be situated in the area of tension between on the one hand the duty of researchers and the research community to treat research participants with respect, and on the other hand scientific and methodological standards which are decisive for the scientific value of the research.

Deception occurs whenever investigators intentionally communicate in a way that produces false beliefs in the participants (Wendler & Miller, 2004). Although deception is most commonly associated with psychological and other social scientific research, it also occurs in clinical research settings. Interestingly, in clinical settings deception seems to raise much more controversy than in social research settings. In this paper, firstly different forms and topics of deception will be described. Then the morally problematic character of deception is assessed and two different ethical approaches to the practice of deception will be distinguished. In the final part of the paper an ethically justifiable way of dealing with deception in psychological research will be proposed.

Milgram’s obedience experiments

A much debated and well-known case of deception in social scientific research is the series of Milgram obedience experiments (Pigden & Gillet, 1996; Herrera, 1999; 2001; Cave & Holm, 2003; Blass, 2004). In these experiments volunteer participants (so-called ‘teachers’) were led to believe that so-called ‘students’ had to learn word pairs and received electric shocks of increasing voltage each time they failed to produce the correct answer. The volunteers administered the shocks and falsely believed that the shocks were real, whereas in fact the so-called students were collaborating with the researcher and were not subjected to electricity at all, but acted as if they were by yelling, screaming and begging for the experiment to stop. Sixty-five percent of the ‘teachers’ were prepared to torture a fellow human being by administering shocks up to the maximum of 450 volts, simply because they were asked to do so by the researcher, who consistently referred to the importance of the experiment and the need to continue administering shocks whenever the student gave a false answer or no answer at all.

The Milgram experiments have been much debated in the literature, but it should be recognised that such crude forms of deception are now rarely, if ever, used in social scientific research (as far as I can see). Deception in research now generally takes much subtler forms, owing to changes in theory, methods and ethical standards (Korn, 1998).

Forms and topics of deception

Two main ways of deceiving research participants can be distinguished (Wendler & Miller, 2004). First, investigators may deceive participants by intentionally giving them false information. This was the case in the Milgram experiments. Second, investigators may deceive participants by intentionally withholding information in order to produce false beliefs. More specifically, several kinds of deception can be distinguished (Sieber, Iannuzzo & Rodriguez, 1995):

  • participants may take part in one of several specified conditions of a study, without knowing which condition actually applies (as in placebo studies);

  • participants may waive their right to be informed but are not explicitly forewarned of the possibility of deception;

  • participants may believe they are engaging in a truthful informed consent procedure, but are actually misinformed about some aspects of the nature of the research;

  • participants may not know that they are taking part in research at all;

  • the aim of the research may be so different from what participants expect that they behave under incorrect assumptions.

It is clear that not all the different forms of deception should be considered equally morally problematic. This issue will be addressed further on in this paper.

In addition to these different forms of deception and shading the truth, different topics of deception and of false or incomplete information can be distinguished. Subjects may be given false information about the main purpose of the study or about stimulus materials (bogus devices); the use of a confederate may cause them to misunderstand the actual role of some individual (role deception); they may be given false feedback about themselves or about another person; or they may be kept unaware of being subjects of research, of the fact that a study was in progress at the time of manipulation or measurement, or of being measured (e.g., video-taped) (Sieber, Iannuzzo & Rodriguez, 1995).

Deception as a morally questionable practice

It is obvious that all forms of deceiving research participants and shading the truth from them undermine the moral and legal notion of free and informed consent, which is one of the foundations of contemporary research ethics (Foster, 2001; Childress, Meslin & Shapiro, 2005; Fishman, 2000). Although it is recognised that the ideal of informed consent may be an illusion, and that in the practice of medicine, health care and research it can only be realised in suboptimal ways, it remains an ethical guiding principle that researchers should strive for (Berg, Appelbaum, Lidz & Parker, 2001; O’Neill, 2003).

Given the central value of respect for persons and for their individual autonomy in contemporary Western societies, deceiving or misinforming potential research participants is an infringement of their right to choose freely what course of action they would like to take. If researchers use deception and other forms of shading the truth, prospective participants do not have the full opportunity to decide whether to take part in a particular research project. They are not told what to expect regarding goals, procedures or risks and burdens of the study. The researcher may extract kinds of information that participants might not wish to reveal (Sieber, Iannuzzo & Rodriguez, 1995).

In all these respects, deceiving or misinforming participants should be considered prima facie morally wrong and thus unjustified from an ethical point of view (Clarke, 1999).

The moral debate on deception in research: deontological and utilitarian perspectives

In the moral debate, two extreme positions on the justification of deception in social scientific research can be distinguished. These will be referred to as deontological absolutism and utilitarian absolutism.

Deontological absolutism

Deontological absolutism starts from the Kantian premise that lying is always wrong. In Die Metaphysik der Sitten, Kant discusses the lie primarily as an injury (‘Verletzung’) of man’s duty towards himself:

‘The greatest violation of man’s duty to himself, regarded merely as a moral being (the humanity in his own person), is the contrary of truthfulness: the lie (…). … The lie can be an external one (mendacium externum) or also an internal one. By the former, a man makes himself an object of contempt in the eyes of others; by the latter, which is worse still, in his own eyes, and he violates the dignity of humanity in his own person… The lie is the abandonment and, as it were, the annihilation of one’s human dignity.’ (Kant, 1797).

The position of deontological absolutism in the contemporary debate on deception in research is most vigorously argued for by the American philosopher Sissela Bok (1992; 1995). Bok argues, firstly, that what some researchers take to be a simple trade-off of minor violations of the truth for the sake of access to far greater truths in fact represents a profound miscalculation with far-reaching and cumulative reverberations. Secondly, she claims that scientific truth-seeking involves hard choices with respect to methods and, in turn, to personal integrity, not only in particular research projects but also with regard to the fragile research environment in its own right (Bok, 1995, 1-2). Bok is particularly concerned about the possible damage to trust resulting from deceptive practices, and about the risks run by researchers who engage in such practices. As for Kant, truth and truthfulness towards oneself and others are Bok’s central concerns. She claims that although we can surely never exhaust the domain of truth, what we can know far more clearly is whether we intend to deal truthfully with others or to mislead them (Bok, 1995, 14). She concludes:

‘We are misinformed often enough, blunder often enough, shield ourselves enough, and live in deep enough self-imposed shade. We must not add to those forms of distortion by intentionally choosing to engage in deception or self-deception.’ (Bok, 1995, 15).

The major problem with deontological absolutism is that it sets a high moral ideal of truth and truthfulness, and as a result would frustrate any research project in which the researchers do not live up to this ideal, or in which the research participant is not fully aware of all the ins and outs of the study in which he or she is participating.

Utilitarian absolutism

The opposite of deontological absolutism is utilitarian absolutism. On this view, a trade-off between the wrong or harm of deception and the benefits resulting from the research can be made and justified. In a case book on research ethics this approach is clearly expressed in the following citation:

‘Legitimate uses of deceptive research practices are situations in which the research cannot be conducted unless the subject is kept in the dark about the purposes of the research.’ (Penslar, 1995, 112).

As long as the balance is in favour of these benefits, research involving deception is considered justifiable. This view goes back to the classic utilitarian ideal that we should strive to maximise happiness (or health, or welfare) for the greatest number of members of society (Pettit, 1991). This may imply that individuals or particular groups or subgroups in a society may have to make sacrifices for the greater good of society at large. The example of the killing of one person in order to save the lives of 100 others (who otherwise would have been killed) is illustrative in this respect. Utilitarianism does not deny that the killing of this one person is a moral wrong, but argues that this moral wrong is justifiable in the light of the good (100 lives saved) which follows from this wrongful act. In the same way, deception in research is prima facie morally wrong, but can be justified in the light of the good consequences which may flow from the experiment in the interest (actual or future) of others.

Utilitarianism is problematic in a number of respects. Firstly, individual rights may be sacrificed for the greater good of others, which opens the way for human rights abuses, of which there have been historical examples. Even in recent history there are examples of such abuses in medical research contexts (Welie & Berghmans, 2006).

A second problem concerns the utilitarian calculus of harms and benefits. It is sometimes argued that deception in research is justifiable since in many cases there appears not to be any harm involved for the participating subjects (Wendler, 1996). However, little research has been done to gain insight into the views and experiences of participants who have been deceived in research. Do they feel that they have been harmed in any way? And if so, what is the character of these harms? How sincere are participants when interviewed about the way they feel about being deceived? And what about trust in the researcher and the research community? Is the reputation of all psychological researchers compromised? (Ortmann & Hertwig, 1997; 1998; Kimmel, 1998; Korn, 1998). Do deceived participants feel that researchers are to be mistrusted, and will they refrain from participating in research projects in the future?

It cannot be denied that such empirical questions deserve attention, but they cannot be considered decisive from a moral point of view: they carry more weight within a utilitarian calculus of benefits and harms, and less within a deontological perspective.

Is it necessary to choose between one of these absolute ethical positions, or is an intermediary position possible?

Towards an intermediary position: deception and informed consent

I want to argue for an intermediary position, starting from the premise that important psychological and social scientific research questions will not be researchable if all kinds of deception are prohibited. In order to develop my intermediary position I will start with a discussion on the position of the American Psychological Association on research ethics. This position can be considered as exemplary for the ethical codes of psychological associations in a number of other countries.

The Ethics Code (Ethical Principles of Psychologists and Code of Conduct) of the American Psychological Association – in effect since June 2003 – contains a paragraph on deception in research (8.07), which runs as follows (Bersoff, 2003: 39-40):

  1. Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.

  2. Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.

  3. Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.

This last issue, generally qualified as ‘debriefing’, is separately addressed in the following paragraph (8.08):

  1. Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware.

  2. If scientific or human values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.

  3. When psychologists become aware that research procedures have harmed a participant, they take reasonable steps to minimise the harm.

Considering these paragraphs of the APA Ethics Code, its predominantly utilitarian character is quite clear. The Code starts from the premise that deception is justified whenever there is a ‘significant scientific, educational or applied value’ and ‘effective nondeceptive alternatives are not feasible’. The value of accumulating knowledge takes precedence over all other considerations, some of which are considered secondary, particularly what may be called the harm principle. The only situation in which psychologists are not allowed to deceive prospective participants is if it can be ‘reasonably expected’ that the research will ‘cause physical pain or severe emotional distress’.

The only ‘compensation’ misinformed research participants may expect from the untruthful researcher is debriefing. But even here, it is acceptable to delay or even withhold information if ‘scientific or human values’ justify this. Again, scientific interests may trump considerations of participant autonomy, privacy, freedom and dignity.

From an ethical point of view, this Ethics Code is too close to utilitarian absolutism, and too far removed from deontological absolutism. Respect for individual autonomy and freedom of choice is subordinated to scientific interests.

Informed deception: compromise between deontology and utilitarianism

In order to develop and defend an intermediary position towards deception in psychological research, I want to address two parallel cases which involve generally accepted practices that are exceptions to the ideal of informed consent: waiver of the right to information and the use of placebo in clinical trials.

Waiver of the right to information

Informed consent involves the right of a patient or research participant to be informed and to make a free and well-considered decision based on that information (Faden & Beauchamp, 1986). However, it does not imply a duty to be informed. Participants may freely and voluntarily choose to waive their right to information (Ost, 1984; Feinberg, 1986). The patient or research participant then delegates decision-making authority to the physician or to someone else, or asks not to be informed. In effect, he or she makes a decision not to make a (wholly) informed decision (Beauchamp & Childress, 2001).

Thus, it is recognised that a claimant of a right has discretion over exercising this right. This does not mean that waivers of information are without problems. A general acceptance of all waivers of information in research and therapeutic settings could make patients more vulnerable to those who have conflicts of interest or abbreviate or omit consent procedures for convenience (Beauchamp & Childress, 2001). Nevertheless, this practice of waiving information shows that consented-to forms of non-information are not necessarily in conflict with the moral value of respect for autonomy and the principle of informed consent.

Use of placebo

A second way of approaching deception in psychological research is to make a parallel with the use of placebo in medical research, in particular regarding pharmaceuticals (Wendler, 1996). It is recognised that the use of placebo can be morally justified under a number of conditions and in a number of particular circumstances (Levine, Carpenter & Appelbaum, 2003). One necessary condition is that the prospective participant is informed about the fact that he or she has a particular chance of receiving placebo, and about the fact that he or she (and possibly the researcher too) will not know whether (or when) he or she actually receives placebo. If the participant consents on the basis of this information (and a number of other necessary conditions are met), then placebo research can be considered morally justified.

Although in this way placebo use does not involve deception per se, the consent of the participant is less than fully informed. He or she is kept uninformed about the actual intervention he or she will receive.

Consent-to-deception: second-order consent

Waivers of the right to be informed and the use of placebo in research are practices which imply that patients and participants are less than optimally informed about treatment and research interventions. The moral legitimacy for these practices is rooted in the free consent of the patient or research participant. This consent can be qualified as ‘second-order consent’; it implies a reflected decision of the participant to be kept unaware of particular information which might be relevant for his or her decision-making in a particular situation.

By requiring psychology researchers to inform prospective participants of the fact that the research has a deceptive character, a balanced compromise between deontological and utilitarian approaches to the ethics of this kind of research may be reached. The most straightforward way of apprising participants of the fact that a particular study is deceptive is to require precisely that information as part of the informed consent process (Wendler, 1996). This, too, involves ‘second-order consent’: prospective participants are informed of the fact that information is shaded or distorted, but are kept uninformed about the exact nature of this shading or distortion (Miller, 2004).

In the informed consent procedure, the following could be said to the prospective participant (Wendler, 1996):

‘You should be aware that, in order to complete this study, the investigator cannot inform you of all its details. For this reason, certain details have been left out of the description of the study, or intentionally misdescribed. However, the investigator will be happy to explain these details to you [later]. In addition, you are free to choose not to participate if you do not like the use of deception, or for any other reason.’

In this way, potential research participants who are bothered by deception per se will be able to avoid participating in deceptive studies, participants with idiosyncratic concerns will have the opportunity to reveal them, and other participants will have the opportunity to consent to being deceived. Thus a morally justified balance is found between the interests of science on the one hand, and the duty of researchers and the research community to respect the individual autonomy of prospective research participants on the other.

In conclusion

Deceptive practices are frequently used in psychological research with human participants, and it is uncontested that many interesting and important research questions cannot be validly answered without using some form of deception. Nevertheless, deception is morally problematic, because it prevents participants from choosing freely and without coercion whether or not to participate in a specific research project. This implies that deceptive practices should be used with great caution, and only for important research questions which cannot be answered without some form of misinforming participants and for which no less deceptive or non-deceptive approaches are feasible.

Moreover, the consent-to-deception condition should be satisfied, which means that prospective research participants knowingly and willingly agree to not being fully informed about the actual goals of the research. A number of additional conditions should then also be met:

  • Participants must never be deceived about aspects of a study, such as risks, discomfort, or unpleasant emotional experiences, which would affect their willingness to participate (in so far as these are known to the researcher or can be known on the basis of evidence);

  • Participants should be explicitly offered the opportunity for debriefing after the study has been concluded (i.e. not before all subjects have completed their participation, in order to prevent ‘earlier’ participants from informing ‘later’ participants about the real purpose of the study).

In any research project involving some form of shading the truth from research participants, researchers should be explicit about this fact. The burden of proof for the justification of the necessity of not fully or truthfully informing prospective participants lies with the researcher. Shading the truth in psychological research is and should remain an exception to the general duty of truthfulness and as such requires strong justification in each particular study.

By asking consent from their participants, researchers will no longer be guilty of deceiving participants before and during the study, and can thus maintain their moral integrity. Moreover, public trust in psychological research is fostered, and the risk of reputational spillover effects for the profession is reduced (Ortmann & Hertwig, 1997).

References

1. Beauchamp, T.L., & Childress, J.F. (2001). Principles of biomedical ethics (5th ed.). Oxford: Oxford University Press.
2. Berg, J.W., Appelbaum, P.S., Lidz, C.W., & Parker, L.S. (2001). Informed consent: Legal theory and clinical practice (2nd ed.). New York: Oxford University Press.
3. Bersoff, D.N. (2003). Ethical conflicts in psychology (3rd ed.). Washington, DC: American Psychological Association.
4. Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley Milgram. New York: Basic Books.
5. Bok, S. (1992). Informed consent in tests of patient reliability. Journal of the American Medical Association, 267, 1118-1119.
6. Bok, S. (1995). Shading the truth in seeking informed consent for research purposes. Kennedy Institute of Ethics Journal, 5, 1-17.
7. Cave, E., & Holm, S. (2003). Milgram and Tuskegee – Paradigm research projects in bioethics. Health Care Analysis, 11, 27-40.
8. Childress, J.F., Meslin, E.M., & Shapiro, H.T. (Eds.) (2005). Belmont revisited: Ethical principles for research with human subjects. Washington, DC: Georgetown University Press.
9. Clarke, S. (1999). Justifying deception in social science research. Journal of Applied Philosophy, 16, 151-166.
10. Faden, R.R., & Beauchamp, T.L. (1986). A history and theory of informed consent. New York/Oxford: Oxford University Press.
11. Feinberg, J. (1986). The moral limits of the criminal law. Vol. III: Harm to self. New York/Oxford: Oxford University Press.
12. Fishman, M.W. (2000). Informed consent. In B.D. Sales & S. Folkman (Eds.), Ethics in research with human participants (pp. 35-48). Washington, DC: American Psychological Association.
13. Foster, C. (2001). The ethics of medical research on humans. Cambridge: Cambridge University Press.
14. Herrera, C.D. (1999). Research ethics at the empirical side. Theoretical Medicine and Bioethics, 20, 191-200.
15. Herrera, C.D. (2001). Ethics, deception, and ‘those Milgram experiments’. Journal of Applied Philosophy, 18, 245-256.
16. Kant, I. (1982). Die Metaphysik der Sitten. Werkausgabe Band VIII. Edited by Wilhelm Weischedel. Frankfurt: Suhrkamp Taschenbuch Wissenschaft (originally published 1797).
17. Kimmel, A.J. (1998). In defense of deception. American Psychologist, 53, 803-804.
18. Korn, J.H. (1998). The reality of deception. American Psychologist, 53, 805.
19. Levine, R.J., Carpenter, W.T., & Appelbaum, P.S. (2003). Clarifying standards for using placebos. Science, 300, 1659-1661.
20. Miller, F.G. (2004). Painful deception. Science, 304, 1109-1110.
21. Miller, F.G., Wendler, D., & Swartzman, L.C. (2005). Deception in research on the placebo effect. PLoS Medicine, 2(9), e262.
22. O’Neill, O. (2003). Some limits of informed consent. Journal of Medical Ethics, 29, 4-7.
23. Ortmann, A., & Hertwig, R. (1997). Is deception acceptable? American Psychologist, 52, 746-747.
24. Ortmann, A., & Hertwig, R. (1998). The question remains: Is deception acceptable? American Psychologist, 53, 806-807.
25. Ost, D.E. (1984). The ‘right’ not to know. The Journal of Medicine and Philosophy, 9, 301-312.
26. Penslar, R.L. (Ed.) (1995). Research ethics: Cases and materials. Bloomington and Indianapolis: Indiana University Press.
27. Pettit, P. (1991). Consequentialism. In P. Singer (Ed.), A companion to ethics (pp. 230-240). Cambridge: Basil Blackwell.
28. Pigden, C.R., & Gillet, G.R. (1996). Milgram, method and morality. Journal of Applied Philosophy, 13, 233-250.
29. Sieber, J.E., Iannuzzo, R., & Rodriguez, B. (1995). Deception methods in psychology: have they changed in 23 years? Ethics & Behavior, 5, 67-85.
30. Welie, S.P.K., & Berghmans, R.L.P. (2006). Inclusion of patients with severe mental illness in clinical trials: Issues and recommendations surrounding informed consent. CNS Drugs, 20, 67-83.
31. Wendler, D. (1996). Deception in medical and behavioral research: is it ever acceptable? Milbank Quarterly, 74, 87-114.
32. Wendler, D., & Miller, F.G. (2004). Deception in the pursuit of science. Archives of Internal Medicine, 164, 597-600.
Copyright 2007, Bohn Stafleu van Loghum, Houten
