Department of Philosophy
Department Seminar

Veli-Pekka Parkkinen: How to tell if drugs work?

The fifth and final Department Seminar this Spring will be given by Veli-Pekka Parkkinen.


Abstract

According to prevailing evidence-based medicine (EBM) guidelines, randomized controlled trials (RCTs) provide the best evidence for the efficacy of medical interventions and, when available, trump other types of evidence. Some philosophers take issue with this aspect of EBM and argue that mechanistic evidence should be considered alongside RCTs (Russo & Williamson, 2007), while others defend EBM with minor qualifications (Howick, 2011; Howick, Glasziou, & Aronson, 2013). Yet others suggest that debating the merits of different types of evidence is beside the point as long as our theories of evidence and causality idealize away bias in the evidence base (Holman, 2017; Stegenga, 2018). In reality, clinical evidence may be biased by selective reporting, opportunistic data analyses that favor (commercially) desirable results, or outright fraud. On this point, Jacob Stegenga argues that clinical research is so rigged in favor of drug-based therapies that finding supportive evidence in the literature – whatever one takes such evidence to be – should add very little to our confidence that drugs really work (Stegenga, 2018).

This talk attempts to evaluate Stegenga's argument while taking bias in the evidence base seriously. In the picture Stegenga outlines, the objectivity of the clinical research literature is compromised in such a way that the probability of obtaining evidence favoring drug-based therapies is high whether or not the drugs work; consequently, such evidence has low confirmatory value. The relevant notion of objectivity concerns whole bodies of research literature, and it can even come apart from the objectivity of ground-level research activities. To measure this "metaobjectivity", one should therefore analyse whole bodies of research literature agnostically, rather than hand-picking examples of good or bad practice with hindsight. Fortunately, there are ways to do this, and I will demonstrate one potential approach. Clinical trials typically employ similar statistical analyses and report a p-value for the estimated treatment effect. The p-value is the probability of obtaining, just by chance, a result at least as extreme as the one actually observed; when this probability is small enough, the treatment is concluded to have a real effect. The definition of the p-value implies that in a collection of trials on drugs that actually work (respectively, do not work), the reported p-values should follow a characteristic distribution, unless the literature is biased. I will explain how this idea can be used to probe the objectivity of bodies of clinical research literature, and present examples using collections of trials on pharmaceutical treatments of heart disease and depression. The results come with too many caveats to be conclusive, but they tentatively show fairly low levels of bias: even though individual cases of bad practice are probably numerous, Stegenga's pessimism seems unfounded with respect to this literature. However, there are reasons to believe that some biases are stronger in observational than in interventional research, due in part to the institutional structure of science.
The evaluation of the objectivity of whole bodies of relevant evidence should therefore be built into the procedures for establishing causal claims in medicine more than it already is. I end by briefly considering the implications of biased evidence for the debate about what types of evidence are required to establish causal claims in medicine.
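The distributional claim in the abstract can be illustrated with a small simulation. This is not the speaker's actual analysis, just a minimal sketch of the underlying statistical fact: across many unbiased trials of an ineffective drug, p-values are approximately uniform on [0, 1], while trials of an effective drug pile p-values up near zero. The trial sizes, effect size, and the use of a simple two-sample z-test are illustrative assumptions.

```python
import math
import random

def two_sample_p(xs, ys):
    """Two-sided p-value for a difference in means, via a normal (z) approximation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # Standard normal CDF via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate_trials(effect, n_trials=2000, n_per_arm=50, seed=1):
    """Simulate many two-arm trials of a drug with the given true effect size."""
    rng = random.Random(seed)
    p_values = []
    for _ in range(n_trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        p_values.append(two_sample_p(treated, control))
    return p_values

p_null = simulate_trials(effect=0.0)  # drug does not work
p_real = simulate_trials(effect=0.5)  # drug works

frac_null = sum(p < 0.05 for p in p_null) / len(p_null)
frac_real = sum(p < 0.05 for p in p_real) / len(p_real)
print(frac_null, frac_real)
```

In an unbiased literature, roughly 5% of null-effect trials come out "significant" at the 0.05 level, while effective drugs produce significant results far more often. A literature distorted by selective reporting would show a tell-tale departure from these shapes, such as an excess of p-values just below 0.05.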

Literature:
Holman, B. (2017). Philosophers on drugs. Synthese, 1-28.
Howick, J. (2011). Exposing the vanities—and a qualified defense—of mechanistic reasoning in health care decision making. Philosophy of Science, 78(5), 926-940.
Howick, J., Glasziou, P., & Aronson, J. K. (2013). Can understanding mechanisms solve the problem of extrapolating from study to target populations (the problem of ‘external validity’)?. Journal of the Royal Society of Medicine, 106(3), 81-86.
Russo, F., & Williamson, J. (2007). Interpreting causality in the health sciences. International Studies in the Philosophy of Science, 21(2), 157-170.
Stegenga, J. (2018). Medical nihilism. Oxford University Press.