Remonstration

A recent conversation with some friends has me thinking about roles we can fruitfully play as philosophers of science. I thought I'd write up in a blog post my thoughts on something that came out of that: a role we sometimes play that I feel is not highlighted often enough.

In philosophy we learn about tools and methods of critical thinking and of argument construction and evaluation. For instance, a standard part of philosophical training is going through some basic logic. You should learn therein what it takes for an argument to be valid, and, going in the other direction, how one can demonstrate the invalidity of an argument by constructing counter-models. (If this doesn't mean anything to you, I will be going through an example later in this post!) That is just part of basic philosopher training. If you go into philosophy of science you will further specialise, perhaps learning about experimental technique, statistical methods, or theories of confirmation along the way. All of these can put somebody in a decent enough position to evaluate the cogency of arguments that scientists put forward, provided one familiarises oneself with the particular theoretical background the scientists one is evaluating are working within.
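
In the meantime, here is a quick, textbook-style illustration of what counter-model construction looks like (just a stock logic example, separate from the one I go through later): take the argument `all philosophers are mortal; Socrates is mortal; so Socrates is a philosopher'.

```latex
% A stock illustration of counter-model construction (not the example from later
% in the post). The argument, formalised:
%   P1: \forall x\,(Px \to Mx)   -- all philosophers (P) are mortal (M)
%   P2: Ms                       -- Socrates (s) is mortal
%   C:  Ps                       -- so, Socrates is a philosopher
% Counter-model: let the domain contain just Socrates, let M hold of Socrates, and
% let P hold of nothing. P1 is (vacuously) true, P2 is true, and C is false, so:
\forall x\,(Px \to Mx),\ Ms \ \not\models\ Ps
```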

And it matters that scientists are making cogent arguments! Science has a lot of social cachet; with some well-noted exceptions, folk trust scientists and will tend to believe claims that scientists put forward about the world. What scientists conclude is therefore deeply significant to our worldview and sense of self. Further, in many spheres of life we base policies on recommendations from scientific experts. The very enterprise of science involves moving vast numbers of people and resources around, and the opportunity cost of having all these smart folk spend their time in this way rather than on other socially valuable tasks is itself huge. We want scientists to be basing their claims, recommendations, and activities on sound argumentation and good reasoning, so as to ensure that this cachet and those resources are put to the best use we can manage.

So then, putting these two together, we get a natural thought about how philosophers of science should use our skills. We should monitor the arguments scientists make, and where we find that their methods or modes of argument are not capable of supporting the conclusions or recommendations they are making in light of those arguments, we should bring to bear our expertise in the evaluation of inferences or arguments (broadly construed) on calling this out and suggesting better practice for the future. (I recall reading, though I do not recall where, E.O. Wilson once writing that this is exactly what he thought of as the point of philosophers of science: people looking over his shoulder saying `Oh no, I don't think this is good enough, what about such and such counter-argument, eh?' He noted that while this could be pretty irritating in the moment, on reflection he thought it valuable.) I call this kind of thing `remonstration'; it's a kind of `speak truth to power!' norm, and I think we should see it as a valuable part of our mission as philosophers of science.

I am going to go through an example from my own work in a bit of detail below, but for some more illustrious examples one might want to check out: Clark Glymour's critique of the statistical reasoning that underlay the famous Bell Curve book and much of the rest of social psychology at the time, Nancy Cartwright's long-running project critically evaluating the limitations of randomised controlled trials for medical or social research, or Roman Frigg's work (discussed, say, at the end of this excellent episode of the generally excellent Sci Phi podcast) on over-confident and over-specific claims made on the basis of models of climate change.

But for the example of remonstration I am most familiar with (and also to allow me to explain and slightly reframe this previously published work of mine) I'd like to go through my paper On Fraud. One of the motivations for that paper was thinking about claims currently being made about how we should deal with the replication crisis in social psychology. Broadly, lots of claims in social psychology that were thought to have been securely established are being found not to stand up to sustained scrutiny when people attempt to replicate the initial experiments which led to their acceptance, or redo the statistical analyses with bigger or better data sets. In thinking about why this is occurring, a number of scientists have come to conclude that one (but not the only) source of the problem is this: scientists are not just seeking the truth for its own sake, but are instead being encouraged to pursue credit (esteem, reward, glory, social recognition by their peers in the scientific community) by various features of the incentive structure of science. This pursuit of credit itself incentivises bad research practices, ranging from the careless to the outright fraudulent. If only we could remove these rival incentives which are causing the misconduct, and instead encourage pure pursuit of the truth, we'd have removed the incentive to involve oneself in such research misconduct. Since I had seen some very similar arguments come up before in my more historical scholarship on W.E.B. Du Bois, my interest was very much piqued and I got to thinking about whether this argument should be accepted as a sound basis for science policy.

I came to conclude that the psychologists and sociologists of science making these arguments were making a subtle mistake in how they reasoned about policy in light of scientific evidence. They were doing good empirical work tracing out the causes of much of the research malpractice we witness in science. But on this basis they were concluding that if we removed the actual causes of fraud we'd see less fraud. That is to say, they were establishing premises about the causes of fraud in the actual world, and concluding that a policy which intervened on (in fact removed or greatly lessened) these causes would mean that there would be less fraud after our intervention. It's a natural thought, after all: if X was what was causing the fraud and now there's no more (or much less) X, well, you've removed the cause and so you should remove the effect, right? Not so. Such arguments are not valid -- their premises can all be true while their conclusion is false. So I constructed a counter-model, which is to say a model which shows that all of their premises can be true while their conclusion is false.
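
Stripped to its bones, the argument form I have in mind runs something like this (my schematic reconstruction, not a quotation from anyone):

```latex
% Schematic reconstruction of the policy argument under discussion
% (my gloss on its form, not anyone's exact words).
\begin{align*}
  \text{P1: } & \text{In the actual world, credit seeking is what causes research fraud.}\\
  \text{P2: } & \text{Policy } \Pi \text{ would remove (or greatly weaken) credit seeking.}\\
  \text{C: }  & \text{Were } \Pi \text{ implemented, there would be less fraud.}
\end{align*}
% P1 and P2 can both be true while C is false; the counter-model described below
% shows how.
```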

Without going into too much detail, I produced a model of people gathering evidence and deciding whether or not to honestly reveal what evidence they received when they go to publish. Fraud is an extreme form of malpractice, of course, but it would do no harm to my arguments to interpret the agents as deciding whether or not to engage in milder forms of data fudging or other research malpractice. We can model the agents as pure credit seekers: they just want to gain the glory of being seen to make a discovery. Or we can model them as pure truth seekers: they just want the community to believe the truth about nature. (We can also consider mixed agents in the model, but set that aside.) In the model credit seeking can indeed incentivise fraud, and for the sake of the counter-model we may grant that in the actual world all fraud is incentivised in this way. But what I show is that in this model, even if we suppose that there were some policy that could successfully turn all scientists into pure truth seekers, this would not guarantee that there is less fraud -- in fact truth seeking can, in some especially worrying circumstances, actually lead to more fraud!
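
To give a feel for how the worrying case can arise, here is a toy sketch written just for this post. It is emphatically not the model from the paper; every name, number, and utility function in it is invented for illustration. The set-up: a scientist's experiment comes out negative, the community updates by Bayes' theorem on whatever gets reported, and we compare a pure credit seeker (rewarded only for positive findings) with a pure truth seeker who remains very confident the hypothesis is true and just wants the community's credence to end up near the truth.

```python
# A toy sketch of the kind of counter-model I have in mind -- NOT the model from
# the paper. Every name, number, and utility function here is invented purely for
# illustration; read the paper for the real details.

def posterior(prior, evidence_positive, reliability=0.8):
    """Community credence in hypothesis H after a reported result, via Bayes' theorem.
    The experiment is assumed to point the right way with probability `reliability`."""
    like_true = reliability if evidence_positive else 1 - reliability
    like_false = (1 - reliability) if evidence_positive else reliability
    return (prior * like_true) / (prior * like_true + (1 - prior) * like_false)

def credit_seeker_utility(report_positive):
    """A pure credit seeker just wants to be seen to make a discovery:
    a positive finding earns credit, a null result earns (let us say) nothing."""
    return 1.0 if report_positive else 0.0

def truth_seeker_utility(report_positive, own_credence, community_prior=0.5):
    """A pure truth seeker wants the community's credence to end up near the truth.
    Expected utility: minus the expected squared gap between the community's
    post-report credence and the truth, the expectation taken over the agent's
    *own* credence in H."""
    c = posterior(community_prior, report_positive)
    return -(own_credence * (1 - c) ** 2 + (1 - own_credence) * c ** 2)

# Suppose the experiment actually came out negative, but the scientist remains very
# confident the hypothesis is true (own credence 0.95, perhaps on the strength of
# prior work they trust more than this one noisy result).
own_credence = 0.95

agents = [
    ("pure credit seeker", lambda report: credit_seeker_utility(report)),
    ("pure truth seeker", lambda report: truth_seeker_utility(report, own_credence)),
]

for label, utility in agents:
    honest = utility(False)      # report the negative result as it is
    fraudulent = utility(True)   # misreport it as a positive result
    choice = "misreport" if fraudulent > honest else "report honestly"
    print(f"{label}: honest {honest:.3f}, fraudulent {fraudulent:.3f} -> {choice}")
```

Run it and both agents prefer to misreport the negative result: the credit seeker because only positive findings earn credit, the truth seeker because reporting a result they themselves regard as misleading would, by their own lights, drag the community's credence away from the truth. A policy that converted every credit seeker into a truth seeker of this confident sort would not, in this little set-up, remove the temptation to commit fraud.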

There is a general lesson here, in fact, that I wish I had done more to bring out in the paper. The point is: if you are basing policy on empirical research, it is tempting to think that what you need to know is whether the policy would be effective in the actual world. That, after all, is where you will be implementing the policy! But that's the wrong causal system for evaluating the effects of your proposed policy. What you need to know is whether the policy would be effective in the world (or causal system) that will exist after the policy is implemented. In the actual world -- sure, credit seeking is causing malpractice. But the fact that you have removed that particular incentive to commit fraud does not by itself mean you have removed all incentive to commit fraud. It may be that in the world that exists after this intervention there are new temptations to commit fraud. Truth seeking itself may be one of them. Policy-relevant causal information must include counter-factual information: information about the world that will exist after a not-yet-implemented policy has been carried out.
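
One compressed way of putting the lesson (my own gloss and notation, not anything from the paper):

```latex
% The lesson, compressed (my own notation). Let M be the causal system of the actual
% world, and M_\Pi the causal system that would exist once policy \Pi is implemented.
\begin{align*}
  \text{What the evidence establishes:}    &\quad \text{in } M, \text{ credit seeking causes fraud.}\\
  \text{What the policy conclusion needs:} &\quad \text{fraud is rarer in } M_{\Pi} \text{ than in } M.
\end{align*}
% The first is a fact about M alone; the second compares M with M_\Pi, and facts about
% M by themselves do not settle what the remaining (or newly activated) causes of
% fraud get up to in M_\Pi.
```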

If you want the real details of my argument -- read the paper! But what I want to note here is how this is me trying to be the change I want to see in philosophy of science. I found some scientists making policy recommendations in virtue of their empirical research (in this case, policy affecting science itself). I thought about the structure of their arguments, and realised they were making implicit assumptions about counter-factual reasoning. A general philosophy education gives you tools for reasoning about counter-factuals, so I could bring that to bear. What is more, the general critical thinking (or logic) training that is part of being a philosopher points the way to counter-model construction as a means of critiquing arguments. Finally, discipline-specific training in the philosophy of the social sciences gave me tools for building models of social groups, which was what was of particular relevance here. I was therefore able to remonstrate, to bring my training to bear in calling attention to an error in scientists' reasoning, and what's more an error that (since it was supposed to be the basis of policy) has the potential to carry real social and opportunity costs. I don't claim, of course, that this is the best example of remonstration in the literature (cf. my illustrious colleagues above!) -- but I hope going through in depth an example I am intimately familiar with gives people a better sense of how philosophy of science as remonstration is a good use of our disciplinary tools and expertise.

Now, it is certainly not my claim that only philosophers of science engage in this kind of remonstration. Statisticians often engage in a very similar activity -- Andrew Gelman's blog alone is full of it. There is also a fine tradition of scientific whistleblowers who cry foul when misconduct is afoot. Remonstrating with scientists whose reasoning has, for one reason or another, gone astray ought not be, and fortunately is not in fact, left to philosophers alone. And, in case it needs to be said, nor is this (nor ought it be) all of what philosophers of science get up to. Most of my own work, for instance, is not remonstration.

But when I see accounts of the tasks of philosophy of science they typically fall into one of three categories. Concept construction or clarification, where the goal is something like producing or improving a tool that might help scientists do their job better. Scientific interpretation, where the goal is to do something like provide an understanding of scientific work that would make sense of the results of scientific activity, and tell us what the world would be like if our best evidenced theories were to be true. And meta-science, where the goal is to do something like provide an explanatory theory which tells us why it is that scientists reason (or ought to reason) in some ways rather than others. All of these can be valuable and I hope philosophers of science keep doing them. And I can even understand why people aren't keen to advertise the disciplinary mission of remonstration: it makes us into the stern, humourless prigs of science, somewhat akin to Roosevelt's critic on the sideline hating on the folk actually getting stuff done. But, since I think it can be good and necessary, I hope that, even if it doesn't win us friends, we hold true, along with our comrades elsewhere in the academy and with our eye on the social good, to the mission of remonstrating against scientific overreach, malpractice, or just plain old error, wherever we should see these arise.
