Unreliable

Research isn’t done in a vacuum. We rely on the knowledge already out there to decide what questions to ask and how to find answers. While it’s possible for a research idea to spring out of nowhere like Athena bursting from Zeus’ head, more commonly, even if the question is novel, we use established methods to find the answers. During today’s morning run, I was thinking about the many times I’ve based a large project on published evidence and methods, and nothing ever worked.

A responsible mentor will encourage grad students to use established methods and not try anything groundbreaking and risky, because they risk never graduating. Not just established methods, but methods that have been used successfully in the mentor’s own lab, so the grad student can learn them from people instead of from a paper. A researcher further along in his or her career can afford to take that risk.

When I was a grad student, my mentor suggested a project that required a method we hadn’t done in our lab, but a collaborator was doing successfully. She had published a couple of papers on it. We spent a day in her lab, then I went home and tried to get it going in ours. We had a phone meeting or two with her, and some emails, so she was continuing to help us. Then, in the middle of it all, she quit her job. We never did get the experiment working quite the way we wanted, although we did manage to get some data from that project. Fortunately that wasn’t my only project, so I had other data to fall back on for my dissertation.

As a postdoc I came up with another research question and dug into the literature to find out how best to answer it. There were a few somewhat questionable models of this disorder, nothing well established, and none of them modeled it very well. I found a couple of reports, from a German lab, that seemed promising. I tried to contact them, but they never responded. I designed a project around their method anyway, got a grant, and started trying to test it. It didn’t work at all.

Next I tried another method to test a different disorder. A researcher at a university I had previously been at had developed this one. I knew of her, even if I didn’t know her personally. I tried it out exactly as she described in her paper (which had been published in a high-impact journal). It didn’t work as she described. I called her and asked if she had any suggestions. Her response: “We’ve been having difficulty replicating our results.” Her own lab couldn’t repeat the study they had published in a highly respected journal!

When that sort of thing happens, should the authors retract their original paper? I don’t think so, because the data are the data. Statistics aren’t perfect; at a 95% or 99% confidence level (depending on where you set it), there is always a 5% or 1% chance of a false positive. The authors didn’t do anything wrong in the original study. But they should publish the follow-up study too. Unfortunately, no journal is going to want to publish that. Journals will publish “Hey, we’ve got this new thing!” or “We have evidence that the other guys were wrong.” But they won’t publish “We have evidence that the last paper we published is wrong.” It suggests that you didn’t know what you were doing last time, and if you didn’t know what you were doing then, why should they trust that you know what you’re doing now? Even though it happens to everyone.
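To put a rough number on that 95%: here’s a minimal sketch, with nothing taken from the actual studies above (the group size of 10 is arbitrary), showing that if you run a t-test on pure noise at a 0.05 significance level, about one experiment in twenty still comes out “significant.”

```python
# Minimal sketch: how often does pure noise pass a t-test at alpha = 0.05?
# (Hypothetical numbers; no connection to any study mentioned in the post.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 10          # arbitrary group size
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    control = rng.normal(0, 1, n_per_group)  # both groups drawn from the
    treated = rng.normal(0, 1, n_per_group)  # same distribution: no real effect
    _, p = stats.ttest_ind(control, treated)
    if p < alpha:
        false_positives += 1

print(f"'Significant' results on pure noise: {false_positives / n_experiments:.1%}")
# Prints roughly 5% -- the false-positive rate built into a 95% confidence level.
```

That 5% is exactly the sense in which an honest, well-run study can still report an effect that later refuses to replicate.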

You’d think after three papers like that I’d have learned. I certainly felt burned, and felt like I’d learned something; I’m very cautious about other people’s papers now. But nonetheless I ended up taking over a project that had been designed long ago, based on published data and with the cooperation, assistance, and advice of the lab that published it. It should have been safe enough, but the original method they had published hadn’t worked in our team’s hands. The team came up with another method, one that hadn’t really been used for this. (In fact I’m not sure where the idea came from.) It seemed, in theory, a much better method for many reasons, but it hadn’t been published in this context. The first experiment, with just 8 animals per treatment, worked great: the treated rats performed better than the controls. Then the next set of rats came in and the effect was the opposite. They repeated the experiment on five sets of rats. It was at that point that I joined the team and started crunching the data. There was absolutely no effect of treatment.
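For a sense of how easily a group of 8 can mislead, here’s a minimal sketch under the assumption that the treatment does nothing at all; the performance scores and their spread are invented, and only the group size comes from the project.

```python
# Minimal sketch: five cohorts of 8 rats per group, with NO true treatment
# effect. The scores (mean 100, SD 15) are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_per_group = 8
n_cohorts = 5

for cohort in range(1, n_cohorts + 1):
    control = rng.normal(100, 15, n_per_group)
    treated = rng.normal(100, 15, n_per_group)  # same distribution as control
    diff = treated.mean() - control.mean()
    print(f"Cohort {cohort}: treated - control = {diff:+.1f}")
# With groups this small, the observed difference routinely swings from clearly
# positive to clearly negative between cohorts; pooling everything shows no effect.
```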

We never did get what we hoped for out of that study. We couldn’t find a valid positive control, among other things.

My initial reaction is to wonder how I can avoid this in the future. The German group that never responded to me was a big red flag. Always consulting the authors long before I even start a project is another safeguard, so long as they are honest with me about little things like “We haven’t been able to replicate our PNAS results.” But some of my experiences were with collaborators. They didn’t do anything wrong, yet we still ended up investing a lot in projects that went nowhere.

My second reaction is to wonder whether this isn’t part of a bigger picture. Something to do with too little rigor in publication. Not enough statistical training for authors or reviewers. A result of the pressure to publish or perish. When a rejected paper can mean a dead career, authors fight hard against that rejection. I know, I’ve been there.
