When I first came to my institution, I inherited a funded project. The project had foundered because no one had the time to work on it. In theory, I had all the resources I needed; the project just needed someone (me) with time to pursue it. The most daunting hurdle was that the project was in a field I knew very little about. The project team had a lot of the knowledge and expertise, though, so it turned out that I didn’t really need to become an expert in the field. At least, the most pressing reason to do so was my own ego, because I was uncomfortable discussing a subject I was ignorant of.
Working on this project has given me a lot more confidence in talking about something I don’t know much about. I don’t really like it when people do that, because they tend to have opinions that don’t hold up, but I’ve gained a new trust in myself and in my ability to form, and change, opinions based on the available data.
One component the team had been missing was expertise in animal research, which I supplied. One team member had some experience with animals, but not a lot, and not with a behavioral endpoint.
The next challenge on this project was coordination. My role was chiefly that of project coordinator; everyone else had the skills to do all the pieces. I learned some skills myself, and I did a lot of scheduling and organizing.
Some work had been done before I arrived. I analyzed that data with the help of the statistician. The dismaying result was that our treatment (manual therapy) did NOT improve the endpoint (running behavior) after injury. The immediate conclusion one might draw is that manual therapy did not work. But there was a fatal flaw in the study design: there was no POSITIVE CONTROL.
Being a trained scientist, I spotted that immediately. (That is slightly tongue-in-cheek. There are many scientists who would miss that. In fact, all of the team members involved in designing the project missed it.) Without a positive control, our results are meaningless. We can’t say that manual therapy didn’t work because we don’t know if anything would work in our model. And so I set out to find a positive control.
The obvious choice was morphine. But that didn’t work either. The half-life of morphine is 6 hours, and rats do most of their running at night, so a daytime dose had likely worn off by the time they ran. So we changed the study design so that the morphine was given right before their running. It still didn’t work. We tried two doses. We tried a steroid. We tried a prescription NSAID known as Toradol. None of it worked. At that point, with lots of data but not much to say about it, we tried to publish. One of the reviewers also missed the importance of a positive control and concluded that manual therapy didn’t work. Our manuscript was rejected.
We tried many, many variations on the manual therapy protocol. It didn’t matter what the positive control was; if we could make something work, we had a positive control, even if it turned out to be manual therapy, the very thing we were trying to test. None of the variations worked, either.
Trying to publish negative data is difficult. I suggested we submit to the Journal of Negative Results, but my team members didn’t like that idea. (I was being serious. I think JNR is an admirable project.) Honestly, I’m not sure we could even get into JNR. They emphasize study design, and I think they would probably reject us because we don’t have a positive control. It’s not really a negative result unless there’s a positive control.
Recently, a journal editor saw our poster at a conference and expressed interest despite all the negative data. Now that we have tried all these variations on the manual therapy protocol, we have even more data. So maybe we will get published.
Which brings me to another ethical issue. Have you heard of grade inflation? Well, there’s also publication inflation. There are so many journals, and so many articles published every year, that the amount of data and words is immense. Even with smart search engines, digging through it all to find the one gem of information you need when you need it is challenging. There is huge pressure to publish, and scientists resort to tricks like breaking an experiment into ridiculously small pieces, sometimes called the “least publishable unit,” to maximize the number of publications on their CVs.
You should share your data, because that fosters science and collaboration and progress and gives meaning to your work. But you should not over-share, diluting the pool of publications with meaningless data or splitting up your data into puzzle pieces that others have to put back together.
Just a note: I have only seen one researcher in the habit of publishing the “least publishable unit,” and I’m not entirely certain she did engage in it. So I’m not sure the practice is as widespread as it has been made out to be. Also, combining more data into a single publication typically makes for a stronger paper, and most researchers I know want to get into the more prestigious journals.
I conclude that publishing our negative data would not unduly dilute the publication pool. On the other hand, I also believe that if we chose to withhold our data, that would be ethical too, because there is not much interest in what we have to share.