– This is a guest post written by Manisit Das (see bio at end of article).
“I’M DISAPPOINTED with the negative result,” my advisor finally blurted out after a moment of awkward silence. “This is probably the end of the project.”
My heart sank, but I wasn’t surprised. I’d been trying to create a 3D model of pancreatic cancer in the lab so that I could test how some anti-cancer drugs work, and had just handed over my latest results to my advisor. The results showed that my model didn’t work – some of the key signatures of pancreatic tumors weren’t there. Without a model that mimicked the cancerous environment, I’d have no way of knowing whether the drugs I was testing behaved the way I would expect them to behave in an actual tumor.
A few months later I came across a research article reporting negative results that looked very similar to the ones I’d reluctantly presented to my advisor. The authors of the paper had done more in-depth work than I had, but had still concluded that the model was useless. While it was reassuring to know that other scientists were struggling with similar problems, I realized, with dismay, that the article had been published before I started my project. Even so, it had not been easy to find, because the negative data made up only a small section of the paper. If I had found it sooner I could have learned from their work and saved myself a lot of time and effort.
As frustrating as this was, I know negative results are an unavoidable and essential feature of science. This particular encounter wasn’t a first for me; some of my earlier work had failed too.
My first project in my thesis lab had revolved around searching for new pancreatic cancer vaccines, which could fight cancer by teaching the immune system to recognize the warning signs and destroy cancerous cells.
In a cancerous tumor, cells look abnormal and produce unusual proteins; injecting small pieces of these proteins into a cancer patient puts the immune system on high alert. Once they know what to look for, protective immune cells track down and destroy anything that resembles the abnormal cancerous proteins. My task was to find one such piece that could act as a warning and lead the immune cells to the cancerous target.
My first few months in the thesis lab were spent narrowing down the list of suitable vaccine candidates. After choosing the best options I started to test them in animals. But to my utter disappointment, none of my chosen pieces worked as a vaccine.
Disheartened, I moved on. Apart from showing them to my advisor and mentioning them briefly during a department presentation, I didn’t share the negative results with anyone. I didn’t think anything of this at first, but later, after finding the research paper about the unsuccessful pancreatic cancer model, I started to question why I hadn’t put any effort into publishing my negative experiments. Did I believe that they were as important as my positive data? At the time – no, I did not. But wouldn’t it help others if the results were made public?
As a graduate student, I’ve learned that academic research is supposed to be objective and unbiased. However, I’ve also discovered that it is difficult to thrive – or survive – in academia. From what I can see, exciting, positive results published in high-impact journals give us the necessary edge to out-compete peers in a race to win tenured jobs and secure funding. Since starting my academic career I have realized that we must sell our research and package it in a way that makes it either impressive or interesting.
On his show Last Week Tonight, the comedian John Oliver recently did a piece about science communication, pointing out that nobody would care if a scientist showed up and told everyone that there’s “nothing up with acai berries”. To me, this makes sense, and it fuels my preference for positive data: papers built on negative data are not cited as much and require far more effort to justify, so they are not as academically rewarding to write. They’re also less likely to be published.
As someone who wants to build a strong academic career, I do not feel incentivized to publish negative results, although I acknowledge that we should talk about them. My priorities focus on establishing my dissertation on the strong foundations of a rational hypothesis that is corroborated by positive rather than negative results.
To me, it always seems safer and more rewarding to invest in positive data. Beyond the low impact of negative data, there’s also the stigma to consider. From my experience, failed experiments look more like the products of a poorly constructed hypothesis, or of a lack of scientific insight, both of which reflect poorly on the researcher. To avoid this, it’s easier to let negative results rot away on a hard drive than to share them with the scientific world.
However, since reading the paper about the pancreatic cancer model, it seems obvious that negative data isn’t something that should always be associated with sloppy science. In fact, it can provide useful insight and limit the number of times we make the same mistakes.
My advisor tells me: “Negative data are important as long as they teach you something.” This is good advice, but while I would like others to publish their negative data, I still struggle to justify whether it’s worth spending more time on sharing my own negative results. After all, if we’re all fighting for the same jobs, I’d be at a disadvantage if I slowed down to work on my negative data, while others raced ahead with their positive results.
Biology and immunology are complicated and empirical sciences, which only magnifies the problem caused by sweeping negative data under the rug. Academia is a tough industry, and although there is a lot of bias, I’m starting to see that we must all take responsibility for our own research. If we were somehow able to forget about our careers, our funding and our futures, and take a step back to focus on reporting only excellent science – regardless of whether or not it’s negative or uninteresting – I’d quickly work towards publishing all my data, rather than just the good parts. But will that ever happen? I’m not sure.
Manisit Das is a Graduate Research Assistant at the University of North Carolina at Chapel Hill. His current research focuses on local and transient immunotherapy in pancreatic cancer. His published research can be found in RSC Advances, ACS Applied Materials and Interfaces, and Sensors and Actuators B: Chemical.
Image source: Lego Ideas Research Institute, Brick 101 via Flickr Creative Commons