When most people picture science, they tend to think of the big moments: the breakthrough drug that changes lives, the experiment that proves a famous idea right, or the paper that shifts an entire field. But anyone who has spent real time doing research knows those moments are rare.
Most of science is slow, careful trial and error. For every study that makes it into a journal, there are dozens of experiments that did not work out, ideas that fell apart when tested, and results that did not replicate as expected. Many of these so-called negative results end up buried in lab notebooks or old folders on someone’s laptop. That is a loss, because they still have something to teach.
A negative result is just an outcome that did not match your prediction. It does not mean your experiment was a failure in the sense that you did something wrong. It means you asked a question, tested it as carefully as you could, and learned something you did not expect. That is still valuable.
Maybe a compound looked promising in cell cultures but did nothing in mice. Maybe you tried to replicate a well-known study and got different numbers. Or maybe you tested a clever idea and the effect just was not there at all.
Sometimes these so-called non-results point out hidden variables you did not see at first. They might raise better questions or show you that a method you trusted has limits. In fields like medicine, psychology, or climate science, knowing what does not hold up is just as important as knowing what does.
A lot of negative results do not fit neatly into yes or no boxes either. Sometimes an effect holds under certain conditions but not others, or appears in one group but vanishes in another. These grey areas are part of real research, but they do not often get written up because they do not feel neat enough to publish.
Most researchers have heard of the file drawer problem, which has been discussed since the 1970s. The idea is simple: studies that do not find an effect or fail to replicate a result often get tucked away instead of shared. This is not because they are bad science but because traditional publishing has always valued positive, eye-catching results more.
That leaves a big gap in the record. For example, in fields like medicine, it can lead to treatments that seem more effective than they really are, which can affect patient safety.
There is a practical side too. Labs end up repeating the same failed experiments over and over without realizing someone else already did the work. No one has endless time or funding to spend chasing the same dead ends. Yet it happens all the time when results get buried.
When you deposit your work in a trusted repository (like Zenodo, OSF, or DeSci Publish), it receives a DOI, making it citable and part of the permanent scholarly record. That means other researchers can find it, read it, and cite it. You’re no longer relying on word of mouth or chance conversations at a conference to share what you learned.
Some journals have also started to encourage the publication of negative results, but the reality is that open repositories remain the fastest and simplest option for many researchers.
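For readers who like to see what a deposit looks like in practice, here is a minimal sketch using Zenodo’s public REST API, which issues a DOI for each deposition. The title, description, creator name, and the `ZENODO_TOKEN` environment variable are illustrative placeholders, not part of any official workflow described above.

```python
import json
import os
import urllib.request

# Sketch: create a draft deposition on Zenodo so a negative result becomes
# a citable record. All metadata values below are placeholders.
metadata = {
    "metadata": {
        "title": "Null result: compound shows no effect in follow-up assay",
        "upload_type": "dataset",
        "description": "Data and analysis for an experiment whose outcome "
                       "did not match the original prediction.",
        "creators": [{"name": "Doe, Jane"}],
    }
}

token = os.environ.get("ZENODO_TOKEN")  # personal access token, if you have one
if token:
    req = urllib.request.Request(
        "https://zenodo.org/api/deposit/depositions",
        data=json.dumps(metadata).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        deposition = json.load(resp)
        # The draft deposition carries an id; files can then be uploaded
        # and the record published to mint the DOI.
        print(deposition["id"])
```

OSF and DeSci Publish offer similar upload flows through their own interfaces; the common point is that the record ends up findable and citable rather than stuck on a laptop.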
Sharing negative or inconclusive results isn’t just a selfless act for the greater good. It can actually benefit your own work and reputation.
First, it shows integrity. It signals that you care about telling the whole story, not just the parts that look impressive on a CV. It also demonstrates that you understand how knowledge develops - incrementally, through collaboration and the day-to-day realities of the lab.
Second, negative results can be genuinely valuable to others. They’re often cited in meta-analyses, policy documents, or methods papers. They help others design better experiments, refine hypotheses, or avoid costly mistakes. That’s real impact.
And finally, sharing your negative results can open doors to unexpected collaborations. Maybe someone working in a related area sees your result, has a theory about why it didn’t work, and reaches out. Sharing what didn’t pan out can sometimes lead to breakthroughs you wouldn’t have found alone.
If you’re ready to give your “failed” results a second life, the path is straightforward: write them up clearly and deposit them somewhere others can find them.
None of this fixes publication bias overnight, but it’s a start. Normalizing the sharing of negative results saves time and resources for everyone. It makes the literature more accurate. And it reminds us that good science is honest, even when the answers aren’t what we’d hoped for.
Next time an experiment doesn’t work out, don’t bury it. Put it somewhere people can find it. Someone else, maybe even your future self, will be glad you did.
If you’re ready, DeSci Publish and other trusted open-access repositories make it simple to share what you’ve learned - the good, the bad, and everything in between.
Good science is messy. Sharing the messy parts helps everyone do it better.