How to Navigate Conflicting Evidence in Climate and Sustainability Research


If you work in climate or sustainability research long enough, you stop being surprised by disagreement. Two papers can look careful, cite similar sources, and still point in different directions. Sometimes the conflict is obvious. Sometimes it only becomes clear once you try to line up assumptions.

The mistake is expecting this to go away.

Not All Disagreement Is About Results

A lot of conflict enters the literature before results are even discussed.

Different studies use the same words to mean different things. “Impact,” “resilience,” “adaptation,” and “sustainability” all sound precise until you look at how they’re operationalised. One paper measures biophysical response under controlled conditions. Another looks at household behaviour five years later. Both claim to describe the same outcome.

They are not doing the same work.

When those findings are later compared or summarised, what looks like disagreement is really a mismatch of scope.

Models Do Most of the Arguing

In this field, models carry more weight than is usually admitted.

Integrated assessment models, crop models, land-use models, climate projections. They differ in how they handle adaptation, substitution, technological change, constraints. Those choices are often inherited rather than debated.

When two model-based studies disagree, the difference is rarely in the data. It sits in structure. Parameterisation. What is allowed to change and what is held fixed.

If you read the results without understanding those decisions, you’re reading the output, not the evidence.
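The point about structure can be made concrete with a toy sketch. Everything here is invented for illustration (the functional forms, the coefficients, the 50% adaptation cap): two “models” read the same warming data but differ in one structural choice, whether adaptation is allowed to respond.

```python
# Toy illustration (all numbers invented): two models see the same data
# but differ in one structural assumption -- whether adaptation is
# allowed to offset part of the damage.

def impact_fixed(warming_c):
    """Structure A: damages scale with warming; adaptation held fixed."""
    return 10.0 * warming_c  # hypothetical % yield loss per degree

def impact_adaptive(warming_c):
    """Structure B: same damage term, but adaptation offsets a share of
    it, growing with exposure up to a cap (also hypothetical)."""
    gross = 10.0 * warming_c
    offset = min(0.5, 0.2 * warming_c)  # adaptation offsets at most 50%
    return gross * (1.0 - offset)

same_data = 2.0  # identical input to both models: 2 °C of warming

print(impact_fixed(same_data))     # 20.0
print(impact_adaptive(same_data))  # 12.0 -- same data, different finding
```

Nothing in the data changed between the two runs; the gap comes entirely from what each structure holds fixed. That is the sense in which the output is not the evidence.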

Scale Changes the Story

Some results only look stable because they’re averaged.

At a global or national level, effects smooth out. At a regional level, they don’t. A policy that looks modest on average can be severe locally. A strong local effect can vanish once aggregated.

Disagreement across studies often tracks scale more closely than quality.

This is easy to miss if you only read summaries.

Time Horizons Are Doing Quiet Work

Short-run studies capture disruption. Long-run studies capture adjustment. These are not interchangeable.

Early responses often look messy. Infrastructure lags. Behaviour hasn’t changed yet. Later work may show different patterns entirely. Sometimes the direction flips. Sometimes the effect weakens.

If studies are operating on different timelines, conflict is almost guaranteed.

What Gets Studied Is Not Neutral

The literature reflects what can be measured, not necessarily what matters most.

Physical variables show up repeatedly because they’re observable. Institutional capacity, informal adaptation, and long feedback loops are harder to capture, and are therefore thinner in the evidence base.

Some mechanisms dominate simply because they leave data behind. Others are underexplored and show up as “uncertainty” rather than disagreement.

This skews how conflicts appear.

Reviews Can Make Things Look Settled

Reviews are necessary in a fragmented field, but they compress uncertainty.

Early reviews, in particular, organise evidence around a narrative that feels reasonable at the time. Later studies tend to be read as extensions of that narrative, even when they complicate it.

If you only read reviews, disagreement appears later than it should.

Going back to primary studies often changes how coherent the field actually looks. Tools like SciWeave are useful here in a very narrow sense: not to summarise, but to move between reviews and original papers and see where assumptions start to diverge.

Policy Proximity Distorts Presentation

Climate and sustainability research rarely stays academic for long.

Results are pulled into policy debates quickly. That rewards clarity and urgency. It does not reward careful boundary-setting. Ambiguity survives in methods sections, not in executive summaries.

This doesn’t make the research dishonest, but it does affect how disagreement is communicated, and which disagreements remain visible.

Disagreement Is Usually Telling You Something

In this field, conflict is often informative.

It points to sensitivity to context, scale, or assumptions. It signals that systems behave differently across regions or over time. Treating disagreement as noise to be resolved can erase precisely the variation that matters.

Some convergence will happen. Some won’t. Expecting clean resolution too early usually leads to overconfident conclusions.

Reading Conflicts Without Forcing Resolution

When findings don’t line up, the useful questions are rarely about which paper is right.

More often they’re about what changes across settings, what assumptions are doing the work, and which results travel poorly. Conflict, read carefully, is often more informative than agreement.

That’s not a failure of the literature. It’s a feature of studying complex systems under real constraints.
