
Medical AI is no longer a single category. Tools now serve very different purposes: diagnosis support, workflow automation, research synthesis, documentation, and operational decision-making. Lumping them together obscures what they can and cannot realistically do.
This article focuses on medical AI tools that are actually in use or under serious consideration, and on what each is genuinely useful for, not what the marketing says.
Aidoc focuses on radiology triage and workflow prioritisation rather than replacing interpretation. Its strength lies in flagging time-sensitive findings and routing cases appropriately.
Aidoc works best when treated as an assistive layer, not a diagnostic authority.
SciWeave is not a clinical AI system. It is a research and evidence tool, and that distinction matters.
SciWeave is most useful upstream, before adoption decisions are made, particularly when AI tools are justified using review articles or consensus language rather than clear primary evidence.
Viz.ai is often cited in stroke care because it targets a narrow, well-defined clinical problem. That specificity is part of why it has gained traction.
Its success illustrates how constrained clinical use cases tend to outperform general ones.
Nuance’s ambient documentation tools are among the most widely adopted AI systems in healthcare, largely because they address a clear pain point.
These tools change how clinicians spend time, not how they make decisions.
Qventus focuses on hospital operations rather than direct patient care, an area where AI can be impactful without clinical risk.
Operational AI often succeeds where clinical AI struggles, because the stakes are different.
The medical AI tools that see sustained use tend to share a few traits: a narrow, well-defined scope, a clear fit within existing workflows, and an assistive rather than authoritative role.
Tools that promise broad diagnostic intelligence or general reasoning tend to face steeper barriers, both clinically and evidentially.
Another pattern is separation of roles. Diagnostic support tools assist clinicians. Documentation tools reduce burden. Research tools like SciWeave help evaluate evidence. Problems arise when tools blur these boundaries.
The most important question is rarely whether a tool is impressive. It is whether it fits the clinical, organisational, or research context in which it will be used.
AI in medicine works best when it supports existing expertise rather than attempting to replace it. The tools that last are usually those that respect that constraint.