AI in healthcare systems: realities, promises and contradictions of (still) imperfect adoption

Starting from the PHTI report, a critical analysis of the real impact of AI in U.S. healthcare – between hype, evidence, and the industry's future.

Artificial intelligence has not yet saved a single life. Yet we talk about it as if it already had. At conferences, in budgets, in press releases, AI is everywhere: a promise of efficiency, fairness, personalization. But on the wards, in clinical schedules, in the decisions that save (or fail to save) a life, its presence is far more discreet. Sometimes useful, sometimes merely “cosmetic”, often difficult to evaluate.

So how much of this narrative is supported by data? And above all, beyond the promises and slogans, which AI is actually making its way into healthcare systems, with what results, and on what premises?

The new report published by the Peterson Health Technology Institute (PHTI), one of the most authoritative think tanks on health innovation in the United States, sets out to answer these questions. The title of the document is as sober as it is ambitious: Adoption of AI in Healthcare Delivery Systems. The content is a lucid, well-structured analysis that dismantles several clichés about the technological maturity of clinical AI and invites the sector – public and private – to reflect strategically rather than celebrate.

DOWNLOAD THE REPORT

The scope of the investigation: from hope to data

The PHTI report does not simply sketch an overview of artificial intelligence in healthcare: it applies a rigorous methodology. It analyses “clinically active” AI, i.e. tools already adopted in real workflows – far removed, therefore, from academic speculation or pilot projects in controlled environments. The investigation focuses on three fundamental areas:

  • AI for clinical decision support (CDS);
  • AI for risk stratification and population management;
  • AI for resource planning and optimization.

A choice that marks a welcome shift in the debate from promised innovation to implemented innovation.

Adoption, yes – but selective: who adopts, what, and why

The first notable insight concerns the spread of AI: yes, it is increasing, but at highly variable rates and driven by diverse motivations. The report reveals that AI technologies are mainly adopted by large, integrated hospital systems with strong financial, digital, and organizational capabilities. Why? Because these are the only entities capable of integrating and sustaining complex solutions—solutions that must interact with heterogeneous data flows, strict regulatory frameworks, and fragmented clinical workflows.

However, even in these high-performing settings, adoption is often driven more by financial incentives and regulatory compliance than by clear clinical effectiveness. It’s a subtle yet significant paradox: the push for innovation doesn’t necessarily stem from therapeutic or managerial superiority, but rather from external dynamics—including competitive pressure and value-based reporting requirements.

Effectiveness: the elephant in the room

The critical heart of the report lies in the analysis of clinical and operational evidence of effectiveness for AI solutions already in use. Here, the PHTI speaks with clarity: only a minority of these solutions have demonstrated tangible and measurable clinical benefits. Most of the tools evaluated deliver only operational improvements (e.g., reduced triage time or improved planning), without any statistically significant impact on health outcomes.

The issue, therefore, is not whether AI can be useful, but which AI, and with what level of methodological rigor. Many solutions are based on small, unrepresentative datasets, or rely on opaque models that are difficult for clinicians to interpret. And even when improvements are reported, they often fail the test of replicability in environments outside of the original development setting.

Bias, Equity, and Systemic Risks

Another key issue, tackled unambiguously by the report, concerns equity in access to and effectiveness of healthcare AI. Some models have shown systemic bias against minority ethnic or socioeconomic groups, simply because they were trained on non-representative data. The risk is not only ethical but also operational: unreliable predictive systems lead to over- or under-treatment, with potentially serious consequences for public health and trust in emerging technologies.
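
To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical data and column names) of the kind of subgroup audit such findings call for: comparing a model's sensitivity across demographic groups, since a large gap means that missed cases are concentrated in one population.

```python
# Illustrative subgroup audit (hypothetical data and column names).
# For each demographic group, compute the model's sensitivity (true-positive
# rate). A large gap between groups is the kind of systemic bias the report
# warns about: missed cases concentrated in one population.
import pandas as pd

# Hypothetical evaluation set: true outcome, model flag, demographic group.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 1, 0, 0, 1, 1, 0, 0],
    "y_pred": [1, 1, 0, 0, 1, 0, 0, 1],
})

def sensitivity(sub: pd.DataFrame) -> float:
    """Of the truly positive cases, the share the model actually flagged."""
    positives = sub[sub["y_true"] == 1]
    return float((positives["y_pred"] == 1).mean()) if len(positives) else float("nan")

for name, sub in df.groupby("group"):
    print(f"group {name}: sensitivity = {sensitivity(sub):.2f}")
# group A: sensitivity = 1.00
# group B: sensitivity = 0.50  -> half of the true cases in B are missed
```

Even a breakdown this simple, run before and after deployment, would surface the over- and under-treatment patterns the report describes.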

At the same time, the PHTI highlights the lack of transparency in the evaluation of AI products. The absence of shared standards for validation, monitoring, and model updates exposes healthcare systems to significant technological and clinical risks. In practice, we are proceeding by trial and error, without strong governance.

What are the implications for the industry (and for Italy)?

Although the PHTI report focuses on the adoption of AI by healthcare systems, its conclusions raise important questions for the entire health industry—from pharmaceutical companies to digital technology providers. If many of today’s implemented solutions struggle to show concrete clinical benefits, the responsibility cannot fall solely on providers: those who design, develop, and bring these technologies to market must also ensure methodological rigor, algorithmic transparency, and real adaptability to their intended use cases.

For Italy, which is still in the early stages of systemic adoption of clinical AI, the U.S. scenario offers a valuable reality check. At least two implicit recommendations emerge from it:

  • Avoid copycat adoption: prioritize solutions built on local needs, compatible with national organizational models and the fragmented digital architecture of our healthcare system.

  • Demand rigor, transparency, and interoperability from vendors, while promoting a culture of effectiveness reporting and continuous monitoring of algorithm performance even after adoption (a minimal sketch of such monitoring follows this list).
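
By way of illustration, here is a minimal sketch of what that continuous, post-adoption monitoring can look like in practice. It uses the Population Stability Index (PSI), a widely used drift metric for deployed models; the data, variable names, and thresholds are hypothetical and not drawn from the PHTI report.

```python
# Illustrative post-adoption drift check using the Population Stability
# Index (PSI). All data, names, and thresholds here are hypothetical.
import numpy as np

def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI = sum over bins of (p_recent - p_base) * ln(p_recent / p_base)."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch scores outside the old range
    p_base = np.histogram(baseline, edges)[0] / len(baseline)
    p_new = np.histogram(recent, edges)[0] / len(recent)
    eps = 1e-6                              # avoid dividing by or logging zero
    return float(np.sum((p_new - p_base) * np.log((p_new + eps) / (p_base + eps))))

rng = np.random.default_rng(0)
scores_at_validation = rng.beta(2, 5, 10_000)  # risk scores when the model went live
scores_this_month = rng.beta(3, 4, 2_000)      # the population has since shifted

drift = psi(scores_at_validation, scores_this_month)
print(f"PSI = {drift:.3f}")  # common rule of thumb: > 0.25 signals major drift
```

The specific metric matters less than the discipline it represents: a model validated once, at go-live, says nothing about the population it is scoring a year later.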

From this perspective, the industry is not just part of the healthcare AI ecosystem—it is co-responsible for its future quality. Investing in accountability and adaptability is no longer a matter of reputation but a condition for long-term sustainability.

Toward a New Pact Between Healthcare, Technology, and Shared Responsibility

The Peterson Health Technology Institute’s report does not aim to dampen enthusiasm for artificial intelligence in healthcare. Rather, it seeks to realign it. It warns against an overly optimistic narrative and proposes a more sober approach, grounded in evidence, transparency, and ongoing evaluation. The point is not to deny AI’s transformative potential but to shift the focus—from promises to measurable outcomes, from prototypes to systemic implementation.

In this scenario, responsibility is shared. Healthcare systems must prepare for more structured governance of digital innovation, but those who develop, market, or promote AI solutions must also demonstrate clinical robustness, attention to equity, and genuine integration into clinical workflows. The future of AI in healthcare will not be determined by the number of available solutions, but by the quality of those capable of standing the test of time, practice, and replicability.

The PHTI doesn’t offer easy answers, but it provides a valuable framework to guide complex decisions. And it is precisely from this framework that a new pact can emerge between technology, healthcare, and collective responsibility—the only pact capable of making artificial intelligence a true clinical and systemic tool, rather than just a symbol of progress.