On Wednesday 18th March in the Attlee Room of the House of Lords, a senior roundtable brought together stakeholders from across health, government, and technology to explore the practical realities of AI adoption. Hosted by HETT and Cloudflare, the session was open and discussion-led, fostering peer learning, collaboration, and honest reflection on both progress and challenges.

The conversation aimed to move beyond hype, focusing on how organisations can scale AI responsibly while maintaining trust, safety, and operational integrity.

Strategic Context: Innovation vs Control

A central theme was the structural tension between enabling innovation and maintaining governance:

  • AI as a transformation driver - Improving productivity, supporting clinical decision-making, and enhancing patient experience
  • Governance, security, and compliance as constraints - Introducing necessary friction, particularly when dealing with sensitive data

Participants agreed this tension is inevitable and ongoing, not a temporary barrier. The challenge is not to remove it, but to manage it effectively.

Conversation turned to the current state of AI adoption, with those in the room encouraged to give a picture of where AI is and isn’t being used in their organisations. The shared experience was that adoption is widespread but fragmented. AI is already being used across organisations, including: predictive modelling (e.g. non-attendance risk), diagnostics and clinical support, workflow automation and administrative efficiency, and everyday tools such as Copilot.

However, adoption remains uneven and inconsistent, often dependent on local leadership, capability, and risk appetite. Some organisations described differing team structures that enabled alternative routes for accountability, approval, and governance compared with the standard NHS model of board and executive responsibility.

A key factor in many organisations is a lack of shared understanding. With commercial AI accelerating in people’s personal lives, there is often no common organisational definition of AI: confusion persists between AI, machine learning, and automation. Beyond this, even widely used tools are misunderstood as standalone technologies rather than complex systems. This creates challenges for both adoption and governance.

Growing Excitement, Limited Scaling

There is clear enthusiasm for AI’s potential, particularly in diagnostics and productivity.
However:

  • Many initiatives remain at pilot stage
  • A significant proportion fail to scale or are abandoned mid-way
  • Questions remain around long-term sustainability and value

Financial constraints are chronic across many technology, digital, and transformation programmes, and AI is no different.

AI requires sustained investment, not just in technology but also in:

  • Teams
  • Transformation programmes

Current funding cycles favour short-term gains over long-term capability building.

Governance & Risk: Evolving but Incomplete

Emerging governance models described by those in the room included AI advisory groups, structured approval pathways for higher-risk use cases, and policies that differentiate between low-risk tools (e.g. productivity) and high-risk clinical AI. The emerging theme was a desire not to block innovation and experimentation, which can hold the key to bringing people on the journey and exploring the art of the possible, by providing safe environments (e.g. sandboxes) for testing ideas within defined guardrails.

What’s clear is that, at scale, the risks are not fully understood: some are known and manageable, but others are emergent, probabilistic, and difficult to define in advance. The point was raised that the way AI works is a shift from traditional IT systems: AI outputs are not always deterministic, and the same input may produce different outputs. This makes existing governance and assurance models challenging to apply.

There was also a tension between local and national risk.

Participants highlighted:

  • The need to distinguish between local implementation risks and system-wide or supply chain risks
  • Concerns about repeating historical issues in NHS technology procurement and dependency

Governance Fragmentation

  • Variation across regions and organisations
  • Inconsistent interpretation of policies (e.g. between primary and secondary care)
  • Lack of standardisation slows progress and creates duplication

Workforce, Culture & Behaviour

AI is a Human Transformation, Not a Technical One

A recurring theme was that AI programmes often fail because they are treated as technology deployments rather than organisational transformations.

Key gaps include:

  • Insufficient training and enablement
  • Lack of understanding of how AI changes workflows
  • Overestimation of “plug and play” capabilities

AI was compared to a system that is easy to acquire but complex to sustain, requiring ongoing investment in people, processes, and oversight.

Conversation explored how, at present, highly engaged individuals are driving adoption while many others remain resistant or uncertain; some staff have ideas but lack execution capability, while others have capability but limited motivation to change. This creates friction and uneven demand across organisations.

There are talent and skills challenges, with difficulty attracting and retaining digital talent highlighted. Multi-year investment in workforce development, rather than short-term funding cycles, would help close the current gap while opening opportunities to develop internal talent rather than relying solely on external recruitment. There is also a growing need to professionalise AI and data roles.

Data & Infrastructure: The Critical Foundation

Participants were clear that:

AI is only as effective as the data and infrastructure it is built on

Challenges include:

  • Fragmented and unstructured data
  • Limited interoperability
  • Weak foundational architecture

More advanced areas (e.g. genomics) demonstrate that:

  • Where data is structured and standardised, AI is already routine and reliable

There was strong consensus that data foundations must be prioritised before scaling AI.

An emerging learning from existing deployments and pilots is that system-level impacts and unintended consequences are a direct result of failing to embed solid foundations. AI does not simply remove bottlenecks; it often shifts them elsewhere in the system and can create new pressures in different parts of the organisation. This reinforced the common thread that whole-system thinking, and mapping of downstream impacts before scaling solutions, is a key approach.

Cybersecurity & Patient Safety

Reframing cyber as a patient safety issue was agreed as a way to keep sight of the fact that, despite growing enthusiasm, and sometimes pressure, to push AI forward across the health and social care sector, failures in cyber or AI systems can have direct clinical consequences and significant patient safety implications.

There was strong agreement that:

  • Cybersecurity is often undervalued at board level
  • Yet it underpins all digital and clinical operations

This was followed by a discussion of the perception challenges facing cyber as a function. Often seen as a blocker to innovation, a barrier rather than an enabler, cyber teams can find it challenging to get the right conversations with the right people.

Participants highlighted the need to:

  • Improve communication and engagement
  • Be more transparent about risks and limitations
  • Educate users in a way that supports safe adoption

It was further noted that internal perception is not the only challenge: patient trust and public perception emerged as a critical dependency. Without trust, AI adoption will be limited and progress on data sharing and integration will stall. At present there are good levels of patient trust among many, but that trust can often be misplaced:

  • Public expectations often exceed current system capabilities
  • Data sharing remains complex and inconsistently understood
  • There is a need for a more open, national conversation about data use

“Technical debt” was a term used in the room to describe some of the structural issues that compound patient expectations and, once uncovered, can erode patients’ trust in the NHS and healthcare organisations, such as:

  • Legacy systems remain a significant barrier to efficient infrastructure foundations
  • “Zombie” applications and services increase complexity and risk

Without addressing this, AI may compound existing inefficiencies.

Reasons for Optimism

To bring the session to a conclusion, participants switched focus to what’s working.

Despite challenges, several positive trends emerged:

  • Increasing openness and willingness to collaborate
  • Safe experimentation through sandbox environments
  • Strong early success in areas like:
    • AI scribes improving clinician–patient interaction
    • Automation of administrative processes
  • Growing recognition that:
    • AI must be clinically led
    • Governance should enable, not prevent, innovation

Participants highlighted several encouraging signals:

  • Democratisation of AI – adoption increasingly driven by end users
  • Improved collaboration and transparency across organisations
  • Proven success in data-rich domains (e.g. genomics)
  • Recognition that healthcare outcomes and capabilities are continuing to improve over time

Strategic Priorities for Leaders

  1. Treat AI as transformation, not technology
  2. Invest in data foundations before scaling solutions
  3. Enable safe experimentation within clear guardrails
  4. Develop workforce capability at scale
  5. Adopt whole-system thinking when implementing AI
  6. Reframe cyber and AI risk as patient safety issues
  7. Lead an open conversation on data, trust, and public expectations

 

Conclusion

AI presents a significant opportunity to transform health and public services - but scaling impact requires systemic change.

The key barriers are not technological. They are organisational, cultural and structural.

Progress will depend on the ability to:

  • Balance innovation with control
  • Build trust across stakeholders
  • Move from isolated pilots to sustainable, system-wide adoption

The conversation is no longer about what AI can do - but how to operationalise it responsibly, effectively, and at scale.
