The Nuts & Bolts of Responsible AI for Patient Support: Building a Foundational Infrastructure

Daniela Levi, Head of Marketing, Hyro

Generative AI is making waves in healthcare, with a whopping 92% of healthcare CIOs gearing up to adopt AI technology by 2025. But amid the excitement, concerns about the ethical accountability of generative AI have become a focal point of discussion.

Increasingly aware of AI’s potential to streamline administrative tasks and improve patient care, health IT leaders find themselves grappling with issues of security, explainability, and compliance, even as they weigh the technology’s transformative possibilities. Building an effective, Responsible AI strategy and laying the groundwork for successful deployment takes time.

Health systems that pause or postpone the start of this journey, rather than beginning with foundational low-risk, high-value use cases, will find themselves left leagues behind their peers.

Our recent webinar, “The Nuts & Bolts Of Responsible AI For Patient Support,” featured Elbridge Locklear (SVP & CIO at Summa Health), Reid Stephan (VP & CIO at St. Luke’s Health System), John Brownstein (Chief Innovation Officer at Boston Children’s Hospital), and Israel Krush (CEO & Co-Founder at Hyro). This live masterclass offered valuable insights into building a sustainable and secure AI infrastructure.

Here’s a recap of their insights and key takeaways addressing the four key layers of Generative AI Architecture as established by Hyro: 

  1. The Organizational Layer
  2. The Application Layer
  3. The Infrastructure Layer
  4. The Foundational Layer

Understanding the Organizational Layer in AI Implementation

The organizational layer is crucial as it encapsulates the core concerns and strategic approaches organizations must consider when integrating AI into their operations, particularly in the healthcare sector.

A critical aspect of the organizational layer is the establishment of an AI governance committee. This body is responsible for overseeing AI accountability, ensuring compliance with AI usage guidelines, and maintaining transparency and ethical standards. For instance, Boston Children’s Hospital and other institutions have developed comprehensive AI usage guidelines and compliance criteria for vendor selection, emphasizing the importance of transparency and security in their operations. However, rather than creating separate governance structures for AI from scratch, Boston Children’s leverages existing committees and leadership teams to oversee AI adoption.

“We are not creating our infrastructure from scratch. We already have an existing data analytics steering committee. We have an AI executive leadership, and we leverage the existing teams and structure we have to not slow down the progress of implementing new applications.”

Below: John Brownstein, Chief Innovation Officer at Boston Children’s, shares the hospital’s innovative approach to AI governance.


The Application Layer: Enhancing Patient Journeys

The application layer of AI focuses on practical implementations that directly impact patient care. This layer includes various stages of the patient journey, from pre-care awareness and access to care to appointment management and post-appointment support. Appointment management, in particular, is a critical use case, encompassing scheduling, registration, verification, and rescheduling processes.

The panelists discussed a range of AI solutions they’ve implemented, including AI assistants for patient interactions, machine learning tools for clinical decision support, and AI-driven automation for back-end operations. All of these solutions aim to improve patient engagement, streamline processes, and enhance clinical outcomes.

“I think organizations that say, ‘We’re going to sit back and observe, and then we’ll decide when to jump in’—that’s not a viable option, as things are moving too quickly. There will be too much ground to cover to catch up at that point.”

AI assistants, like those offered by Hyro, have been instrumental in improving patient engagement by providing information, assistance, and seamless transitions to live agents when needed. These AI-powered conversational interfaces offer human-like interactions that help organizations meet their objectives. For example, Summa Health is able to handle 80-90% of patient requests automatically through Hyro’s AI assistant.

Below: Summa Health’s SVP & Chief Information Officer, Elbridge Locklear, shares his experience deploying Hyro’s AI assistants across the system’s digital channels:


Infrastructure Layer: Ensuring Compliance and Control

Amidst the excitement surrounding generative AI adoption, the panelists emphasized the importance of maintaining a balance between enthusiasm and pragmatism. Organizations need to proactively address concerns about AI ethics, governance, and stakeholder involvement. Waiting on the sidelines is not a viable option, given the rapid evolution of AI technologies and shifting consumer expectations.

The infrastructure layer is pivotal in maintaining the integrity and reliability of AI systems. At Hyro, we emphasize the ‘Triple C’ standard for Responsible AI-powered communications—Clarity, Control, and Compliance. Compliance involves adhering to regulations such as HIPAA, SOC 2, and GDPR while also anticipating future AI regulations. Clarity addresses the need for explainability in AI operations, making AI responses investigable and understandable. Control ensures that AI systems operate within defined parameters and draw only on approved, internal data sources, preventing undesirable outcomes like hallucinations and maintaining the quality and safety of AI responses.

Foundational Layer and Industry Standards

The foundational layer, though less discussed, is equally important. Major technology providers like Google, OpenAI, Meta, and Microsoft are committed to advancing AI safety and best practices. They collaborate with policymakers to ensure responsible AI development. For healthcare applications, using compliant versions of AI models, such as the Azure version of OpenAI, is essential for meeting regulatory standards.

Safely Deploying AI: Methodologies and Criteria

When deploying AI projects, organizations should prioritize feasibility, effectiveness, and safety, and establish robust methodologies for project assessment, baselining, and continuous measurement. Criteria for selecting AI vendors include alignment with organizational priorities, adherence to ethical standards, security, compliance, and cost-effectiveness. Transparency and human-centered design principles should guide the selection process, ensuring that AI solutions enhance the human experience while addressing organizational needs.
