MEDITECH Artificial Intelligence Intervention Risk Management Practices

Medical Information Technology, Inc. (MEDITECH) is dedicated to the responsible development and deployment of Artificial Intelligence (AI) and Machine Learning (ML) technologies within the Expanse Electronic Health Record (EHR) system. To ensure the safety, effectiveness, and ethical use of AI, including predictive decision support interventions, MEDITECH has established a comprehensive AI Intervention Risk Management (IRM) framework spanning the AI system development lifecycle. This framework is built upon internationally recognized standards and outlines a systematic approach to identifying, assessing, and mitigating potential risks. It supplements MEDITECH’s existing risk management processes by addressing the unique challenges that AI technologies pose in healthcare. Our approach to AI intervention risk management applies to internally developed AI solutions as well as the use and/or integration of third-party tools, models, and solutions.

The AI IRM framework is structured around three core pillars: AI Risk Assessment, AI Risk Mitigation, and AI Governance, promoting transparency and responsible AI development and deployment.

  • AI Risk Assessment involves a thorough, iterative process to identify potential risks associated with AI systems, beginning before development and continuing at each significant lifecycle milestone. Assessments are guided by predefined Responsible AI principles, including Validity, Reliability, Robustness, Fairness, Intelligibility, Safety, Security, Privacy, and Accountability. Key practices include:
    • Early stakeholder engagement
    • A best practices index and risk assessment checklist to guide risk-based discussion
    • The use of a risk matrix for impact prioritization
    • Escalation processes for pre- and post-deployment risks
    • Documented lessons learned
  • AI Risk Mitigation focuses on proactively preventing or reducing identified AI-specific risks throughout the system development lifecycle. Mitigation strategies draw on internal lessons learned and external industry resources, and include design modifications, model upgrades, data quality assurance, and user training.
  • AI Governance establishes a comprehensive program that guides all AI-related development and deployment. This includes:
    • A centralized AI project catalog, which tracks identified risks and actions
    • An AI-specific model card template, which details each project's purpose, intended use, validation efforts, and data sources
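
A risk matrix of the kind named above is commonly implemented as a likelihood-by-impact grid, with the product of the two scores used to rank risks for prioritization and escalation. The following sketch is purely illustrative; the scales, risk names, and escalation threshold are assumptions, not MEDITECH's actual matrix:

```python
# Illustrative 3x3 risk matrix for impact prioritization.
# Scales, example risks, and the escalation threshold are assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Score a risk as likelihood x impact, each on a 1-3 scale."""
    return LEVELS[likelihood] * LEVELS[impact]

def prioritize(risks: list[dict]) -> list[dict]:
    """Order identified risks so the highest-scoring are addressed first."""
    return sorted(
        risks,
        key=lambda r: risk_score(r["likelihood"], r["impact"]),
        reverse=True,
    )

# Hypothetical entries such as might appear in a centralized project catalog.
risks = [
    {"name": "training-data drift", "likelihood": "medium", "impact": "high"},
    {"name": "misreading of model output", "likelihood": "low", "impact": "medium"},
    {"name": "biased cohort sampling", "likelihood": "high", "impact": "high"},
]

for r in prioritize(risks):
    score = risk_score(r["likelihood"], r["impact"])
    # Assumed threshold: scores of 6 or more trigger the escalation process.
    print(r["name"], score, "escalate" if score >= 6 else "monitor")
```

Tracking each risk's score alongside its mitigation actions in the project catalog keeps prioritization auditable across lifecycle milestones.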

Robust data governance ensures secure, auditable storage for all AI training and testing data, with restricted and logged access. A critical component is the requirement for express written informed consent for the use of customer data for any AI model development, training, or testing by MEDITECH or third parties. Data minimization and prompt deletion of Protected Health Information (PHI) and Personally Identifiable Information (PII) are also practiced.
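
The consent, logged-access, and data-minimization requirements described above can be pictured as a small access layer in front of the training data store. The sketch below is a hypothetical illustration under assumed names and fields; it is not MEDITECH's implementation:

```python
# Hypothetical sketch: consent-gated, logged access to AI training data.
# Class, method, and field names are illustrative assumptions.
import datetime

class TrainingDataStore:
    def __init__(self):
        self._records = {}       # record_id -> data
        self._consented = set()  # record_ids with express written consent on file
        self.access_log = []     # append-only audit trail of access attempts

    def add_record(self, record_id, data, consented=False):
        self._records[record_id] = data
        if consented:
            self._consented.add(record_id)

    def read(self, record_id, user):
        """Return a record only if consent is on file; log every attempt."""
        allowed = record_id in self._consented
        self.access_log.append({
            "user": user,
            "record": record_id,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "granted": allowed,
        })
        if not allowed:
            raise PermissionError(f"No consent on file for {record_id}")
        return self._records[record_id]

    def delete_record(self, record_id):
        """Data minimization: promptly remove PHI/PII once no longer needed."""
        self._records.pop(record_id, None)
        self._consented.discard(record_id)
```

The key design point is that the log entry is written before the consent check resolves, so denied attempts are audited just like granted ones.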

The framework systematically addresses and mitigates various key risks, including those related to data quality, algorithm bias and fairness, transparency and explainability, clinical validity and performance, integration and usability, and privacy and security. These are managed through dedicated practices such as data validation, bias testing, explainable AI techniques, rigorous clinical testing, user-friendly interface design, and robust data security measures.
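
Bias testing of the kind listed above is often operationalized as a comparison of model behavior across demographic groups. The following sketch computes a demographic parity difference, one common fairness metric, over assumed data; the group labels, predictions, and tolerance are illustrative, not MEDITECH's actual test suite:

```python
# Illustrative bias check: demographic parity difference across groups.
# Group labels, predictions, and the 0.1 tolerance are assumptions.

def positive_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_by_group):
    """Largest gap in positive prediction rates between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two patient subgroups.
preds_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],  # 5/8 positive
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 positive
}

diff = demographic_parity_diff(preds_by_group)
flagged = diff > 0.1  # assumed tolerance before a fairness review is triggered
```

In practice such a check would run alongside clinical validity testing, and a flagged result would feed the escalation process rather than block deployment on its own.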

MEDITECH end users and customers contribute to this framework by providing feedback on AI system performance and reporting safety concerns.

A multidisciplinary team, including Executive Leadership and the AI Governance Board, oversees AI/ML interventions with clearly defined roles and responsibilities. The AI Governance Board plays a crucial role in overseeing risk management processes and approving significant policy changes. The governance framework also includes data governance policies and processes for incident reporting, investigation, and corrective action.

To ensure consistent application, MEDITECH maintains a structured training and awareness program for all personnel involved in AI-related activities. This includes role-based training on AI risk management and data privacy, along with continuous education to keep staff informed of emerging risks and regulatory updates.

MEDITECH is committed to continuously improving its risk management practices through a formal review and update process. The policy and related processes are reviewed at least annually by designated stakeholders, with additional reviews triggered by significant changes in AI technology, regulatory updates, or incidents. All updates are documented and formally approved.