Last edited on January 18, 2024
Introduction
These guidelines are intended to support the responsible and ethical use of Artificial Intelligence (AI) technologies within our Macon & Joan Brock EVMS Medical Group at Old Dominion University. AI and machine learning technologies can enhance patient care, improve operational efficiency, and support clinical decision-making. However, their use must align with ethical principles, legal requirements, and best practices to ensure patient safety and data security.
Guidelines
- Ethical considerations:
- All AI applications in healthcare must prioritize patient welfare, safety, and privacy.
- Decisions made by AI systems should be transparent and interpretable to clinicians.
- Patient data used for training AI models must be de-identified and must comply with relevant privacy regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) (a minimal de-identification sketch follows this list).
- Data governance:
- Proper data governance practices must be followed for collecting, storing, sharing, and managing healthcare data used in AI applications.
- Data quality, integrity, and security must be maintained throughout the AI development lifecycle.
- Accountability and oversight:
- A designated Medical Group AI oversight committee or responsible individual(s) should oversee the development, implementation, and performance monitoring of AI systems.
- Regular audits and assessments of AI applications should be conducted to ensure compliance with policies and regulatory requirements.
- Clinical validation:
- AI algorithms intended for clinical decision support must undergo rigorous testing, validation, and peer review before deployment.
- The performance of AI systems should be continuously monitored and validated against established benchmarks to ensure, at a minimum, data quality and model stability.
- Informed consent:
- Patients must be informed about the use of AI technologies in their care, and informed consent should be obtained when AI significantly influences clinical decisions.
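For illustration, the de-identification requirement above can be sketched in code. The following is a minimal Python example, not a complete solution: the field names (`mrn`, `dob`, `zip`, and so on) are hypothetical, and a production pipeline would need to cover all 18 HIPAA Safe Harbor identifier categories or rely on the Expert Determination method.

```python
# Minimal de-identification sketch (illustrative only; field names are
# hypothetical, and a real pipeline must implement all 18 HIPAA Safe
# Harbor identifier categories or use Expert Determination).

from copy import deepcopy

# Direct identifiers to drop entirely (hypothetical field names).
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed and
    quasi-identifiers generalized."""
    clean = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    # Generalize date of birth to year only (Safe Harbor permits the year,
    # except for patients over 89, who must be grouped together).
    if "dob" in clean:
        clean["birth_year"] = clean.pop("dob")[:4]
    # Truncate ZIP code to the first three digits (Safe Harbor requires
    # further suppression for sparsely populated three-digit ZIP areas).
    if "zip" in clean:
        clean["zip3"] = clean.pop("zip")[:3]
    return clean

# Example usage with a synthetic record (not real patient data).
if __name__ == "__main__":
    record = {"name": "Jane Doe", "mrn": "12345", "dob": "1950-06-01",
              "zip": "23507", "diagnosis": "I10"}
    print(deidentify(record))
    # {'diagnosis': 'I10', 'birth_year': '1950', 'zip3': '235'}
```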
Operational guidelines
- AI development and deployment:
- Identify a need for an AI application, considering potential clinical benefits and feasibility.
- Establish a multidisciplinary team, including clinicians, data scientists, and IT experts.
- Select appropriate data sources and ensure data compliance with privacy regulations.
- Develop and train AI models, ensuring proper validation and testing.
- Deploy AI solutions in a controlled and monitored environment.
- Continuously assess and refine AI algorithms based on real-world performance and feedback.
- Data management:
- Implement data governance policies to ensure data quality and security.
- Establish mechanisms for data access control and encryption (see the encryption sketch after this list).
- Regularly review data sources to maintain accuracy and relevance.
- Transparency and explainability:
- Ensure that AI decision-making processes are transparent and documented.
- Provide clinicians with understandable explanations of AI-driven recommendations.
- Patient privacy and consent:
- Implement robust patient data protection measures, adhering to HIPAA and other relevant regulations.
- Educate patients about the use of AI in their healthcare and obtain informed consent when necessary.
- Training and education:
- Provide training and education to staff involved in AI development and utilization.
- Keep clinicians and staff informed about the benefits and limitations of AI technologies.
- Monitoring and evaluation:
- Regularly monitor the performance and impact of AI applications on patient care and operations (see the drift-monitoring sketch after this list).
- Conduct periodic audits to ensure compliance with policies and regulations.
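The access-control and encryption item under Data management can be illustrated with a short sketch using the widely available `cryptography` package (its Fernet recipe provides symmetric, authenticated encryption). Key handling is deliberately simplified here; in practice, keys should live in a managed key-management service, not alongside the data.

```python
# Field-level encryption sketch using the `cryptography` package's
# Fernet recipe (symmetric, authenticated encryption). Key handling is
# simplified: production systems should store keys in a managed KMS,
# not alongside the encrypted data.

from cryptography.fernet import Fernet

# In production the key would come from a key-management service.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a single sensitive field before storage."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt a field for an authorized, audited read."""
    return fernet.decrypt(token).decode("utf-8")

token = encrypt_field("MRN 12345")
assert decrypt_field(token) == "MRN 12345"
```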
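The monitoring item above can likewise be made concrete. One common, simple technique is the Population Stability Index (PSI), which flags when live inputs drift away from the data a model was validated on. The thresholds mentioned in the comments are conventional rules of thumb, not regulatory standards, and the vital-sign data below is synthetic.

```python
# Drift-monitoring sketch: the Population Stability Index (PSI) is one
# common way to flag when live input data has shifted away from the
# data a model was validated on. The 0.1 / 0.25 thresholds are
# conventional rules of thumb, not regulatory standards.

import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare the live feature distribution to the validation-time
    reference distribution; larger values indicate more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid division by zero in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(loc=120, scale=15, size=5000)  # e.g., systolic BP
current = rng.normal(loc=128, scale=15, size=5000)   # shifted live inputs
score = psi(baseline, current)
print(f"PSI = {score:.3f}")  # > 0.25 would typically trigger review
```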
AI usage guidance
The following are specific examples of how AI can be used in different clinical settings (e.g., diagnosis, treatment, and clinical decision-making).
- Example 1: Using AI to analyze medical images and help diagnose diseases.
- Example 2: Using AI to personalize treatment plans for patients.
- Example 3: Using AI to monitor patients' health and predict potential complications (a toy sketch follows these examples).
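As a toy illustration of Example 3, the sketch below trains a simple risk model on synthetic data and flags high-risk patients for clinician review. Every feature, value, and threshold is invented for illustration; any real model would require the clinical validation described in the Guidelines section above.

```python
# Toy sketch of Example 3: a risk model that flags patients for early
# clinician review. The features, data, and threshold are all synthetic
# and illustrative; a real model requires rigorous clinical validation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Hypothetical features per patient: [age, heart rate, lactate].
X = rng.normal(loc=[65, 85, 1.5], scale=[10, 15, 0.8], size=(500, 3))
# Synthetic outcome loosely driven by lactate (illustration only).
y = (X[:, 2] + rng.normal(scale=0.5, size=500) > 2.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[72, 110, 2.8]])
risk = model.predict_proba(new_patient)[0, 1]
# The model only flags; a clinician makes the decision (human oversight).
if risk > 0.5:  # illustrative threshold, to be set during validation
    print(f"Flag for clinician review (predicted risk {risk:.2f})")
```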
Potential legal and ethical implications of using AI in clinical practice
- Legal implications:
- Liability: Who is responsible for decisions made by AI-powered systems, especially if they lead to adverse outcomes?
- Data privacy and security: How can patient data be protected from unauthorized access and misuse?
- Algorithmic bias: How can bias in AI algorithms be identified and mitigated to ensure fair and equitable care for all patients? (A bias-audit sketch follows this list.)
- Certification and regulation: What regulations and standards should be in place to ensure the safety and effectiveness of AI-powered medical devices and applications?
- Ethical implications:
- Autonomy and informed consent: How can patients be informed about the use of AI in their care and retain control over their health decisions?
- Transparency and explainability: Can patients and healthcare professionals understand how AI algorithms arrive at their decisions, and do they have access to relevant information?
- Human oversight and accountability: How can human judgment and oversight be integrated with AI to ensure responsible and ethical decision-making?
- Distributive justice: How can access to AI-powered healthcare be made equitable so that it does not exacerbate existing disparities in healthcare access and outcomes?
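As one concrete illustration of how algorithmic bias might be identified, the sketch below computes a demographic parity gap (the difference in positive-prediction rates across patient groups) on synthetic data. A real audit would use validated cohorts and examine several complementary fairness metrics, such as equalized odds.

```python
# Bias-audit sketch: comparing a model's positive-prediction rates
# across patient groups (demographic parity gap). Group labels and
# predictions are synthetic; a real audit would examine multiple
# fairness metrics on validated patient cohorts.

import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between groups."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

rng = np.random.default_rng(1)
predictions = rng.integers(0, 2, size=1000)  # model outputs (0/1)
group = rng.choice(["A", "B"], size=1000)    # synthetic patient cohorts
gap = demographic_parity_gap(predictions, group)
print(f"Demographic parity gap: {gap:.3f}")  # near 0 suggests parity
```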