Artificial Intelligence, together with its models, methods, and regulatory framework, is growing and changing as new and innovative AI tools are developed and introduced to the public or incorporated into systems that serve various industries. These technologies have the potential to both enhance and undermine human rights and therefore bring forth the need to address the ethical considerations and risks associated with AI. While data protection laws currently govern the use of data in the AI space, there is no clear legal framework to guide the ethical use of AI. Until such a framework exists, these guidelines shall ensure that members of the EVMS community understand the ramifications of, and expectations for, the use of AI in meeting our academic, clinical care, and research missions.
For purposes of these guidelines, the following definitions shall be used:
AI: AI, or Artificial Intelligence, refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, problem-solving, perception, and language understanding. AI technology enables machines to execute tasks that typically require human intelligence. It is a multidisciplinary field involving computer science, statistics, mathematics, psychology, neuroscience, cognitive science, linguistics, operations research, economics, and more.
Generative AI: Generative AI refers to a type of artificial intelligence that is capable of creating new, previously unseen content, data, or information. This can include text, images, video, music, and other forms of media. It is called "generative" because it generates new data samples that are similar to a given set of training data.
Ethical use. The ethical use of AI means that AI systems shall be designed and utilized:
- With respect for the rights and dignity of individuals and in a manner that protects human autonomy.
- To promote the well-being and safety of our society and in a manner that garners public trust.
- To be inclusive and provide access to the widest possible audience while being free from bias.
- So that the system's logic can be explained and understood by all individuals in the AI lifecycle (developers, experts, data providers, and end-users).
- With human oversight and not as a replacement for human judgment, with the understanding that ultimate accountability rests with the AI user.
- With controls and audit functions to ensure accountability.
- Without violating laws, rules, regulations, or EVMS policies.
Data control. Individuals have the right to control their data and have their data protected through:
- Consent to the collection and use of their data.
- Ensuring that waivers of consent are granted only when doing so is in the best interest of the public, after careful consideration of the benefits and of the risks to the individual.
- The use of anonymized data when possible.
- Knowledge about how their data will be provided to and utilized by third parties.
- Knowledge about the IT infrastructure and controls that are in place to secure their data throughout the AI lifecycle.
- Robust cybersecurity measures to protect data and AI from unauthorized access or malicious attacks.
Continuous learning. AI is rapidly evolving, and users of AI have the obligation to:
- Regularly update AI models and be able to explain the reasoning behind AI predictions or insights.
- Ensure that they stay updated with the latest AI advancements in their field.
- Have a clear understanding of the limitations of AI and who will be responsible in the event of an AI error.
- Regularly assess their use of AI technology against the mission of EVMS.
- Understand the risks associated with the use of AI and develop a risk framework that is built into enterprise risk management.
- Stay up to date on the regulatory and compliance landscape related to AI use.
To apply the principles above, best practices and guidelines are recommended for users of AI in the education, research, clinical, and administrative domains. Please consult the guidelines that best apply to your role.