Last edited on January 18, 2024


Introduction

Artificial Intelligence and its models, methods, and regulatory frameworks are growing and changing as new and innovative AI tools are developed and introduced to the public or incorporated into systems that serve various industries. The use of such technologies has the potential to both enhance and undermine human rights and therefore brings forth the need to address the ethical considerations and risks associated with AI. While data protection laws currently govern the use of data in the AI space, there is no clear legal framework to guide the ethical use of AI. Until such a framework exists, these guidelines shall ensure that members of the EVMS community understand the ramifications of, and expectations for, the use of AI in meeting our academic, clinical care, and research missions.

Definitions

For purposes of these guidelines, the following definitions shall be used:

AI: AI, or Artificial Intelligence, refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, problem-solving, perception, and language understanding. AI technology enables machines to execute tasks that typically require human intelligence. It is a multidisciplinary field involving computer science, statistics, mathematics, psychology, neuroscience, cognitive science, linguistics, operations research, economics, and more.

Generative AI: Generative AI refers to a type of artificial intelligence that is capable of creating new, previously unseen content, data, or information. This can include text, images, video, music, or other forms of media. It is called "generative" because it generates new data samples that are similar to a given set of training data.
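
To make "generating new samples similar to training data" concrete, the following is a toy sketch in Python. It is not an example of the large generative models referenced by these guidelines; it simply fits a basic statistical model to a handful of invented values and then samples new values that resemble them. All data values are hypothetical.

```python
# Toy illustration of the "generative" idea: learn a simple model of the
# training data, then sample new, previously unseen values that resemble it.
import random
import statistics

training_data = [68.1, 70.4, 69.2, 71.0, 67.8, 70.9]  # hypothetical values

# "Learn" the distribution of the training data.
mu = statistics.mean(training_data)
sigma = statistics.stdev(training_data)

# "Generate" new samples that are similar to, but not copies of, the data.
generated = [round(random.gauss(mu, sigma), 1) for _ in range(3)]
print(generated)
```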

AI governing principles

Ethical use. The ethical use of AI means that AI systems shall be designed and utilized:   

  1. With respect for the rights and dignity of individuals and in a manner that protects human autonomy.
  2. To promote the well-being and safety of our society and in a manner that garners public trust.
  3. To be inclusive and provide access to the widest possible audience while being free from bias.
  4. So that their logic can be explained and understood by all individuals in the AI lifecycle (developers, experts, data providers, and end users).
  5. With human oversight and not as a replacement for human judgment, with the understanding that ultimate accountability rests with the AI user.
  6. With controls and audit functions to ensure accountability. 
  7. Without violating laws, rules, regulations, or EVMS policies.

Data control. Individuals have the right to control their data and to have it protected through:

  1. Consent to the collection and use of their data.
  2. Ensuring that waivers of consent are granted only when in the best interest of the public and after careful consideration of the benefits and risks to the individual.
  3. The use of anonymized data when possible (a minimal illustrative sketch follows this list).
  4. Knowledge about how their data will be provided to and utilized by third parties.
  5. Knowledge about the IT infrastructure and controls that are in place to secure their data throughout the AI lifecycle.  
  6. Robust cybersecurity measures to protect data and AI from unauthorized access or malicious attacks.
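
As referenced in item 3, the following is a minimal, hypothetical sketch of pseudonymizing records before they are shared with an AI tool. The field names ("mrn", "age", "diagnosis") and the salt are assumptions for illustration only; real de-identification must satisfy applicable law, regulation, and EVMS policy, which this sketch does not attempt to demonstrate.

```python
# Hypothetical sketch: replace direct identifiers with a salted one-way hash
# and drop fields that are not needed, before data reaches an AI tool.
import hashlib

SALT = "replace-with-a-secret-salt"  # assumption: kept secret, not hard-coded


def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with identifiers removed or tokenized."""
    # Derive a stable pseudonym from the (hypothetical) medical record number.
    token = hashlib.sha256((SALT + record["mrn"]).encode()).hexdigest()[:12]
    return {
        "subject_id": token,            # pseudonym allows linkage across records
        "age": record["age"],           # retained, non-identifying field
        "diagnosis": record["diagnosis"],
    }


record = {"name": "Jane Doe", "email": "jd@example.com", "mrn": "12345",
          "age": 47, "diagnosis": "hypertension"}
print(pseudonymize(record))  # name, email, and raw MRN are not passed along
```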

Continuous learning. AI is rapidly evolving, and users of AI have the obligation to: 

  1. Regularly update AI models and be able to explain the reasoning behind AI predictions or insights.   
  2. Ensure that they stay updated with the latest AI advancements in their field.
  3. Have a clear understanding of the limitations of AI and of who will be responsible in the event of an AI error.
  4. Regularly assess their use of AI technology against the mission of EVMS.
  5. Understand the risks associated with the use of AI and develop a risk framework that is built into enterprise risk management.
  6. Stay up to date on the regulatory and compliance landscape related to AI use. 

Implementing guidelines

To apply the principles above, best practices and guidelines are recommended for users of AI in the education, research, clinical, and administrative domains. Please consult the guidelines that best apply to your role.


Guidelines for AI usage

The following are specific examples of how AI can be used responsibly and ethically in different contexts (e.g., teaching, research, clinical practice):

  • Example 1: Using AI to personalize learning experiences for students with different needs and learning styles.
  • Example 2: Using AI to develop more accurate and efficient diagnostic tools for patients.
  • Example 3: Using AI to automate routine administrative tasks, freeing staff time for higher-value work.

Relevant resources on AI ethics and responsible use