Accountability
The obligation and responsibility of the developers, deployers, and distributors of an AI system to ensure the system operates in a manner that is ethical, fair, transparent and compliant with applicable rules and regulations (see also fairness and transparency). Accountability ensures the actions, decisions and outcomes of an AI system can be traced back to the entity responsible for it.
Active learning
A subfield of AI and machine learning in which an algorithm selects some of the data it learns from. Instead of learning from all the data it is given, an active learning model requests the additional data points that will help it learn most effectively.
→ Also called query learning.
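As a sketch of the idea, the following shows uncertainty sampling, one common query strategy; the helper names (`uncertainty`, `select_queries`) and the toy probability model are hypothetical, not from any particular library:

```python
def uncertainty(prob):
    # Distance from a confident prediction: prob = 0.5 is most uncertain.
    return 1 - abs(prob - 0.5) * 2

def select_queries(unlabeled, predict_proba, k=2):
    """Pick the k points the current model is least certain about,
    i.e. the ones whose labels would help it learn the most."""
    ranked = sorted(unlabeled, key=lambda x: uncertainty(predict_proba(x)),
                    reverse=True)
    return ranked[:k]

# Toy model: probability of the positive class rises with x.
predict_proba = lambda x: min(max(x / 10, 0.0), 1.0)
pool = [0, 2, 4, 5, 6, 8, 10]
print(select_queries(pool, predict_proba))  # → [5, 4], nearest the boundary
```

Rather than labeling the whole pool, the learner asks only for the labels of the points it finds most ambiguous.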
Adaptive learning
A method that adjusts and tailors educational content to the specific needs, abilities and learning pace of individual students. The purpose of adaptive learning is to provide a personalized and optimized learning experience, catering to the diverse learning styles of students.
Adversarial attack
A safety and security risk to the AI model that can be instigated by manipulating the model, such as by introducing malicious or deceptive input data. Such attacks can cause the model to malfunction and generate incorrect or unsafe outputs, which can have significant impacts. For example, manipulating the inputs of a self-driving car may fool the model to perceive a red light as a green one, adversely impacting road safety.
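To illustrate the mechanism, here is a minimal gradient-sign perturbation against a toy linear classifier (analogous to the fast gradient sign method; the model, weights and budget below are all hypothetical):

```python
def linear_score(x, w, b):
    """Toy linear classifier: a positive score means class 1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b = [1.0, -2.0], 0.0
x = [1.0, 0.4]   # clean input, score ≈ 0.2, classified as class 1
eps = 0.3        # perturbation budget: how far each feature may move

# Step each feature against the classifier; for a linear model the
# gradient of the score is simply w, so we follow -sign(w).
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print(linear_score(x, w, b))      # positive: class 1
print(linear_score(x_adv, w, b))  # negative: decision flipped to class 0
```

A small, targeted change to the input is enough to flip the model's decision, which is the essence of the red-light/green-light example above.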
Agentic AI
AI systems designed to autonomously make and act upon decisions with limited human oversight. Agentic AI is often described as able to pursue complex goals with limited supervision.
→ Also known as AI agents.
AI assurance
A combination of frameworks, policies, processes and controls that measure, evaluate and promote safe, reliable and trustworthy AI. AI assurance schemes may include conformity, impact and risk assessments, AI audits, certifications, testing and evaluation, and compliance with relevant standards.
AI audit
A systematic review of an AI system or a portfolio of AI systems to ensure it operates as intended and complies with relevant laws, regulations and standards. An AI audit can help identify and map unforeseen risks during the impact assessment and development phases.
AI governance
A system of frameworks, practices and processes at an organizational level. AI governance helps various stakeholders implement, manage and oversee the use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable requirements.
Algorithm
A computational procedure or set of instructions and rules designed to perform a specific task, solve a particular problem or produce a machine learning or AI model.
Artificial intelligence
Artificial intelligence is a broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.
→ Acronym: AI
Automated decision-making
The process of making a decision by technological means without human involvement.
Autonomy
The ability of an AI system to operate independently of human intervention.
Bias
There are several types of bias within the AI field. Computational bias or machine bias is a systematic error or deviation from the true value of a prediction that originates from a model’s assumptions or the data itself (see also input data). Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favoritism and/or discrimination in favor of or against an individual or group. Either or both may permeate the model or the system in numerous ways, such as through selection bias, i.e. biases in selecting data for model training.
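A small sketch of how selection bias distorts an estimate; the population, group labels and sample sizes below are invented for illustration:

```python
import random

random.seed(0)
# Hypothetical population: the outcome occurs for urban records only,
# so the true overall rate is 0.5.
population = [("urban", 1)] * 500 + [("rural", 0)] * 500

# Selection bias: sampling only urban records skews the estimated rate.
biased_sample = [y for group, y in population if group == "urban"][:100]
fair_sample = random.sample([y for _, y in population], 100)

print(sum(biased_sample) / 100)  # 1.0, far from the true rate of 0.5
print(sum(fair_sample) / 100)    # close to the true rate of 0.5
```

A model trained only on the biased sample would learn a rate of 1.0 and systematically err on the excluded group.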
Bootstrap aggregating
A machine learning method that aggregates multiple versions of a model (see also machine learning model) trained on random subsets of a dataset. This method aims to make a model more stable and accurate.
→ Sometimes referred to as bagging.
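As a minimal sketch of bagging, each "model" below is simply the mean of one bootstrap resample, and the ensemble prediction is the average of those means; the function name and parameters are hypothetical:

```python
import random
import statistics

def bagged_mean(data, n_models=100, seed=0):
    """Train n 'models' on bootstrap resamples of the data and
    aggregate their predictions by averaging."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        # Bootstrap: sample with replacement, same size as the data.
        resample = [rng.choice(data) for _ in data]
        preds.append(statistics.mean(resample))  # one model's prediction
    return statistics.mean(preds)

print(bagged_mean([1, 2, 3, 4, 5]))  # close to the plain mean, 3.0
```

Averaging over many resampled models reduces the variance of the final prediction, which is the stability gain the definition refers to.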
Chatbot
An AI system designed to simulate human-like conversations and interactions. Modern versions use natural language processing and large language models to understand and respond to text or other media. Because chatbots are often used for customer service and other personal help applications, they often ingest users’ personal information.
Classification model
A type of model (see also machine learning model) used in machine learning that is designed to take input data and sort it into different categories or classes.
→ Also called classifiers.
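For illustration, a nearest-centroid classifier sorting one-dimensional inputs into classes; the function names, training data and labels are all hypothetical:

```python
from statistics import mean

def fit_centroids(examples):
    """Learn one centroid (mean value) per class from (x, label) pairs."""
    by_class = {}
    for x, label in examples:
        by_class.setdefault(label, []).append(x)
    return {label: mean(xs) for label, xs in by_class.items()}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

train = [(1.0, "low"), (2.0, "low"), (9.0, "high"), (10.0, "high")]
centroids = fit_centroids(train)
print(classify(centroids, 1.5))  # → "low"
print(classify(centroids, 8.0))  # → "high"
```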
Clustering
An unsupervised machine learning method where patterns in the data are identified and evaluated, and data points are grouped accordingly into clusters based on their similarity.
→ Also called clustering algorithms.
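As a sketch of one such algorithm, here is a minimal one-dimensional k-means: points are assigned to their nearest centroid, and centroids are recomputed as cluster means; the naive initialization and fixed iteration count are simplifications:

```python
def kmeans_1d(points, k=2, iters=10):
    """Minimal 1-D k-means clustering sketch."""
    centroids = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = sorted(sum(ps) / len(ps)
                           for ps in clusters.values() if ps)
    return centroids

print(kmeans_1d([1, 2, 3, 10, 11, 12]))  # → [2.0, 11.0]
```

No labels are supplied; the two groups emerge purely from the similarity of the data points, which is what makes the method unsupervised.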
Compute
The processing resources available to a computer system, including hardware components such as the central processing unit or graphics processing unit. Compute is essential for storing data in memory, processing data, running applications, rendering graphics for visual media and powering cloud computing, among other tasks.
Computer vision
A field of AI that uses computers to process and analyze images, videos and other visual inputs. Common applications of computer vision include facial recognition, object recognition and medical imaging.
Conformity assessment
An analysis, often performed by an entity independent of the model developer, of an AI system to determine whether requirements, such as establishing a risk management system, data governance, record-keeping, transparency and cybersecurity practices, have been met.
Contestability
The principle of ensuring AI systems and their decision-making processes can be questioned or challenged by humans. This ability to contest or challenge the outcomes, outputs and actions of AI systems depends on transparency and helps promote accountability within AI governance.
→ Also called redress.
Corpus
A large collection of texts or data that a computer uses to find patterns, make predictions or generate specific outcomes. The corpus may include structured or unstructured data and cover a specific topic or a variety of topics.
Counterfactual
A hypothetical alternative in which an input variable is changed. A counterfactual is often used to play out what-if scenarios.
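A small what-if sketch: one input variable is changed while all others are held fixed, and the outcome is compared; the decision rule and thresholds are invented for illustration:

```python
def loan_approved(income, debt):
    """Toy decision rule with hypothetical thresholds."""
    return income - 2 * debt >= 50

actual = loan_approved(income=80, debt=20)          # False: 80 - 40 < 50
counterfactual = loan_approved(income=80, debt=15)  # True:  80 - 30 >= 50

print(actual, counterfactual)  # False True
```

The counterfactual here answers "would the loan have been approved if the applicant's debt had been 15 instead of 20?", with income unchanged.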
Data drift
The shift that occurs when a dataset being used as input for an AI system changes over time so that it no longer statistically resembles the dataset the model was trained on. Data drift can degrade model performance, as the patterns the model learned no longer fit the incoming data.
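One simple way to monitor for drift is to compare summary statistics of live data against the training data; the score and alert threshold below are illustrative, not a standard:

```python
import statistics

def drift_score(train, live):
    """Shift of the live mean, measured in training standard deviations."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    return abs(statistics.mean(live) - mu) / sigma

train = [10, 11, 9, 10, 12, 10]   # data the model was trained on
live = [15, 16, 14, 17, 15, 16]   # data the deployed model now receives

if drift_score(train, live) > 3:  # hypothetical alert threshold
    print("data drift detected")
```

A large score signals that the live inputs no longer statistically resemble the training data, the condition described in the definition above.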