Definitions Flashcards

(67 cards)

1
Q

Accountability

A

The obligation and responsibility of the developers, deployers, and distributors of an AI system to ensure the system operates in a manner that is ethical, fair, transparent and compliant with applicable rules and regulations (see also fairness and transparency). Accountability ensures the actions, decisions and outcomes of an AI system can be traced back to the entity responsible for it.

2
Q

Active learning

A

A subfield of AI and machine learning in which an algorithm selects some of the data it learns from. Instead of learning from all the data it is given, an active learning model requests additional data points that will help it learn most effectively.
→ Also called query learning.
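To make the idea concrete, here is a minimal Python sketch of one common query-selection strategy, uncertainty sampling; the `select_queries` helper and the probability values are hypothetical, for illustration only.

```python
# Uncertainty sampling: from a pool of unlabeled examples, request
# labels for the ones the current model is least confident about
# (predicted probability closest to 0.5 in a binary task).

def select_queries(pool_probs, k):
    """Return indices of the k most uncertain pool examples."""
    uncertainty = [(abs(p - 0.5), i) for i, p in enumerate(pool_probs)]
    uncertainty.sort()
    return [i for _, i in uncertainty[:k]]

# Predicted positive-class probabilities for five unlabeled examples.
probs = [0.95, 0.55, 0.10, 0.48, 0.80]
print(select_queries(probs, 2))  # [3, 1]: the two examples nearest 0.5
```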

3
Q

Adaptive learning

A

A method that adjusts and tailors educational content to the specific needs, abilities and learning pace of individual students. The purpose of adaptive learning is to provide a personalized and optimized learning experience, catering to the diverse learning styles of students.

4
Q

Adversarial attack

A

A safety and security risk to the AI model that can be instigated by manipulating the model, such as by introducing malicious or deceptive input data. Such attacks can cause the model to malfunction and generate incorrect or unsafe outputs, which can have significant impacts. For example, manipulating the inputs of a self-driving car may fool the model to perceive a red light as a green one, adversely impacting road safety.

5
Q

Agentic AI

A

Agentic AI refers to AI systems designed to autonomously make and act upon decisions with limited human oversight. Agentic AI is often described as being able to pursue complex goals with limited supervision.
→ Also known as AI agents.

6
Q

AI assurance

A

A combination of frameworks, policies, processes and controls that measure, evaluate and promote safe, reliable and trustworthy AI. AI assurance schemes may include conformity, impact and risk assessments, AI audits, certifications, testing and evaluation, and compliance with relevant standards.

7
Q

AI audit

A

A systematic review of an AI system or a portfolio of AI systems to ensure it operates as intended and complies with relevant laws, regulations and standards. An AI audit can help identify and map unforeseen risks during the impact assessment and development phases.

8
Q

AI governance

A

A system of frameworks, practices and processes at an organizational level. AI governance helps various stakeholders implement, manage and oversee the use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable requirements.

9
Q

Algorithm

A

A computational procedure or set of instructions and rules designed to perform a specific task, solve a particular problem or produce a machine learning or AI model.

10
Q

Artificial intelligence

A

Artificial intelligence is a broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques, such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior
in computers. It may include automated decision-making.
→ Acronym: AI

11
Q

Automated decision-making

A

The process of making a decision by technological means without human involvement.

12
Q

Autonomy

A

The ability of an AI system to operate independently of human intervention.

13
Q

Bias

A

There are several types of bias within the AI field. Computational bias or machine bias is a systematic error or deviation from the true value of a prediction that originates from a model’s assumptions or the data itself (see also input data).
Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favoritism and/or discrimination in favor of or against an individual or group. Either or both may permeate the model or the system in numerous ways, such as through selection bias, i.e. bias introduced when selecting the data used to train the model.

14
Q

Bootstrap aggregating

A

A machine learning method that aggregates multiple versions of a model (see also machine learning model) trained on random subsets of a dataset. This method aims to make a model more stable and accurate.
→ Sometimes referred to as bagging.
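A toy sketch of the idea in Python (illustrative only; real bagging aggregates full models such as decision trees, whereas here each "model" is simply the mean of a bootstrap resample):

```python
import random

def bootstrap_sample(data, rng):
    # Resample the dataset with replacement, same size as the original.
    return [rng.choice(data) for _ in data]

def bagged_prediction(data, n_models=100, seed=0):
    rng = random.Random(seed)
    # Train one trivial "model" (the sample mean) per bootstrap sample.
    models = [sum(s) / len(s)
              for s in (bootstrap_sample(data, rng) for _ in range(n_models))]
    # Aggregate by averaging the individual predictions.
    return sum(models) / len(models)

targets = [2.0, 4.0, 6.0, 8.0]
print(bagged_prediction(targets))  # close to the plain mean, 5.0
```

Averaging over many resampled models smooths out the variance of any single model, which is the stability gain the definition refers to.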

15
Q

Chatbot

A

An AI system designed to simulate human-like conversations and interactions. Modern versions use natural language processing and large language models to understand and respond to text or other media. Because chatbots are often used for customer service and other personal help applications, they often ingest users’ personal information.

16
Q

Classification model

A

A type of model (see also machine learning model) used in machine learning that is
designed to take input data and sort it into different categories or classes.
→ Also called classifiers.

17
Q

Clustering

A

An unsupervised machine learning method where patterns in the data are identified and evaluated, and data points are grouped accordingly into clusters based on their similarity.
→ Also called clustering algorithms.
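A minimal one-dimensional k-means sketch (the data points and starting centroids are hypothetical, chosen for illustration):

```python
def kmeans_1d(points, centroids, iters=10):
    # Alternate between assigning points to the nearest centroid and
    # moving each centroid to the mean of its assigned points.
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centroids = [sum(members) / len(members) if members else c
                     for c, members in clusters.items()]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]
print(kmeans_1d(data, centroids=[0.0, 10.0]))  # two centers, near 1.0 and 9.0
```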

18
Q

Compute

A

The processing resources that are available to a computer system. This includes hardware components such as the central processing unit or graphics processing unit. Compute is essential for memory, storage, data processing, running applications, rendering graphics for visual media and powering cloud computing, among other tasks.

19
Q

Computer vision

A

A field of AI that uses computers to process and analyze images, videos and other visual inputs. Common applications of computer vision include facial recognition, object recognition and medical imaging.

20
Q

Conformity assessment

A

An analysis, often performed by an entity independent of a model developer, on an AI system to determine whether requirements, such as establishing a risk management system, data governance, record-keeping, transparency and cybersecurity practices, have been met.

21
Q

Contestability

A

The principle of ensuring AI systems and their decision-making processes can be
questioned or challenged by humans. This ability to contest or challenge the outcomes, outputs and actions of AI systems depends on transparency and helps promote accountability within AI governance.
→ Also called redress.

22
Q

Corpus

A

A large collection of texts or data that a computer uses to find patterns, make predictions or generate specific outcomes. The corpus may include structured or unstructured data and cover a specific topic or a variety of topics.

23
Q

Counterfactual

A

A hypothetical alternative in which an input variable is changed. A counterfactual is often used to play out what-if scenarios.
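For instance, a what-if probe against a toy scoring function (the `approve_loan` rule and the numbers are invented for illustration) changes one input and compares the outcomes:

```python
def approve_loan(income, debt):
    # Toy stand-in for a deployed decision model.
    return income - debt >= 30_000

actual = {"income": 45_000, "debt": 20_000}
# Counterfactual: identical applicant, except for a higher income.
counterfactual = dict(actual, income=55_000)

print(approve_loan(**actual))          # False
print(approve_loan(**counterfactual))  # True: income was the deciding factor
```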

23
Q

Data drift

A

The shift that occurs when a dataset being used as an input for an AI system changes
over time so that it does not statistically resemble the dataset the model was trained on. This could lead to reduced performance as the model is not compatible with a different
dataset.
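One deliberately simple way to monitor for drift is to compare summary statistics of live inputs against the training data. The threshold and data below are hypothetical; production systems typically use richer tests such as the population stability index or a Kolmogorov-Smirnov test.

```python
def mean(xs):
    return sum(xs) / len(xs)

def drifted(train, live, threshold):
    # Flag drift when the live mean strays too far from the training mean.
    return abs(mean(live) - mean(train)) > threshold

train = [10, 11, 9, 10, 10]
print(drifted(train, [10, 9, 11, 10], threshold=2))   # False (same regime)
print(drifted(train, [18, 21, 19, 20], threshold=2))  # True (inputs have shifted)
```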

24
Data leak
An accidental exposure of sensitive, personal, confidential or proprietary data. This can be a result of poor security defenses, human error, storage misconfigurations or a lack of robust policies around internal and external data sharing practices.
24
Data poisoning
An adversarial attack in which a malicious user injects false data into a model to manipulate the training process, thereby corrupting the learning algorithm. The goal is to introduce intentional errors into the training dataset, leading to compromised performance that results in undesired, misleading or harmful outputs.
25
Data provenance
A process that tracks and logs the history and origin of records in a dataset, encompassing the entire life cycle from its creation and collection to its transformation to its current state. It includes information about sources, processes, actors and methods used to ensure data integrity and quality. Data provenance is essential for data transparency and governance, and it promotes better understanding of the data and eventually the entire AI system.
26
Data quality
The measure of how well a dataset meets the specific requirements and expectations for its intended use. Data quality directly impacts the quality of AI outputs and the performance of an AI system. High-quality data is accurate, complete, valid, consistent, timely and fit for purpose.
27
Decision tree
A type of supervised learning model used in machine learning (see also machine learning model) that represents decisions and their potential consequences in a branching structure.
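The branching structure can be written directly as nested conditionals; this tiny hand-built tree (its rules are made up for illustration) predicts whether to play tennis:

```python
def will_play_tennis(outlook, windy):
    # Each if-branch is one internal node of the tree;
    # each return is a leaf (the predicted outcome).
    if outlook == "sunny":
        return True
    if outlook == "rainy":
        return not windy  # rain is fine unless it is also windy
    return False  # e.g., stormy weather

print(will_play_tennis("sunny", windy=True))   # True
print(will_play_tennis("rainy", windy=True))   # False
```

In practice such trees are learned from labeled data rather than written by hand, but the decision path an input follows reads the same way.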
28
Deep learning
A subfield of AI and machine learning that uses artificial neural networks. Deep learning is especially useful in fields where raw data needs to be processed, like image recognition, natural language processing and speech recognition.
29
Deepfakes
Audio or visual content that has been altered or manipulated using generative artificial intelligence. Deepfakes can be used to spread misinformation and disinformation.
30
Diffusion model
A generative model used in image generation that works by iteratively refining a noise signal to transform it into a realistic image when prompted.
31
Discriminative model
A type of model (see also machine learning model) used in machine learning that directly maps input features to class labels and analyzes for patterns that can help distinguish between different classes. It is often used for text classification tasks, like identifying the language of a piece of text. Examples are traditional neural networks, decision trees and random forest.
32
Disinformation
Audio or visual content, information and synthetic data that is intentionally manipulated or created to cause harm. Bad actors can spread disinformation through deepfakes.
33
Entropy
The measure of unpredictability or randomness in a set of data used in machine learning. A higher entropy signifies greater uncertainty in predicting outcomes.
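Shannon entropy over a label distribution, measured in bits, is one standard way to compute this; a small sketch:

```python
import math

def entropy(labels):
    # H = -sum(p * log2(p)) over the empirical class probabilities.
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(entropy(["a", "a", "a", "a"]))  # 0.0: perfectly predictable
print(entropy(["a", "a", "b", "b"]))  # 1.0: maximum uncertainty for two classes
```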
34
Expert system
A form of AI that draws inferences from a knowledge base to replicate the decision-making abilities of a human expert within a specific field, like a medical diagnosis. This term is often used to describe the opposite of artificial general intelligence.
35
Explainability
The ability to describe or provide sufficient information about how an AI system generates a specific output or arrives at a decision in a specific context to a predetermined addressee. Explainability is important for maintaining transparency and trust in AI. → Acronym: XAI
36
Exploratory data analysis
Data discovery process techniques that take place before training a machine learning model to gain preliminary insights into a dataset, such as identifying patterns, outliers and anomalies and finding relationships among variables.
37
Fail-safe plans
A back-up plan to be executed in case an AI system behaves in unexpected or dangerous ways. A fail-safe plan can be part of a technical redundancy solution to boost robustness.
38
Fairness
An attribute of an AI system that prioritizes relatively equal treatment of individuals or groups in its decisions and actions in a consistent, accurate and measurable manner. Every model must identify the appropriate standard of fairness that best applies, but most often it means the AI system's decisions should not adversely impact, whether directly or disparately, sensitive attributes like race, gender or religion.
39
Federated learning
A machine learning method that allows models (see also machine learning model) to be trained on the local data of multiple edge devices or servers. Only the updates of the local model, not the training data itself, are sent to a central location where they are aggregated into a global model — a process that is iterated until the global model is fully trained. This process enables better privacy and security controls for individual user data.
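The aggregation step can be sketched as simple parameter averaging (often called federated averaging); the parameter vectors below are hypothetical device updates.

```python
def federated_average(local_params):
    # Average each parameter position across all devices' local models.
    n = len(local_params)
    size = len(local_params[0])
    return [sum(p[i] for p in local_params) / n for i in range(size)]

# Parameter vectors trained locally on three devices;
# the raw training data never leaves the devices.
device_updates = [
    [1.0, 2.0],
    [3.0, 4.0],
    [2.0, 3.0],
]
print(federated_average(device_updates))  # [2.0, 3.0]
```

Only these small parameter vectors travel to the server, which is why the scheme offers better privacy controls than pooling raw data centrally.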
40
Fine-tuning
The process of taking a foundation model and training it further for a specialized task through supervised learning. It involves taking a foundation model that has already learned general patterns from a large dataset and training it for a specific task using a much smaller and labeled dataset. → See foundation model.
41
Foundation model
A large-scale model that has been trained on extensive and diverse datasets to enable broad capabilities, such as language (see also large language model), vision, robotics, reasoning, search or human interaction, that can function as the base for use-specific applications. → Also called general purpose AI model and frontier AI.
42
Generalization
The ability of a model (see also machine learning model) to understand the underlying patterns and trends in its training data and apply what it has learned to make predictions or decisions about new, unseen data.
43
Generative AI
A field of AI that uses deep learning trained on large datasets to create new content, such as written text, code, images, music, simulations and videos. Unlike discriminative models, which classify existing data, generative AI produces new data.
44
Greedy algorithm
A type of algorithm that makes the optimal choice to achieve an immediate objective at a particular step or decision point based on the available information and without regard for the long-term optimal solution.
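Coin-change is the classic illustration: always take the largest coin that fits. For some coin systems this happens to be optimal; for others it is not, which is exactly the trade-off the definition describes. The coin sets below are illustrative.

```python
def greedy_change(amount, coins):
    # At each step, take the largest coin that still fits.
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1] (optimal here)
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1], though [3, 3] uses fewer coins
```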
45
Ground truth
The absolute or objectively known state of a dataset against which the quality of an AI system can be evaluated. It serves as the real-world reference against which the outputs are measured for accuracy and reliability.
46
Hallucinations
Instances in which generative AI models create seemingly plausible but factually incorrect outputs and present them as fact. → Also known as confabulations.
47
Human-centric AI
An approach to AI design, development and deployment that prioritizes human well-being, autonomy, values and needs. The goal is to develop AI systems that amplify and augment human abilities rather than undermine them.
48
Human-in-the-loop
A design paradigm that incorporates human oversight, intervention, interaction or control over the operation and decision-making processes of an AI system. → Acronym: HITL
49
Impact assessment
An evaluation process designed to identify, understand and mitigate the potential ethical, legal, economic and societal implications of an AI system. → Also referred to as a risk assessment.
50
Inference
A type of machine learning process in which a trained model (see also machine learning model) is used to make predictions or decisions based on input data.
51
Input data
Data provided to or directly acquired by a learning algorithm or model (see also machine learning model) for the purpose of producing an output. It forms the basis for machine learning models to learn, make predictions and carry out tasks.
52
Interpretability
The ability to explain or present a model's reasoning in human-understandable terms. Unlike explainability, which provides an explanation after a decision is made, interpretability emphasizes designing models that inherently facilitate understanding through their structure, features or algorithms. Interpretable models are domain-specific and require significant domain expertise to develop.
53
Large language model
A form of AI that utilizes deep learning algorithms to create models (see also machine learning model, foundation model and fine-tuning) pretrained on massive text datasets for the general purpose of analyzing and learning patterns and relationships among characters, words and phrases to perform text-based tasks. There are generally two types of LLMs: generative models that make text predictions based on the probabilities of word sequences learned from its training data (see also generative AI) and discriminative models that make classification predictions based on probabilities of data features and weights learned from its training data (see also discriminative model). The word large generally refers to the model's capacity measured by the number of parameters and to the enormous datasets it is trained on. → Acronym: LLM
54
Machine learning
A subfield of AI involving algorithms that iteratively learn from and then make decisions, recommendations, inferences or predictions based on input data. These algorithms build a model from training data to perform a specific task on new data without being explicitly programmed to do so. Machine learning implements various algorithms that learn and improve by experience in a problem-solving process that includes data collection and preparation, feature engineering, training, testing and validation. Companies and government agencies deploy machine learning algorithms for tasks such as fraud detection, recommender systems, customer inquiries, health care, and transportation and logistics. → Acronym: ML
55
Machine learning model
A learned representation of underlying patterns and relationships in data, created by applying an AI algorithm to a training dataset. The model can then be used to make predictions or perform tasks on new, unseen data.
56
Misinformation
False audio or visual content that is unintentionally misleading. It can be spread through deepfakes created by those who lack intent to cause harm.
57
Model card
A brief document that discloses information about an AI model, like explanations about intended use, performance metrics and benchmarked evaluation in various conditions, such as across different cultures, demographics or race (see also system cards).
58
Multimodal models
A type of model used in machine learning (see also machine learning model) that can process more than one type of input or output data, or "modality," at the same time. For example, a multimodal model can take both an image and text caption as input and then produce a unimodal output in the form of a score indicating how well the text caption describes the image. These models are highly versatile and useful in a variety of tasks, like image captioning and speech recognition.
59
Natural language processing
A subfield of AI that helps computers understand, interpret and apply human language by transforming information into content. It enables machines to translate languages, read text or spoken language, interpret its meaning, measure sentiment, and determine which parts are important for understanding.
60
Neural networks
A type of model (see also machine learning model) used in deep learning that mimics the way neurons in the human brain interact with multiple processing layers, including at least one hidden layer. This layered approach enables machine-based neural networks to model complex nonlinear relationships and patterns within data. Artificial neural networks have a range of applications, such as image recognition and medical diagnoses.
61
Open-source software
A development model that provides free and open access to source code to the public, which can then be viewed, modified and redistributed according to the terms of its respective license. The goal is to promote innovation, transparency, shared collaboration and learning among researchers and technical experts. Open-source models are often more lightly regulated.
62
Overfitting
A concept in machine learning in which a model (see also machine learning model) becomes too specific to the training data and cannot generalize to unseen data, which means it can fail to make accurate predictions on new datasets.
63
Oversight
The process of effectively monitoring and supervising an AI system to minimize risks, ensure regulatory compliance and uphold responsible practices. Oversight is important for effective AI governance, and mechanisms may include certification processes, conformity assessments and regulatory authorities responsible for enforcement.
64
Parameters
The internal variables an algorithmic model learns from the training data. They are values the model adjusts to during the training process so it can make predictions on new data. Parameters are specific to the architecture of the model. For example, in neural networks, parameters are the weights of each neuron in the network.
65
Post processing
Steps performed after a machine learning model has been run to adjust its output. This can include adjusting a model's outputs or using a holdout dataset — data not used in the training of the model — to create a function run on the model's predictions to improve fairness or meet business requirements.