One Flashcards

(564 cards)

1
Q

What are the five broad areas of inquiry covered by a readiness assessment?

A

(1) Opportunity Discovery; (2) Data Management; (3) IT Environment and Security; (4) Risk, Privacy, and Governance; and (5) Adoption

2
Q

Beyond a sectoral or comprehensive approach, what is sometimes considered a third approach that countries have taken to regulate AI?

A

Modifying existing laws to make clear that they apply to AI.

3
Q

True or False: Whether an AI incident can be said to have occurred is necessarily context specific.

A

True.

4
Q

NIST has produced a companion document to its AI Risk Management Framework that provides a profile for what type of AI?

A

Generative AI.

5
Q

A distributed model of governance is also known by what two other names?

A

A localized or decentralized model.

6
Q

According to NIST, what are the four steps in an incident response plan?

A

(1) Preparation; (2) Detection and analysis; (3) Containment, eradication, and recovery; and (4) Post-incident activity.

7
Q

According to the USPTO, can an AI-assisted invention be patented?

A

Yes, so long as a human makes a “significant contribution” to the invention process.

8
Q

After being subject to testing, what is the next “layer” in the ARIA Program?

A

An “assessment” layer.

9
Q

AI systems that pose transparency-related risks fall into what category under the E.U. AI Act?

A

Limited risk.

10
Q

AI vendors typically take one of two forms. What are they?

A

(1) Integrating third-party AI tools into operations or products and services; or (2) off-the-shelf AI tools.

11
Q

An AI impact assessment will weigh what against the potential harms or risks of an AI system?

A

The benefits of AI use.

12
Q

An approach to achieving privacy-enhanced AI that relies upon minimizing, hiding, separating, and aggregating is referred to as what?

A

A data-oriented approach or strategy.

13
Q

An effective “AI champion” is likely to have what characteristics or authority?

A

(1) Experience with the organization; (2) The respect of colleagues; and (3) Ownership of a budget.

14
Q

An ML model that does not allow for a ready explanation about how the inner workings lead to a reliable prediction is called what?

A

A “black box” algorithm.

15
Q

Are non-profit entities subject to the FTC’s Section 5 authority?

A

No.

16
Q

Article 5 (of the EU AI Act) lists categories of AI systems that pose what level of risk?

A

Unacceptable risk.

17
Q

To whom must deployers of high-risk AI systems provide notice that they are being subjected to an AI system?

A

Individuals that will be subjected to automated decisions by the high-risk AI system and employees subject to the use of a high-risk AI system.

18
Q

Between Articles and Recitals, which provides interpretive guidance?

A

Recitals.

19
Q

Between data provenance and data lineage, which helps ensure that datasets are representative, accurate, and unbiased?

A

Data provenance.

20
Q

Between data provenance and data lineage, which identifies dependencies between different data elements?

A

Data lineage.

21
Q

Between data provenance and data lineage, which includes information about sources, processors, actors, and methods used to ensure data integrity and quality?

A

Data provenance.

22
Q

Between overfitting and underfitting, which is more likely to result from the inclusion of too much irrelevant data in the dataset?

A

Overfitting.
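A toy sketch of the idea above, in plain Python with illustrative names only: a 1-nearest-neighbor "memorizer" trained on data that includes a large irrelevant feature fits its training set perfectly but does noticeably worse on unseen data — the classic overfitting signature.

```python
import random

def nearest_neighbor_predict(train, x):
    # 1-NN memorizes the training set (a classic overfitting-prone model).
    _, label = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return label

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

random.seed(1)

def sample(n):
    # The true label depends only on the first feature; the second is
    # irrelevant noise with a much larger scale (the "too much irrelevant data").
    out = []
    for _ in range(n):
        signal = random.uniform(-1, 1)
        noise = random.uniform(-10, 10)
        out.append(((signal, noise), 1 if signal > 0 else 0))
    return out

train, test = sample(30), sample(200)
memorizer = lambda x: nearest_neighbor_predict(train, x)
print(accuracy(memorizer, train))  # 1.0 — perfect on the data it memorized
print(accuracy(memorizer, test))   # noticeably worse on unseen data
```

Because each training point is its own nearest neighbor, training accuracy is always 1.0, while the irrelevant feature dominates the distance metric and drags test accuracy toward chance.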

23
Q

Between supervised and unsupervised learning techniques, which is considered more costly and time-consuming?

A

Supervised learning, due to the need for high-quality labeled data.

24
Q

Beyond any documents produced by the developer, what other document(s) should a deployer of an AI model review prior to deployment?

A

Any vendor or open-source agreements related to the AI model.

25
Bootstrap aggregating, stacking, and boosting are all examples of what?
Ensemble methods.
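Ensemble methods combine many weak models into a stronger one. A minimal bagging (bootstrap aggregating) sketch in plain Python — all names are illustrative, no real library is assumed — where each weak "stump" is fit on a bootstrap resample and the ensemble predicts by majority vote:

```python
import random

def fit_stump(data):
    # "Fit" a trivial weak model: predict 1 when x exceeds the sample's mean x.
    thresh = sum(x for x, _ in data) / len(data)
    return lambda x: 1 if x > thresh else 0

def bagged_predict(models, x):
    # Bagging aggregates the weak learners by majority vote.
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

random.seed(0)
# Toy data: the label is 1 when x > 5, with x in 0..9.
data = [(x, 1 if x > 5 else 0) for x in range(10)]

# Bootstrap aggregating: each stump trains on a resample drawn with replacement.
models = [fit_stump([random.choice(data) for _ in data]) for _ in range(25)]

print(bagged_predict(models, 9), bagged_predict(models, 0))
```

Boosting would instead fit the stumps sequentially, reweighting misclassified points, and stacking would train a second-level model on the stumps' outputs.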
26
True or False: A post-market monitoring plan must be rigorous, comprehensive, and exhaustive for all providers of AI systems under the EU AI Act.
False. The post-market monitoring system must be proportionate to the model risk.
27
Data minimization can be achieved by targeting fewer or less of what?
Targeting fewer individuals and collecting fewer data points.
28
Designing and building an AI model involves what three steps?
Establishing the AI system architecture, model selection, and implementing a data strategy.
29
Does a country with a sectoral model of privacy protection have one agency or multiple agencies that are responsible for privacy protection?
Multiple agencies.
30
Does a risk management system applicable to high-risk AI systems need to account for risks arising from intended use or foreseeable misuse, or both?
Both.
31
Does a risk management system applicable to high-risk AI systems need to account for pre-market or post-market risks, or both?
Post-market risks.
32
Does an ML model train an ML algorithm, or does an ML algorithm train an ML model?
An ML algorithm trains an ML model.
33
Do China's Cyberspace Administration guidelines adopt a risk-based or rights-based approach to AI regulation?
Rights-based.
34
Does higher entropy lead to greater or less uncertainty in predicting outcomes?
Greater uncertainty.
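The relationship above can be checked numerically with Shannon entropy, H = -Σ p·log2(p): a fair coin (outcomes hardest to predict) has higher entropy than a biased one, and a certain outcome has zero entropy. A short sketch in plain Python:

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits: -sum(p * log2(p)) over nonzero probabilities.
    return -sum(p * log2(p) for p in probs if p > 0)

fair_coin = entropy([0.5, 0.5])    # 1.0 bit: maximal uncertainty for two outcomes
biased_coin = entropy([0.9, 0.1])  # ~0.47 bits: outcome is easier to predict
certain = entropy([1.0])           # 0 bits: no uncertainty at all
print(fair_coin, biased_coin, certain)
```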
35
Does Privacy by Design protect information from the beginning to end of the data life cycle or only during the initial period when information is first collected?
Under the fifth principle, PbD ensures cradle to grave, secure life cycle management of information, end-to-end.
36
Does red teaming occur prior to or after a model has been released to the public?
It occurs both before and after release.
37
Do synthetic data, transformed data, or both use real-world data as a starting point?
Transformed data.
38
Does the GDPR take a permissive or restrictive approach toward the processing of special categories of personal data?
A restrictive approach that prohibits it unless an exception applies.
39
Ensuring business alignment, identifying key stakeholders, and approaching AI with a pro-innovation mindset are all important steps in establishing an __________.
AI governance strategy.
40
From the perspective of a developer, what is the primary reason that an AI model should be continuously monitored?
AI models will drift over time and potentially become less accurate, more biased, etc.
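One common way to operationalize that monitoring — a hedged sketch, not any particular product's API — is to track the model's rolling accuracy over a sliding window of recent predictions and raise an alert when it drops below a floor, signaling possible drift:

```python
from collections import deque

def monitor(stream, window=50, floor=0.8):
    """Return the first position where rolling accuracy drops below `floor`.

    `stream` yields (prediction, actual) pairs; returns None if no alert fires.
    """
    recent = deque(maxlen=window)
    for i, (pred, actual) in enumerate(stream):
        recent.append(pred == actual)
        if len(recent) == window and sum(recent) / window < floor:
            return i  # accuracy has degraded enough to warrant review/retraining
    return None

# Simulated deployment: the model is 95% accurate at first, then drifts to 50%.
good = [(1, 1)] * 95 + [(1, 0)] * 5
drifted = [(1, 1), (1, 0)] * 100
alert_at = monitor(good + drifted)
print(alert_at)
```

On the healthy segment alone no alert fires; once the drifted traffic arrives, the rolling accuracy crosses the floor and the monitor flags the position.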
41
From where did many of the key privacy terms derive?
Most privacy-related terms were originally developed in Europe, but they have become common throughout the privacy and security industries.
42
How does Canada approach the processing of sensitive categories of personal data?
It calls for taking sensitivity into account in determining legal requirements, such as obtaining consent.
43
How does the CSET Harm Taxonomy for AIID define AI harms?
(1) An entity that experienced; (2) a harm event or harm issue; (3) that can be directly linked to a consequence of the behavior of; (4) an AI system.
44
How long must providers of high-risk AI systems keep documentation related to their compliance with the E.U. AI Act?
10 years.
45
If a MedTech company is deemed to provide "devices" that transmit data over the internet under the Food, Drug, and Cosmetic Act, what must it then do?
Provide a cybersecurity plan to the FDA.
46
If Company A (a "data controller") contracts with Company B to process data and Company B subcontracts with Company C to also process that information, who among these companies is a "data processor"?
Company B and Company C. All processors of data down a chain of contracting are referred to as "data processors."
47
In defining artificial intelligence, it is often contrasted against what other form of intelligence?
Human or "natural" intelligence.
48
In what case did the FTC pursue regulatory action against a company for developing an AI tool that could be used for purposes of creating false or misleading reviews for use in advertising?
In the Matter of Rytr LLC.
49
In what case did the FTC successfully obtain "algorithmic disgorgement" from a defendant?
In the Matter of Everalbum, Inc.
50
In what two ways does the use of AI increase security vulnerabilities for organizations?
(1) It increases the attack surface for adversaries; and (2) It can be leveraged to attack existing systems.
51
In what way can mass surveillance using AI lead to group harm?
It can lead to self-censorship that adversely impacts freedom, creativity, and self-development.
52
In what way has mobile technology, social media, IoT, computer vision, AR/VR, and the metaverse all contributed to the development of AI/ML?
Each has led to a great increase in the available data that can be used for training AI/ML models.
53
In what ways does software facilitate the development of AI/ML?
It provides the means of coding AI algorithms, accessing AI models, labeling data, and performing fine-tuning.
54
In what years was the OECD Principles of Trustworthy AI adopted and then subsequently amended?
Adopted in 2019; Amended in 2024.
55
Is a conformity assessment a single test or a broader set of assessments?
It is a broader set of technical and non-technical assessments, supported by appropriate documentation.
56
Is a deployer of a proprietary AI model likely to release more or less technical documentation related to that model?
Less documentation.
57
Is optionality regarding the use of AI systems ever mandated by applicable law?
Yes.
58
Is Singapore's Model AI Governance Framework a mandatory or voluntary regulation?
Voluntary.
59
Is the risk management system required for high-risk AI systems under the EU AI Act established at the outset and consistent, or is it more of a continuous iterative process?
A continuous and iterative process.
60
Is there a private cause of action under Section 5 of the FTC Act?
No.
61
Matching a dataset to the problem being solved, considering constraints, and choosing an algorithm that achieves the desired level of accuracy are all important aspects of what part of AI development?
The designing and building of an AI model or system.
62
Most principles of responsible AI are outgrowths of what concept from the privacy and data protection industry?
Fair Information Practices (FIPs).
63
Open-source AI systems are exempt from the reach of the EU AI Act, so long as they satisfy what requirement?
They cannot be high-risk systems or pose an unacceptable risk.
64
Other than establishing an AI strategy, what is another preliminary step before creating an AI framework?
Creating an inventory of AI applications, algorithms, and use cases.
65
Other than its increased processing power, what other recent development regarding compute has led to the proliferation of its use in AI/ML?
Its availability or accessibility through things like cloud computing or serverless technology.
66
Principal component analysis is a specific type of what kind of unsupervised learning algorithm?
Dimensionality reduction.
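Principal component analysis reduces dimensionality by finding the directions of maximal variance in the data. A hand-rolled 2-D sketch (illustrative, not a library API): center the data, form the 2x2 covariance matrix, and recover the leading eigenvector from its entries.

```python
from math import atan2, cos, sin

def first_principal_component(points):
    # Center the data, build the 2x2 covariance matrix, and return the unit
    # eigenvector with the largest eigenvalue (direction of maximal variance).
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Angle of the leading eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    return cos(theta), sin(theta)

# Points lying almost exactly on the line y = 2x: the first component
# should point along (1, 2), i.e. have slope ~2.
pts = [(0, 0.1), (1, 2.0), (2, 3.9), (3, 6.1), (4, 8.0)]
vx, vy = first_principal_component(pts)
print(vx, vy)
```

Projecting each point onto this single direction would compress the 2-D data to 1-D while preserving almost all of its variance — the essence of dimensionality reduction.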
67
Prior to deploying a high-risk AI system on the market, who has responsibility for maintaining logs of inputs and outputs?
Providers while those systems are under their control.
68
Record keeping requirements under data protection laws help support what two responsible AI principles?
Accountability and transparency.
69
Should AI Governance policies be agnostic or opinionated towards laws, technology, and industry?
Agnostic.
70
Should AI governance policies be drafted to be prescriptive or non-prescriptive?
Non-prescriptive.
71
Should AI governance policies be input-driven or outcome-focused?
Outcome focused.
72
Should AI governance policies be tightly controlled by select experts or consensus driven across the organization?
Consensus driven.
73
Should an AI inventory be a static or living document?
A living document that changes as the organization's use of AI changes.
74
Should documentation about the planning and building phase of AI development explain what was decided, why it was decided, or both?
Both.
75
Should documentation related to training and testing an AI model be created after testing and training is complete or contemporaneously with the training and testing?
Contemporaneously.
76
Strict products liability statutes in the U.S. typically apply to what three types of defects?
(1) Manufacturing defects; (2) Design defects; and (3) Warning (failure-to-warn) defects.
77
The belief that AI systems are superior to human systems refers to what concept?
AI Exceptionalism.
78
The Council of Europe's HUDERIA Methodology looks at what specific types of impact from AI systems?
Impacts on human rights, democracy, and the rule of law.
79
The E.U. Reformed Products Liability Directive modified the definition of what term to include software?
The term "products."
80
The European Union is an example of what type of model of privacy protection?
A comprehensive model.
81
The variables or features of a model, governance practices, and the amount of human oversight may all be impacted by what?
The laws applicable to a specific AI model or system.
82
To whom must deployers of high-risk AI systems provide notice that they are being subjected to an AI system?
Individuals that will be subjected to automated decisions by the high-risk AI system and employees subject to the use of a high-risk AI system.
83
True or False: A report on a fundamental rights impact assessment that is conducted must be provided to market surveillance authorities only when the risk to fundamental rights is high.
False. A report of a FRIA must always be provided to market surveillance authorities.
84
True or False: A "data controller" cannot process information beyond the scope of how its "data processors" can process that information.
False. A "data controller" has the ultimate authority over how data is processed.
85
True or False: All privacy incidents constitute a data breach.
False. All data breaches are privacy incidents, but not all privacy incidents are data breaches.
86
True or False: Because Executive Order 14110 was repealed, all other guidance from the federal government that was produced because of it is now also void.
False. Other guidance is still effective, unless otherwise also repealed or revoked.
87
True or False: Better matching computer hardware to the AI model requirements can help improve AI/ML model performance.
True.
88
True or False: Organizations should always align AI practices with existing security and privacy practices.
False. While there must be alignment with the organization's use of AI and its privacy and security practices, using AI may require adapting existing privacy and security practices in some cases.
89
True or False: Some laws include specific documentation requirements related to the training and testing of AI models.
True. An example is the EU AI Act, which requires documentation related to performance and accuracy.
90
True or False: The risks posed by AI are well understood and apply in other contexts.
False. The risks posed by AI systems are unique in many ways and not completely understood.
91
True or False: Human oversight necessarily requires that staff be properly trained and empowered.
True.
92
True or False: A conformity assessment includes a review of the QMS and technical documentation drawn up by the provider of a high-risk AI system.
True, but there are other requirements as well.
93
True or False: A "data controller" cannot process information beyond the scope of how its "data processors" can process that information.
False. A "data controller" has the ultimate authority over how data is processed.
94
True or False: A large percentage of AI governance programs are built atop existing privacy programs.
True. According to a recent survey, 50% of organizations were doing so.
95
True or False: A model card is useful only to developers and end users of the AI model.
False. Model cards are important for many stakeholders, including regulators, policy makers, and AI practitioners.
96
True or False: A post-market monitoring plan must be rigorous, comprehensive, and exhaustive for all providers of AI systems under the EU AI Act.
False. The post-market monitoring system must be proportionate to the model risk.
97
True or False: Privacy by Design can be modular, in that it can be "bolted onto" an engineering project after the fact.
False. The third principle of PbD states that PbD is embedded into the design and architecture of IT systems and business practices. It is not bolted on as an add-on, after the fact.
98
True or False: A technical implementation of AI does not allow organizations to avoid generally applicable laws.
True.
99
True or False: Accuracy of an AI model does not correlate with risk of the AI model.
True.
100
True or False: AI governance should never be automated to ensure that AI remains human-centric.
False. Certain aspects of AI governance can and should be automated.
101
True or False: All privacy incidents constitute a data breach.
False. All data breaches are privacy incidents, but not all privacy incidents are data breaches.
102
True or False: Although Privacy by Default is sometimes recognized as distinct from Privacy by Design, Privacy by Default is also commonly thought of as a primary principle of Privacy by Design practices.
True.
103
True or False: An AI governance strategy requires a holistic approach that is closely tailored to the specific needs of your organization.
True.
104
True or False: An AI system will always have one developer.
False. Multiple parties could be considered "developers" of a single AI system.
105
True or False: An organization can be both a "data controller" and a "data processor"
True.
106
True or False: An organization should simply take an industry-produced AI framework and apply it without modification to its use of AI.
False. An organization's AI framework should be tailored to its own unique needs and use cases.
107
True or False: Applicable law can dictate how an AI model is deployed and what governance practices must be adopted.
True.
108
True or False: Because an individual has ultimate control over his or her personal information, they are often referred to as the "data controller."
False. An individual whose personal information is being processed is called the "data subject," while the person having ultimate control over how that information is used is called the "data controller."
109
True or False: Because it is the deployer that actually deploys an AI model into production, a developer of an AI model has no responsibilities related to deployment.
False. The developer will have certain responsibilities unique to its role, such as creating a model card and performing a conformity assessment.
110
True or False: Because of the inherent complexity involved, AI systems can store personal data just in case it is needed at a later time.
False. AI systems should collect and use personal data only for specified purposes, which are determined prior to obtaining and processing personal data.
111
True or False: Business disruption is considered a "cost" by most organizations.
True. Because business disruption is thought of as a "cost," limiting that disruption is a key part of building a business case for AI governance.
112
True or False: Copyright protects facts, ideas, systems, and methods.
False. Copyright protects only original works of authorship and expression.
113
True or False: Data governance practices are closely circumscribed and should be similar for all organizations.
False. Data governance practices will be unique to an organization and impacted by many different factors.
114
True or False: Defining roles and responsibilities of employees related to the organization's development and governance of AI is simply a best practice.
False. Defining roles and responsibilities can also be an explicit compliance obligation.
115
True or False: Deploying a proprietary AI model is potentially higher risk and potentially higher reward.
True.
116
True or False: Documentation about the planning and building process of AI development should be tailored to the specific organization, model, use case, and context.
True.
117
True or False: Downstream harms are caused only by users of an AI system.
False. Downstream harms can result from many different sources.
118
True or False: Each profile in the NIST AI RMF is broken down into categories and subcategories.
False. Each core function (not profile) is broken down into categories and subcategories.
119
True or False: Even though it is called a "life cycle," the steps in the AI Life Cycle are not always performed sequentially.
True.
120
True or False: High-risk AI systems must be designed to allow testing to ensure compliance with the requirement to establish a risk management system.
True.
121
True or False: Human oversight necessarily requires that staff be properly trained and empowered.
True.
122
True or False: If a decision is made not to perform a data protection impact assessment, the reasoning for that decision should be documented.
True.
123
True or False: In AI license agreements, each specific component of the AI (e.g., training data, output data) should be separately considered.
True.
124
True or False: In deciding between a cloud-based or on-premise deployment environment, there is usually one that is a superior method in nearly all cases.
False. The specific environment for an AI system will depend on many factors unique to the deployer and the AI model itself.
125
True or False: Industry produced AI governance frameworks can serve as a baseline or foundation for an organization's individual AI governance framework.
True.
126
True or False: It is often possible to eliminate risk entirely in properly designed AI systems.
False. It is nearly impossible to entirely eliminate risks, which calls for prioritization of risk mitigation.
127
True or False: Just like a privacy notice, there is one comprehensive document that is typically used to make all AI-related disclosures.
False. There is no single over-arching disclosure document for AI systems that can be compared to a privacy notice.
128
True or False: Many organizations build AI impact assessments on top of or as an extension to privacy impact assessments.
True.
129
True or False: Models that are trained on streaming data are likely to decay over time.
False. Models trained on static data are likely to decay over time; models trained with streaming data, however, may drift.
130
True or False: Non-personal data is generally not protected against use in AI model training and testing.
False. Non-personal data may be protected under IP law, contractual rights, and other means.
131
True or False: Once a data subject provides consent, a record is no longer needed because, with their continued use of a product, consent is implied.
False. Records of data subject consent should always be maintained.
132
True or False: One organization can be a developer, deployer, and user with respect to a single AI system.
True.
133
True or False: Only high-risk AI systems, and not the providers, must be registered with E.U. authorities before being brought to market.
False. Both high-risk AI systems and the providers of those high-risk AI systems must register.
134
True or False: Only the lawfulness of using personal data needs to be assessed for purposes of AI model training and testing.
False. The lawfulness of using all types of data should be assessed when it is used for AI model training and testing.
135
True or False: "Opt-in" consent refers to a passive form of consumer consent implied by a person's conduct.
False. This is referred to as "Opt-out" or implied consent.
136
True or False: Organizations can benefit from creating a legal "inventory" of laws and regulations applicable to its use of AI.
True.
137
True or False: Organizations cannot analyze what laws apply to an AI system at the outset of design and planning because it is not clear what laws apply until deployment.
False. Organizations should analyze what laws are applicable to an AI model or system early in the design process. These may change, however, during the deployment stage.
138
True or False: Organizations should always align AI practices with existing security and privacy practices.
False. While there must be alignment with the organization's use of AI and its privacy and security practices, using AI may require adapting existing privacy and security practices in some cases.
139
True or False: Performing a PIA or DPIA should be carried out according to specific requirements that apply to all organizations using AI.
False. While PIAs and DPIAs have general requirements, they should be tailored to the specific needs of the organization.
140
True or False: Preliminary research, testing, and development of AI systems are subject to the EU AI Act.
False. Research, testing, and development of AI systems prior to being placed on market or put into service are exempt from the EU AI Act.
141
True or False: Privacy by Design practices should be obfuscated because this provides an added layer of information security protection.
False. The sixth principle of PbD states that its component parts and operations remain visible and transparent, to both users and providers alike, and subject to independent verification.
142
True or False: Privacy by Design requires certain tradeoffs that require a balancing of all legitimate interests.
False. The fourth principle of PbD states that PbD seeks to accommodate all legitimate interests and objectives in a positive-sum "win-win" manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made.
143
True or False: Sampling bias is a specific type of cognitive bias.
True.
144
True or False: Serverless computing does not rely on servers to perform computations.
False. Serverless computing means that the execution of software is not limited to a particular server or piece of hardware.
145
True or False: Some combination of notice and consent will typically dictate the extent to which personal data may be processed for purposes of AI model training and testing.
True.
146
True or False: Some data protection laws require "opt-in" consent while others require "opt-out" consent.
True.
147
True or False: Some types of ML techniques are superior to others.
False. Each learning technique facilitates different objectives and has different use cases; one is not "superior" to the others.
148
True or False: The bigger a business is, the more complex its AI governance will need to be.
In most cases, this is true.
149
True or False: The collection of personal data can be either passive or active.
True.
150
True or False: The E.U. AI Act has a personal-use exemption.
True.
151
True or False: The E.U. AI Act provides individual rights that can be specifically compensated for.
False. While the E.U. AI Act does provide some individual rights, such as the right to make a complaint, remedies must be sought under other laws (e.g., GDPR, product liability, etc.).
152
True or False: The E.U. AI Act sets forth a prescriptive list of strict requirements that providers of high-risk AI systems must include in their quality management system.
False. While the E.U. AI Act does require certain components be included in a QMS, how specific requirements are implemented will vary from one organization to the next.
153
True or False: The FIP "notice" refers only to telling a consumer that his or her data will be collected.
False. Notice also refers to providing consumers additional information, such as how their data will be used, who their data will be disclosed to, and when their data will be destroyed.
154
True or False: The FIP of "access" refers only to providing a consumer the ability to view the personal information collected about him or her.
False. "Access" also refers to the ability of consumers to correct or update inaccurate information.
155
True or False: The FTC has issued expansive regulations under its Section 5 authority related to AI.
False. The FTC has never issued AI-related regulations under its Magnuson-Moss authority.
156
True or False: The NIST AI RMF Playbook can best be thought of as a step-by-step guide or checklist for achieving risk management goals.
False. It is not a step-by-step guide or checklist, but rather suggests specific, concrete actions organizations can take to help manage risk.
157
True or False: The obligation to maintain documentation to verify compliance with the EU AI Act extends beyond the life of the company.
True.
158
True or False: The terms in a vendor or open-source agreement may impact the risks associated with using a particular AI model.
True.
159
True or False: There may be situations where an AI impact assessment must be conducted but a privacy impact assessment need not be conducted.
True.
160
True or False: Under the E.U. AI Act, the relative risk level of an AI system is determined at the point of production or deployment and is thereafter fixed.
False. Risk level should be continuously analyzed and monitored, as it may change over time.
161
True or False: When AI is used to solve business problems, the specific type of business problem being solved may dictate design choices such as the type of learning technique used.
True.
162
True or False: When processing special categories of personal data for bias testing AI models, it is best practice to obtain that personal data directly from the data subject with a narrowly defined purpose specification.
True.
163
Under the Digital Services Act, online platforms are prohibited from using targeted advertisements based on profiling that relies on what type of data?
The personal data of minors or special categories of personal data.
164
Under the Digital Services Act, what must very large platforms that incorporate a recommender system do?
Provide at least one option for a recommender system that is not based on profiling.
165
Under the E.U. AI Act, what level of risk are AI systems that are used in the administration of justice and other democratic processes?
High Risk.
166
Under the E.U. AI Act, what level of risk are AI systems that provide generative AI output?
Limited risk.
167
Under the E.U. AI Act, what level of risk are AI systems that infer emotions in the workplace or educational institution?
Unacceptable risk.
168
Under the E.U. AI Act, what level of risk are AI systems that process biometrics?
High risk.
169
Under the E.U. AI Act, what level of risk are AI systems that use social scores that lead to negative treatment?
Unacceptable risk.
170
Under the E.U. AI Act, what level of risk are AI systems that create or expand facial recognition databases through untargeted scraping?
Unacceptable risk.
171
Under the E.U. AI Act, what level of risk are AI systems that perform real-time biometric identification in publicly accessible spaces for purposes of law enforcement?
Unacceptable risk.
172
Under the E.U. AI Act, what level of risk are AI systems that interact directly with individuals?
Limited risk.
173
Under the E.U. Reformed Products Liability Directive, under what three conditions can the defectiveness of a product be presumed?
(1) Where the defendant fails to make proper disclosures; (2) Where there is non-compliance with product safety requirements; or (3) Where damage is caused by an obvious malfunction that was reasonably foreseeable.
174
Under the E.U. AI Act, what level of risk are AI systems that are used for immigration and border control?
High risk.
175
Under the E.U. AI Act, what level of risk are AI systems that are used in law enforcement?
High risk.
176
Under the E.U. AI Act, what level of risk are AI systems that are involved in the management of critical infrastructure?
High risk.
178
Variance often involves a tradeoff with what other characteristic of an AI/ML model?
Bias.
179
Vision, audio, and text are all different types of what?
Modalities.
180
What are the four main principles used to protect an organization's use of personal information?
(1) Security controls; (2) Data quality controls; (3) Limitations on processing; and (4) Accountability (e.g., proper administration and monitoring).
181
What accountability practice is vital for the early stages of scoping an AI project?
Documenting decisions about tradeoffs in design and planning decisions.
182
What AI/ML training technique requires a tradeoff between exploring new actions and exploiting known good actions?
Reinforcement Learning.
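To make the exploration/exploitation tradeoff concrete, here is a minimal illustrative sketch (not from the deck; the epsilon-greedy strategy and all names are illustrative assumptions) in which an agent sometimes tries a random action and otherwise picks the action with the best estimated value:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon, explore a random action;
    otherwise exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore a new action
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit the best known action

# Estimated values for three actions; with epsilon=0 the agent always exploits.
estimates = [0.2, 0.8, 0.5]
best = epsilon_greedy(estimates, epsilon=0.0)  # index of the highest estimate
```

Tuning epsilon controls the tradeoff: higher values favor exploring new actions, lower values favor exploiting known good ones.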
183
What are administrative security controls?
Administrative controls refer to policies, such as those designed to limit access to data to only those employees who need access to accomplish their assigned job functions.
184
What are AI governance frameworks intended to accomplish?
To provide structure, guidance, and accountability for developing, deploying, and using AI responsibly, helping organizations manage AI risks and support trustworthy AI.
185
What are blockchains?
Shared immutable, append-only ledgers that are used for recording the history of transactions.
186
What are common exclusions from indemnification agreements in software licenses that are not sufficient in the context of AI?
Exclusions for claims arising from customer modifications to the software or from combining it with other products; these are insufficient for AI because models inherently change over time through retraining and use.
187
What are counter-factual explanations?
A method for understanding model outputs by exploring hypothetical scenarios.
188
What are ensemble methods?
Techniques that aim at improving the accuracy of results in models by combining multiple models instead of using a single model.
189
What are four options for managing risk?
(1) Accepting the risk as is; (2) Transferring the risk to another entity; (3) Mitigating the risk with the implementation of controls; or (4) Avoiding the risk.
190
What are functional requirements of AI systems?
Requirements that describe a specific function of the intended information system, such as how the system will work and what inputs create what outputs, and design elements to be implemented.
191
What are hallucinations?
Instances in which generative AI models create seemingly plausible but factually incorrect outputs under the appearance of fact.
192
What are instructions of use?
A type of documentation produced by developers of AI models to help guide deployers and end users.
193
What are non-functional requirements of AI systems?
Requirements that describe a constraint or property of the system that an engineer can trace to functional requirements or design elements.
194
What are pre-deployment pilots or testing?
Small-scale trial implementations that closely match the production version prior to full deployment to identify risks in real-world scenarios.
195
What are privacy-enhancing technologies?
Technologies designed to protect personal data and minimize privacy risks while still allowing the data to be processed and used.
196
What are profiles under the NIST AI RMF?
Implementations of the core functions for a specific setting or application based on requirements, risk tolerance, and resources of the user.
197
What are some benefits of keeping an AI incident register?
It can act as a mitigation strategy to improve future responses and avoid future incidents.
198
What are some common challenges that must be overcome in deploying an AI governance strategy?
Organizational complacency, apprehension about putting money and assets into a new business function, and a lack of understanding regarding the importance of AI governance.
199
What are some common use cases for AI?
Recommendations, recognizing inputs, detecting things from inputs, forecasting events, goal-driven optimization, interaction support, and personalization.
200
What are some drawbacks to edge computing?
It may be limited by hardware and computing power of edge devices.
201
What are some effective tools for understanding the business problem that an AI model is intended to address?
Market research and user interviews.
202
What are some elements that should be included in an AI incident response plan?
A statement of management commitment; purpose and objective of policy; scope of policy; defining roles and responsibilities; prioritization of incidents; performance measures; and reporting or contact forms.
203
What are some examples of generally applicable laws that might impact AI development and deployment?
IP, non-discrimination, consumer protection, products liability, and privacy.
204
What are some examples of key stakeholders that should typically be consulted in the design phase of most AI projects?
Legal, marketing and sales, executive leadership, and AI governance (if established).
205
What are some examples of sensitive categories of personal data?
Data related to race, sex, religion, sexual orientation, political affiliation, or union membership, as well as health data (e.g., biometric or genetic data).
206
What are some examples of specific types of cloud computing?
Software-as-a-service ("SaaS"), platform-as-a-service ("PaaS"), and infrastructure-as-a-service ("IaaS").
207
What are some examples of U.S. laws that prohibit discrimination in the employment context?
Title VII of the CRA, the Age Discrimination in Employment Act, the Pregnancy Discrimination Act, and the Equal Pay Act.
208
What are some of the benefits of a distributed model of governance?
Advantages include: (1) Information flows from the bottom up, which allows decisions to be made on a well-informed basis tied to actual business functions; (2) Functional-level expertise can be called upon in the decision-making process; (3) Functional-level employees feel more ownership over AI-related decisions.
209
What are some of the benefits of utilizing a classification framework for AI?
A classification framework can: (1) promote a common understanding of AI; (2) inform registries and inventories of algorithms and automated decision systems; (3) support sector-specific frameworks; (4) support AI risk assessments; and (5) support AI risk management.
210
What are some of the disadvantages of a distributed model of governance?
Disadvantages include: (1) Inefficiencies may be created because each AI-related decision may need to be made numerous times by each separate department; (2) Divergent or conflicting policies may be implemented by individual departments.
211
What are some of the focuses of AI governance specific to deployers of AI systems?
Governance over the contextual decisions about how the AI system is used, transparency for end users, the underlying business purpose, and accountability over how the system is used.
212
What are some of the focuses of AI governance specific to developers of AI systems?
Governance of training data, documentation for downstream users, incorporating constraints into the model, validating the system.
213
What are some of the focuses of AI governance specific to users of AI systems?
Ensuring the system is used appropriately and within scope of any AUP, understanding the risks of use (e.g., inputting personal data), and providing feedback to developers and deployers.
214
What are some suggested best practices for managing AI risk included in the NIST AI RMF?
Have senior and independent oversight, declare risk tolerance, support AI risk managers, and document roles and responsibilities vis-a-vis AI risk.
215
What are some ways that providers of high-risk AI systems can take corrective action when they have reason to believe that a high-risk AI system they have brought to market is no longer in conformity?
Bringing the system into conformity, or withdrawing, disabling, or recalling the system.
216
What are technical security controls?
Technical controls refer to computer code or other electronic systems designed to limit access to authorized users and to maintain the integrity of data from outside attack.
217
What are the 5 V's of data preparation?
(1) Volume; (2) Velocity; (3) Variety; (4) Veracity; and (5) Value.
218
What are the benefits of preprocessing data prior to training an AI/ML model?
It improves data quality, mitigates bias, addresses fairness, and enhances performance and reliability.
219
What are the benefits of understanding an AI model from the perspective of a deployer?
It helps ensure alignment, helps the deployer match a model with a business problem, and helps the deployer understand its fit for purpose.
220
What are the benefits of using an on-premise deployment environment?
On-premise environments allow greater control by the organization; better security in handling sensitive data or in regulated industries (e.g., healthcare).
221
What are the characteristics of a dataset for which deep learning excels at processing compared to other ML learning techniques?
Massive amounts of raw input data.
222
What are the different persons, areas, or things that can be impacted by AI risks?
Individuals, groups, communities, organizations, society, and the environment.
223
What are the five dimensions of the OECD Classification Framework?
(1) People & Planet; (2) Economic Context; (3) Data & Input; (4) AI Model; and (5) Task & Output.
224
What are the five key aspects of ISO/IEC Standard 42001?
(1) Creation and implementation of AI governance policies; (2) AI risk management and mitigation strategies; (3) Data management practices; (4) Transparency and explainability of AI systems; and (5) Stakeholder engagement.
225
What are the five levels of AI maturity under MITRE's AI Maturity Model?
(1) Initial; (2) Engaged; (3) Defined; (4) Managed; and (5) Optimized.
226
What are the five major categories of harm set forth in the article Sociotechnical Harms of Algorithmic Systems?
(1) Representational; (2) Allocative; (3) Quality of Service; (4) Interpersonal; and (5) Social System.
227
What are the five primary reasons that an AI incident may occur?
Brittleness, lack of robustness, lack of quality data, insufficient testing, and model or data drift.
228
What are the five principles of the OECD Principles of Trustworthy AI?
(1) Inclusive growth, sustainable development, and well-being; (2) Human rights and democratic values, including fairness and privacy; (3) Transparency and explainability; (4) Robustness, security, and safety; and (5) Accountability.
229
What are the four additional requirements imposed on GPAI models that pose systemic risk under the E.U. AI Act?
(1) Perform model evaluation (red teaming); (2) perform risk assessments; (3) log, document, and report serious incidents and corrective measures; and (4) ensure an adequate level of cybersecurity.
230
What are the four categories of risk under the E.U. AI Act?
(1) Unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk.
231
What are the four core functions of the NIST AI RMF?
Govern, map, measure, and manage.
232
What are the four sources of AI risks?
Data risks, model risks, operational risks, and ethical or legal risks.
234
What are the seven trustworthy AI principles included in the NIST AI RMF?
(1) Valid and reliable; (2) Safe; (3) Secure and resilient; (4) Accountable and transparent; (5) Explainable and interpretable; (6) Privacy-enhanced; and (7) Fair with harmful bias managed.
235
What are the six stages of the data life cycle?
(1) Collection; (2) Processing; (3) Use; (4) Disclosure; (5) Retention; and (6) Destruction.
236
What are the six steps of the OECD's version of the AI life cycle?
(1) Planning and design; (2) collecting and processing data; (3) building and using the model; (4) verifying and validating; (5) deployment; and (6) operating and monitoring.
237
What are the steps during the Development phase of the AI Life Cycle?
(1) Build the model; (2) Perform feature engineering; (3) Perform model training; (4) Perform model validation and testing.
238
What are the steps during the Implementation phase of the AI Life Cycle?
(1) Perform a readiness assessment; (2) Deploy the model into production; (3) Continuously monitor and validate the model; (4) Continuously maintain the model.
239
What are the two common paths of AI adoption?
(1) Leveraging AI to perform current tasks in a new, better way; and (2) Using AI to create new products, services, or functionality not previously available.
240
What are the two key components of an AI system under the definition of that term in the E.U. AI Act?
(1) A system that operates at varying levels of autonomy; and (2) infers from the input how to generate output that can influence physical or virtual environments.
241
What are the two strategies that can be used to achieve privacy-enhanced AI?
A process-oriented strategy and a data-oriented strategy.
242
What are three common types of production environments?
Cloud-based environments, on-premise environments, or edge computing environments.
243
What are three dividing lines across all of the AI-related legislation that has been adopted globally?
(1) Sectoral vs. comprehensive; (2) Risk-based vs. rights-based; and (3) Mandatory vs. voluntary.
244
What are three steps in planning or designing an AI model?
(1) Determining the business objectives and requirements; (2) Scoping the project; and (3) Determining the governance structure and responsibilities.
245
What are three types of preprocessing data for training an AI/ML model?
Data wrangling, data cleansing, and data labeling.
246
What are three ways in which to take personal information and transform it into non-personal information?
(1) Encryption; (2) Anonymization; and (3) Pseudonymization (in some cases).
247
What are two benefits of using an edge computing design paradigm?
It creates a better user experience and is more privacy protective.
248
What are two common methods used to expose an AI model?
Embedding the model directly into an application or using an application programming interface (API).
250
What aspect of AI most directly calls for the inclusion of social sciences during design and development?
Its socio-technical nature.
251
What aspect of planning and building in AI model development serves as an important communication tool?
Documenting the process. Including all decisions made and why those decisions were made.
252
What aspect of the AI tech stack is important for quickly identifying and fixing any potential problems?
Observability.
253
What benefit is there in understanding that terminology regarding AI can vary widely?
It can facilitate aligning AI terms to how they are specifically used in your organization.
254
What case stands for the proposition that AI vendors have a duty not to sell products that can be used to knowingly or unknowingly violate fair housing laws?
Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions.
255
What challenge does data persistence pose from a privacy perspective when personal data is used for AI training?
Data cannot be minimized or destroyed once incorporated into a model, and organizations may not be able to respond to data subject requests to delete personal data.
256
What challenge does data spillover pose from a privacy perspective when personal data is captured incidentally?
It raises concerns about consent and appropriate deletion of personal data.
257
What challenge does the collection and creation of personal data by AI systems pose from a privacy perspective?
It may be difficult to provide appropriate notice and obtain informed consent when data is produced or used as part of a black box model.
258
What characteristic of AI makes it difficult to prove that an AI system caused a particular harm to an individual?
The complexity of AI systems makes it difficult to identify what specific aspect (e.g., training data, the specific algorithm, etc.) is responsible for the harm.
259
What characteristic of AI makes it difficult to prove that an AI system acted negligently?
The "black box" nature of many AI systems makes it difficult to establish the applicable standard of care and to show that the harm was foreseeable.
260
What core function under the NIST AI RMF is cross-cutting and infused within the other three core functions?
Govern.
261
What data subject right under the GDPR presents the greatest challenge for AI developers and deployers?
Article 22's restrictions on solely automated decision-making, including profiling, that produces legal or similarly significant effects.
262
What department (or departments) in an organization is likely to be responsible for determining whether third party vendors are complying with internal AI governance policies and practices of the organization?
The legal or procurement department.
263
What department in an organization is likely to be an end user of AI systems that are used as part of the hiring or promotion process?
The human resources department.
264
What department in an organization is likely to be responsible for risk assurance practices related to the use of AI?
The internal audit department.
265
What did the court hold in Rogers v. Christie?
That an AI system does not qualify as a "product" as that term is defined under New Jersey's product liability statute.
266
What did the court hold in Thaler v. Vidal?
That AI models cannot be listed as inventors on patents.
267
What did the court hold in Thaler v. Perlmutter?
Where output from a generative AI system is created without human input, it is not an original work of art that can be protected under copyright law.
268
What did the E.U. AI Liability Directive attempt to accomplish before the proposal for it was withdrawn by the E.U. Commission?
Reinforcing the E.U. AI Act by allowing any violation to be enforced via civil liability.
269
What do the enforcement actions brought by the FTC against DoNotPay, Ascent Capventures, and TheFBAMachine establish?
That companies may not make false claims about how using AI tools can result in a specific benefit or result.
270
What does ARIA stand for?
Assessing Risks and Impacts of AI.
271
What does combining multiple layers of artificial neurons allow for in a neural network?
The discovery of complex non-linear relationships and patterns in data.
272
What does it mean to "anonymize" data?
To strip data of its identifying information.
273
What does it mean to "deploy" an AI model?
Deployment is the functional process of moving the system from a development or testing environment to a production environment.
274
What does it mean to "expose" an AI model?
Make it accessible in the real world so that users can interact with it.
275
What does it mean to localize an AI system?
To make it available only internally to an organization.
276
What does it mean to "rationalize" legal requirements?
Taking the legal requirements of multiple jurisdictions and adopting one overarching approach that satisfies all of them.
277
What does TEVV stand for?
Test, Evaluation, Verification, and Validation.
278
What does the acronym COTS stand for?
Commercial-off-the-shelf.
279
What does the post-processing of data refer to in the context of artificial intelligence and machine learning?
The steps performed after a machine learning model has been run to adjust the output of that model.
280
What does the preprocessing of data refer to in the context of artificial intelligence and machine learning?
The steps taken to prepare data before training a machine learning model, which can include cleaning the data, handling missing values, normalization, feature extraction, and encoding categorical variables.
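The preprocessing steps above can be sketched in miniature (an illustrative example only; the field names and dataset are hypothetical): fill missing values, min-max normalize a numeric field, and one-hot encode a categorical one.

```python
def preprocess(rows):
    """Minimal preprocessing sketch: mean-impute missing values,
    min-max normalize a numeric field, one-hot encode a category."""
    # Fill missing ages with the mean of the observed ages.
    ages = [r["age"] for r in rows if r["age"] is not None]
    mean_age = sum(ages) / len(ages)
    for r in rows:
        if r["age"] is None:
            r["age"] = mean_age
    # Min-max normalization to the range [0, 1].
    lo, hi = min(r["age"] for r in rows), max(r["age"] for r in rows)
    for r in rows:
        r["age_norm"] = (r["age"] - lo) / (hi - lo)
    # One-hot encode the categorical "color" variable.
    categories = sorted({r["color"] for r in rows})
    for r in rows:
        for c in categories:
            r[f"color_{c}"] = int(r["color"] == c)
    return rows

data = [{"age": 20, "color": "red"},
        {"age": None, "color": "blue"},
        {"age": 40, "color": "red"}]
processed = preprocess(data)
```

In practice libraries handle these steps, but the logic is the same: repair, rescale, and recode the raw data before training.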
281
What does the term "data quality" refer to?
The measure of how well a dataset meets the specific requirements and expectations for its intended use; high-quality data is accurate, complete, valid, consistent, timely and fit for purpose.
282
What does the term "democratization of AI" refer to?
The availability of low-code and no-code software that allows those without an advanced degree to access and develop AI systems.
283
What fact about AI systems or models requires special consideration in drafting an indemnification provisions in an AI-related license agreement?
AI models evolve over time and are subject to retraining.
284
What factors should be considered with respect to each component of an AI system when drafting an AI-related license agreement?
(1) Who owns each component; (2) Who provides each component; (3) Who will use each component; and (4) How each component will be used.
285
What FIPs most directly call upon organizations to avoid secondary use of personal data?
Purpose limitation and consent.
286
What four questions does threat modeling seek to address?
(1) What are we working on?; (2) What can go wrong?; (3) What are we going to do about it?; and (4) Did we do a good enough job?
287
What global AI legislation requires that generative AI systems be subject to security reviews by the government?
China's Cyberspace Administration Guidelines.
288
What is a centralized model of governance?
A model in which one person or one dedicated team is responsible for the AI governance functions within an organization.
289
What is a common practice used to evaluate a third-party vendor's reputation?
Calling references provided by the vendor.
290
What is a common use case for retrieval augmented generation?
When an AI model is trained on static, historical data that has a specific cut-off date.
291
What is a confusion matrix?
A way to measure the accuracy of an AI model by comparing predicted output with actual output.
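A minimal sketch of the idea (illustrative only; the spam/ham labels are hypothetical): tally each (actual, predicted) pair into a grid, where the diagonal holds the correct predictions.

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Tally (actual, predicted) pairs into a labels-by-labels grid:
    rows are actual classes, columns are predicted classes."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

actual    = ["spam", "spam", "ham", "ham", "ham"]
predicted = ["spam", "ham",  "ham", "ham", "spam"]
matrix = confusion_matrix(actual, predicted, labels=["spam", "ham"])
# Diagonal cells are correct predictions; accuracy = correct / total.
accuracy = sum(matrix[i][i] for i in range(len(matrix))) / len(actual)
```

The off-diagonal cells break the errors down by type (e.g., spam predicted as ham), which a single accuracy number hides.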
292
What is a control?
Any measure that modifies risk.
293
What is a data protection impact assessment?
An analysis similar to a PIA that is subject to specific rules set forth in the GDPR.
294
What is a "data sheet"?
A document that describes the motivation, composition, collection process, implementation of privacy-enhancing technologies, and recommended uses for a dataset.
295
What is a "data subject"?
An individual whose personal information is being processed.
296
What is a disparate impact?
An adverse impact that is unintentional and results from a seemingly neutral practice.
297
What is a distributed model of governance?
A model in which functional levels of an organization make AI-related policy decisions.
298
What is a distributor under the E.U. AI Act?
A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
299
What is a fairness threshold?
A standard used as part of calculating an adverse impact ratio, where if the ratio falls below that level, retraining or system modification may be needed.
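The calculation can be sketched as follows (an illustrative example; the hiring numbers and the 0.8 "four-fifths rule" threshold are assumptions, though that threshold is commonly used in U.S. employment contexts):

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Selection rate of the less-favored group divided by the
    selection rate of the more-favored group."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring outcomes: group A selected 30/100, group B 60/100.
ratio = adverse_impact_ratio(30, 100, 60, 100)
FAIRNESS_THRESHOLD = 0.8  # the "four-fifths rule"
needs_review = ratio < FAIRNESS_THRESHOLD  # below threshold suggests retraining or modification
```

Here the ratio is 0.5, well below the 0.8 threshold, so the system would be flagged for retraining or modification.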
300
What is a feature in the context of AI?
Input variables used to generate model predictions.
301
What is a first step in imposing governance from the perspective of a deployer?
Understanding the AI system or model being deployed.
303
What are the benefits of using a cloud-based deployment environment?
Cloud computing allows for easy scaling up and down; reduces upfront costs; avoids the need for specific in-house IT expertise.
304
What is a first-party audit?
An internal audit conducted by the organization itself; usually for self-certification purposes.
305
What is a foundation model?
A large-scale ML model that has been trained on extensive and diverse datasets to enable broad capabilities, such as language, vision, robotics, reasoning, search, or human interaction.
306
What is a General-Purpose AI Model?
Any AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications (i.e., a foundation model).
307
What is a greedy algorithm?
A type of algorithm that makes the optimal choice to achieve an immediate objective at a particular step or decision point, based on the available information and without regard for the longer-term optimal solution.
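A classic illustration (not from the deck; the coin-change setup is a hypothetical example) is greedy coin change, where the locally optimal choice at each step can miss the globally optimal answer:

```python
def greedy_change(amount, coins):
    """At each step, take the largest coin that still fits: a locally
    optimal choice made without regard for the global optimum."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            used.append(coin)
            amount -= coin
    return used

# With coins {1, 3, 4}, greedy makes 6 as 4+1+1 (three coins),
# missing the globally optimal 3+3 (two coins).
result = greedy_change(6, [1, 3, 4])
```

For some coin systems the greedy answer happens to be optimal; for others, as here, it is merely a fast approximation.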
308
What is a harms taxonomy?
A map of harms that is used to classify risk and determine appropriate controls.
309
What is a heuristic?
An approach to problem solving that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation.
310
What is a hybrid model of governance?
A model that combines aspects of both the centralized and distributed models of governance.
311
What is a key aspect of accountability with respect to an employee training program?
Ensuring that adequate records of attendance or training completion are maintained.
312
What is a machine learning model?
The learned representation of underlying patterns and relationships in data that can then be used to make predictions or perform tasks on new, unseen data.
313
What is a model card?
A brief document that discloses information about an AI model, like explanations about intended use, performance metrics and benchmarked evaluation in various conditions, such as across different cultures, demographics or race.
314
What is a name used to describe each processing unit within a layer of a neural network?
A node.
315
What is a neural network?
A type of ML model that mimics the way neurons in the human brain interact with multiple processing layers, including at least one hidden layer.
316
What is a patent?
An exclusive right granted to the inventor to make, use, or sell an invention for a set period of time.
317
What is a privacy impact assessment?
An analysis of discrete products, services, processes, or events to determine the privacy risks associated with processing data in a particular way.
318
What is a privacy notice?
A written document that is intended for an external audience that sets forth how a controller collects, stores, and uses the personal data that it gathers.
319
What is a production environment?
An environment hosted on a server that is accessible to users.
320
What is a provider of AI systems under the EU AI Act?
Natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and placed on the market.
321
What is a random forest?
The combination of multiple decision trees, where the results of each decision tree are combined to reach a final decision.
322
What is a readiness assessment?
An analysis of whether the organization is ready to deploy an AI model.
323
What is a reliability assessment?
A method of assessing the reliability of an AI system from a statistical perspective that looks at whether the model behaves as expected and performs its intended function consistently and accurately, even with new data that it has not been trained on.
324
What is a rights-based approach to AI regulation?
An approach that places emphasis on transparency, consent, risk assessments and other practices designed to protect individual interests.
325
What is a risk-based approach to AI regulation?
An approach that applies regulatory scrutiny based upon the potential risks of AI systems.
326
What is a system card?
A brief document that discloses information about how various AI models work together within a network of AI systems, promoting greater explainability of the overall system.
327
What is a tabletop exercise?
A structured readiness-testing activity that simulates an emergency situation (such as a data breach) in an informal, stress-free setting.
328
What is a third-party audit?
An audit conducted by an independent third party.
329
What is a use case analysis?
An analysis of whether AI might be used to solve an identified business problem.
330
What is acceleration risk?
The fact that the volume and speed of data processing in AI systems, along with the complexity of the system, means that not all risks can be anticipated at the outset.
331
What is accuracy in the context of an AI model?
The degree to which an AI system correctly performs its intended task; the measure of the system's performance and effectiveness in producing correct outputs based on its input data.
332
What is active learning?
A subfield of AI/ML where an algorithm can select some of the data it learns that will help it learn best.
333
What is adaptive learning?
A machine learning method that adjusts and tailors educational content to the specific needs, abilities and learning pace of individual students.
334
What is AI maturity?
The degree to which organizations have mastered AI-related capabilities in the right combination to achieve high performance.
335
What is AI risk management?
The process of systematically identifying, mitigating and addressing the potential risks associated with AI technologies.
336
What is AI safety?
The designing, developing, and deploying of AI systems in a way that minimizes harms from things such as misinformation, hallucinations, and other unintended behaviors.
337
What is "algorithmic disgorgement"?
When a defendant is required to delete an algorithm developed using improperly obtained consumer data.
338
What is aligned AI?
AI that pursues objectives that match the organization's intended goals, preferences, and ethical principles.
339
What is an acceptable use policy?
Part of terms and conditions of use that sets forth both the permitted and unpermitted uses for the system.
340
What is an AI audit?
A review and assessment of an AI system to ensure it operates as intended and complies with relevant laws, regulations and standards.
341
What is an AI governance strategy?
An organization's overall approach toward communicating and supporting trustworthy AI and the broader company vision about how it will use AI.
342
What is an AI impact assessment?
An evaluation process designed to identify, understand, document, and mitigate the potential ethical, legal, economic and societal implications of an AI system in a specific use case.
343
What is an AI "kill switch"?
A system that allows a deployer to deactivate or localize an AI model or system.
344
What is AI model evolution?
The iterations of the AI model that evolve during training and subsequent use.
345
What is an AI user?
Those that use an AI system for specific purposes (i.e., end users).
346
What is an audit trail?
A chain of electronic activity or sequence of paperwork used to monitor, track, record, or validate an activity.
347
What is an authorized representative under the E.U. AI Act?
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider to perform and carry out on its behalf the obligations and procedures of the AI Act.
348
What is an expert system?
A form of rules-based AI that draws inferences from a knowledge base provided by human experts to replicate their decision-making abilities within a specific field, like medical diagnoses.
349
What is an incident register?
A document or record of every incident that has taken place within an organization, on a select project, or at a specific location.
350
What is an input attack in AI?
Where the input data is manipulated in an attempt to alter the output of the model.
351
What is another name for a discriminative model?
A classic model.
352
What is another name for a measure of correctness, effectiveness, and task completion?
Accuracy.
353
What is another name for a reliability assessment?
A repeatability assessment.
354
What is another name for AGI?
Strong AI.
355
What is another name for an AI governance program?
An AI governance framework.
356
What is another name for contestability?
Redressability.
357
What is another name for human-on-the-loop?
Human-over-the-loop.
358
What is another name used to describe the responsible AI principle of robustness?
Generalizability.
359
What is Artificial General Intelligence?
AI that is considered to have human-level intelligence and strong generalization capacity to achieve goals and carry out a broad range of tasks in different contexts and environments.
360
What is automated decision-making?
The process of making a decision by technological means without human involvement, either in whole or in part.
361
What is benchmarking?
Standardized tests and assessments that evaluate model performance.
362
What is bootstrap aggregating?
A machine learning method that aggregates multiple versions of a model trained on random subsets of a dataset with the aim to make a model more stable and accurate.
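A minimal Python sketch of the idea, using the mean of each bootstrap resample as a stand-in "model" (the function name and toy data are illustrative, not from any particular library):

```python
import random
import statistics

def bagged_estimate(data, n_models=25, seed=0):
    """Bootstrap aggregating (bagging) sketch: fit many simple 'models'
    (here, just the mean of a bootstrap resample), then aggregate their
    predictions by averaging to get a more stable estimate."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_models):
        # Bootstrap sample: same size as the data, drawn with replacement.
        sample = rng.choices(data, k=len(data))
        estimates.append(statistics.mean(sample))
    # Averaging across resamples reduces variance vs. any single model.
    return statistics.mean(estimates)

result = bagged_estimate([2.0, 4.0, 6.0, 8.0, 10.0])
```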
363
What is bounded rationality?
The notion that humans may not be able to make perfectly rational decisions even if provided complete information and unlimited time.
364
What is brittleness in the context of an AI model?
A characteristic that causes even small changes to an input to result in poor model performance, such as incorrect payloads or hallucinations.
365
What is Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems?
A voluntary framework in which signatories agree to undertake steps to facilitate the responsible AI practices proposed as part of the AIDA.
366
What is challenger modeling?
An assessment of model drift or other changes that compares a new “challenger” model with the existing “champion" model.
367
What is cognitive bias?
Inaccurate individual judgment or distorted thinking.
368
What is commonly the first step in formulating an AI governance strategy?
Identifying key stakeholders.
369
What is compute?
The processing resources that are available to a computer system.
370
What is contestability in the context of AI systems?
The principle of ensuring that AI systems and their decision-making processes can be questioned or challenged.
371
What is copyright?
Legal protection for original works of authorship, including writing, music, graphics, computer code, and more.
372
What is data cleansing?
The process of removing erroneous, duplicate, or irrelevant data from a dataset.
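A toy cleansing pass over hypothetical rows shows all three removals (duplicates, missing values, erroneous values); the field names and validity range are made up for illustration:

```python
rows = [
    {"id": 1, "age": 34},
    {"id": 1, "age": 34},    # duplicate
    {"id": 2, "age": None},  # missing value
    {"id": 3, "age": -5},    # erroneous value
    {"id": 4, "age": 29},
]

seen, cleaned = set(), []
for row in rows:
    key = (row["id"], row["age"])
    if key in seen:
        continue                       # drop duplicate rows
    if row["age"] is None:
        continue                       # drop rows with missing values
    if not 0 <= row["age"] <= 120:
        continue                       # drop out-of-range (erroneous) rows
    seen.add(key)
    cleaned.append(row)
# cleaned now holds only the valid, unique rows
```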
373
What is data drift?
When the statistical properties or attributes of input data change over time; will impact the model trained on that data.
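One simple (illustrative, not standardized) way to flag drift is to compare a recent window of inputs against a training-time baseline:

```python
import statistics

def drift_score(baseline, current):
    """Illustrative drift check: how far the current window's mean has
    moved from the baseline mean, measured in baseline standard
    deviations. A large score suggests the inputs have shifted."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

baseline = [10, 11, 9, 10, 12, 10, 11]   # inputs seen at training time
current = [15, 16, 14, 17, 15, 16, 15]   # recent production inputs
score = drift_score(baseline, current)
```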
374
What is data governance?
The planning, oversight, and control over management of data and the use of data and data-related sources.
375
What is data labeling?
The process of placing necessary tags or annotations on data to enrich the dataset.
376
What is data life cycle management?
A policy-based approach to managing the flow of information through a life cycle from creation to final disposition.
377
What is data lineage?
A map of data as it flows through a system, from end-to-end, including any transformations or movement of that data.
378
What is data residency?
Where the servers storing data are physically located.
379
What is differential privacy?
A mathematical guarantee about people’s indistinguishability in a given dataset.
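The Laplace mechanism is one standard way to achieve this guarantee for numeric queries; this sketch releases a noisy count (the dataset and predicate are hypothetical):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon, rng):
    """Release a count via the Laplace mechanism. A counting query has
    sensitivity 1 (adding or removing one person changes the count by at
    most 1), so noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
ages = [23, 35, 41, 29, 52, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```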
380
What is dimensionality reduction?
An unsupervised algorithm that seeks to transform data from high-dimensional spaces to low-dimensional spaces without compromising meaningful properties in the original data.
381
What is disinformation?
Audio or visual content that is intentionally manipulated or created to cause harm.
382
What is edge computing?
A computing framework that brings applications and data processing closer to where the data is being generated.
383
What is encryption?
The process of obscuring information, often through the use of a cryptographic scheme in order to make the information unreadable without special knowledge.
384
What is fair use?
A legal doctrine that permits the copying and use of copyrighted works without consent for limited or "transformative" purposes.
385
What is fault-based liability?
A regime in which liability is imposed based on the relative or comparative fault of those actors involved; must show that an action or inaction caused harm (e.g., negligence).
386
What is feature engineering?
The process of transforming raw data into useful representations (i.e., features) for the model.
387
What is fine-tuning?
The process of taking a pretrained deep learning model and training it further for a specialized task through supervised learning.
388
What is first party data collection?
When a data subject provides personal data to the collector directly.
389
What is fuzzy logic?
A method of AI/ML reasoning or logic that can account for varying degrees of uncertainty or vagueness in a factual scenario.
390
What is ground truth?
Known, verified facts that serve as reference data to objectively measure AI model performance.
391
What is high-performance computing?
When computing resources are combined (into what is called a cluster) in order to solve advanced computational problems.
392
What is human-in-the-loop?
When a human is involved in verifying a decision before it takes effect.
393
What is human-on-the-loop?
Where a human has the option to modify the decision making process.
394
What is human-out-of-the-loop?
Where there is no human involvement at all.
395
What is intellectual property?
The various creations of the mind and intangible assets, such as inventions, artistic works, designs, and symbols used in commerce, the use and ownership of which are protected by law.
396
What is interpretability in the context of AI systems?
The ability to explain or present a model's reasoning in human-understandable terms.
397
Why is it important to prioritize risks according to severity?
It allows for better allocation of resources for risk mitigation.
398
What is machine perception?
The capability of a computer or an AI system to interpret data in a manner that is similar to the way humans use their senses to relate to the environment.
399
What is misinformation?
False audio or visual content that is unintentionally misleading.
400
What is model drift?
When the accuracy and outputs of a model changes over time as it is exposed to a continuous stream of new training data.
401
What is networking in the context of computing?
The interconnection between different computing resources over which data is shared.
402
What is one benefit of deploying a proprietary AI model?
It can be used to solve more unique business needs than generic implementations.
403
What is one drawback of deploying a proprietary AI model?
It can result in increased obligations on the deployer, including potentially greater liability.
404
What is one example of a comprehensive approach toward AI regulation?
The E.U. AI Act.
405
What is one potential benefit that deployers of AI systems have in performing an AI impact assessment?
They know the real-world context in which the AI model will be put to use.
406
What is one potential challenge that deployers of AI systems have in performing an AI impact assessment?
They may not have full insight into the AI model risk.
407
What is one potential consequence on the resulting AI/ML model where the training data has a high level of variance?
The potential for overfitting.
408
What is one transparency-related drawback to deploying a proprietary AI model?
Public disclosures may need to be limited to protect business secrets or the privacy interests related to proprietary datasets.
409
What is opt-in consent?
An express consent that requires some affirmative act by the consumer before consent will be deemed adequate.
410
What is overfitting?
When a model becomes too specific to the training data and is unable to generalize to unseen data, which means it can fail to make accurate predictions on new data sets.
411
What is preprocessing of data?
The steps taken to prepare data for a machine learning model, which can include cleaning the data, handling missing values, normalization, feature extraction and encoding categorical variables.
412
What is Privacy by Design?
An approach to systems engineering that “builds in” privacy to products and services throughout their entire life cycle or, put differently, data protection through technology design.
413
What is privacy-enhanced AI?
When organizations adopt practices to ensure that the collection, storage, and use of personal data by AI systems is done in a manner that adequately protects individuals' right to privacy.
414
What is purpose limitation?
The principle that data controllers should identify the purpose of processing personal data prior to any processing and limit processing only to those specified purposes.
415
What is red teaming?
A form of structured testing that is designed to find flaws and vulnerabilities in an AI system (adversarial testing).
416
What is requirements engineering?
The process of identifying project specifications and constraints.
417
What is residual risk?
The risk remaining in a system after controls have been implemented.
418
What is responsible AI?
Principle-based AI development and AI governance, which incorporates principles such as security, safety, transparency, explainability, accountability, privacy, non-bias, and others.
419
What is retrieval augmented generation?
The process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response.
420
What is risk tolerance?
An organization’s or AI actor’s readiness to bear the risk in order to achieve its objectives.
421
What is robotic processing automation?
Software that uses software "robots" (bots) to automate repetitive tasks that humans typically perform, such as data manipulation.
422
What is robotics?
A multidisciplinary field that encompasses the design, construction, operation and programming of robots.
423
What is semi-structured data?
Data that is partially structured; it has organizational properties but without the rigid structure of a predefined schema (e.g., JSON).
424
What is serverless computing?
When the execution of software is not limited to a particular server or piece of hardware; does not mean that there is no server involved.
425
What is social bias?
Systemic prejudice, favoritism and/or discrimination in favor of or against an individual or group.
426
What is social engineering?
When a malicious actor attempts to manipulate a person into creating a security vulnerability or providing confidential information.
427
What is SR 11-7?
A guidance document from the US Federal Reserve that addresses risk management related to advanced statistical models.
428
What is stakeholder mapping?
A risk mitigation strategy that maps stakeholders and risks from their vantage point according to their interest in the project, their relative authority in the organization, and their domain or function; can avoid conflicting priorities.
429
What is stacking in the context of AI development?
The process of training multiple base models and then using a "meta learner" that is trained on the output of the multiple base models.
430
What is static data?
Data that will not change, such as historical data that is fixed.
431
What is strict liability?
A regime in which liability is imposed whenever a particular action happens, regardless of the state of mind or intent of the party committing the act; need only prove that defendant acted in a specific way and that action caused harm.
432
What is structured data?
Data that is in a fixed field, such as a spreadsheet or relational database; generally structured according to a predefined schema.
433
What is synthetic data?
Data generated by a system or model that can mimic and resemble the structure and statistical properties of real data.
434
What is system architecture in AI?
The high-level design that identifies the structure, components, and organization of the AI system or model.
435
What is temporal bias?
When a model is trained to work at a specific point in time but later does not work as well due to changed circumstances or other events.
436
What is the European Artificial Intelligence Board?
An entity established by the E.U. AI Act that seeks to ensure a consistent application of the E.U. AI Act across Europe.
437
What is the AI Life Cycle?
A description of the evolution of an AI system from inception through retirement.
438
What is the ARIA Program?
A voluntary risk assessment tool that is sector and task agnostic.
439
What is the authorized representative required to do under the E.U. AI Act?
Verify the declaration of conformity and act as a point of contact for competent authorities.
440
What is the basis for an inference in an ML model?
The statistical representation of data that was learned during training.
441
What is the difference between an AI governance strategy and an AI governance program?
An AI governance strategy is the organization's overall approach toward communicating and supporting trustworthy AI and the broader company vision; an AI governance program is the implementation efforts that lead to trustworthy AI.
443
What is the final step in an AI incident response plan?
A "lessons learned" phase that can be used to improve the system and response plan.
444
What is the first line of defense in the 3LOD model?
Management or process owners that have primary responsibility for managing risk in day-to-day operations (i.e., “risk owners”).
445
What is the first step in designing an AI/ML model?
Understanding the business context and use case.
446
What is the goal of awareness and how is it different from the goals of training?
To correct bad practices and reinforce good practices through communication; both training and awareness programs share this same goal.
447
What is the goal of the E.U. AI Act?
To ensure a healthy internal market, while promoting human-centric and trustworthy AI that protects fundamental rights and democratic values.
448
What is the goal of the OECD Principles of Trustworthy AI?
To ensure a stable policy environment that promotes human-centric and trustworthy AI.
449
What is the human feedback loop?
The process used in reinforcement learning from human feedback (RLHF), in which human evaluations of model outputs are used to further train the AI model.
450
What is the intended purpose of the Contextual Robustness Index under the ARIA Program?
It provides a measure of whether the AI application can maintain robustness and trustworthy functionality across deployment contexts.
451
What is the key difference between "anonymizing" data and "pseudonymizing" data?
Pseudonymized data can be re-identified with an individual by reference to additional information, while truly anonymized data cannot.
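The difference can be sketched in a few lines of Python; the name, token format, and lookup table are purely illustrative:

```python
import secrets

# Pseudonymization keeps a separate lookup table, so the data CAN be
# re-identified with that additional information; true anonymization
# would destroy any such link. (Toy example; the name is made up.)
lookup = {}

def pseudonymize(name):
    # The same input always maps back to the same random token.
    return lookup.setdefault(name, "user-" + secrets.token_hex(4))

record = {"name": pseudonymize("Ada Lovelace"), "score": 91}

# Re-identification via the additional information (the lookup table):
reidentified = {token: name for name, token in lookup.items()}[record["name"]]
```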
452
What is the maximum fine that can be imposed for violations other than those under Article 5 or for providing false information to authorities?
Up to €15 million or 3% of worldwide turnover.
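Assuming the Act's "whichever is higher" rule for undertakings applies, the cap works out as:

```python
def max_admin_fine(worldwide_turnover_eur):
    # For violations other than the Article 5 prohibitions: up to EUR 15M
    # or 3% of total worldwide annual turnover -- whichever is higher
    # (assumes the "whichever is higher" rule for undertakings).
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# A company with EUR 2 billion turnover: 3% (EUR 60M) exceeds EUR 15M.
cap = max_admin_fine(2_000_000_000)
```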
453
What is the most important holding from the SCHUFA case from the perspective of AI developers and deployers?
The GDPR's prohibition on automated decision-making and profiling applies to both developers and deployers of AI systems.
454
What is the name for an AI system designed to simulate human-like conversations and interactions by being able to understand and respond to text and speech?
A chatbot.
455
What is the name that the E.U. AI Act uses to refer to developers of AI systems?
Providers.
456
What is the NIST AI RMF Playbook?
A companion document to the AI RMF that provides suggested actions to achieve desired outcomes.
457
What is the NIST AI RMF?
A voluntary framework that is sector- and use-case-agnostic for managing risk.
458
What is the primary basis for establishing whether a GPAI presents "systemic risk" under the E.U. AI Act?
Compute power involved, measured as a function of floating-point operations (FLOPs).
459
What is the primary benefit of building an AI governance program using existing structures and processes of the organization?
Using existing structures helps ensure that the AI governance does not create significant new roadblocks for the organization or disrupt other business processes.
460
What is the primary difference between misinformation and disinformation?
Misinformation results in unintentional misleading, while disinformation is intended to manipulate or mislead.
461
What is the primary function by which organizations oversee their third-party vendors?
Vendor contracts.
462
What is the primary goal of AI model training?
To ensure that the resulting model is accurate.
463
What is the second line of defense in the 3LOD model?
Those responsible for identifying and addressing emerging risks on a day-to-day basis (e.g., compliance officers).
464
What is the third line of defense in the 3LOD model?
Support staff that provides objective and independent assurance that risk management efforts are effective (e.g., audit).
465
What is the word-of-machine effect?
A decisional heuristic in which humans tend to trust and prefer recommendations made by AI compared to those made by humans.
466
What is threat modeling?
A form of risk assessment that models aspects of a potential attack against specific pieces of data, applications, systems, or environments.
467
What is transfer learning?
A type of learning where an algorithm learns to perform one task, such as recognizing cats, and then uses that learned knowledge as a basis for when learning a different but related task, such as recognizing dogs.
468
What is transformed data?
A dataset that takes real data as a starting point and then manipulates that data in various ways to reduce identification and other privacy or security risks.
469
What is transparency in the context of AI systems?
The extent to which information regarding an AI system is made available to stakeholders, including disclosing whether AI is used and explaining how the model works; implies openness, comprehensibility and accountability in the way AI algorithms function and make decisions
470
What is use case evaluation?
A risk mitigation strategy that assesses the potential AI use case for risk associated with ease of implementation, strategic alignment and required expertise.
471
What is validation data?
A subset of the dataset used to assess the performance of the machine learning model during the training phase.
472
What is variance?
A statistical measure that reflects how far a set of numbers are spread out from their average value in a data set.
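The definition maps directly onto a short computation (toy data; the manual result matches the standard-library function):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)  # 5.0

# Population variance: the average squared deviation from the mean.
variance = sum((x - mean) ** 2 for x in data) / len(data)

# Cross-check against the standard library.
assert variance == statistics.pvariance(data)
```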
473
What is verification testing?
Testing that ensures that a resultant system performs according to its requirements.
474
What is watermarking in the context of AI?
The process of embedding subtle or visually imperceptible patterns in AI-generated content or metadata that can only be detected by computers.
475
What ISO/IEC standard provides guidance on implementing an AI management system?
ISO/IEC Standard 42001.
476
What jurisdiction imposed expansive restrictions on the use of automated decision-making technology to screen applicants in 2022?
New York City.
477
What law, regulation, or guidance document suggests that banks should avoid over-reliance on statistical models in making credit decisions?
SR 11-7.
478
What legislation regulates the use of "devices" in the MedTech industry?
The Federal Food, Drug, and Cosmetic Act (FDCA).
479
What level of accuracy, robustness, and cybersecurity is prescribed for high-risk AI systems under the E.U. AI Act?
An "appropriate" level.
480
What more specific harms to groups can be caused by AI risks?
Discrimination towards subgroups.
481
What more specific harms to individuals can be caused by the use of AI Systems?
Harms to civil liberties, economic opportunities, physical safety, and psychological safety.
482
What more specific harms to organizations can be caused by AI risks?
Harms to reputation, culture, legal and regulatory compliance, economic harm, acceleration risk, and misalignment risk.
483
What more specific harms to society can be caused by AI risks?
Harms to the democratic process and participation, trust in institutions, access to public services, public safety, and society-wide employment patterns.
484
What must a deployer do from a functional perspective with an AI system before it is deployed?
Package it into a format that can be deployed to a server.
485
What must an authorized representative do when it has reason to believe that a provider is violating the E.U. AI Act?
Terminate the mandate and notify appropriate supervisory authorities.
486
What must an importer do if it determines that a high-risk AI system is not in conformity?
Notify the provider, the authorized representative, and market surveillance authorities.
487
What must online platforms disclose about recommender systems under the Digital Services Act?
The main parameters used, along with the relative importance and what options users have to modify or influence parameters.
488
What other set of FIPs served as the basis for the CSA Model Code for the Protection of Personal Information?
The OECD Guidelines.
489
What practice can help organizations facilitate monitoring AI systems?
Creating an inventory of deployed AI models and systems.
490
What preliminary question should a deployer ask when deciding whether to deploy an AI model?
Whether an AI system should be used at all (e.g., if other alternatives would be sufficient).
491
What privacy-enhancing technology serves as the basis for cryptocurrencies?
Secure multi-party computation.
492
What role can AI researchers play in an organization's governance of AI?
Identify risks and the core principles the organization should aim for in the use of AI.
493
What role do anthropologists play in developing AI?
Anthropologists can identify how different cultures are likely to perceive outputs or interact with AI systems.
494
What role do sociologists play in developing AI?
Sociologists can identify potential downstream effects of AI technology.
495
What separates audits from other assessments?
Audits are evidence based and more formal than other types of assessments.
496
What serves as the basis for the communication plan created by developers of AI models?
The documentation created as part of planning, designing, training, and testing.
497
What should a company consider in choosing a third-party vendor?
(1) Reputation; (2) Financial stability; and (3) Security controls.
498
What should a data strategy cover?
How data is collected and used based on quality, quantity, integrity, and fit-for-purpose.
499
What should a deployer do if it determines that the benefits of using an AI model do not outweigh the risks and costs of using the model?
Not use the AI system.
500
What should an AI use case analysis consider?
Gaps in organizational goals, whether the use of AI comports with the organization’s mission, applicable data considerations, and whether non-AI alternatives are sufficient.
501
What should periodic assessments measure as part of post-market monitoring?
The model's performance, reliability, fairness, and safety.
503
What technique do transformer models rely upon to focus on the most important aspect of input data?
Attention.
504
What technique helps leverage economies of scale by allowing foundation models to be reused for specific use cases without needing to undergo retraining?
Fine-tuning.
505
What term is generally given to state-level laws that are closely similar to Section 5 of the FTC Act?
Unfair and Deceptive Acts or Practices Statutes or UDAP Statutes.
506
What "test bed" under the ARIA Program examines positive and negative impacts of the AI model that arise under regular use in real-world settings?
Field testing.
507
What "test bed" under the ARIA Program performs stress tests on the AI model to provide guiderails for sufficiency and robustness?
Red teaming.
508
What test is used to determine whether a machine has the ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human?
The Turing Test.
509
What three elements must the FTC prove in order to establish an "unfair" trade practice?
A trade practice that results in (1) a substantial injury (2) with a lack of offsetting benefits and (3) the injury is one that consumers themselves could not reasonably have avoided.
510
What three technology advances have contributed to recent boom in AI?
(1) Increased availability of data because of the internet and related technologies; (2) increase in computing power; and (3) advances in machine learning algorithms and techniques.
511
What two concepts are inherent in the data minimization principle?
Necessity and proportionality.
512
What type of documentation should a deployer review before deploying an AI system?
Technical documentation, model cards, datasheets, instructions for use, and other similar documents created by the developer.
513
What type of factors can impact an organization's AI strategy?
Organization type, size, maturity, risk tolerance, and AI use cases.
514
What type of law may prohibit the use of certain data outside the geographic boundaries of a country?
Data localization laws.
515
What type of learning technique is used where the goal is to uncover patterns that may exist in unlabeled datasets?
Unsupervised learning.
516
What type of machine learning technique is also often considered to be a type of privacy-enhancing technology?
Federated learning.
517
What type of mindset should organizations adopt when monitoring AI models and systems for security threats?
An "adversarial" mindset.
518
What type of ML training technique relies upon trial and error?
Reinforcement learning.
519
What type of software is explicitly excluded from the reach of the E.U.'s Reformed Products Liability Directive?
Open-source software.
520
What type of terms in an AI vendor agreement must be reviewed prior to deployment of an AI model?
Those related to data, security and safety, model bias and fairness, product use permissions, technical and performance specifications, continuous monitoring obligations, and other general contract terms.
521
What types of data governance practices must be imposed on high-risk AI systems?
Imposing proportionate data governance practices, maintaining data quality, accounting for the unique characteristics of the use case, and processing special categories of data only when strictly necessary for bias detection.
522
What types of disclosures should be made by a deployer of an AI system?
That AI is in use, the rights of individuals with respect to the system, and how to use the system.
524
What U.S. law expressly prohibits discrimination in the provision of healthcare on the basis of race?
Section 1557 of the Affordable Care Act.
525
What U.S. law prohibits discrimination in credit transactions on the basis of race?
The Equal Credit Opportunity Act.
526
What U.S. law requires those relying on a consumer report as a basis for denying a credit decision to provide notice to the affected consumer?
The Fair Credit Reporting Act.
527
What U.S. state has established an Office of AI Policy and AI Learning Lab Program?
Utah.
528
What U.S. federal regulator has issued guidelines related to the use of robots in the workplace?
The Occupational Safety and Health Administration (OSHA).
529
What US law prohibits discrimination in housing on the basis of race?
The Fair Housing Act.
530
When complying with Section 2 of the E.U. AI Act, what two factors should providers and deployers of high-risk AI systems take into account?
The purpose of the AI system and the state of the art.
531
When is a third-party audit most likely to be conducted?
When credibility is paramount or high-level expertise is needed.
532
When is automated decision-making and profiling permitted under the GDPR?
Where the automated decision-making or profiling is: (1) necessary to enter into a contract; (2) otherwise authorized by law; or (3) done with explicit consent of the data subject.
533
When must a conformity assessment be performed under the E.U. AI Act?
Before a high-risk AI system is brought to market or before any substantial modification is performed on a high-risk AI system already on the market.
534
When must a data protection impact assessment be performed under the GDPR?
Whenever processing is likely to result in a high risk to the rights and freedoms of natural persons.
535
When must disclosures called for under Article 50 of the E.U. AI Act be provided to individuals?
At the time of first interaction or exposure.
536
When should a privacy impact assessment be performed?
Before any new type of processing is performed, or in response to specific events (e.g., a change in law).
537
When should an AI impact assessment be conducted?
At the outset of an AI project, as well as at other stages of the AI Life Cycle when needed.
538
When should organizations anticipate and model downstream threats from an AI model or system?
Early in the development process and throughout the AI Life Cycle.
539
When using labeled data that is (x, y) pairs, what do the (x) and the (y) represent?
(x) is the original unlabeled portion of the data, while (y) is the desired output associated with the original data.
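A tiny supervised example makes the pairing concrete; the 1-nearest-neighbour "model," data, and labels are all made up for illustration:

```python
# Labeled data as (x, y) pairs: x is the raw input, y the desired output.
train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

def predict(x):
    # Toy 1-nearest-neighbour model: return the label (y) of the
    # training example whose input (x) is closest to the query.
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

label = predict(7.5)  # nearest training x is 8.0, whose y is "high"
```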
540
When was the E.U. AI Act adopted and when do most of its provisions become effective?
The E.U. AI Act was adopted in 2024, with most provisions becoming effective 24 months later.
541
When was the term "artificial intelligence" originally used?
In 1956 as part of the Dartmouth Conference.
542
Which AI actor has primary responsibility for ensuring conformity of an AI model before release?
The developer.
543
Which FIP calls on developers of AI systems to provide clear, transparent communications about how personal data will be used as part of an AI system?
Notice.
544
Which ISO/IEC standard provides an AI-specific standard that is certifiable?
ISO/IEC Standard 42001.
545
Who first pioneered the concept of Privacy by Design?
The former information and privacy commissioner of Ontario, Ann Cavoukian.
546
Who has an affirmative obligation to take corrective actions when an AI system comes out of conformity?
Developers of AI Systems.
547
Who has primary responsibility for ensuring that the requirements of Section 2 of the E.U. AI Act related to high-risk systems are complied with?
Providers of high-risk AI systems.
548
Who has responsibility for establishing a policy to respect E.U. copyright law under the E.U. AI Act?
Providers of GPAI models.
549
Who has responsibility to perform a fundamental rights impact assessment under the EU AI Act?
Deployers of high-risk AI systems.
550
Who has responsibility to provide AI literacy training to their employees under the E.U. AI Act?
Both providers and deployers of any AI system.
551
Who has responsibility to provide notice to individuals that they are subject to automated decision-making by high-risk AI systems under the E.U. AI Act?
Deployers of high-risk AI systems.
552
Who has responsibility under the E.U. AI Act to assign human-oversight over high-risk AI systems to natural persons?
Deployers of high-risk AI systems.
553
Who is generally subject to the greatest amount of regulation, a "data controller" or a "data processor"?
A "data controller" is generally subject to the greatest regulation because the "data controller" has the ultimate responsibility for what is done with personal information.
554
Who must register with European authorities under the E.U. AI Act?
Providers of high-risk AI systems.
555
Why is a base data pile compiled before it is divided into datasets used for training, testing, and validation?
To ensure that each step uses statistically similar and representative data.
556
Why is it important for a deployer of an AI model to understand whether a model is customizable or "out of the box" before deploying?
Customization can create risks and may result in a deployer being labeled a developer.
557
Why is it important for organizations to try and avoid the label of "provider" under the EU AI Act?
Because providers have significantly more onerous compliance obligations than other actors under the EU AI Act.
558
Why is it important to identify the role of an organization as a developer, deployer, or user of an AI system?
Each role has a different risk profile.
559
Why is it important to minimize the inclusion of unnecessary features when designing and building an AI model?
Unnecessary features add complexity and waste resources (e.g., compute).
560
Why is it important to prioritize risks according to severity?
It allows for the better allocation of resources for risk mitigation.
561
Why should a deployer conduct its own independent AI impact assessment prior to deployment?
Because each actor in the AI Life Cycle has its own risk profile.
562
Why should a deployer understand who the end user will be, how the end user will use the system, and why the end user will use the system?
It helps a deployer understand the business problem and potential risks of the system.
563
Why should organizations take versioned snapshots of its AI models that have been deployed?
It allows AI models to be rolled back to prior models in the event that a model decays or drifts.
564
XAI is an abbreviation commonly used for what concept?
Explainable AI, or explainability.