Midterm Flashcards

(154 cards)

1
Q

What is Artificial Intelligence?

A
  1. systems that think and act like humans --> model the cognitive functions of humans
  2. systems that think and act rationally
2
Q

Turing Test

A
  • One method of determining the strength of artificial intelligence, in which a human tries to decide if the intelligence at the other end of a text chat is human
  • the computer is considered intelligent when people cannot differentiate it from a human
3
Q

Rationality

A

an abstract “ideal” of intelligence rather than “whatever humans think/do”

4
Q

Syllogisms

A

argument structures that always yield correct conclusions given correct premises

5
Q

Building Agents

A

artifacts that are able to think and act rationally in their environments --> a better design objective

6
Q

What is an agent?

A
  • situated in some environment - can make observations
  • can act
  • has goals or preferences
  • may have prior knowledge or beliefs and some way of updating beliefs based on new experiences
7
Q

Thinking and Acting Rationally

A
  • act appropriately given goals and circumstances
  • flexible to changing environments and goals
  • learn from experience
  • make appropriate choices given perceptual and computational limitations
  • gather information
8
Q

What do we need to represent?

A
  • the environment/world - how the world works –> constraints, causal relationships, action preconditions and effects
9
Q

Reasoning Tasks/Problems

A
  • Constraint Satisfaction: find a state that satisfies some set of constraints - Answering Queries - Planning: choose actions to reach a goal state or maximize utility
10
Q

Representation and Reasoning System

A
  • a language in which the environment and how it works can be described (representation) - computational procedures to compute a solution to a problem in that environment (reasoning)
11
Q

Deterministic vs. Stochastic Domains

A

if the answer is YES to both of these, then the environment is considered deterministic, and stochastic otherwise: 1. Sensing Uncertainty: Can the agent fully observe the current state of the world? 2. Effect Uncertainty: Does the agent know for sure what the direct effects of its actions are?

12
Q

How can states be described?

A
  • described in terms of features - described in terms of objects and relationships
13
Q

Flat vs Hierarchical

A
  • Flat: one level of abstraction - Hierarchical: multiple levels of abstraction
14
Q

Knowledge Given vs. Knowledge Learned

A
  • agent is provided with a model of the world once and for all OR - agent can learn about the world through experience
15
Q

Goal vs. Preferences

A
  • an agent may have a goal it wants to achieve, such as some state of the world it wants to be in, or some proposition it wants to make true - an agent may instead have preferences, with a preference function that describes how “happy” the agent is in each state of the world --> it wants to be as “happy” as it can be
16
Q

What happens if there are other agents whose actions affect us?

A

it can be useful to explicitly model their goals and beliefs rather than considering them as part of the environment

17
Q

What can other agents be?

A

either cooperative or competitive, or both

18
Q

Single-Agent

A

only thing that modifies the environment

19
Q

Describe a simple planning agent.

A
  • deterministic, goal-driven agent - initially in a start state - given a goal - environment changes only when the agent acts - agent perfectly knows what actions can be applied in any given state and the state it will end up in when it takes an action - the sequence of actions is the solution
20
Q

How can we find a sequence of actions that lead to the goal?

A

define underlying search space graph where nodes are states and edges are actions

21
Q

How to Search

A
  • begin at start state - consider the effects of different actions from states that have been encountered in the search so far - stop when a goal state is encountered
22
Q

Graph

A

consists of a set N of nodes (vertices) and a set A of ordered pairs of nodes called arcs (edges)

23
Q

Neighbours

A

node n2 is a neighbour of n1 if there is an arc from n1 to n2

24
Q

Path

A

a sequence of nodes n0, n1, …, nk such that there is an arc from each node to the next

25
Cycle
a path with 2 or more nodes such that the start node is the same as the end node
26
Directed Acyclic Graph (DAG)
a graph with no cycles and where the arcs have associated directions
27
Solution
given a start and goal node, a solution is a path from a start to a goal node
28
Generic Search Algorithm
- given a graph, start node, and goal nodes, incrementally explore paths from the start node - maintain a frontier of paths from the start node that have been explored - as the search proceeds, the frontier expands into the unexplored nodes until a goal node is encountered
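As a rough sketch (my own illustration, not from the deck), the generic algorithm can be written so that the frontier-selection strategy is a parameter; the concrete algorithms on the following cards differ only in that strategy:

```python
def generic_search(neighbours, start, is_goal, pop):
    """Generic graph search: `pop` decides which frontier path to expand next."""
    frontier = [[start]]            # frontier holds paths, each a list of nodes
    while frontier:
        path = pop(frontier)        # strategy point: stack, queue, priority queue...
        node = path[-1]
        if is_goal(node):
            return path
        for n in neighbours.get(node, []):
            frontier.append(path + [n])
    return None                     # frontier exhausted: no solution
```

For example, passing `lambda f: f.pop()` expands the most recently added path (DFS behaviour), while `lambda f: f.pop(0)` expands the oldest (BFS behaviour).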
29
(Forward) Branching Factor
the forward branching factor of a node is the number of arcs going out of the node
30
Backward Branching Factor
the backward branching factor of a node is the number of arcs going into the node
31
Complete Algorithm
a search algorithm is complete if, whenever at least one solution exists, the algorithm is guaranteed to find a solution within a finite amount of time
32
Optimal Algorithm
a search algorithm is optimal if, when it returns a solution, it is the best solution (ie. no better solution)
33
Time Complexity
an expression for the worst-case amount of time it will take to run, expressed in terms of m and b
34
m
maximum path length
35
b
maximum branching factor
36
Space Complexity
an expression for the worst-case amount of memory that algorithm will use
37
Depth First Search (DFS)
"Dive" as deep as possible into the graph first, then branch when necessary - treats the frontier as a stack (first in, last out) --> always selects the path most recently added to the frontier
38
Is DFS complete?
No - if there are cycles in the graph, DFS may get "stuck" in them - only complete for finite acyclic graphs
39
Is DFS optimal?
No - DFS can reach longer solution paths before it finds shorter ones
40
What is the time complexity for DFS?
O(b^m) --> in worst-case, we must check every node in the search space
41
What is the space complexity for DFS?
O(bm) --> for every node in the path currently explored, DFS maintains a path to its unexplored siblings
42
When is DFS appropriate?
- space is restricted --> limited memory available - solutions tend to be a considerable "distance" from the start state
43
When is DFS inappropriate?
- cycles - shallow solutions
44
Breadth First Search (BFS)
traversal level by level - treats the frontier as a queue (first in, first out) --> always selects one of the least recently added paths
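A minimal sketch (my own, using a hypothetical toy graph) showing that DFS and BFS are the same search with different frontier disciplines:

```python
from collections import deque

def search(neighbours, start, is_goal, mode):
    """Path search where `mode` fixes the frontier discipline:
    'dfs' pops the newest path (stack), 'bfs' pops the oldest (queue)."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.pop() if mode == 'dfs' else frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path
        for n in neighbours.get(node, []):
            frontier.append(path + [n])
    return None
```

On the graph `{'s': ['g', 'a'], 'a': ['g']}`, BFS returns the two-node path `['s', 'g']` while DFS dives through `'a'` first and returns the longer `['s', 'a', 'g']`, illustrating why DFS is not optimal.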
45
Is BFS complete?
Yes - does not get stuck in cycles
46
Is BFS optimal?
Yes - guaranteed to find a path that involves the fewest arcs
47
What is the time complexity for BFS?
O(b^m) - may need to examine every node in the tree
48
What is the space complexity for BFS?
O(b^m) - frontier contains all paths of the relevant length
49
When is BFS appropriate?
- space is not a problem - need solution with fewest arcs - some solutions are shallow
50
When is BFS inappropriate?
- space is limited - all solutions tend to be "far" from the start state - very large branching factor
51
BFS or DFS: Need the shortest path to a solution
BFS
52
BFS or DFS: Search graph has cycles or is infinite
BFS
53
BFS or DFS: All solutions are distant from the start state
DFS
54
BFS or DFS: Some solutions are distant from the start state
BFS
55
BFS or DFS: Memory is limited
DFS
56
Iterative Deepening Search (IDS)
- re-compute elements of the frontier rather than saving them - combines DFS and BFS
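A sketch of the idea (my own, not from the deck): run depth-limited DFS repeatedly with a growing depth bound, so frontier paths are recomputed instead of stored:

```python
def ids(neighbours, start, is_goal, max_depth=20):
    """Iterative deepening: repeated depth-limited DFS with an increasing bound."""
    def dls(path, bound):
        node = path[-1]
        if is_goal(node):
            return path
        if bound == 0:
            return None
        for n in neighbours.get(node, []):
            found = dls(path + [n], bound - 1)
            if found:
                return found
        return None
    for bound in range(max_depth + 1):        # shallow bounds first
        found = dls([start], bound)
        if found:
            return found
    return None
```

Because shallow bounds are tried first, the shallowest solution is found before any deeper one, which is where the optimality (for equal arc costs) comes from.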
57
Is IDS complete?
yes - when b is finite - depth bound prevents getting stuck in cycles
58
Is IDS optimal?
yes - assuming arc costs are equal - incrementing depth bound ensures shallow solutions are found first
59
What is the time complexity for IDS?
O(b^m)
60
What is the space complexity for IDS?
O(bm) - since we're doing DFS
61
Cost of a Path
the sum of the costs of its arcs
62
When is a search algorithm optimal?
a search algorithm is optimal if, when it returns a solution, it is the one with minimal cost
63
Lowest-Cost-First Search (LCFS)
- at each stage, LCFS selects a path on the frontier with the lowest cost - priority queue ordered by path cost
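A minimal sketch (my own) of LCFS with the frontier as a priority queue keyed on path cost; `neighbours[n]` is assumed to hold `(next_node, arc_cost)` pairs:

```python
import heapq

def lcfs(neighbours, start, is_goal):
    """Lowest-cost-first search: always expand the cheapest frontier path."""
    frontier = [(0, [start])]                 # (cost so far, path)
    while frontier:
        cost, path = heapq.heappop(frontier)  # lowest-cost path first
        node = path[-1]
        if is_goal(node):
            return cost, path
        for n, c in neighbours.get(node, []):
            heapq.heappush(frontier, (cost + c, path + [n]))
    return None
```

On a graph with a cheap indirect route, LCFS returns the low-cost path even though it has more arcs.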
64
When arc costs are equal LCFS is equivalent to...
BFS
65
Is LCFS complete?
yes - as long as arc costs are strictly positive - a cycle with zero or negative total cost could be followed forever
66
Is LCFS optimal?
yes - if arc costs are non-negative
67
What is the time complexity for LCFS?
O(b^m)
68
What is the space complexity for LCFS?
O(b^m)
69
What information could we use to better select paths from the frontier?
an estimate of the distance/cost from the last node on each path to the goal
70
Heuristic
an estimate of the distance from each node to a goal node that helps to guide the search
71
Search Heuristic h(n)
an estimate of the cost of the lowest-cost path from node n to a goal node
72
When is a search heuristic h(n) admissible?
a search heuristic h(n) is admissible if it never overestimates the minimum cost from n to a goal - i.e. there is never a path from n to a goal with cost less than h(n) - i.e. h(n) is a lower bound on the cost of getting from n to the nearest goal
73
How to Construct an Admissible Heuristic
- consider a relaxed version of the problem (some constraints or restrictions on actions have been dropped) - identify the cost at each state of solving this simpler version of the original problem (should not be greater than the cost of solving the original problem) - another approach = solve a sub-problem
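A standard concrete instance of the relaxation idea (my example, not from the deck): for an agent moving one cell at a time on a grid, dropping the obstacle constraints relaxes the problem, and the cost of solving the relaxed problem is the Manhattan distance, which is therefore admissible:

```python
def manhattan(node, goal):
    """Admissible heuristic for unit-cost 4-connected grid movement:
    with obstacles relaxed away, at least |dx| + |dy| steps are still
    required, so the true cost can never be smaller than this."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)
```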
74
Dominance
if we have two heuristics h1 and h2 and h2(n) >= h1(n) for every node n, then h2 dominates h1 - if both heuristics are admissible, then h2 is better for search
75
What do you do if there is no dominance between heuristics?
- combine heuristics - if h1 and h2 are both admissible, then h = max(h1, h2) is also admissible and dominates both h1 and h2
76
Best-First Search (BestFS)
- select the path whose end appears closest to a goal according to the heuristic function - selects the path on the frontier with minimal h-value - treats the frontier as a priority queue ordered by h - greedy approach --> always takes the path whose "future" appears best
77
Is BestFS complete?
no - low heuristic values in a cycle can cause that cycle to be followed forever - a misleading heuristic can leave some solutions unreachable
78
is BestFS optimal?
no - greedy approach doesn't lead to optimal solution
79
What is the time complexity for BestFS?
O(b^m)
80
What is the space complexity for BestFS?
O(b^m)
81
A*
- combines BestFS and LCFS - treats the frontier as a priority queue ordered by f(p) = cost(p) + h(p), the cost of path p so far plus the heuristic estimate from its end node - always selects the path on the frontier with the lowest estimated total cost to a goal
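A minimal sketch (my own) of A* ordering the frontier by f = cost so far plus heuristic; `neighbours[n]` is assumed to hold `(next_node, arc_cost)` pairs and `h` maps a node to its heuristic estimate:

```python
import heapq

def a_star(neighbours, start, is_goal, h):
    """A*: expand the frontier path with the lowest f(p) = cost(p) + h(end of p)."""
    frontier = [(h(start), 0, [start])]       # (f-value, cost so far, path)
    while frontier:
        f, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return cost, path
        for n, c in neighbours.get(node, []):
            heapq.heappush(frontier, (cost + c + h(n), cost + c, path + [n]))
    return None
```

With an admissible h (never overestimating), the first goal popped carries an optimal cost; with h = 0 everywhere this reduces to LCFS, as the card above notes.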
82
If the heuristic is h=0 everywhere, and the edge costs are all the same, A* is equivalent to?
BFS and LCFS
83
Is A* complete?
yes - A* won't get stuck in cycles
84
Is A* optimal?
yes - as long as the branching factor is finite, arc costs are strictly positive, and h(n) is an underestimate (admissible) and is non-negative
85
What is the time complexity for A*?
O(b^m)
86
What is the space complexity for A*?
O(b^m)
87
A* - Optimality
if A* selects a path p as the solution, then p is an optimal (i.e. lowest-cost) path
88
A* - Optimal Efficiency
among all optimal algorithms that start from the same start node and use the same heuristic h, A* expands the minimal number of paths
89
Branch and Bound Search (B&B)
- use DFS --> expand the most recently added path first - the order in which neighbours are added to the frontier can follow some arbitrary node-ordering (e.g. alphabetical, or by f-score) - space complexity becomes O(bm) - once we find a solution, keep searching for lower-cost solutions - keep track of an upper bound on the solution cost (the best solution found so far) and prune any path whose cost reaches or exceeds the bound
90
Is B&B complete?
no - same reasons why DFS is not complete
91
Is B&B optimal?
yes, but not optimally efficient
92
What is the time complexity for B&B?
O(b^m)
93
What is the space complexity for B&B?
O(bm) - big improvement over A*
94
Iterative Deepening A* (IDA*)
- search depth-first, but to a fixed bound (like IDS) - bound is measured in f-values - if no solution is found during a given iteration of IDA*, update the bound to the lowest f-value that exceeded the previous bound during that iteration
95
Is IDA* complete?
yes
96
Is IDA* optimal?
yes
97
What is the time complexity for IDA*?
O(b^m)
98
What is the space complexity for IDA*?
O(bm)
99
Memory-Bounded A* (MBA*)
- use whatever memory is available - keep as much of the frontier in memory as we can - if we must delete something, delete the worst paths (highest f-values) and back their values up to a common ancestor - update the heuristic value of the ancestor if possible
100
Cycle Checking
you can prune a path that ends in a node already on the path --> cannot remove an optimal solution
101
Multiple-Path Pruning
- you can prune a path to a node n if you have already found a path to n and the new path is at least as costly - if the new path is less costly, remove from the frontier all paths that extend the longer path, or replace their initial segments with the shorter path
102
Dynamic Programming
- build a table of dist(n) - dist(n) = actual cost of the lowest-cost path from node n to goal g - acts as a perfect heuristic - from each node n, go to the neighbour m that minimizes cost(n,m) + dist(m) - two problems: the entire graph must be stored, and dist must be recomputed for each new goal
103
Variables are denoted using?
capital letters
104
What does each variable have?
a domain dom(V) of possible values
105
Variables can be:
- boolean - finite: the domain contains a finite number of values - infinite but discrete: e.g. the set of all integers - continuous: e.g. the real numbers between 0 and 1
106
Possible World
a complete assignment of values to a set of variables
107
Constraints
restrictions on the values that one or more variables can take
108
Constraints can be:
- unary: restriction involving one variable - k-ary: restriction involving the domains of k different variables (can always be represented as binary constraints)
109
How can constraints be specified?
- giving a function that returns true when the values given for each variable satisfy the constraint - giving a list of valid domain values for each variable
110
Constraint Satisfaction
a possible world satisfies a set of constraints if the set of variables involved in each constraint takes values that are consistent with the constraint
114
What does a constraint satisfaction problem (CSP) consist of?
- a set of variables - a domain for each variable - a set of constraints
115
Model of a CSP
a model of a CSP is an assignment of values to variables (ie. a possible world) that satisfies all of the constraints
116
CSP Objectives
- determine whether or not a model exists - find a model - find all models - find the best model given some model "goodness" --> optimization problems
117
Generate-and-Test Algorithm
- brute-force method - generate possible worlds, one at a time - test each to see if it violates any constraints - behaves similarly to DFS - this method can solve any CSP, but its runtime is proportional to the number of possible worlds --> far too long in general
118
Generate-and-Test Algorithm CODE
for a in dom(A):
    for b in dom(B):
        for c in dom(C):
            if (a, b, c) satisfies all constraints:
                return (a, b, c)
return NULL
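A runnable version of the same brute-force idea (my own sketch; the variable names and constraint encoding are illustrative):

```python
from itertools import product

def generate_and_test(domains, constraints):
    """Brute force: enumerate every possible world, return the first model.
    `domains` maps each variable to its values; each constraint is a
    predicate taking the full assignment dict."""
    names = list(domains)
    for values in product(*(domains[v] for v in names)):
        world = dict(zip(names, values))       # one possible world
        if all(c(world) for c in constraints): # test it
            return world
    return None
```

The runtime is the product of the domain sizes, which is exactly the "proportional to the number of possible worlds" problem the card describes.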
119
Standard Search Problem
- an agent can solve a problem by searching in a space of states - state is a "black-box" --> any arbitrary data structure that supports three problem-specific routines: 1. neighbours(n) 2. heuristic(n) 3. goal(n)
120
States
assignments of values to a subset of the variables
121
Start State
the empty assignment (no variables assigned values)
122
Neighbours of a State
values are assigned to one additional variable
123
Goal State
any state where all variables are assigned values and all constraints are satisfied
124
DFS with Pruning
- want to explore as few sub-trees as possible - once we consider a path whose end now violates one or more constraints, we know that a solution cannot exist past that point - we then remove (prune) that path rather than continue to search
125
Domain Consistent
a variable is domain consistent if no value in its domain is ruled impossible by a unary constraint
126
Constraint Network
constraint network is defined by a graph with: - one node for every variable (round) - one node for every constraint (rectangle) - undirected edges running between variable nodes and constraint nodes where a given variable is involved in a given constraint
127
Arc Consistency
an arc in a constraint network is arc consistent if, for each value x in dom(X), there is some value y in dom(Y) such that r(x,y) is satisfied
128
What happens if an arc is not arc consistent?
if an arc is not arc consistent, all values x in dom(X) for which there is no corresponding value y in dom(Y) may be deleted from dom(X) to make the arc consistent
129
When is a constraint network arc consistent?
if all of its arcs are arc consistent
130
Arc Consistency Algorithm
- we have a constraint network with variable constraint arcs that may or may not be consistent - consider arcs one at a time, ensuring that each one is consistent - may need to prune a variable domain to make this happen - will never remove solutions - order in which we consider arcs doesn't affect the output of the algorithm
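An AC-3-style sketch of the algorithm (my own; the constraint encoding is illustrative): binary constraints are keyed by an ordered pair `(X, Y)` with a predicate `r(x, y)`, and domains are pruned in place:

```python
from collections import deque

def ac3(domains, constraints):
    """Make every arc consistent. `domains` maps variable -> set of values;
    `constraints` maps (X, Y) -> predicate r(x, y). Returns False iff some
    domain becomes empty (no solution)."""
    arcs = deque(constraints)                   # consider arcs one at a time
    while arcs:
        x, y = arcs.popleft()
        r = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(r(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False                    # empty domain: no solution
            # revisit arcs into X: pruning dom(X) may have broken them
            arcs.extend(arc for arc in constraints if arc[1] == x)
    return True
```

On the constraint A < B with both domains {1, 2, 3}, this prunes to dom(A) = {1, 2} and dom(B) = {2, 3}, matching the "some domains have more than one value" outcome on the next card.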
131
What happens after we prune domains to make arcs consistent?
when we reduce the domain of a variable X to make an arc consistent, we must revisit every other arc (Z, c') where the constraint c' involves X and another variable Z, since pruning dom(X) may have made those arcs inconsistent
132
What is the time complexity for the arc consistency algorithm?
O(n^2*d^3)
133
What are the three possible outcomes when all arcs are consistent?
1. one domain is empty --> no solution 2. each domain has a single value --> unique solution 3. some domains have more than one value --> zero or more solutions
134
Domain Splitting
- when arc consistency ends, some domains may still have more than one value, and a solution may or may not exist - two ways to proceed: 1. apply DFS with pruning on the reduced domains 2. split the problem into two disjoint cases
135
Advantage of Domain Splitting
allows us to perform arc consistency again on each case, leveraging its polynomial runtime
136
Disadvantage of Domain Splitting
need to keep many constraint networks around
137
States in Searching by Domain Splitting
"remaining domains"
138
Start States in Searching by Domain Splitting
results of running arc consistency on the collection of original domains
139
Successor Function in Searching by Domain Splitting
split one of the domains and run arc consistency
140
Goal State in Searching by Domain Splitting
a collection of domains, each reduced to a single value, that satisfies all constraints
141
Local Search: General Method
1. start from a possible world 2. generate some neighbours 3. move from current node to a neighbour, selected according to strategy 4. repeat 2 and 3 as needed - no frontier here
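The loop above, with greedy neighbour selection, can be sketched as follows (my own illustration; the `conflicts` scoring function and parameters are assumptions):

```python
import random

def greedy_descent(variables, domains, conflicts, steps=1000, seed=0):
    """Local search: start from a random possible world, then repeatedly
    move to the (variable, value) change that minimizes the conflict count.
    No frontier is kept - only the current world."""
    rng = random.Random(seed)
    world = {v: rng.choice(domains[v]) for v in variables}   # random start
    for _ in range(steps):
        if conflicts(world) == 0:
            return world                                     # model found
        best = min(((v, val) for v in variables for val in domains[v]),
                   key=lambda p: conflicts({**world, p[0]: p[1]}))
        world[best[0]] = best[1]                             # greedy move
    return None   # may stagnate in a local minimum
```

Purely greedy moves like this get stuck in local minima and plateaus, which is exactly why the later cards mix in random steps and random restarts.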
142
How can you change assignments to neighbours?
- differ in one variable's value, by +-1 or by any amount - differ in two variables' values
143
Hill Climbing
selecting the neighbour which maximizes a (value-based) scoring function
144
Greedy Descent
selecting the neighbour which minimizes a (cost-based) scoring function, e.g. the cost or the number of conflicts
145
Problems with Hill Climbing
- local maxima - plateaus and shoulders - ridges --> sequences of local maxima not directly connected to each other
146
What do we want our local search to do?
- be guided by the scoring function - not to get stuck in local maxima, minima, plateaus, etc. - solution = alternate between using hill climbing steps (move to best neighbour), random steps (move to random neighbour), and random restart (reassign random values to all variables)
147
Random Steps: One-Step
- to add occasional randomness to the selection of a (variable, value) pair, choose a pair: 1. sometimes according to the scoring function (greedy) 2. sometimes just a random pair
148
Random Steps: Two-Step
- sometimes select a variable: 1. that is involved in the largest number of conflicts 2. that is involved in at least one conflict 3. at random - sometimes select a value: 1. that minimizes the number of conflicts 2. at random
149
Stochastic Local Search Advantages
- applicability to online settings - goal = repair with the minimum number of changes - anytime algorithms
150
Stochastic Local Search Limitations
- no guarantee of finding a solution even if one exists --> SLS algorithms can stagnate and never terminate - unable to show that no solution exists --> the algorithm simply does not terminate
151
How can SLS algorithms differ?
can differ in: - how frequently they take random steps, or random restarts, or both - whether they find neighbours by one-step or two-step
152
If an SLS algorithm stagnates, its runtime goes to?
infinity
153
Runtime Distribution Plots
x-axis: number of steps; y-axis: proportion (or number) of the runs that are solved within that runtime
154
SLS: Summary
- combine greedily improving moves with randomization, or random steps/random restarts - always keep best solution so far - stop when solution is found or runs out of time