Prolog, Datalog, Learning Flashcards

(89 cards)

1
Q

What is Prolog mainly used for?

A

Symbolic reasoning and knowledge representation; describing what is true rather than how to compute it.

2
Q

What are the three main building blocks of Prolog programs?

A

Facts, Rules, Queries
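A minimal sketch of all three building blocks (the family predicates are illustrative, not from the deck):

```prolog
% Facts: unconditionally true statements.
parent(tom, bob).
parent(bob, ann).

% Rule: grandparent/2 holds if two parent/2 facts chain together.
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% Query (at the top level):
% ?- grandparent(tom, Who).
% Who = ann.
```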

3
Q

What does the underscore ‘_’ represent in Prolog?

A

An anonymous variable that matches any term; its binding is ignored, and each occurrence of ‘_’ is a distinct variable.
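For example (hypothetical predicate), the child's identity is irrelevant, so it is matched with ‘_’:

```prolog
parent(tom, bob).
parent(bob, ann).

% "Is X a parent of anyone?" The child is matched but ignored.
is_parent(X) :- parent(X, _).

% ?- is_parent(tom).
% true.
```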

4
Q

What does a Prolog query ask?

A

Whether a certain statement (goal) can be proven true from the known facts and rules.

5
Q

What is backtracking in Prolog?

A

A mechanism where Prolog systematically searches alternative rule branches when one path fails.
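A small demonstration, assuming the toy facts below:

```prolog
color(red).
color(green).

likes(ann, green).

% Find a color that ann likes.
pick(C) :- color(C), likes(ann, C).

% ?- pick(C).
% Prolog first tries C = red; likes(ann, red) fails,
% so it backtracks to color(green) and succeeds with C = green.
```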

6
Q

What does the ‘!’ operator (cut) do?

A

It commits to the choices made since the current clause was selected, pruning its alternatives and preventing backtracking past that point.
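A classic use of the cut, sketched as the usual textbook `max/3`: once the test in the first clause succeeds, the cut discards the second clause as an alternative:

```prolog
% max(X, Y, M): M is the larger of X and Y.
max(X, Y, X) :- X >= Y, !.
max(_, Y, Y).

% ?- max(3, 7, M).
% M = 7.
```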

7
Q

What is ‘fail’ in Prolog used for?

A

It always fails, intentionally triggering backtracking (e.g., in failure-driven loops or in cut-fail constructions).

8
Q

What does ‘not(Goal)’ mean in Prolog?

A

Negation as failure: it succeeds if ‘Goal’ cannot be proven true.
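In most modern Prolog systems this is written ‘\+ Goal’; a sketch with illustrative facts:

```prolog
bird(tweety).
bird(pingu).
penguin(pingu).

% A bird flies unless it is known to be a penguin.
flies(X) :- bird(X), \+ penguin(X).

% ?- flies(tweety).   % true: penguin(tweety) cannot be proven
% ?- flies(pingu).    % false
```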

9
Q

What logical assumption does ‘not/1’ rely on?

A

The Closed World Assumption: anything not known to be true is assumed false.

10
Q

What is the difference between ‘=’ and ‘is’ in Prolog?

A

‘=’ unifies two terms (pattern matching) without evaluating them.

‘is’ evaluates the arithmetic expression on its right-hand side and unifies the result with its left-hand side.
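The difference at the top level:

```prolog
% ?- X = 1 + 2.
% X = 1+2.        % unification only: X is bound to the term +(1,2)

% ?- X is 1 + 2.
% X = 3.          % arithmetic evaluation of the right-hand side

% ?- 3 = 1 + 2.
% false.          % the number 3 does not unify with the term 1+2

% ?- 3 is 1 + 2.
% true.
```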

11
Q

What is the core idea of Explanation-Based Learning (EBL)?

A

Learning by explaining why an example is true using background knowledge.

(Nothing logically new is learned: the rule already follows from the background knowledge; EBL only makes it operational.)

12
Q

What does a Prolog meta-interpreter do?

A

It is a Prolog program that interprets other Prolog programs.
It can record how a goal was proven (the explanation trace) and generalize that proof into a reusable rule.
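A sketch of the classic "vanilla" meta-interpreter (a common textbook formulation, not code from the deck):

```prolog
% solve(Goal): prove Goal using the program's own clauses.
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :-
    clause(Goal, Body),   % look up a clause whose head unifies with Goal
    solve(Body).
```

Extending solve/1 with an extra argument that accumulates the clauses used yields the explanation trace that EBL then generalizes.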

13
Q

What does the ‘generalization’ step in EBL do?

A

It replaces specific constants (like ‘tweety’) with variables to form a general rule.

14
Q

What is the result of an EBL process?

A

A generalized and operational rule (hypothesis) that can explain not only the training example but also future instances of the same concept.

15
Q

How does EBL differ from statistical learning?

A

EBL is deductive and symbolic; it explains and generalizes single examples using logic instead of patterns in data.

16
Q

What is a limitation of EBL?

A

It relies heavily on correct background knowledge; if the logic base is incomplete or wrong, it can’t learn effectively.

17
Q

What does ‘prove_with_trace’ in a meta-interpreter typically do?

A

It executes reasoning steps while recording which clauses were used.

18
Q

Why are hybrid neuro-symbolic systems important today?

A

They combine data-driven learning with logical reasoning and explainability.

19
Q

Inductive Learning vs. Deductive Learning

A

IL:
- Examples cannot be derived from the background knowledge (prior necessity)
- Many examples are needed
- The validity of the generalizations must be checked empirically
- Learning must focus on the relevant features

DL (EBL):
- Examples can be proved with the available background knowledge
- A single example can suffice
- Generalizations are provably correct
- The focus is provided by the background knowledge

20
Q

What is Symbolic AI?

A

An approach to AI that represents knowledge explicitly using symbols and logic, enabling reasoning instead of statistical pattern recognition.

21
Q

What is Inductive Logic Programming (ILP)?

A

A form of symbolic machine learning that learns logical rules (Horn clauses) from examples and background knowledge.

22
Q

What is the main goal of Concept Learning?

A

To find the logical definition (theory T) of a target concept that explains positive examples and excludes negative examples, using background knowledge B.

23
Q

What are Positive Examples (E⁺) in ILP?

What are Negative Examples (E⁻) in ILP?

A

Instances of the target relation that are known to be true and should be covered by the learned theory.

Instances of the target relation that are known to be false and should not be covered by the learned theory.

24
Q

What is Background Knowledge (B)?

A

Known facts or relations (e.g., parent/2, male/1) that can be used in constructing the target relation but are not themselves examples.

25
What are the a priori constraints in ILP?
Prior necessity (learning is actually needed: the positive examples do not already follow from the background knowledge) and prior satisfiability (the background knowledge must be consistent: no negative example may already follow from it).
26
What are the a posteriori desiderata in ILP?
*Posterior sufficiency* (completeness): all positive examples must be entailed by the learned theory together with the background knowledge (∀e ∈ E⁺ : B ∧ T ⊨ e). *Posterior satisfiability* (consistency): no negative example may be entailed (∀e ∈ E⁻ : B ∧ T ⊭ e).
27
What does 'Learning as Search' mean in ILP?
Learning is framed as searching through the space of possible hypotheses (logic programs) to find one that fits the examples.
28
What is the Hypothesis Space in ILP?
The set of all possible logic programs (rules) that can be formed using the available predicates and constants.
29
What is the target relation in ILP?
The predicate whose logical definition is being learned (e.g., father/2, ancestor/2).
30
What are literals in ILP?
Atoms or negated atoms.
31
What is an atom compared to a literal?
An atom is a positive predicate (e.g., parent(A,B)); a literal is an atom or its negation (e.g., not(parent(A,B))).
32
What is FOIL?
First Order Inductive Learner — a classical ILP algorithm that learns first-order logic rules using examples and background knowledge (Quinlan, 1990).
33
What type of logic programs does FOIL learn?
Recursive Datalog programs (Horn clauses).
34
What search strategy does FOIL use?
A Separate-and-Conquer strategy combined with Top-Down Hill-Climbing.
35
What does Top-Down Hill-Climbing mean in FOIL?
Starting from the most general rule and greedily adding literals that increase rule quality (information gain).
36
What heuristic does FOIL use to select literals?
Weighted Information Gain (WIG), which measures improvement in the ratio of positives to negatives covered.
37
What is the MDL principle used for in FOIL?
Minimum Description Length: a principle that balances model complexity against data fit. Rule refinement stops when further specialization no longer reduces the total description length (preventing overfitting).
38
What is a recursive rule in ILP?
A rule that refers to itself in the body (e.g., ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y)).
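Written out with its base case, which the recursion needs in order to terminate (the parent facts are illustrative):

```prolog
parent(tom, bob).
parent(bob, ann).

% Base case: a parent is an ancestor.
ancestor(X, Y) :- parent(X, Y).
% Recursive case: an ancestor of a parent is an ancestor.
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

% ?- ancestor(tom, ann).
% true.
```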
39
What problem can occur with recursive literals?
Infinite recursion or trivial recursion (ancestor(X,Y) :- ancestor(X,Y)).
40
What is pruning in rule learning?
Removing or avoiding overly complex or overly specific rules to prevent overfitting.
41
What is Language Bias in ILP?
A specification of which literals or rules are allowed during search, used to restrict the hypothesis space.
42
What are Type Constraints in ILP?
Restrictions ensuring that variables appear only in predicates compatible with their semantic types (e.g., person vs. number).
43
What are Mode Declarations in ILP?
Specifications indicating how arguments of predicates can be used (input '+', output '-', constant '#').
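In Progol/Aleph-style syntax (illustrative; FOIL itself uses a different input format), mode declarations look roughly like:

```prolog
% Head mode: learn clauses whose head is father(+person, +person),
% where '+' marks an input argument of type person.
:- modeh(1, father(+person, +person)).

% Body modes: parent/2 may consume an input and bind an output ('-');
% male/1 tests an input.
:- modeb(*, parent(+person, -person)).
:- modeb(*, male(+person)).
```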
44
What are Symmetry Specifications?
Rules indicating that some predicates are symmetric (e.g., married(A,B) = married(B,A)) so only one orientation is tested.
45
What is Linus?
An approach that transforms flat data (like tables) into logical facts (Horn clauses) by introducing binary attributes for each possible literal (Lavrač & Džeroski, 1991). It makes ILP applicable to databases.
46
What is Dinus?
An extension of Linus that also handles multirelational structures and decomposes problems. It learns rules layer by layer.
47
What is the main difference between Linus and Dinus?
Dinus encodes output variable bindings of determinate literals as propositional attributes; Linus only handles true/false literals.
48
What are common symbolic representation methods?
- Propositional and first-order logic
- Semantic networks
- Frames
- Ontologies
- Rule-based systems
49
What is the advantage/disadvantage of symbolic AI?
It provides transparency, interpretability, and explainable reasoning based on explicit knowledge. It struggles with uncertainty, ambiguity, and learning from raw data.
50
How does ILP combine learning and symbolic reasoning?
It learns symbolic rules (knowledge) from examples, combining inductive generalization with logical inference.
51
What does FOIL stand for? What's the goal of FOIL?
First Order Inductive Learner. Goal is to learn first-order logic rules (Horn clauses) from examples and background knowledge.
52
What kind of logic programs does FOIL learn?
Recursive Datalog programs — i.e., sets of Horn clauses that can include recursion.
53
What is the structure of the FOIL algorithm?
It uses a Separate-and-Conquer outer loop and a Top-Down Hill-Climbing inner loop.
54
What is the Separate-and-Conquer strategy?
A rule learning strategy that learns one consistent rule at a time, removes covered positives, and repeats until all positive examples are covered.
55
What happens in the Separate phase of FOIL?
Remove all examples covered by the current rule from the training set.
56
What happens in the Conquer phase of FOIL?
Continue learning new rules for the remaining positive examples until all are covered.
57
What is FOIL's inner loop called?
Top-Down Hill-Climbing search.
58
What does Top-Down Hill-Climbing mean in FOIL?
Start with the most general rule and iteratively add literals that improve the rule according to a heuristic (information gain).
59
What is the starting rule in FOIL's hill-climbing?
The most general rule: target(...) :- true.
60
What is a literal in FOIL?
A predicate (atom) or its negation that can appear in the body of a rule as a condition.
61
What is a candidate literal?
A possible predicate with variable bindings that FOIL considers adding to the rule body.
62
How does FOIL generate candidate literals?
By combining predicate symbols, existing variables, possibly new variables, and constants that appear in examples.
63
Why can literal generation be expensive in FOIL?
Because the number of possible variable and predicate combinations grows combinatorially — each must be evaluated for coverage.
64
What are common constraints used to limit literal generation?
- Language bias: a declarative specification defining which literals or rule structures are valid, reducing the search space
- Type constraints: restrictions that ensure variables appear only in predicates with compatible argument types
- Templates (schemata) that restrict which combinations are allowed
65
What heuristic function does FOIL use to evaluate literals?
Weighted Information Gain (WIG). It measures how much adding a literal improves the purity of the rule, i.e., raises the proportion of positive examples among those covered.
66
What do p₀ and n₀ mean in FOIL's gain formula? What do p₁ and n₁ mean in FOIL's gain formula?
p₀ and n₀ are the number of positive and negative examples covered **before** adding the literal. p₁ and n₁ are the number of positive and negative examples covered **after** adding the literal.
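With these counts, FOIL's gain for a candidate literal L is usually written as follows (the standard textbook formulation):

```latex
% I(p, n): bits needed to signal that a covered tuple is positive
I(p, n) = -\log_2 \frac{p}{p + n}

% Gain of adding literal L, weighted by t, the number of positive
% tuples covered both before and after the addition:
\mathrm{Gain}(L) = t \cdot \bigl( I(p_0, n_0) - I(p_1, n_1) \bigr)
```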
67
What happens when a literal has zero information gain but is necessary?
It may still be added if it introduces new variables required for later conditions (useful literals without gain).
68
What is the MDL principle used for in FOIL?
Minimum Description Length: FOIL stops rule refinement when additional literals no longer reduce the total description length. This prevents overfitting, i.e., rules that perfectly fit noise or exceptions in the training data.
69
What is pre-pruning in FOIL?
Stopping rule refinement early when no significant improvement is achieved (e.g., by MDL or gain threshold).
70
What is post-pruning in FOIL?
Removing unnecessary or redundant literals from rules after learning to simplify them.
71
What are determinate literals, and why are they useful?
Literals that introduce new variables which have *at most one* possible binding for each variable assignment in the head — deterministic relations. They don’t increase the number of tuples (proofs) but allow introducing new variables efficiently.
72
Give an example of a determinate literal.
succ(N,N1) — every number has exactly one successor.
73
What does FOIL do if no literal achieves sufficient gain?
It adds all possible determinate literals to the rule to help learning continue.
74
What threshold defines 'sufficient gain' in FOIL?
Typically 80% of the maximum possible gain.
75
What happens after a rule is completed in FOIL?
All unnecessary literals (that don’t affect accuracy) are removed from the rule.
76
What is a recursive rule? Give an example of a recursive rule.
A rule whose head predicate also appears in its body, enabling recursive reasoning (e.g., ancestor/2): ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
77
What are Horn clauses?
A clause with at most one positive literal.
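The three Horn-clause shapes, in Prolog notation:

```prolog
% Fact (one positive literal, empty body):
parent(tom, bob).

% Rule (one positive literal as head, negative literals as body):
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% Goal / query (no positive literal):
% ?- grandparent(tom, Y).
```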
78
Resolution Strategy in Prolog
SLD resolution:
S(election): Prolog selects the first matching clause (and the leftmost goal).
L(inear): each step continues from the result of the previous step.
D(efinite): it operates on definite clauses (Horn clauses with exactly one positive literal).
79
Search strategy in Prolog
Depth-first, left-to-right. Creates choice points when multiple clauses match. On failure, backtracks to the most recent choice point, tries the next alternative.
80
What are the two main categories of knowledge representation?
Declarative and Procedural representation.
81
What does declarative knowledge describe? What does procedural knowledge describe?
Declarative: describes facts and relationships about the world (what is true). Procedural: describes how to do things (methods and procedures).
82
What is an example of declarative knowledge representation?
Logic-based representations such as predicate logic or semantic networks.
83
What is an example of procedural knowledge representation?
Production rules (IF–THEN)
84
What are production rules?
A rule-based representation: IF <condition> THEN <action>.
85
What is a semantic network?
A graph-based declarative structure linking concepts with labeled relationships (e.g., is-a and part-of links).
86
What are frames in knowledge representation?
Structured objects with slots and values describing an entity’s properties and relations.
87
What are ontologies used for?
Organizing domain knowledge hierarchically with defined concepts and relations (used in semantic web systems).
88
What type of knowledge representation is the object-attribute-value (O-A-V) triplet?
Declarative representation.
89
Where are knowledge graphs used?
- Search engines (Google Knowledge Graph)
- Recommendation systems
- Question answering (QA) systems
- Semantic Web / Linked Data
- AI reasoning frameworks