Week 5 Flashcards

(47 cards)

1
Q

What is the purpose of a feature map ϕ(x)?

A

To map data into a higher-dimensional space where a linear separator may exist.

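This can be checked on a toy 1-D example (a sketch; the data and the map ϕ(x) = (x, x²) are illustrative, not from the lecture): the rule "|x| > 1" has no linear separator in x, but a linear rule on the x² coordinate works.

```python
import numpy as np

# Toy 1-D dataset: class +1 when |x| > 1, class -1 otherwise.
# No single threshold on x separates the classes.
x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
y = np.sign(np.abs(x) - 1.0).astype(int)

# Feature map phi(x) = (x, x^2): the second coordinate alone
# separates the classes with the linear rule sign(x^2 - 1).
phi = np.column_stack([x, x ** 2])
pred = np.sign(phi[:, 1] - 1.0).astype(int)
print((pred == y).all())  # True
```
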
2
Q

Why can nonlinear problems become linearly separable after a feature map?

A

Because ϕ(x) adds nonlinear functions such as x², enabling linear boundaries in feature space.

3
Q

Why do polynomial feature maps become expensive?

A

Their dimension grows combinatorially with degree; for degree s in d dimensions, size = Σ_{j=1}^s (d+j−1 choose j).

4
Q

What is the idea behind kernel methods?

A

Use infinite-dimensional feature maps indirectly via kernels without computing ϕ(x).

5
Q

What is a Hilbert space?

A

A possibly infinite-dimensional inner product space (e.g., ℝ^a, ℓ², L²).

6
Q

How is a linear model written in feature space?

A

fθ(x)=⟨θ,ϕ(x)⟩.

7
Q

Why is training directly in infinite dimensions impossible?

A

θ has infinitely many coordinates; optimisation cannot be done in ℋ directly.

8
Q

What does the representer theorem state?

A

The minimiser θ* lies in the finite span of data: θ* = Σ_i α_i ϕ(x_i).

9
Q

What is the key consequence of the representer theorem?

A

An infinite-dimensional optimisation reduces to an n-dimensional optimisation in α.

10
Q

What is a reproducing kernel?

A

A function k(x,z)=⟨ϕ(x),ϕ(z)⟩ giving feature-space inner products.

11
Q

What does k(x,z) represent?

A

Similarity between x and z in feature space; higher k means more similar.

12
Q

What is the kernel trick?

A

Replacing ⟨ϕ(x),ϕ(z)⟩ with k(x,z) so feature maps need not be computed.

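The trick can be verified numerically for the degree-2 polynomial kernel in d = 2, whose explicit feature map is known in closed form (a sketch; the test vectors are arbitrary):

```python
import numpy as np

# Kernel trick check: k(x, z) = (1 + x.z)^2 equals the inner product
# <phi(x), phi(z)> in a 6-dimensional feature space, but evaluating
# the kernel never requires building phi.
def phi(v):
    x1, x2 = v
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

rng = np.random.default_rng(0)
x, z = rng.normal(size=2), rng.normal(size=2)

lhs = (1 + x @ z) ** 2        # kernel evaluation, O(d)
rhs = phi(x) @ phi(z)         # explicit feature map, O(d^2)
print(np.allclose(lhs, rhs))  # True
```
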
13
Q

What is the Gram matrix K?

A

K ∈ ℝ^{n×n} with entries K_ij = k(x_i, x_j).

14
Q

What is the kernelised training objective?

A

argmin_α (1/n) Σ_j l(y_j,(Kα)_j) + (λ/2) αᵀ K α.

15
Q

What is the kernelised predictor?

A

fα(x)=Σ_i α_i k(x, x_i).

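For squared loss l(y, f) = (y − f)², setting the gradient of the kernelised objective to zero gives the closed form α = (K + (nλ/2) I)⁻¹ y. A minimal sketch with an RBF kernel (γ, λ and the data are illustrative):

```python
import numpy as np

# Kernel ridge regression: with squared loss, the kernelised objective
#   (1/n) sum_j (y_j - (K a)_j)^2 + (lambda/2) a^T K a
# is minimised by alpha = (K + (n*lambda/2) I)^{-1} y.
def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = np.linspace(-2, 2, 30).reshape(-1, 1)
y = np.sin(2 * X[:, 0])

n, lam = len(X), 1e-6
K = rbf(X, X)
alpha = np.linalg.solve(K + (n * lam / 2) * np.eye(n), y)

# Kernelised predictor f(x) = sum_i alpha_i k(x, x_i),
# evaluated here at the training points
y_hat = K @ alpha
print(np.mean((y_hat - y) ** 2))  # small training error
```
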
16
Q

How many parameters does a kernel model have?

A

One parameter α_i per training datapoint.

17
Q

Why does infinite feature dimension not matter in kernel methods?

A

All computations require only kernel values k(x,z).

18
Q

Give examples of valid kernels.

A

Constant k = c²; linear k = xᵀz; polynomial k = (1+xᵀz)^a; Gaussian (RBF) k = exp(−γ‖x−z‖²); exponential; Matérn.

19
Q

What is an RBF (Gaussian) kernel?

A

k(x,z)=exp(−γ‖x−z‖²).

20
Q

What does γ control in the RBF kernel?

A

The inverse lengthscale of similarity: large γ = very local similarity (wiggly boundaries); small γ = smoother decision boundaries.

21
Q

How does RBF kernel classify difficult nonlinear data?

A

By creating highly flexible decision functions in infinite-dimensional feature space.

22
Q

Why do polynomial kernels fail on the moons dataset?

A

Low-degree polynomial boundaries are too rigid to capture the two interleaving curved moons.

23
Q

Which kernel works well on the moons dataset?

A

The RBF (Gaussian) kernel.
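
The contrast between the two kernels on the moons data can be reproduced with scikit-learn (a sketch; the sample size, noise level, C and γ are illustrative choices, not the lecture's):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two-moons data: a degree-3 polynomial kernel struggles with the
# interleaving half-circles, while the RBF kernel fits them easily.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

acc_poly = SVC(kernel="poly", degree=3, C=10.0).fit(X, y).score(X, y)
acc_rbf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y).score(X, y)
print(acc_poly, acc_rbf)  # RBF reaches (near-)perfect training accuracy
```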

24
Q

How does kernel-SVM classification work?

A

Predict sign(Σ_i α_i k(x, x_i) − b), with α obtained from the hinge loss via the representer theorem.

25
What library does scikit-learn use for SVM?
LIBSVM (quadratic programming solver).
26
Why is face recognition high-dimensional?
Images have thousands of pixels; e.g., 62×47 ≈ 3000 features.
27
Why apply PCA before SVM in face recognition?
Reduce dimensionality, avoid dominated features, improve conditioning.
28
What problem occurs in the LFW face dataset?
Imbalanced number of examples per person.
29
How is imbalance handled?
Downsampling, data augmentation, or modifying loss to emphasise minority class.
30
What hyperparameters must be chosen for RBF SVM?
C (regularisation) and γ (kernel width).
31
How are C and γ selected?
Using cross-validation grid search.
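A PCA + RBF-SVM pipeline with a cross-validated grid search over C and γ can be sketched in scikit-learn (the bundled digits dataset stands in for the face data; the grid values and number of components are illustrative, not the lecture's):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# PCA reduces the raw pixel features before the SVM; GridSearchCV
# then picks C and gamma by cross-validation.
X, y = load_digits(return_X_y=True)

pipe = Pipeline([("pca", PCA(n_components=30)),
                 ("svm", SVC(kernel="rbf"))])
grid = {"svm__C": [1, 10], "svm__gamma": ["scale", 1e-3]}
search = GridSearchCV(pipe, grid, cv=3).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```
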
32
Q

What accuracy was achieved in the face recognition example?

A

High accuracy, with a few misclassified, visually ambiguous images.

33
Q

What does a confusion matrix show?

A

Which classes are commonly confused with each other and where the errors occur.

34
Q

What is MNIST?

A

A dataset of 28×28 handwritten digit images: 60k for training and 10k for testing.

35
Q

Which MNIST task is shown in the lecture?

A

Binary classification: digit 3 vs digit 8.

36
Q

How well does an RBF SVM perform on MNIST?

A

≈99% accuracy on clean data.

37
Q

How expressive is the RBF kernel?

A

It is universal: it can approximate any continuous function arbitrarily well.

38
Q

What are random features?

A

Approximations of ϕ(x) built from sampled basis functions b(x, ω_i), giving φ_ω(x) ≈ ϕ(x).

39
Q

Why use random features?

A

Computing the full Gram matrix K is expensive (O(n²) entries); random features approximate the kernel cheaply.

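One standard construction is random Fourier features (a sketch; the dimensions and γ are illustrative): for the RBF kernel, sampling the rows of W from N(0, 2γI) and b from U[0, 2π] gives φ_ω(x) = √(2/D) cos(Wx + b), whose inner products approximate k(x, z).

```python
import numpy as np

# Random Fourier features: approximate the RBF kernel
#   k(x, z) = exp(-gamma * ||x - z||^2)
# with a finite random map phi_w(x) = sqrt(2/D) * cos(W x + b),
# so that phi_w(x) . phi_w(z) ~ k(x, z).
rng = np.random.default_rng(0)
d, D, gamma = 5, 20000, 0.5

W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
feat = lambda v: np.sqrt(2.0 / D) * np.cos(W @ v + b)

x, z = rng.normal(size=d), rng.normal(size=d)
approx = feat(x) @ feat(z)
exact = np.exp(-gamma * np.sum((x - z) ** 2))
print(abs(approx - exact))  # small; shrinks as D grows
```
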
40
Q

To what does the representer theorem generalise?

A

Any objective Ψ that depends only on the values ⟨θ, ϕ(x_j)⟩ and on ‖θ‖ has a minimiser θ* in span{ϕ(x_1), …, ϕ(x_n)}.

41
Q

Give an example of a kernel for non-vector data.

A

The Jaccard kernel on sets: k(A, B) = |A∩B| / |A∪B|.

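The Jaccard kernel is a one-liner (a sketch; the convention k(∅, ∅) = 1 is an assumption here, and the example sets are made up):

```python
# Jaccard kernel on finite sets: k(A, B) = |A ∩ B| / |A ∪ B|,
# taken to be 1.0 when both sets are empty.
def jaccard_kernel(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(jaccard_kernel({"ml", "svm", "kernel"}, {"svm", "kernel", "pca"}))  # 0.5
```
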
42
Q

What are the benefits of kernel methods?

A

They handle infinite-dimensional feature spaces, come with strong theory, work on non-vector data, and relate to many statistical models.

43
Q

What are concept bottleneck models?

A

Models whose prediction f = h∘g must pass through human-interpretable concepts c = g(x).

44
Q

How are concept bottleneck models trained independently?

A

Train ĝ = argmin_g Σ_j L_c(c_j, g(x_j)) and ĥ = argmin_h Σ_j L_l(y_j, h(c_j)) separately, with h fitted on the true concepts c_j.

45
Q

How are concept bottleneck models trained sequentially?

A

Learn ĝ first via L_c, then train h on the predicted concepts: ĥ = argmin_h Σ_j L_l(y_j, h(ĝ(x_j))).

46
Q

How are concept bottleneck models trained jointly?

A

Minimise Σ_j L_l(y_j, h(g(x_j))) + λ Σ_j L_c(c_j, g(x_j)) over g and h together.

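The joint objective can be sketched with linear g, h and squared losses (all shapes, weights, and the synthetic data are illustrative, not the lecture's setup):

```python
import numpy as np

# Joint concept-bottleneck loss:
#   sum_j L_l(y_j, h(g(x_j))) + lambda * sum_j L_c(c_j, g(x_j))
# with linear g, h and squared losses on synthetic data in which the
# labels depend on x only through the concepts.
rng = np.random.default_rng(0)
n, d, k = 50, 4, 3                  # samples, input dim, number of concepts
X = rng.normal(size=(n, d))
Wg_true = rng.normal(size=(d, k))   # generates the "true" concepts
wh_true = rng.normal(size=k)
C = X @ Wg_true                     # concept annotations c_j
y = C @ wh_true                     # labels y_j

def joint_loss(Wg, wh, lam=0.5):
    c_hat = X @ Wg                  # g(x): predicted concepts
    y_hat = c_hat @ wh              # h(g(x)): label prediction
    return np.mean((y - y_hat) ** 2) + lam * np.mean((C - c_hat) ** 2)

print(joint_loss(Wg_true, wh_true))                   # 0.0 at the generating weights
print(joint_loss(np.zeros((d, k)), np.zeros(k)) > 0)  # True
```
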
47
Q

What issues arise in concept bottleneck models?

A

Concepts may be missing or incorrectly annotated, and the model can make good predictions while relying on the wrong concepts, which undermines interpretability.