week 3 Flashcards

(53 cards)

1
Q

What is the goal of learning/training in ML?

A

Choose a hypothesis class F_Θ and a loss l, then find θ̂ = argmin_θ R̂(θ) so that f_θ̂ predicts labels well.

2
Q

What is empirical risk R̂(θ)?

A

R̂(θ)= (1/n) Σ_j l(y_j, f_θ(x_j)).

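
The formula above can be sketched in a few lines of Python; the 1-D linear model f_θ(x) = θ·x and the squared loss are illustrative choices, not fixed by the card.

```python
# Empirical risk R_hat(theta) = (1/n) * sum_j l(y_j, f_theta(x_j)),
# sketched with squared loss and the toy model f_theta(x) = theta * x.
def empirical_risk(theta, xs, ys, loss=lambda y, p: (y - p) ** 2):
    n = len(xs)
    return sum(loss(y, theta * x) for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                 # generated by y = 2x, so theta = 2 fits exactly
print(empirical_risk(2.0, xs, ys))   # 0.0
print(empirical_risk(1.0, xs, ys))   # (1 + 4 + 9) / 3
```
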
3
Q

Why do we minimise empirical risk?

A

Because the true risk is unknown; empirical risk approximates it using training data.

4
Q

What does gradient descent rely on?

A

Local information: the gradient ∇θF points in the direction of steepest increase, so −∇θF is the direction of steepest descent.

5
Q

What is a directional derivative?

A

vᵀ∇θF — the rate of change of F when moving infinitesimally in direction v.

6
Q

What is a descent direction?

A

Any v such that vᵀ∇θF < 0, meaning moving along v reduces F.

7
Q

What is the direction of steepest descent?

A

v = −∇θF / ||∇θF||.

8
Q

What is the gradient descent update rule?

A

θ_{t+1} = θ_t − γ_t ∇θF(θ_t).

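
A minimal sketch of the update rule, assuming a hand-coded gradient for the toy objective F(θ) = (θ − 3)²:

```python
# Gradient descent: theta_{t+1} = theta_t - gamma * grad_F(theta_t).
def gradient_descent(grad, theta0, gamma=0.1, steps=100):
    theta = theta0
    for _ in range(steps):
        theta = theta - gamma * grad(theta)
    return theta

grad_F = lambda theta: 2 * (theta - 3)   # gradient of F(theta) = (theta - 3)^2
theta_hat = gradient_descent(grad_F, theta0=0.0)
print(theta_hat)   # converges towards the minimiser theta* = 3
```
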
9
Q

What does γ_t represent in gradient descent?

A

Learning rate (step size).

10
Q

What is gradient flow?

A

The continuous-time limit: dθ/dt = −∇θF(θ(t)).

11
Q

How is GD related to gradient flow?

A

GD is a numerical approximation of gradient flow with step size γ_t.

12
Q

Why does geometry of level sets matter for GD?

A

Because ∇F is orthogonal to level sets; elongated level sets cause slow convergence.

13
Q

What does the condition number κ measure?

A

κ = λ_max / λ_min of (1/n)XᵀX (the least-squares Hessian); large κ means slow convergence.

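
A sketch of why large κ is slow, using a diagonal quadratic whose curvatures l1, l2 stand in for λ_max and λ_min: the step size must be small enough for the stiff coordinate, so the shallow coordinate barely moves.

```python
# GD on F(t1, t2) = (l1 * t1**2 + l2 * t2**2) / 2, whose per-coordinate
# gradients are l1 * t1 and l2 * t2; l1, l2 mimic eigenvalues of (1/n) X^T X.
def gd_quadratic(l1, l2, gamma, steps=50):
    t1, t2 = 1.0, 1.0
    for _ in range(steps):
        t1 -= gamma * l1 * t1
        t2 -= gamma * l2 * t2
    return t1, t2

# kappa = 1: one step size works for both coordinates.
print(gd_quadratic(1.0, 1.0, gamma=0.5))
# kappa = 100: stability needs gamma < 2 / lambda_max = 0.02,
# so the shallow coordinate t2 shrinks very slowly.
print(gd_quadratic(100.0, 1.0, gamma=0.005))
```
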
14
Q

Why is GD not applicable to 0–1 loss?

A

The 0–1 loss is piecewise constant, so its gradient is zero almost everywhere (and undefined at the jumps).

15
Q

What happens if ∇θF = 0?

A

GD gets stuck (local minimum, saddle point, plateau).

16
Q

How does step size affect GD?

A

Too small: slow. Too large: divergence or oscillations.

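
The regimes can be seen on the toy objective F(θ) = θ²/2, where the update is θ ← (1 − γ)θ and is stable only for γ < 2 (illustrative example, not from the card):

```python
# GD on F(theta) = theta**2 / 2 with gradient theta:
# the iterate is multiplied by (1 - gamma) each step.
def run_gd(gamma, steps=20, theta0=1.0):
    theta = theta0
    for _ in range(steps):
        theta -= gamma * theta
    return theta

print(abs(run_gd(0.01)))   # too small: still far from the minimum at 0
print(abs(run_gd(0.5)))    # moderate: essentially converged
print(abs(run_gd(2.5)))    # too large: |theta| grows every step (divergence)
```
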
17
Q

What are common learning rate schedules?

A

Constant γ; diminishing γ_t → 0; exponential decay γ_t = γ₀βᵗ with 0 < β < 1; line search.

18
Q

What is excess risk decomposition?

A

R(f̂) − R* = (estimation error) + (approximation error).

19
Q

What is estimation error?

A

R(f̂) − R(f_{θ*}) — due to finite data.

20
Q

What is approximation error?

A

R(f_{θ*}) − R* — due to model class limitations.

21
Q

What is optimisation error?

A

R̂(f̂) − inf_θ R̂(f_θ) — due to not fully minimising empirical risk.

22
Q

Why is full GD expensive?

A

Computing ∇θ F requires evaluating all n samples; too slow for large n.

23
Q

What is stochastic gradient descent (SGD)?

A

θ_{t+1} = θ_t − γ_t ∇θ l(y_j, f_{θ_t}(x_j)), where j is one data point sampled uniformly at random at each step.
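
A sketch of this update for 1-D linear regression with squared loss (the toy data and model f_θ(x) = θx are illustrative):

```python
import random

# SGD: at each step, sample one index j and step along the negative
# gradient of the single-sample loss (y_j - theta * x_j)**2.
def sgd(xs, ys, gamma=0.05, steps=500, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        j = rng.randrange(len(xs))
        grad_j = -2 * (ys[j] - theta * xs[j]) * xs[j]
        theta -= gamma * grad_j
    return theta

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # y = 2x, so theta should approach 2
print(sgd(xs, ys))
```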

24
Q

What is the key idea of SGD?

A

Replace full gradient with unbiased stochastic estimate.

25
Why is the SGD step not always a descent direction?
Because the gradient of a single sample can point uphill for the full empirical risk.
26
What is mini-batch SGD?
Use a batch B_t ⊂ {1,…,n}: θ_{t+1} = θ_t − γ_t (1/|B_t|) Σ_{j∈B_t} ∇θ l(y_j, f_{θ_t}(x_j)).
27
What is an epoch?
One full pass through all data using batches.
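
The two cards above combined in a sketch: each epoch shuffles the indices and sweeps them in batches, averaging the per-sample gradients (toy 1-D linear regression; data and constants are illustrative):

```python
import random

# Mini-batch SGD: one epoch = one shuffled pass over all n points in batches.
def minibatch_sgd(xs, ys, gamma=0.02, batch_size=2, epochs=100, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                   # fresh order each epoch
        for start in range(0, len(idx), batch_size):
            batch = idx[start:start + batch_size]
            grad = sum(-2 * (ys[j] - theta * xs[j]) * xs[j] for j in batch) / len(batch)
            theta -= gamma * grad
    return theta

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]      # y = 2x
print(minibatch_sgd(xs, ys))   # approaches 2
```
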
28
Typical batch sizes?
32–256.
29
What is gradient clipping?
If ‖∇F‖ > c, rescale ∇F → c ∇F/‖∇F‖ (otherwise leave it unchanged) to prevent exploding gradients.
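
As a function (the threshold c and the two test gradients are illustrative):

```python
import math

# Clip a gradient vector to norm at most c: rescale only when the norm
# exceeds the threshold, otherwise return it unchanged.
def clip_gradient(grad, c):
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > c:
        return [c * g / norm for g in grad]
    return grad

print(clip_gradient([3.0, 4.0], c=1.0))   # norm 5 -> rescaled to norm 1
print(clip_gradient([0.3, 0.4], c=1.0))   # norm 0.5 -> left unchanged
```
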
30
What is early stopping?
Stop training when validation risk stops decreasing; acts as implicit regularisation.
31
Why does early stopping regularise?
Parameters remain small early in training, which acts similarly to an L2 penalty.
32
Why can SGD escape local minima?
Gradient noise can kick the iterates out of sharp minima, since SGD does not strictly follow descent directions.
33
Why do sharp minima generalise poorly?
Sharp minima tend to fit noise-specific details of the training set, so small shifts between training and test distributions change the loss a lot; flatter minima are more robust and generalise better.
34
What is momentum in optimisation?
θ_{t+1} = θ_t − γ ∇F(θ_t) + β (θ_t − θ_{t−1}).
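
The heavy-ball form of this update, sketched on the toy objective F(θ) = (θ − 3)²; the constants are illustrative:

```python
# Momentum (heavy ball): reuse part of the previous displacement
# beta * (theta_t - theta_{t-1}) on top of the plain gradient step.
def momentum_gd(grad, theta0, gamma=0.1, beta=0.9, steps=300):
    prev, theta = theta0, theta0
    for _ in range(steps):
        theta, prev = theta - gamma * grad(theta) + beta * (theta - prev), theta
    return theta

grad_F = lambda t: 2 * (t - 3)           # gradient of F(theta) = (theta - 3)^2
print(momentum_gd(grad_F, theta0=0.0))   # approaches the minimiser 3
```
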
35
What does momentum do?
Smooths updates and accelerates convergence in shallow directions.
36
Why introduce ADAM?
Different parameters may have different scales; need adaptive per-dimension learning rates.
37
What does ADAM do?
Tracks exponential moving averages of the gradient (first moment m_t) and its square (second moment v_t), then updates θ_{t+1} = θ_t − γ m̂_t/(√v̂_t + ε) after bias correction.
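
A sketch of ADAM on the toy objective F(θ) = (θ − 3)²; β₁ = 0.9 and β₂ = 0.999 are the usual defaults, while γ = 0.1 is chosen large purely for this toy problem:

```python
import math

# ADAM: exponential moving averages of the gradient (m) and its square (v),
# bias-corrected, with the step divided by sqrt(v_hat) + eps.
def adam(grad, theta0, gamma=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    theta, m, v = theta0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g        # first moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta -= gamma * m_hat / (math.sqrt(v_hat) + eps)
    return theta

grad_F = lambda t: 2 * (t - 3)    # gradient of F(theta) = (theta - 3)^2
print(adam(grad_F, theta0=0.0))   # settles near the minimiser 3
```
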
38
Why is ADAM effective?
Makes progress in every direction regardless of gradient scale.
39
What are overparameterised models?
Models with enough parameters to fit the training data exactly (interpolation).
40
Why do overparameterised models overfit?
They can achieve zero empirical risk while fitting the noise in the labels.
41
What is explicit regularisation?
Modify objective: J(θ)=R̂(θ)+λΩ(θ).
42
What is implicit regularisation?
Algorithm biases the solution without modifying objective (e.g., SGD selects minimum-norm interpolator).
43
What is the implicit bias of SGD for linear regression?
Among all interpolating solutions, SGD converges to the minimum Euclidean norm solution.
44
Does implicit regularisation depend on loss?
No — it occurs for both the quadratic and the logistic loss.
45
What is the double descent phenomenon?
Test error decreases → increases near interpolation → decreases again in extreme overparameterisation.
46
What causes double descent?
Beyond the interpolation threshold, larger models strengthen the implicit inductive bias (e.g. towards minimum-norm interpolators), yielding effectively simpler fitted functions.
47
How does GD implicitly regularise?
GD approximates the gradient flow of F̃(θ) = F(θ) + (γ/4)‖∇θF(θ)‖², i.e., it implicitly penalises large gradients.
48
What is a convex function?
F is convex if F(tη+(1−t)θ) ≤ tF(η)+(1−t)F(θ) for all θ,η and t∈[0,1].
49
What does ∇θF = 0 imply for convex F?
Any stationary point is a global minimum.
50
Do convex functions need to be differentiable?
No — convexity does not require differentiability.
51
How do we optimise non-smooth functions?
Replace gradient with a subgradient z ∈ ∂F(θ).
52
What is a subgradient?
A generalisation of the gradient that exists for convex functions even where they are non-smooth.
53
What is the GD update with subgradients?
θ_{t+1} = θ_t − γ z_t, where z_t ∈ ∂F(θ_t).
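
A sketch of this update on the non-smooth convex F(θ) = |θ|, with a diminishing step size γ_t = γ₀/t (one standard choice for subgradient methods; the toy objective is illustrative):

```python
# Subgradient of |theta|: sign(theta) away from 0; at 0 any z in [-1, 1]
# is valid, and we pick z = 0.
def subgradient_abs(theta):
    if theta > 0:
        return 1.0
    if theta < 0:
        return -1.0
    return 0.0

def subgradient_descent(theta0, gamma0=1.0, steps=200):
    theta = theta0
    for t in range(1, steps + 1):
        theta -= (gamma0 / t) * subgradient_abs(theta)   # gamma_t = gamma0 / t
    return theta

print(abs(subgradient_descent(theta0=5.0)))   # close to the minimiser 0
```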