True/False Flashcards

(60 cards)

1
Q

The mean-field theory for the Hopfield network yields the exact value for the critical storage capacity.

A

False

2
Q

That the energy cannot increase under the deterministic Hopfield dynamics is a consequence of the fact that the weights are symmetric.

A

True

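The card above can be checked numerically. A minimal sketch (assuming NumPy, Hebb's rule for a few random patterns, and asynchronous deterministic updates; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
patterns = rng.choice([-1, 1], size=(3, N)).astype(float)
W = patterns.T @ patterns / N        # Hebb's rule -> symmetric weights
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=N).astype(float)
E = energy(s)
for _ in range(1000):
    i = rng.integers(N)              # asynchronous update of one neuron
    b = W[i] @ s                     # local field
    s[i] = 1.0 if b >= 0 else -1.0   # sign(0) -> +1 by convention
    E_new = energy(s)
    assert E_new <= E + 1e-9         # energy never increases
    E = E_new
```

The assertion relies on W being symmetric with zero diagonal; with asymmetric weights the energy argument breaks down.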
3
Q

The stochastic update rule for the Hopfield network is different from the Metropolis algorithm.

A

True

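The difference between the two rules can be made concrete. A sketch with illustrative values for the inverse temperature β and the local field b (both rules satisfy detailed balance for the same Boltzmann distribution, yet they are not the same rule):

```python
import numpy as np

beta, b = 2.0, 0.3                     # inverse temperature, local field b_i
s = -1                                 # current state of neuron i

# Stochastic Hopfield (Glauber) rule: s_i is set to +1 with probability
# p(+1) = 1 / (1 + exp(-2*beta*b)), independent of the current state.
p_glauber = 1.0 / (1.0 + np.exp(-2 * beta * b))

# Metropolis rule: propose the flip s -> -s and accept it with
# probability min(1, exp(-beta*dE)), which depends on the current state.
dE = 2 * s * b                         # energy change of the proposed flip
p_metropolis = min(1.0, np.exp(-beta * dE))

# Here the Metropolis flip is accepted with probability 1, while the
# Glauber rule assigns +1 with probability ~0.77.
```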
4
Q

All stored patterns are local minima of the energy function.

A

False

5
Q

The detailed balance condition is a necessary condition for the Markov-Chain Monte-Carlo algorithm to converge.

A

False

6
Q

That the energy cannot increase under the deterministic Hopfield dynamics is valid only if the thresholds are put to zero.

A

False

7
Q

For a given ๐›ผ, the one-step error probability for the deterministic Hopfield network is lower when the diagonal weights are set to zero.

A

False

8
Q

In the limit N → ∞, the order parameter m_μ can have more than one component of order unity while the other components are small.

A

True

9
Q

The stochastic update rule for the Hopfield network is identical to the Metropolis algorithm.

A

False

10
Q

The detailed balance condition is a necessary condition for the Markov-Chain Monte-Carlo algorithm to converge.

A

False

11
Q

That the energy cannot increase under the deterministic Hopfield dynamics is a consequence of the fact that the weights are symmetric.

A

True

12
Q

The mean-field theory for the Hopfield network yields the exact value for the critical storage capacity.

A

False

13
Q

All stored patterns are local minima of the energy function.

A

False

14
Q

Not all local minima of the energy function of the Hopfield network correspond to stored patterns.

A

True

15
Q

The stochastic update rule of the Hopfield network is different from the Metropolis algorithm.

A

True

16
Q

That the energy cannot increase under the deterministic Hopfield dynamics is a consequence of the fact that the diagonal weights are set to zero.

A

False

17
Q

That the energy cannot increase under the deterministic Hopfield dynamics holds also when the thresholds are zero.

A

True

18
Q

The detailed balance condition is a necessary condition for the Markov-Chain Monte-Carlo algorithm to converge.

A

False

19
Q

A perceptron that solves the parity problem with N inputs contains at least N^2 hidden neurons.

A

False

20
Q

Increasing the number of hidden neurons in the network increases the risk of overfitting.

A

True

21
Q

Two hidden layers are necessary to approximate any real-valued function with N inputs and one output in terms of a perceptron.

A

False

22
Q

Using stochastic gradient descent in backpropagation assures that the energy either decreases or stays constant.

A

False

23
Q

In minimisation with a Lagrange multiplier, the function multiplying the Lagrange multiplier can also assume negative values.

A

False

24
Q

Some of the functions with 5 Boolean valued inputs and one Boolean valued output are linearly separable.

A

True

25
Different layers of a deep network learn at different speeds because their effects on the output are different.
True
26
The weights in a perceptron are symmetric.
False
27
L1-regularisation reduces small weights more than L2-regularisation.
True
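Why this holds, in one gradient step (illustrative learning rate and regularisation strength; the soft-thresholding of L1 at zero is ignored in this sketch):

```python
eta, lam = 0.1, 1.0                  # illustrative learning rate and strength
w_small = 0.01

def l1_step(w):
    return w - eta * lam * (1.0 if w > 0 else -1.0)  # d|w|/dw = sign(w)

def l2_step(w):
    return w - eta * lam * 2.0 * w                   # d(w^2)/dw = 2w

# L1 subtracts a fixed amount, L2 an amount proportional to w: the small
# weight shrinks by 0.1 under L1 but only by 0.002 under L2.
assert abs(w_small - l1_step(w_small)) > abs(w_small - l2_step(w_small))
```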
28
Weight decay helps against overfitting.
True
29
Increasing the number of hidden neurons in the network increases the risk of overfitting.
True
30
Two hidden layers are necessary to approximate any real-valued function with N inputs and one output in terms of a perceptron.
False
31
Pruning increases the risk of overfitting.
False
32
Using stochastic gradient descent in backpropagation assures that the energy either decreases or stays constant.
False
33
In minimisation with a Lagrange multiplier, the function multiplying the Lagrange multiplier must be equal to or larger than zero.
True
34
Back-propagation is a form of unsupervised learning.
False
35
To make use of back-propagation, it is necessary to know the target outputs of the input patterns in the training set.
True
36
"Early stopping" in back-propagation helps to avoid being stuck in local minima of energy.
False
37
"Early stopping" in back-propagation is a way to avoid overfitting.
True
38
Using a stochastic path through weight space in back-propagation helps to avoid being stuck in local minima of the energy.
True
39
Using a stochastic path through weight space in back-propagation prevents overfitting.
False
40
Using a stochastic path through weight space in back-propagation assures that the energy either decreases or stays constant.
False
41
There are 2^(2^n) functions with n Boolean valued inputs and one Boolean valued output.
True
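A brute-force check of the count for n = 2, using only the standard library:

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))              # 2^n possible input rows
# a Boolean function = one output choice (0 or 1) per input row
functions = list(product([0, 1], repeat=len(inputs)))
assert len(functions) == 2 ** (2 ** n) == 16
```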
42
None of the functions with 5 Boolean valued inputs and one Boolean valued output are linearly separable.
False
43
There are precisely 24 functions with 3 Boolean valued inputs and one Boolean valued output (equal to zero or one) where exactly three of the possible inputs map to zero.
False
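Why 24 is wrong: with 3 inputs there are 2^3 = 8 possible input rows, and choosing which 3 of them map to zero gives C(8,3) = 56. A brute-force count:

```python
from itertools import product
from math import comb

# enumerate all 2^8 = 256 functions of 3 Boolean inputs and count those
# whose output is zero for exactly three of the eight input rows
count = sum(1 for f in product([0, 1], repeat=8) if f.count(0) == 3)
assert count == comb(8, 3) == 56
```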
44
Oja's learning is a form of unsupervised learning.
True
45
The dimension of the output space of a Kohonen network must be equal to the dimension of the input space.
False
46
The number of neurons in the input layer of a perceptron is equal to the number of input patterns.
False
47
You need access to the state of all neurons in a multilayer perceptron when updating all weights through backpropagation.
True
48
Consider the Hopfield network. If a pattern is stable it must be an eigenvector of the weight matrix.
False
49
If you store two orthogonal patterns in a Hopfield network, they must always turn out unstable.
False
50
The Kohonen algorithm learns convex distributions better than concave ones.
True
51
The number of N-dimensional Boolean functions is 2^N.
False. It is (the number of output choices)^(the number of possible inputs) = 2^(2^N).
52
The weight matrices in a perceptron are symmetric.
False
53
Using g(b)=b as activation function and putting all thresholds to zero in a multilayer perceptron allows you to solve some linearly inseparable problems.
False
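The reason: with g(b) = b and zero thresholds the layers collapse into a single linear map, and a linear map cannot separate linearly inseparable data such as XOR. A sketch (assuming NumPy and random weight matrices W1, W2):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 2))           # hidden-layer weights
W2 = rng.normal(size=(1, 4))           # output-layer weights
x = rng.normal(size=2)

two_layer = W2 @ (W1 @ x)              # g(b) = b, thresholds zero
one_layer = (W2 @ W1) @ x              # equivalent single linear map
assert np.allclose(two_layer, one_layer)
```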
54
You need at least four radial basis functions for the XOR-problem to be linearly separable in the space spanned by the radial basis functions.
False
55
Consider p>2 patterns uniformly distributed on a circle. None of the eigenvalues of the covariance matrix of the patterns is zero.
True
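A quick numerical check (assuming NumPy; p = 8 points equally spaced on the unit circle as an example):

```python
import numpy as np

p = 8
theta = 2 * np.pi * np.arange(p) / p
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # p patterns in 2D
C = np.cov(X, rowvar=False)                            # 2x2 covariance matrix
eigvals = np.linalg.eigvalsh(C)
assert np.all(eigvals > 1e-12)    # both eigenvalues strictly positive
```

The patterns spread in both directions of the plane, so neither eigenvalue of the covariance matrix vanishes.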
56
Assume that the weight vector in Oja's rule corresponds to a stable steady state after a given iteration. The weight vector may change in the next iteration.
True
57
If your Kohonen network is supposed to learn the distribution P(ξ), it is important to generate the patterns ξ^(μ) before you start training the network.
False
58
All one-dimensional Boolean problems are linearly separable.
True
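All four one-input Boolean functions can be realised by a single threshold unit with output 1 when w·x + b > 0; a brute-force check over a small grid of illustrative weights and thresholds:

```python
from itertools import product

def separable(f):
    """f = (f(0), f(1)); search a small grid for w, b with
    (w*x + b > 0) == f(x) for x in {0, 1}."""
    for w in (-2, -1, 1, 2):
        for b in (-1.5, -0.5, 0.5, 1.5):
            if all((w * x + b > 0) == bool(f[x]) for x in (0, 1)):
                return True
    return False

# all 2^(2^1) = 4 one-input Boolean functions are linearly separable
assert all(separable(f) for f in product([0, 1], repeat=2))
```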
59
In Kohonen's algorithm, the neurons have fixed positions in the output space.
True
60
Some elements of the covariance matrix are variances.
True
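Concretely, the diagonal elements of the covariance matrix are the variances of the individual components (assuming NumPy; random sample data for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))          # 100 samples of a 3-component vector
C = np.cov(X, rowvar=False)
# diagonal elements = variances of single components,
# off-diagonal elements = covariances between different components
assert np.allclose(np.diag(C), X.var(axis=0, ddof=1))
```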