State the lower bound on the probability of error when estimating X from Y, given the conditional entropy H(X|Y).
What is the name of that inequality?
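A minimal numerical sketch of the bound (the helper name is mine, and I use the weaker form of the inequality), assuming base-2 logarithms and an alphabet of size M:

```python
import math

def fano_error_lower_bound(h_x_given_y, alphabet_size):
    """Lower bound on P(error) when estimating X from Y.
    Fano: H(X|Y) <= H_b(Pe) + Pe*log2(M - 1). The weaker form
    H(X|Y) <= 1 + Pe*log2(M) rearranges to
    Pe >= (H(X|Y) - 1) / log2(M)."""
    return max(0.0, (h_x_given_y - 1.0) / math.log2(alphabet_size))

# H(X|Y) = 3 bits over an alphabet of size 16:
print(fano_error_lower_bound(3.0, 16))  # (3 - 1)/4 = 0.5
```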

What is the probability density function of a Gaussian random variable?
Pretty tough to remember, eh?

For a source with fixed power level \sigma^2, what is its maximum entropy?
It probably hasn’t been tested before. But just in case.
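As a quick sanity check (a sketch of my own, not part of the card), one can compare the differential entropy of a Gaussian against a uniform density of the same variance:

```python
import math

def h_gaussian(var):
    """Differential entropy (bits) of N(0, var): 0.5*log2(2*pi*e*var)."""
    return 0.5 * math.log2(2 * math.pi * math.e * var)

def h_uniform(var):
    """Differential entropy (bits) of a uniform density with the same
    variance: width w = sqrt(12*var), so h = log2(w)."""
    return math.log2(math.sqrt(12 * var))

var = 2.0
print(h_gaussian(var), h_uniform(var))  # the Gaussian entropy is larger
```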

State Shannon’s Channel Coding theorem.
For a source of entropy H and a channel of capacity C > H, there exists a coding scheme such that the source can be sent reliably through the channel with error rate lower than any arbitrary epsilon > 0.
State the Noisy Channel Coding Theorem. Which is more effective in increasing capacity: SNR or W?
P is the average transmitted power, N_0 is the power spectral density of the noise, and W is the bandwidth: C = W log_2(1 + P/(N_0 W)). One can show that C -> (P/N_0) log_2(e) as W tends to infinity, so widening the band alone cannot increase capacity indefinitely. However, C grows without bound as the SNR = P/(N_0 W) is increased.
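The trade-off can be checked numerically. A short sketch, assuming the AWGN capacity formula C = W log_2(1 + P/(N_0 W)); the function name is mine:

```python
import math

def capacity(P, N0, W):
    """Shannon capacity of an AWGN channel: C = W * log2(1 + P/(N0*W))."""
    return W * math.log2(1.0 + P / (N0 * W))

P, N0 = 1.0, 1e-3
# Widening the band: C saturates near (P/N0)*log2(e) ~ 1442.7 bits/s
for W in (1e3, 1e4, 1e5, 1e6):
    print(W, capacity(P, N0, W))
# Raising power (hence SNR) at fixed W: C keeps growing without bound
for P2 in (1.0, 10.0, 100.0):
    print(P2, capacity(P2, N0, 1e3))
```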

What is a necessary condition on the code lengths such that the resulting code is uniquely decodable?
Please name the inequality.
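The condition (the Kraft–McMillan inequality, sum of D^{-l_i} <= 1) can be checked in one line; the helper name is mine:

```python
def satisfies_kraft(lengths, D=2):
    """Kraft-McMillan inequality: a uniquely decodable D-ary code with
    word lengths l_i must satisfy sum(D**-l_i) <= 1."""
    return sum(D ** -l for l in lengths) <= 1

print(satisfies_kraft([1, 2, 3, 3]))  # True: 1/2 + 1/4 + 1/8 + 1/8 = 1
print(satisfies_kraft([1, 1, 2]))     # False: 1/2 + 1/2 + 1/4 > 1
```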

What is the inescapable bound of the frequency-time uncertainty product, given by the uncertainty principle?
How does it show that information is fundamentally quantised?
The bound is \Delta t \Delta f >= 1/4\pi. For any closed region of the time-frequency plane with area A, there can be at most 4\pi A independent quanta of data.
State the functional and Fourier form of the Gabor wavelet.
Rule 1: Love Gabor wavelets.
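A sketch of one common convention for the Gabor wavelet, a Gaussian envelope modulating a complex exponential (conventions differ on the constant inside the envelope; the parameters t0, f0, a here are illustrative):

```python
import cmath
import math

def gabor(t, t0=0.0, f0=1.0, a=1.0):
    """Gabor wavelet (one convention):
    g(t) = exp(-pi*(t - t0)**2 / a**2) * exp(2j*pi*f0*t).
    Its Fourier transform is again a Gaussian, centred at f0."""
    envelope = math.exp(-math.pi * (t - t0) ** 2 / a ** 2)
    carrier = cmath.exp(2j * math.pi * f0 * t)
    return envelope * carrier

print(abs(gabor(0.0)))  # envelope peaks (value 1.0) at t = t0
```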

Define the KL distance.
The KL distance D(p||q) = \sum_x p(x) log(p(x)/q(x)) measures the inefficiency of coding optimally for q(x) when the true distribution is p(x).
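A direct numerical sketch of D(p||q) in bits (the function name and the toy distributions are mine):

```python
import math

def kl_divergence(p, q):
    """D(p||q) = sum p(x) * log2(p(x)/q(x)), in bits: the expected
    extra code length paid for using a code optimal for q on data
    drawn from p. Terms with p(x) = 0 contribute nothing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]
print(kl_divergence(p, p))  # 0.0 -- no inefficiency
print(kl_divergence(p, q))  # 0.25 bits of overhead per symbol
```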

Explain how Fourier/Gabor/DCT transform manages to compress data.
Neighbouring pixels in an image are often highly correlated. These transforms use decorrelating basis functions in their expansions. As a result, the coefficients are non-uniform, often peaked at very few places, which lowers the entropy. By Shannon's source coding theorem, shorter code lengths are then possible, so even lossless image coding can achieve large compression. If we additionally quantize the coefficients coarsely in the encoder (as JPEG does), we can obtain compression factors around 30:1 without much loss in image quality.
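The energy-compaction effect can be seen on a toy 1-D signal with a plain-Python DCT-II (the implementation and the ramp example are my own sketch, not JPEG's actual pipeline):

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

# A smooth (highly correlated) ramp: energy piles into a few coefficients
x = [float(i) for i in range(8)]
c = dct(x)
energy = sum(v * v for v in c)                                  # = sum(x**2) by Parseval
top2 = sum(v * v for v in sorted(c, key=abs, reverse=True)[:2])
print(top2 / energy)  # close to 1: almost all energy in 2 coefficients
```
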
Define the auto-correlation function. How is it able to extract a periodic component from noise?
The original signal is often coherent, while the noise is not. The auto-correlation function can extract the periodic component because the noise terms average themselves out in the process.
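A sketch of this averaging-out effect on a sine buried in Gaussian noise (the biased estimator and the parameters are illustrative choices of mine):

```python
import math
import random

def autocorrelation(x, lag):
    """Biased sample autocorrelation R(lag) = (1/N) * sum x[n]*x[n+lag]."""
    N = len(x)
    return sum(x[n] * x[n + lag] for n in range(N - lag)) / N

random.seed(0)
period = 20
# Unit-amplitude sine buried in noise of comparable amplitude
x = [math.sin(2 * math.pi * n / period) + random.gauss(0, 1.0)
     for n in range(4000)]

# Noise decorrelates at nonzero lag; the periodic part survives:
print(autocorrelation(x, period))       # positive peak, roughly +0.5
print(autocorrelation(x, period // 2))  # negative trough, roughly -0.5
```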

What is the ideal function used for retrieving the input signal from sampled points?
Given that the sampling frequency is f_s.
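Assuming the intended answer is the sinc kernel, here is a sketch of Whittaker–Shannon interpolation (truncation to a finite sample list makes it an approximation):

```python
import math

def sinc_reconstruct(samples, f_s, t):
    """Whittaker-Shannon interpolation:
    x(t) = sum_n x[n] * sinc(f_s*t - n), sinc(u) = sin(pi*u)/(pi*u).
    Exact for bandlimited signals sampled above the Nyquist rate
    (up to truncation of the infinite sum)."""
    def sinc(u):
        return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(x_n * sinc(f_s * t - n) for n, x_n in enumerate(samples))

# 1 Hz sine sampled at 8 Hz, reconstructed at an off-grid instant
f_s = 8.0
samples = [math.sin(2 * math.pi * n / f_s) for n in range(256)]
t = 5.3
print(sinc_reconstruct(samples, f_s, t), math.sin(2 * math.pi * t))
```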

Describe the compression strategies of JPEG.
Describe the compression strategies deployed by JPEG-2000.
How does it improve on JPEG?
Does it have any disadvantage?
How does IrisCode work?
List two results that prove bandlimited signals have finite degrees of freedom.
What does it mean for a sequence to be algorithmically random?
A sequence of length n is algorithmically random if its Kolmogorov complexity is at least n.
That is, the shortest program that generates the sequence is the listing of the sequence itself.
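Kolmogorov complexity is uncomputable, but a general-purpose compressor gives an upper bound on it. A sketch using zlib as a very rough proxy; note the "noisy" string below is only pseudorandom, so it is not algorithmically random in the strict sense, merely incompressible to zlib:

```python
import random
import zlib

random.seed(1)
regular = b"01" * 5000                                    # highly regular, 10000 bytes
noisy = bytes(random.getrandbits(8) for _ in range(10000))  # pseudorandom, 10000 bytes

# The regular sequence has a short description; zlib finds it.
# The noisy one does not compress at all (output can exceed input).
print(len(zlib.compress(regular)))  # far smaller than 10000
print(len(zlib.compress(noisy)))    # close to (or above) 10000
```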