What is representation learning? Why is it important?
What is Principal Component Analysis (PCA)? What does it do?
What transformations give PCA a problem?
How can we estimate noise and redundancy in data?
How does the covariance matrix work?
How does PCA reduce the covariance between features?
- find an orthonormal matrix P such that, for X' = PX, the covariance matrix of X' is diagonal
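A minimal NumPy sketch of this idea on toy data (names and data are illustrative; note that with samples stored as rows, the projection is written `Xc @ P` rather than `P @ X`):

```python
import numpy as np

# Toy data: 100 samples, 3 correlated features (rows = samples).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])

Xc = X - X.mean(axis=0)                # center the data
C = Xc.T @ Xc / (len(Xc) - 1)          # covariance matrix of the features
eigvals, P = np.linalg.eigh(C)         # columns of P: orthonormal eigenvectors
Xp = Xc @ P                            # change of basis to the principal axes
Cp = Xp.T @ Xp / (len(Xp) - 1)         # covariance in the new basis

# Off-diagonal entries of Cp are (numerically) zero:
# the transformed features are decorrelated.
print(np.round(Cp, 10))
```

Dimensionality reduction then amounts to keeping only the columns of P with the largest eigenvalues.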
How does PCA perform dimensionality reduction?
What is kernel PCA?
What can PCA be used for?
What is an autoencoder?
How is the autoencoder composed? What do the components do?
What loss is generally used in an autoencoder? What is the learning procedure?
What types of autoencoders are there?
What are CNNs? How do they work?
What is a typical CNN architecture?
What is a neural word embedding?
- transforms each word of a text into a vector of numbers
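A toy sketch of the lookup involved (the vocabulary and embedding matrix here are made up and random; a real system such as Word2Vec learns the rows from data):

```python
import numpy as np

# Hypothetical vocabulary and embedding table: one d-dimensional row per word.
vocab = {"king": 0, "queen": 1, "apple": 2}
rng = np.random.default_rng(1)
E = rng.normal(size=(len(vocab), 4))   # 4-dimensional embeddings, random here

def embed(word):
    """Return the vector of numbers associated with a word."""
    return E[vocab[word]]

vector = embed("king")
print(vector.shape)   # each word maps to a fixed-length vector
```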
What is Word2Vec?
What is a knowledge graph? What is it used for?
What is Knowledge Graph Embedding and what is it used for?
What is knowledge graph completion?