What are Bayesian methods?
What does the Bayes Theorem say?
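For reference, the theorem in the usual notation (h a hypothesis, D the observed data):

```latex
P(h \mid D) = \frac{P(D \mid h)\, P(h)}{P(D)}
```

where P(h) is the prior, P(D|h) the likelihood, and P(h|D) the posterior.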
How is the hypothesis chosen in Bayesian methods?
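The two standard choices can be written as follows; with a uniform prior over H, the MAP hypothesis reduces to the maximum likelihood one:

```latex
h_{MAP} = \arg\max_{h \in H} P(h \mid D) = \arg\max_{h \in H} P(D \mid h)\, P(h)
\qquad
h_{ML} = \arg\max_{h \in H} P(D \mid h)
```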
How does brute force Bayesian learning work?
* return the hypothesis with the highest posterior probability (the MAP hypothesis)
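A rough sketch of the brute-force approach (the hypothesis space, prior, and data below are made up for illustration):

```python
# Brute-force MAP learning: compute P(D|h) * P(h) for EVERY hypothesis
# and return the maximizer. Toy hypothesis space: thresholds on one feature.

def likelihood(h, data):
    """P(D|h) under the noise-free assumption: 1 if h fits all examples, else 0."""
    return 1.0 if all(h(x) == y for x, y in data) else 0.0

def map_hypothesis(hypotheses, priors, data):
    """Return the hypothesis maximizing P(D|h) * P(h)."""
    return max(hypotheses, key=lambda h: likelihood(h, data) * priors[h])

hs = [lambda x, t=t: x >= t for t in (1, 2, 3)]   # illustrative hypotheses
priors = {h: 1.0 / len(hs) for h in hs}            # uniform prior
data = [(1, False), (2, True), (3, True)]
best = map_hypothesis(hs, priors, data)
print(best(2))  # → True (only the t=2 threshold fits all the data)
```

Enumerating H like this is only feasible for tiny hypothesis spaces, which is why the method is mainly of theoretical interest.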
What assumptions do we make on the hypotheses?
* deterministic, noise-free training data
How do we learn a real-valued function?
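The standard result: assuming the observed targets are the true function values corrupted by Gaussian noise, the maximum likelihood hypothesis is the one minimizing the sum of squared errors:

```latex
h_{ML} = \arg\min_{h \in H} \sum_{i=1}^{m} \bigl(d_i - h(x_i)\bigr)^2
```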
How do we learn a probabilistic function?
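For a probabilistic boolean-valued target (d_i in {0, 1}), the maximum likelihood hypothesis maximizes the cross-entropy-style objective:

```latex
h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} d_i \ln h(x_i) + (1 - d_i)\,\ln\bigl(1 - h(x_i)\bigr)
```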
How do we determine the most likely classification for new instances?
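The Bayes optimal classification weights every hypothesis's vote by its posterior, rather than committing to a single hypothesis:

```latex
v_{OB} = \arg\max_{v_j \in V} \; \sum_{h_i \in H} P(v_j \mid h_i)\, P(h_i \mid D)
```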
What is a Gibbs classifier?
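A minimal sketch of the idea (hypotheses and posterior values below are invented): instead of averaging over all hypotheses as the Bayes optimal classifier does, draw one hypothesis at random according to the posterior P(h|D) and classify with it. Its expected error is at most twice the Bayes optimal error.

```python
import random

def gibbs_classify(hypotheses, posterior, x, rng=None):
    """Sample h ~ P(h|D), then classify x with the sampled hypothesis."""
    rng = rng or random.Random(0)
    weights = [posterior[h] for h in hypotheses]
    h = rng.choices(hypotheses, weights=weights, k=1)[0]
    return h(x)

h_pos = lambda x: x > 0   # illustrative hypotheses
h_neg = lambda x: x < 0
posterior = {h_pos: 0.9, h_neg: 0.1}
print(gibbs_classify([h_pos, h_neg], posterior, 3))  # usually True, since P(h_pos|D) = 0.9
```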
What is a Naive Bayes classifier? When to use it?
What is the algorithm for Naive Bayes?
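The two steps, estimate P(v) and P(a_i|v) from frequency counts, then pick the class maximizing P(v) * prod_i P(a_i|v), can be sketched as follows; the weather-style data is invented for illustration:

```python
from collections import defaultdict

def train_nb(examples):
    """examples: list of (attribute_tuple, label). Returns prior and
    conditional probability estimates from simple frequency counts."""
    label_n = defaultdict(int)
    cond_n = defaultdict(int)   # (label, attr_index, attr_value) -> count
    for attrs, label in examples:
        label_n[label] += 1
        for i, v in enumerate(attrs):
            cond_n[(label, i, v)] += 1
    total = len(examples)
    prior = {c: label_n[c] / total for c in label_n}
    cond = lambda c, i, v: cond_n[(c, i, v)] / label_n[c]
    return prior, cond

def classify_nb(prior, cond, attrs):
    """argmax_v P(v) * prod_i P(a_i | v), assuming conditional independence."""
    def score(c):
        p = prior[c]
        for i, v in enumerate(attrs):
            p *= cond(c, i, v)
        return p
    return max(prior, key=score)

data = [(("sunny", "hot"), "no"), (("sunny", "cool"), "yes"),
        (("rain", "cool"), "yes"), (("rain", "hot"), "no")]
prior, cond = train_nb(data)
print(classify_nb(prior, cond, ("sunny", "cool")))  # → yes
```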
Is the assumption of conditional independence necessary?
What if no learning example with target value tv has the attribute value av?
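The standard fix is the m-estimate, which blends the observed frequency with a prior p using an equivalent sample size m, so a zero count no longer forces the whole product to zero. The numbers below are illustrative:

```python
def m_estimate(n_c, n, p, m):
    """m-estimate of probability: (n_c + m*p) / (n + m).
    n_c: examples with the target value that also have the attribute value;
    n:   examples with the target value;
    p:   prior estimate of the probability;
    m:   equivalent sample size (weight given to the prior)."""
    return (n_c + m * p) / (n + m)

print(m_estimate(0, 10, 0.5, 2))  # ≈ 0.083 instead of 0
```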
How to classify text with Naive Bayes?
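A bag-of-words sketch (documents, labels, and words below are made up): treat each word occurrence as an attribute whose distribution is independent of position, estimate P(word|label) from pooled counts with add-one (Laplace) smoothing, and score in log space to avoid underflow:

```python
import math
from collections import Counter

def train_text_nb(docs):
    """docs: list of (text, label). Learn document-frequency priors and
    per-label word counts over a shared vocabulary."""
    label_docs = Counter(label for _, label in docs)
    word_counts, vocab = {}, set()
    for text, label in docs:
        wc = word_counts.setdefault(label, Counter())
        for w in text.lower().split():
            wc[w] += 1
            vocab.add(w)
    return label_docs, word_counts, vocab, len(docs)

def classify_text_nb(model, text):
    """argmax over labels of log P(label) + sum_w log P(w|label),
    with add-one smoothing over the vocabulary."""
    label_docs, word_counts, vocab, n = model
    def log_score(label):
        total = sum(word_counts[label].values())
        s = math.log(label_docs[label] / n)
        for w in text.lower().split():
            s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(label_docs, key=log_score)

docs = [("buy cheap pills", "spam"), ("cheap cheap offer", "spam"),
        ("meeting at noon", "ham"), ("project meeting notes", "ham")]
model = train_text_nb(docs)
print(classify_text_nb(model, "cheap pills offer"))  # → spam
```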
When should the Expectation Maximization (EM) algorithm be used?
How can EM be used to estimate k-means?
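One way to see the connection: soft k-means is EM on a mixture of equal-variance Gaussians, alternating an E-step (compute each point's responsibility for each cluster) and an M-step (re-estimate each mean as the responsibility-weighted average). A 1-D sketch with made-up data and a fixed variance:

```python
import math
import random

def em_kmeans(points, k, iters=50, sigma=1.0, seed=0):
    """EM-style soft k-means in 1-D, assuming fixed equal variances."""
    rng = random.Random(seed)
    means = rng.sample(points, k)           # initialize means at data points
    for _ in range(iters):
        # E-step: responsibilities resp[i][j] = P(cluster j | point i)
        resp = []
        for x in points:
            w = [math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) for mu in means]
            s = sum(w)
            resp.append([wi / s for wi in w])
        # M-step: each mean becomes the responsibility-weighted average
        means = [
            sum(r[j] * x for r, x in zip(resp, points)) / sum(r[j] for r in resp)
            for j in range(k)
        ]
    return sorted(means)

print(em_kmeans([0.0, 0.2, 0.1, 5.0, 5.1, 4.9], 2))  # means near 0.1 and 5.0
```

Hard k-means is the limit of this procedure as sigma goes to zero: responsibilities collapse to 0/1 nearest-mean assignments.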