Sunday, March 20, 2016

Machine Learning study questions

This is a basic set of study questions in machine learning. The post is incomplete; more questions will be added over time.


  1. What are bias and variance? What is the bias-variance trade-off?
  2. What is meant by the term "the curse of dimensionality"?
  3. Explain over-fitting with an example
  4. Explain Bayes' theorem with an example
  5. What is a VC (Vapnik-Chervonenkis) bound?
  6. What is the Hoeffding inequality? What does it mean?
  7. What are parametric, semi-parametric, and non-parametric methods? Give an example of each.
  8. What is supervised learning? Unsupervised learning? Reinforcement learning? Online learning?
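A quick way to make the bias-variance trade-off and overfitting (questions 1 and 3 above) concrete is to fit polynomials of increasing degree to noisy data: a low-degree fit underfits (high bias), while a high-degree fit chases the noise (high variance). A minimal numpy sketch — the sine target, noise level, and degrees are illustrative choices, not from any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training samples of a sine curve; the test set is the noiseless
# true function, so test MSE measures how well we recover the signal.
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

def poly_mse(degree):
    """Fit a degree-d polynomial by least squares; return (train, test) MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for d in (1, 3, 10):
    tr, te = poly_mse(d)
    print(f"degree {d:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error can only fall as the degree grows, while test error typically falls and then rises again — the classic U-shaped trade-off curve.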
  1. Briefly describe the perceptron model and how it works
  2. What would you look at as you design a linear regression and read its output?
  3. What is multicollinearity? Heteroskedasticity?
  4. What is a confidence interval? t-statistic? p-value?
  5. What is Laplacian smoothing? When and how would you use it?
  6. What is a logistic regression? Where would you use it? How do you read its output?
  7. Why is logistic regression a useful technique? Which kinds of data distributions does it work best with?
  8. What is a generalized additive model?
  9. What is gradient descent? Explain stochastic gradient descent. What assumptions does it make?
  10. How would you influence the rate of convergence of stochastic gradient descent?
  11. What is Hill Climbing? Give an example of where you would use this.
  12. What is cross-validation? Describe two different ways of performing this.
  13. Describe three distinct ways of dealing with data sets that have more features than samples available.
  14. What are neural networks? How do they work? What are the different kinds of neural nets?
  15. What is deep learning?
  16. Describe how one might design the structure of a neural net to solve a problem.
  17. What are the different techniques of optimizing weights to reflect the learning a neural network undergoes as it is trained with a data set? (e.g. genetic algorithms, particle swarm optimization, back propagation)
  18. What is maximum likelihood estimation? How would you use it?
  19. Describe a naive Bayes model with an example.
  20. What is a Probabilistic Graphical Model (PGM)? Where might it be used?
  21. What are kernel methods? Give examples of their use (e.g. kernel smoothing, support vector machines)
  22. What are genetic algorithms? Illustrate with an example where and how these can be used.
  23. What are support vector machines (SVMs)? How can they be built and used?
  24. What is a self-organizing map? How does it work? What is vector quantization?
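Gradient descent and its convergence rate (questions 9 and 10 above) are easiest to internalize with a tiny worked example. Below is a sketch of stochastic gradient descent on one-variable linear regression — the synthetic data, learning rate, and epoch count are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 3x + 1 plus Gaussian noise (true weights chosen here).
X = rng.uniform(-1, 1, (200, 1))
y = 3 * X[:, 0] + 1 + rng.normal(0, 0.1, 200)

w, b = 0.0, 0.0
lr = 0.1  # learning rate: the main lever on convergence speed
for epoch in range(50):
    # Stochastic: shuffle, then update on one sample at a time,
    # rather than computing the full-batch gradient.
    for i in rng.permutation(len(X)):
        err = (w * X[i, 0] + b) - y[i]
        # Gradient of the squared error for this single sample.
        w -= lr * err * X[i, 0]
        b -= lr * err
print(f"learned w={w:.2f}, b={b:.2f}")  # should be close to w=3, b=1
```

On question 10: too small a learning rate makes progress slow, too large makes the updates oscillate or diverge; decaying the rate over epochs, or mini-batching to reduce gradient noise, are the usual ways to influence convergence.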


  1. You are given a million points in 10-dimensional space. I have a hypothesis that these points are clustered in 8 groups, and I want to test this hypothesis. What methods would you use and why?
  2. What is the complexity of the method you would use for the above?
  3. Perform a relative computational complexity analysis of k-means clustering vs. hierarchical clustering. Does it make a difference to your analysis whether you use agglomerative or divisive hierarchical clustering?
  4. The clusters you see when you run your algorithms on my data are elongated hyper-ellipsoids. What does this tell you about the data? What would you do in this situation to make inference easier?

