# Machine Learning - List of questions

## Learning Theory

  1. Describe bias and variance with examples.
  2. What is Empirical Risk Minimization?
  3. What are the union bound and Hoeffding's inequality? (See the statements after this list.)
  4. Write the formulae for training error and generalization error. Point out the differences.
  5. State the uniform convergence theorem and derive it.
  6. What is the sample complexity bound of the uniform convergence theorem?
  7. What is the error bound of the uniform convergence theorem?
  8. What is the bias-variance trade-off theorem?
  9. From the bias-variance trade-off, can you derive the bound on training set size?
  10. What is the VC dimension?
  11. What does the training set size depend on for a finite and infinite hypothesis set? Compare and contrast.
  12. What is the VC dimension for an n-dimensional linear classifier?
  13. How is the VC dimension of an SVM bounded even though its features may be projected into an infinite-dimensional space?
  14. Considering that Empirical Risk Minimization is an NP-hard problem, how do the logistic regression and SVM losses work?
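
For questions 3, 6, and 7, the standard statements for a finite hypothesis class H with m i.i.d. training examples (a quick-reference sketch of the usual formulation, not a full derivation):

```latex
% Hoeffding's inequality for the empirical mean of i.i.d. Bernoulli(\phi) variables:
\[
P\bigl(\,|\hat{\phi} - \phi| > \gamma\,\bigr) \le 2\exp(-2\gamma^2 m)
\]
% The union bound over all h in a finite class H gives uniform convergence:
% with probability at least 1 - \delta, for every h \in H,
\[
|\varepsilon(h) - \hat{\varepsilon}(h)| \le \sqrt{\frac{1}{2m}\,\log\frac{2|H|}{\delta}}
\]
% Solving for m gives the sample complexity bound: to guarantee the above
% deviation is at most \gamma for all h, it suffices that
\[
m \ge \frac{1}{2\gamma^2}\,\log\frac{2|H|}{\delta}
\]
```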

## Model and feature selection

  1. Why are model selection methods needed?
  2. How do you trade off bias against variance?
  3. What are the different attributes that can be selected by model selection methods?
  4. Why is cross-validation required?
  5. Describe different cross-validation techniques.
  6. What is hold-out cross-validation? What are its advantages and disadvantages?
  7. What is k-fold cross-validation? What are its advantages and disadvantages? (See the sketch after this list.)
  8. What is leave-one-out cross-validation? What are its advantages and disadvantages?
  9. Why is feature selection required?
  10. Describe some feature selection methods.
  11. What is the forward feature selection method? What are its advantages and disadvantages?
  12. What is the backward feature selection method? What are its advantages and disadvantages?
  13. What is the filter feature selection method? Describe two examples.
  14. What are mutual information and KL divergence?
  15. Describe KL divergence intuitively.
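
For question 7, a minimal k-fold cross-validation sketch, assuming scikit-learn is available; the iris dataset and logistic regression model are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each of the 5 folds is held out once while the model trains on the other 4;
# averaging the fold scores gives a lower-variance estimate of generalization.
scores = cross_val_score(model, X, y, cv=5)
print(f"fold accuracies: {scores}, mean: {scores.mean():.3f}")
```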

## Curse of dimensionality

  1. Describe the curse of dimensionality with examples.
  2. What is the local constancy (smoothness) prior, and how does it act as regularization?

## Universal approximation of neural networks

  1. State the universal approximation theorem. What technique is used to prove it?
  2. What is a Borel measurable function?
  3. Given the universal approximation theorem, why can an MLP still fail to reach an arbitrarily small positive error?

## Deep Learning motivation

  1. What is the mathematical motivation of Deep Learning as opposed to standard Machine Learning techniques?
  2. In standard Machine Learning vs. Deep Learning, how is the order of number of samples related to the order of regions that can be recognized in the function space?
  3. What are the reasons for choosing a deep model as opposed to a shallow one? (1. The number of distinguishable regions is O(2^k) vs. O(k), where k is the number of training examples. 2. The number of linear regions carved out in the function space grows exponentially with depth.)
  4. How does Deep Learning tackle the curse of dimensionality?

## Support Vector Machine

  1. How can the SVM optimization function be derived from the logistic regression optimization function?
  2. What is a large margin classifier?
  3. Why is an SVM an example of a large margin classifier?
  4. SVM being a large margin classifier, is it influenced by outliers? (Yes, if C is large; otherwise not.)
  5. What is the role of C in SVM?
  6. In SVM, what is the angle between the decision boundary and theta?
  7. What is the mathematical intuition of a large margin classifier?
  8. What is a kernel in SVM? Why do we use kernels in SVM?
  9. What is a similarity function in SVM? Why is it named so?
  10. How are the landmarks initially chosen in an SVM? How many and where?
  11. Can we apply the kernel trick to logistic regression? Why is it not used in practice then?
  12. What is the difference between logistic regression and SVM without a kernel? (Only in implementation – one is much more efficient and has good optimization packages)
  13. How does the SVM parameter C affect the bias/variance trade-off? (Remember C = 1/lambda; as lambda increases, variance decreases. See the sketch after this list.)
  14. How does the SVM kernel parameter sigma^2 affect the bias/variance trade off?
  15. Can any similarity function be used for SVM? (No, have to satisfy Mercer’s theorem)
  16. Logistic regression vs. SVMs: When to use which one? (Let n be the number of features and m the number of training samples. If n is large relative to m, use logistic regression or an SVM with a linear kernel. If n is small and m is intermediate, use an SVM with a Gaussian kernel. If n is small and m is massive, create or add more features, then use logistic regression or an SVM without a kernel.)
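
For questions 13 and 14, a minimal sketch of how C and the kernel width shift the bias/variance trade-off, assuming scikit-learn; note that sklearn parameterizes the Gaussian kernel by gamma, which plays the role of 1/(2·sigma^2), and the toy dataset is a placeholder.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

# Large C (small lambda) -> low bias, high variance; small C -> the reverse.
# Large gamma (small sigma^2) -> wiggly boundary, i.e. higher variance.
for C in (0.01, 1.0, 100.0):
    for gamma in (0.1, 1.0, 10.0):
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        print(f"C={C:<6} gamma={gamma:<4} cv accuracy={score:.3f}")
```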

## Bayesian Machine Learning

  1. What are the differences between the “Bayesian” and “Frequentist” approaches to Machine Learning?
  2. Compare and contrast maximum likelihood and maximum a posteriori estimation. (See the comparison after this list.)
  3. How do Bayesian methods do automatic feature selection?
  4. What do you mean by Bayesian regularization?
  5. When will you use Bayesian methods instead of Frequentist methods? (Small dataset, large feature set)
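
For questions 2 and 4, the two estimators side by side (a standard formulation; D denotes the training data):

```latex
% Maximum likelihood picks the parameters that make the data most probable:
\[
\theta_{\mathrm{MLE}} = \arg\max_{\theta}\; p(D \mid \theta)
\]
% Maximum a posteriori additionally weights parameters by a prior:
\[
\theta_{\mathrm{MAP}} = \arg\max_{\theta}\; p(D \mid \theta)\, p(\theta)
\]
% With a Gaussian prior \theta \sim \mathcal{N}(0, \tau^2 I), the log-posterior
% acquires a -\|\theta\|^2 / (2\tau^2) term, i.e. L2 (Bayesian) regularization.
```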

## Regularization

  1. What is L1 regularization?
  2. What is L2 regularization?
  3. Compare L1 and L2 regularization.
  4. Why does L1 regularization result in sparse models? (See the sketch after this list.)
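
For questions 3 and 4, a minimal sketch contrasting the two penalties, assuming scikit-learn; the synthetic dataset and alpha values are placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# L1 (Lasso) drives many coefficients exactly to zero; L2 (Ridge) only
# shrinks them toward zero, so the model stays dense.
lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print("L1 nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))
print("L2 nonzero coefficients:", int(np.sum(ridge.coef_ != 0)))
```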

## Evaluation of Machine Learning systems

  1. What are accuracy, sensitivity, specificity, and ROC?
  2. What are precision and recall? (See the sketch after this list.)
  3. Describe the t-test in the context of Machine Learning.
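
For questions 1 and 2, a minimal sketch computing these metrics from confusion-matrix counts; the toy labels are placeholders.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

accuracy    = (tp + tn) / len(y_true)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)   # recall == sensitivity
specificity = tn / (tn + fp)
print(accuracy, precision, recall, specificity)
```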

## Clustering

  1. Describe the k-means algorithm. (See the sketch after this list.)
  2. What is the distortion function? Is it convex or non-convex?
  3. Tell me about the convergence of the distortion function.
  4. Topic: EM algorithm
  5. What is the Gaussian Mixture Model?
  6. Describe the EM algorithm intuitively.
  7. What are the two steps of the EM algorithm?
  8. Compare GMM vs GDA.
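
For questions 1-3, a minimal NumPy sketch of k-means; each iteration alternates the assignment and update steps, and each step can only decrease the (non-convex) distortion J = sum_i ||x_i - mu_{c_i}||^2. Empty-cluster handling is omitted for brevity.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: attach each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its points.
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5])
labels, centroids = kmeans(X, k=2)
print(centroids)
```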

## Dimensionality Reduction

  1. Why do we need dimensionality reduction techniques? (Data compression, speeding up learning algorithms, and visualizing data.)
  2. Why do we need PCA and what does it do? (PCA tries to find a lower-dimensional surface such that the sum of the squared projection errors is minimized.)
  3. What is the difference between linear regression and PCA?
  4. What are the two pre-processing steps that should be applied before doing PCA? (Mean normalization and feature scaling; see the sketch after this list.)
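
For questions 2 and 4, a minimal sketch, assuming scikit-learn; the iris dataset and the choice of two components are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)

# Pre-processing: mean normalization and feature scaling in one step.
X_std = StandardScaler().fit_transform(X)

# Project onto the top-2 principal components, the subspace that minimizes
# the sum of squared projection errors.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_)
```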

## Basics of Natural Language Processing

  1. What is word2vec?
  2. What is t-SNE? Why do we use PCA instead of t-SNE?
  3. What is sampled softmax?
  4. Why is it difficult to train an RNN with SGD?
  5. How do you tackle the problem of exploding gradients? (By gradient clipping; see the sketch after this list.)
  6. What is the problem of vanishing gradients? (The RNN tends not to remember much from the distant past.)
  7. How do you tackle the problem of vanishing gradients? (By using an LSTM.)
  8. Explain the memory cell of an LSTM. (The LSTM allows forgetting of data and using long memory when appropriate.)
  9. What type of regularization does one use in an LSTM?
  10. What is Beam Search?
  11. How to automatically caption an image? (CNN + LSTM)
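
For question 5, a minimal gradient-clipping sketch, assuming PyTorch; the RNN dimensions, dummy loss, and max_norm value are placeholders.

```python
import torch
import torch.nn as nn

model = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(4, 10, 8)      # (batch, seq_len, features)

output, _ = model(x)
loss = output.pow(2).mean()    # dummy loss, just to produce gradients
loss.backward()

# Rescale all gradients so their global norm is at most 1.0, which is what
# keeps exploding gradients from blowing up the parameter update.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```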

## Miscellaneous

  1. What is the difference between a loss function, a cost function, and an objective function?