The code here was used to explore various machine learning and reinforcement learning algorithms as part of a class in our grad school program. In each assignment I ran experiments on two datasets to compare and contrast algorithms, aiming to highlight and intuitively understand the strengths and weaknesses of each one. An accompanying report with the experiment results was written for each assignment; the reports are excluded here but available on request.
Datasets: White Wine, Abalone
Supervised Learning: Comparison of five classifiers: ANN, KNN, Support Vector Machines, Decision Trees, and Boosting.
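For a flavor of what these experiments looked like, here is a minimal sketch of such a comparison, assuming scikit-learn; sklearn's built-in wine dataset stands in for the actual White Wine data, and the hyperparameters shown are illustrative defaults, not the tuned settings from the reports.

```python
# Minimal sketch of the classifier comparison (assumes scikit-learn).
# sklearn's built-in wine dataset stands in for the actual White Wine data.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

X, y = load_wine(return_X_y=True)

models = {
    "ANN":           MLPClassifier(max_iter=2000, random_state=0),
    "KNN":           KNeighborsClassifier(n_neighbors=5),
    "SVM":           SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Boosting":      AdaBoostClassifier(random_state=0),
}

for name, model in models.items():
    # Scale features so distance- and gradient-based learners are comparable.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```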
Randomized Optimization: Comparison of randomized optimization (RO) algorithms: Random Hill Climbing, Simulated Annealing, Genetic Algorithms, and MIMIC. I used the RO methods 1) to tune neural network hyperparameters, and 2) to compare performance on three optimization problems (Continuous Peaks, Traveling Salesman, and Flip Flop).
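To illustrate, below is a minimal from-scratch sketch of one RO method, simulated annealing, applied to Flip Flop (fitness = number of adjacent bits that differ). The state length, temperature schedule, and iteration count are illustrative assumptions, not the settings used in the actual experiments.

```python
# Minimal from-scratch sketch of simulated annealing on the Flip Flop problem.
# All parameter values here are illustrative assumptions.
import math
import random

def flip_flop(bits):
    """Fitness = number of adjacent pairs that differ (maximized by 0101...)."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

def simulated_annealing(length=50, t0=1.0, decay=0.995, iters=5000, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(length)]
    fit, temp = flip_flop(state), t0
    for _ in range(iters):
        i = rng.randrange(length)
        state[i] ^= 1  # neighbor: flip one random bit
        new_fit = flip_flop(state)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if new_fit >= fit or rng.random() < math.exp((new_fit - fit) / temp):
            fit = new_fit
        else:
            state[i] ^= 1  # undo the flip
        temp *= decay  # geometric cooling
    return state, fit

state, fit = simulated_annealing()
print(f"best fitness: {fit} / {len(state) - 1}")
```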
Unsupervised Learning and Dimensionality Reduction: Comparison of two clustering algorithms (K-Means and Expectation Maximization) and four dimensionality reduction (DR) algorithms (Principal Component Analysis, Independent Component Analysis, Randomized Projection, and Random Forest). I first compared each category of algorithms on its own, then ran DR to reduce the datasets and observed how the dimensionality-reduced data affected the unsupervised methods.
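A minimal sketch of the DR-then-clustering pipeline, assuming scikit-learn; synthetic blobs stand in for the real datasets here, and clustering quality is scored by agreement with the known generative labels.

```python
# Minimal sketch of clustering before and after dimensionality reduction.
# Synthetic blobs stand in for the actual datasets.
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=500, n_features=10, centers=4, random_state=0)

for name, Xr in [("raw", X), ("PCA->2d", PCA(n_components=2).fit_transform(X))]:
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xr)
    em = GaussianMixture(n_components=4, random_state=0).fit(Xr).predict(Xr)
    # Score cluster agreement with the known generative labels.
    print(f"{name}: K-Means ARI={adjusted_rand_score(y, km):.3f}, "
          f"EM ARI={adjusted_rand_score(y, em):.3f}")
```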
Reinforcement Learning: Comparison of Value Iteration, Policy Iteration, and Q-Learning on two different-sized grid world problems.
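A minimal sketch of Value Iteration on a small deterministic grid world; the grid size, step reward, and discount factor are illustrative assumptions, not the actual problem definitions used in the experiments.

```python
# Minimal sketch of value iteration on a tiny deterministic grid world.
# Grid layout, rewards, and discount are illustrative assumptions.
import numpy as np

ROWS, COLS, GOAL, GAMMA = 4, 4, (3, 3), 0.9
STEP_REWARD = -0.04
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

V = np.zeros((ROWS, COLS))
for _ in range(100):  # sweep until values converge
    V_new = V.copy()
    for r in range(ROWS):
        for c in range(COLS):
            if (r, c) == GOAL:
                continue  # terminal state keeps value 0
            # Bellman optimality backup over the deterministic moves;
            # moving off the grid leaves the agent in place.
            vals = []
            for dr, dc in MOVES:
                nr = min(max(r + dr, 0), ROWS - 1)
                nc = min(max(c + dc, 0), COLS - 1)
                reward = 1.0 if (nr, nc) == GOAL else STEP_REWARD
                vals.append(reward + GAMMA * V[nr, nc])
            V_new[r, c] = max(vals)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print(np.round(V, 2))
```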