kushagra06/CS221_AI

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 


CS221 is "the" intro AI class at Stanford, and [ this playlist ] on YouTube lists the video lectures of CS221 Autumn 2018-19 (someone appears to have uploaded the videos without knowing the terms of the class). Having access to the video lectures is great; it makes going through the slides much easier. Since I didn't pay for the course (I am not a full-time Stanford student), the difference is that you don't get to ask the TAs, submit the projects, or get any feedback, but you do get access to the notes and slides from the course website, and you get to learn CS221 (and that's what matters most). I was also lucky to have access to the CS221 Piazza class (the course's Q&A forum), since I had a Stanford email account (I was a Stanford visiting student). All in all, if you want to learn: stay truthful, learn the contents well, be curious, and maintain the Honor Code. CS221 is exciting!

Grade structure: Homework 60%, Exam 20%, Final Project 20%

What do web search, speech recognition, face recognition, machine translation, autonomous driving, and automatic scheduling have in common? These are all complex real-world problems, and the goal of artificial intelligence (AI) is to tackle these with rigorous mathematical tools. In this course, we will learn the foundational principles that drive these applications and practice implementing some of these systems. Specific topics include machine learning, search, game playing, Markov decision processes, constraint satisfaction, graphical models, and logic. The main goal of the course is to equip us with the tools to tackle new AI problems we might encounter in life.

Books: Artificial Intelligence: A Modern Approach (AIMA-pdf) and The Elements of Statistical Learning (pdf). AIMA is a great book. Also here: the book's pseudocode algorithms, the AIMA code repo, and resources from the book.

Homeworks (Python 2.7):

☍ 1. Foundations ⛫ ( zip )

☍ 2. Sentiment classification ⛫ ( zip )

☍ 3. Text reconstruction ⛫ ( zip )

☍ 4. Blackjack ⛫ ( zip )

☍ 5. Pac-Man ⛫ ( zip )

☍ 6. Course scheduling ⛫ ( zip )

☍ 7. Car tracking ⛫ ( zip )

☍ 8. Language and logic ⛫ ( zip )

@ Paper Projects | Guidelines | MIT 6.034 Artificial Intelligence

Course :

♞ INTRODUCTION

𓁅 Overview of course, Optimization [ slide1p ] , [ slide6p ]
☄ N.O.T.E.S
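The optimization lecture's core idea, minimizing a continuous function by following its gradient, fits in a few lines. A minimal sketch of my own (not course code; the function name and toy objective are mine), minimizing f(w) = (w - 4)²:

```python
def gradient_descent(gradient, w0, eta=0.1, steps=100):
    """Minimize a differentiable function by repeatedly stepping against its gradient."""
    w = w0
    for _ in range(steps):
        w -= eta * gradient(w)
    return w

# f(w) = (w - 4)^2, so f'(w) = 2 * (w - 4); the minimizer is w = 4
w_star = gradient_descent(lambda w: 2 * (w - 4), w0=0.0)
```

The step size eta trades off speed against stability; this is the same update that SGD applies per-example later in the course.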

♞ MACHINE LEARNING

𓁅 Linear classification, Loss minimization, Stochastic gradient descent [ slide1p ] , [ slide6p ]
𓁅 Section: optimization, probability, Python (review) [ slide ]
𓁅 Features and non-linearity, Neural networks, Nearest neighbors [ slide1p ] , [ slide6p ]
𓁅 Generalization, Unsupervised learning, K-means [ slide1p ] , [ slide6p ]
𓁅 Section: Backpropagation and scikit-learn [ slide ]
☄ N.O.T.E.S
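To make "linear classification via SGD on a loss" concrete, here is a toy sketch I wrote (the dataset and function name are mine, not from the homeworks): a linear classifier trained by stochastic gradient descent on the hinge loss max(0, 1 - y·w·x).

```python
import random

def sgd_hinge(data, d, eta=0.1, epochs=50):
    """Train weights w by SGD on the hinge loss; update only when the margin is < 1."""
    w = [0.0] * d
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin < 1:  # inside the margin or misclassified: take a gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

random.seed(0)
# Tiny linearly separable dataset labeled by sign(x1 - x2)
train = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([2.0, 1.0], 1), ([1.0, 2.0], -1)]
w = sgd_hinge(train, d=2)
```

After training, every example sits on the correct side of the hyperplane w·x = 0.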

♞ SEARCH

𓁅 Tree search, Dynamic programming, Uniform cost search [ slide1p ] , [ slide6p ]
𓁅 A*, Consistent heuristics, Relaxation [ slide1p ] , [ slide6p ]
𓁅 Section: UCS, Dynamic programming, A* [ slide ]
☄ N.O.T.E.S
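Uniform cost search is short enough to sketch in full. This is my own toy version (the graph is made up): it pops states off a priority queue in order of cheapest path cost, which guarantees the first time the goal is popped, the path is optimal.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand states in order of cheapest path cost; returns (cost, path)."""
    frontier = [(0, start, [start])]    # priority queue keyed on path cost so far
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return float('inf'), []

# A -> B -> C -> D (cost 3) beats the direct A -> C edge (cost 4)
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}
cost, path = uniform_cost_search(graph, 'A', 'D')
```

A* is the same loop with the priority changed from cost to cost + heuristic(state).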

♞ MARKOV DECISION PROCESSES

𓁅 Policy evaluation, Policy improvement, Policy iteration, Value iteration [ slide1p ] , [ slide6p ]
𓁅 Reinforcement learning, Monte Carlo, SARSA, Q-learning, Exploration/exploitation, Function approximation [ slide1p ] , [ slide6p ]
𓁅 Section: Deep reinforcement learning [ slide ]
☄ N.O.T.E.S

♞ GAME PLAYING

𓁅 Minimax, Expectimax, Evaluation functions, Alpha-beta pruning [ slide1p ] , [ slide6p ]
𓁅 TD learning, Game theory [ slide1p ] , [ slide6p ]
𓁅 Section: AlphaZero [ slide ]
☄ N.O.T.E.S
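Minimax with alpha-beta pruning skips branches that cannot change the final decision. A toy sketch of my own (the nested-list tree encoding is mine): alpha tracks the best value the max player can force, beta the best for the min player, and a branch is cut once alpha ≥ beta.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta pruning over a game tree given as nested lists; leaves are scores."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the min player would never let play reach here
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # prune: the max player already has a better option elsewhere
    return value

# max over (min(3, 5), min(2, 9)) = 3; the 9 is pruned once the second min sees 2
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, float('-inf'), float('inf'), True)
```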

♞ CONSTRAINT SATISFACTION PROBLEMS

𓁅 Factor graphs, Backtracking search, Dynamic ordering, Arc consistency [ slide1p ] , [ slide6p ]
𓁅 Beam search, Local search, Conditional independence, Variable elimination [ slide1p ] , [ slide6p ]
𓁅 Section: CSPs [ slide ]
☄ N.O.T.E.S
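Backtracking search assigns variables one at a time and undoes an assignment as soon as it violates a constraint. My own minimal sketch on the classic map-coloring CSP (the variable names follow AIMA's Australia example; the code is mine):

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search for a CSP with pairwise not-equal constraints."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result:
                return result
            del assignment[var]   # dead end: undo and try the next value
    return None

# 3-coloring: WA, NT, SA form a triangle, and Q touches NT and SA
variables = ['WA', 'NT', 'SA', 'Q']
domains = {v: ['red', 'green', 'blue'] for v in variables}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q'], 'Q': ['NT', 'SA']}
solution = backtrack({}, variables, domains, neighbors)
```

Dynamic ordering and arc consistency (from the lecture) are refinements of exactly this loop: pick var more cleverly, and prune domains before recursing.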

♞ BAYESIAN NETWORKS

𓁅 Bayesian inference, Marginal independence, Hidden Markov models [ slide1p ] , [ slide6p ]
𓁅 Forward-backward, Gibbs sampling, Particle filtering [ slide1p ] , [ slide6p ]
𓁅 Section: Bayesian networks [ slide ]
𓁅 Learning Bayesian networks, Laplace smoothing, Expectation Maximization [ slide1p ] , [ slide6p ] , [ supplementary ]
☄ N.O.T.E.S
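The forward pass of forward-backward computes the probability of an observation sequence under an HMM by summing over all hidden paths. A toy sketch of my own (the umbrella/weather model is a standard AIMA-style example; the code and names are mine):

```python
def forward(prior, trans, emit, observations):
    """HMM forward algorithm: P(observations), summing over hidden state paths."""
    # alpha[h] = P(first obs, hidden state = h)
    alpha = {h: prior[h] * emit[h][observations[0]] for h in prior}
    for obs in observations[1:]:
        # propagate through the transition model, then weight by the emission
        alpha = {h2: sum(alpha[h1] * trans[h1][h2] for h1 in alpha) * emit[h2][obs]
                 for h2 in prior}
    return sum(alpha.values())

# Toy weather HMM: hidden {rain, sun}, observed {umbrella, none}
prior = {'rain': 0.5, 'sun': 0.5}
trans = {'rain': {'rain': 0.7, 'sun': 0.3}, 'sun': {'rain': 0.3, 'sun': 0.7}}
emit = {'rain': {'umbrella': 0.9, 'none': 0.1},
        'sun':  {'umbrella': 0.2, 'none': 0.8}}
p = forward(prior, trans, emit, ['umbrella', 'umbrella'])
```

A sanity check: the probabilities of all possible observation sequences of a given length sum to 1.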

♞ LOGIC

𓁅 Syntax versus semantics, Propositional logic, Horn clauses [ slide1p ] , [ slide6p ]
𓁅 First-order logic, Resolution [ slide1p ] , [ slide6p ]
☄ N.O.T.E.S
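Inference over Horn clauses can be done by forward chaining: repeatedly apply modus ponens until no new facts appear. My own minimal sketch (the rule encoding and example facts are mine):

```python
def forward_chain(facts, rules):
    """Modus ponens over propositional Horn clauses; rules are (premises, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)   # all premises hold, so derive the conclusion
                changed = True
    return known

rules = [({'rain'}, 'wet'), ({'wet', 'cold'}, 'ice')]
derived = forward_chain({'rain', 'cold'}, rules)
```

This procedure is sound and complete for Horn knowledge bases; full resolution (from the lecture) handles arbitrary clauses.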

♞ CONCLUSION

𓁅 Deep learning, Autoencoders, CNNs, RNNs [ slide1p ] , [ slide6p ]
𓁅 Section: Semantic parsing (advanced), Higher-order logics, Markov logic [ slide ]
𓁅 Summary, Future of AI [ slide1p ] , [ slide6p ]
☄ N.O.T.E.S

♐ Exam Papers - F2017, F2016, F2015, M2014, M2013, F2012, M2012, PracticeM1: Solution, PracticeM2: Solution

⚷ My Solutions for CS221 Exams - 2017, 2016, 2015

PSets 𓀌 Search: Solution | Variables: Solution | [221@2013] | Project e.g. | e.g. II

FINAL PROJECT | Past Final Projects

Since the final project carries 20% of the grade, it is worth taking seriously. I genuinely enjoyed studying past CS221 posters; they are exciting. The final project I made is "AI Playing Mario: A Reinforcement Learning Approach"; here is the implementation/code, and the poster of the project is here:

About

🥚 Stanford CS221: Artificial Intelligence: Principles and Techniques
