This repository has been archived by the owner on Feb 24, 2022. It is now read-only.

Various typos on Smartcab Project #184

Closed
pedropb opened this issue Jan 15, 2017 · 0 comments


pedropb commented Jan 15, 2017

File smartcab.ipynb

Question 3

Given that the agent is driving randomly, does the rate of reliabilty make sense?
Should be
Given that the agent is driving randomly, does the rate of reliability make sense?

Question 5

Given what you know about the evironment and how it is simulated,
Should be
Given what you know about the environment and how it is simulated,

Improve Q-Learning Driving Agent

(the default threshold is 0.01)
Should be
(the default threshold is 0.05) - as written in line 111 of smartcab/simulator.py

When improving on your Q-Learning implementation, consider the impliciations it creates
Should be
When improving on your Q-Learning implementation, consider the implications it creates

Optional: Future Rewards - Discount Factor gamma

Including future rewards in the algorithm is used to aid in propogating positive rewards
Should be
Including future rewards in the algorithm is used to aid in propagating positive rewards
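For context, the discount factor gamma weights future rewards in the Q-learning update, which is how positive rewards propagate backward through earlier state-action pairs. A minimal sketch of that update (names like `q_table`, `alpha`, and this `learn` signature are illustrative, not the actual API in smartcab/agent.py):

```python
from collections import defaultdict

# Illustrative Q-learning state; not smartcab's actual implementation.
q_table = defaultdict(float)  # maps (state, action) -> Q-value, default 0.0
alpha, gamma = 0.5, 0.9       # learning rate and discount factor

def learn(state, action, reward, next_state, actions):
    # Discounted estimate of the best achievable future reward from next_state
    best_future = max(q_table[(next_state, a)] for a in actions)
    # Standard Q-learning update: nudge the old estimate toward
    # (immediate reward + discounted future reward)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_future - q_table[(state, action)]
    )
```

With gamma > 0, a positive reward earned in `next_state` raises `best_future` and therefore feeds back into `Q(state, action)` over repeated updates; with gamma = 0 the agent only ever learns from immediate rewards.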

File smartcab/agent.py

def learn()

line 112
receives an award
Should be
receives a reward
