Version 0.9.5 mass refactor #12
Merged
Conversation
…o version_0_9_5_mass_refactor
josiahls added a commit that referenced this pull request on Dec 22, 2019
* Added: Interpreter (Cleaner) with cleaner code, closer to fastai; to/from Pickle; to_csv. Notes: I tried a from_csv implementation, but it may not be possible without filesystem support. Not sure when I will ever get to this. I have some ideas about saving images/states as files with file paths; maybe to_csv generates a file system also?
* Added: Group Interpreter for combining model runs; initial fixed DQN notebook (sort of). Fixed: recorder callback ordering; renaming. It seems fastai has some cool in-notebook test widgets that we might want to use in the future.
* Added: Group Interpreter merging; DQN base notebook; interpreters close envs by default. Fixed: env closing (might be a recurring issue due to different physics engines).
* Fixed: setup.py (fastai needs to be at least 1.0.59).
* Fixed: CPU/device issues.
* Added: DQN group results; reward metric. Notes: we need summed-reward smoothing; the graphs are way too messy.
* Added: analysis property on the group interpretation.
* Fixed: PER crashing when it contains 0 items.
* Added: Group Interpretation value smoothing.
* Fixed: value smoothing making the reward values way too big; tests taking too long (if image input, just do a shorter fit cycle); PER batch size not updating; CUDA issues; Bounds n_possible_values is now only calculated when used, which should make iteration faster. Added: smoothing for the scalar plotting.
* More test fixing.
* Fixed: CUDA issues.
* Added: Lunar Lander performance test.
* Added: minigrid compatibility; normalization module for DQNs using the Bounds object.
* Fixed: normalization CUDA error.
* Fixed: DDPG CUDA error.
* Fixed: pybullet human rendering. pybullet renders differently from regular OpenAI envs: if you want to see what is happening, the render method needs to be executed prior to reset.
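The reward smoothing mentioned in these entries can be sketched as a simple exponential moving average over the recorded scalars. The function name and default weight below are illustrative assumptions, not the actual fast_rl API:

```python
def smooth(values, weight=0.9):
    """Exponentially smooth a sequence of scalar rewards for plotting.

    `weight` controls the smoothing strength: higher values give a
    flatter curve. This is a sketch of the general technique, not
    fast_rl's implementation.
    """
    smoothed, last = [], values[0]
    for v in values:
        # Blend the previous smoothed value with the new observation.
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed
```

With `weight=0.9`, a single noisy spike in the reward curve is damped to a tenth of its height, which is why the plots become readable.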
* Added: DDPG testing; DDPG env runs; more results; more DDPG tests; walker2d data.
* Fixed: pybullet envs possibly crashing; the pybullet wrapper was not being added.
* Version 0.9.5 mass refactor (#12):
  * Added: refactored DQN code; basic DQN learner. Fixed: DQN model crashing.
  * Added: all DQNs pass tests.
  * Fixed: some dqn/gym_maze/embedding-related crashes; DQN test code and actual DQN tests.
  * Added: maze heat-map interpreter; started q-value interpreter.
  * Fixed: DDPG GPU issue (sampling/action and state objects support to-device calls); DQN GPU issue; Azure pipeline test.
  * Updated: Jupyter notebooks.
  * Removed: old code files.
  * Fixed: metrics, DDPG tests.
  * Added: basic q-value plotting; basic q-value plotting for DDPG.
  * Updated version.
  * Changed: setup.py excludes some third-party packages due to a PyPI restriction. Need to find a way around this.
  * Removed: old code from README. Revisions coming.
  * Added: batch-norm toggling. For now (or forever) defaulted to false.
* Version 0.9.5 mass refactor (#13):
  * Added: revised test script; slowly adding tests.
  * Fixed: the trained_learner method in tests was somehow completely broken.
  * Added: interpreter edge control; can also show an average line.
  * Fixed: badly performing models. Apparently batch norm really hurts them: if you use it, the batch size needs to be massive (128 wasn't large enough). You can mostly turn off batch_norm in the Tabular models, but when given a continuous input they still add an entry batch norm. I overrode it and now they work significantly better :)
  * Updated: .gitignore.
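The batch-norm problem described in the last entry comes from the layer estimating its mean and variance from the current batch, so tiny RL batches produce unstable statistics. A toy per-feature version makes this visible; it is an illustrative sketch, not fast_rl's or PyTorch's code:

```python
def batch_norm(batch, eps=1e-5):
    """Toy batch normalization over a list of single-feature samples.

    The mean/variance come from the batch itself, which is why small
    batches (the PR mentions even 128 was not enough) make the output
    noisy from step to step.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    # Standardize each sample with the batch statistics.
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]
```

With two samples, each output is determined almost entirely by the other sample in the batch, which illustrates why the refactor defaults batch norm to off for these models.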
The code was ugly; it has been made more "fastai"-like, with the GAN learners as the main reference.
One of the most important changes was modularizing the models, which are no longer tied to any internal modules. Originally we passed the data object, along with the action and state objects, as parameters, coupling the models deeply to the rest of the library. Now the models can be separated from the rest of fast rl if desired.
Also, all models support image, tabular, and embedding-based state inputs.
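That last claim can be sketched as a dispatch on the raw state's structure: a discrete id goes to an embedding lookup, a 2-D grid to a convolutional encoder, and a flat feature vector to an MLP. The kinds and return values below are illustrative assumptions, not fast_rl's real interface:

```python
def encode_state(state):
    """Pick an encoder kind from the raw state's shape.

    This mirrors the idea that the decoupled models see only raw
    arrays/ids rather than fast_rl's data/action/state objects.
    The tags and tuple layouts are hypothetical.
    """
    if isinstance(state, int):
        # Discrete observation id -> embedding lookup.
        return ("embedding", state)
    if state and isinstance(state[0], list):
        # 2-D grid (e.g. image or maze) -> conv encoder, report H x W.
        return ("image", len(state), len(state[0]))
    # Flat feature vector -> tabular MLP, report feature count.
    return ("tabular", len(state))
```

In the real library the three branches would build PyTorch modules; the point of the sketch is only that one model entry point can serve image, tabular, and embedding state inputs.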