diff --git a/com.unity.ml-agents/CHANGELOG.md b/com.unity.ml-agents/CHANGELOG.md
index fbf65a850d..b7d74278ba 100755
--- a/com.unity.ml-agents/CHANGELOG.md
+++ b/com.unity.ml-agents/CHANGELOG.md
@@ -7,32 +7,53 @@ and this project adheres to
 [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
+
+### Major Changes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+### Minor Changes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+### Bug Fixes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+
+## [1.5.0-preview] - 2020-10-14
 ### Major Changes
 #### com.unity.ml-agents (C#)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
 - Added the Random Network Distillation (RND) intrinsic reward signal to the PyTorch
   trainers. To use RND, add a `rnd` section to the `reward_signals` section of your
-  yaml configuration file. [More information here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Configuration-File.md#rnd-intrinsic-reward)
+  yaml configuration file. [More information here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Configuration-File.md#rnd-intrinsic-reward) (#4473)
 
 ### Minor Changes
 #### com.unity.ml-agents (C#)
-- Stacking for compressed observations is now supported. An addtional setting
+- Stacking for compressed observations is now supported. An additional setting
   option `Observation Stacks` is added in the editor to sensor components that support
   compressed observations. A new class `ISparseChannelSensor` with an additional
   method `GetCompressedChannelMapping()` is added to generate a mapping of the channels
   in compressed data to the actual channels after decompression, so that the Python side
   can decompress correctly. (#4476)
-- Added new visual 3DBall environment. (#4513)
+- Added a new visual 3DBall environment. (#4513)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
 - The Communication API was changed to 1.2.0 to indicate support for stacked
   compressed observations. A new entry `compressed_channel_mapping` is added to the
   proto to handle decompression correctly. Newer versions of the package that wish to
   make use of this will also need a compatible version of the Python trainers. (#4476)
-- In `VisualFoodCollector` scene, a vector flag representing the frozen state of
+- In the `VisualFoodCollector` scene, a vector flag representing the frozen state of
   the agent is added to the input observations in addition to the original
   first-person camera frame. The scene is able to train with the provided default
   config file. (#4511)
+- Added string conversion for sampler classes to increase the verbosity of
+  curriculum lesson changes. Lesson updates now output the sampler stats to the
+  console in addition to the lesson and parameter name. (#4484)
+- Added localized documentation in Russian. Thanks to @SergeyMatrosov for
+  the contribution. (#4529)
 
 ### Bug Fixes
 #### com.unity.ml-agents (C#)
-- Fixed a bug where accessing the Academy outside of play mode would cause the Academy to get stepped multiple times when in play mode. (#4532)
+- Fixed a bug where accessing the Academy outside of play mode would cause the
+  Academy to get stepped multiple times when in play mode. (#4532)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
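
As a quick illustration of the RND entry described in the 1.5.0-preview notes, a `reward_signals` fragment might look like the sketch below. The behavior name (`3DBall`) and the numeric values are placeholder assumptions; `strength` and `gamma` are the standard per-reward-signal settings, but consult the linked Training-Configuration-File.md for the full set of supported `rnd` keys and their defaults.

```yaml
behaviors:
  3DBall:                # hypothetical behavior name for illustration
    trainer_type: ppo
    reward_signals:
      extrinsic:
        strength: 1.0
        gamma: 0.99
      rnd:               # RND intrinsic reward (PyTorch trainers only)
        strength: 0.01   # scale applied to the intrinsic reward
        gamma: 0.99      # discount factor for the RND reward stream
```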