
Breaking Changes

  • Renamed _submit to _remote. #3321
  • Object store memory capped at 20GB by default. #3243
  • ray.global_state.client_table() now returns a list instead of a dictionary.
  • Renamed ray.global_state.dump_catapult_trace to ray.global_state.chrome_tracing_dump.

Known Issues

  • The Plasma TensorFlow operator leaks memory. #3404
  • Object broadcasts on large clusters are inefficient. #2945
  • Ape-X leaks memory. #3452
  • Action clipping can impede learning; as a workaround, set clip_actions: False. #3496
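The clipping issue above can be illustrated with a toy sketch (a hypothetical helper, not RLlib's code): once a policy's outputs saturate beyond the action-space bounds, all of them clip to the same boundary value, so the environment can no longer distinguish them.

```python
def clip_action(action, low, high):
    """Clamp each component of an action to the action-space bounds."""
    return [max(l, min(h, a)) for a, l, h in zip(action, low, high)]

# Two very different policy outputs become identical after clipping:
print(clip_action([5.0, -3.0], [-1.0, -1.0], [1.0, 1.0]))   # → [1.0, -1.0]
print(clip_action([50.0, -30.0], [-1.0, -1.0], [1.0, 1.0])) # → [1.0, -1.0]
```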

Core

  • The new raylet backend is on by default, and the legacy backend has been removed. #3020 #3121
  • Support for Python 3.7. #2546
  • Support for fractional resources (e.g., GPUs).
  • Added ray stack for improved debugging; it prints stack traces of the Python processes on the current node. #3213
  • Better error messages for low-memory conditions. #3323
  • Log file names reorganized under /tmp/ray/. #2862
  • Improved timeline visualizations. #2306 #3255
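The fractional-resource support above can be pictured with a toy accounting sketch; the class below is hypothetical, not Ray's scheduler, and only shows the idea that a divisible resource such as one GPU grants requests while the reserved fractions fit within its capacity.

```python
class FractionalResource:
    """Toy bookkeeping for a divisible resource, e.g. one GPU."""

    def __init__(self, capacity=1.0):
        self.capacity = capacity
        self.in_use = 0.0

    def acquire(self, amount):
        # Grant the request only if it fits in the remaining capacity.
        if self.in_use + amount <= self.capacity + 1e-9:
            self.in_use += amount
            return True
        return False

    def release(self, amount):
        self.in_use = max(0.0, self.in_use - amount)

gpu = FractionalResource(capacity=1.0)
print(gpu.acquire(0.5))   # → True  (half the GPU reserved)
print(gpu.acquire(0.5))   # → True  (GPU now fully reserved)
print(gpu.acquire(0.25))  # → False (no capacity left)
```

Under this model, two tasks that each request half a GPU can share one device, which is the point of fractional requests.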

Modin

  • Modin now ships with Ray; after running import ray, you can run import modin. #3109

RLlib

  • Multi-agent support for Ape-X and IMPALA. #3147
  • Multi-GPU support for IMPALA. #2766
  • TD3 optimizations for DDPG. #3353
  • Support for Dict and Tuple observation spaces. #3051
  • Support for parametric and variable-length action spaces. #3384
  • Support for batch-norm layers. #3369
  • Support for custom metrics. #3144
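Variable-length action spaces, as in the list above, are commonly handled with action masking; the sketch below is a hypothetical helper, not RLlib code, showing the usual trick of forcing invalid actions' logits to -inf so they get zero probability after the softmax.

```python
import math

def masked_softmax(logits, valid_mask):
    """Softmax over logits, with invalid actions forced to probability 0."""
    neg_inf = float("-inf")
    masked = [l if ok else neg_inf for l, ok in zip(logits, valid_mask)]
    m = max(x for x in masked if x != neg_inf)  # for numerical stability
    exps = [math.exp(x - m) if x != neg_inf else 0.0 for x in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Only the first two of four actions are valid in this state:
probs = masked_softmax([1.0, 1.0, 5.0, 2.0], [True, True, False, False])
print(probs)  # → [0.5, 0.5, 0.0, 0.0]
```

The same mechanism covers parametric action spaces: the mask (and any per-action embeddings) can vary per state while the network's output width stays fixed.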

Autoscaler

  • Added ray submit for submitting scripts to clusters. #3312
  • Added --new flag for ray attach. #2973
  • Added option to allow private IPs only. #3270

Tune

  • Support for fractional GPU allocations for trials. #3169
  • Better checkpointing and setup. #2889
  • Memory tracking and notification. #3298
  • Bug fixes for SearchAlgorithms. #3081
  • Added a raise_on_failed_trial flag to run_experiments. #2915
  • Better handling of node failures. #3238

Training

  • Experimental support for distributed SGD. #2858 #3033
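Distributed SGD typically follows the synchronous data-parallel pattern; a minimal plain-Python sketch of the core step (not the Ray implementation): each worker computes gradients on its own data shard, the gradients are averaged, and one SGD update is applied to the shared weights.

```python
def sgd_step(weights, worker_grads, lr=0.1):
    """One synchronous data-parallel step: average the per-worker
    gradients, then apply a single SGD update to the shared weights."""
    n = len(worker_grads)
    avg = [sum(g[i] for g in worker_grads) / n for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, avg)]

# Two workers computed gradients on different data shards:
w = sgd_step([1.0, 2.0], [[0.2, 0.4], [0.6, 0.0]], lr=0.5)
print(w)  # → [0.8, 1.9]
```

Averaging before the update makes the result equivalent to one large-batch step over all shards, which is why synchronous SGD scales out without changing the optimization target.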