fix(greet): update dependency ray to v2.10.0 #15393
Merged
This PR contains the following updates:

| Package | Change |
| --- | --- |
| ray | `2.9.3` -> `2.10.0` |
Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
ray-project/ray (ray)
v2.10.0
Release Highlights
Ray 2.10 release brings important stability improvements and enhancements to Ray Data, with Ray Data becoming generally available (GA).
- Ray Serve: support for `num_replicas="auto"` (#42613) and for `max_queued_requests` (#42950). `max_ongoing_requests` (`max_concurrent_queries`) is also now strictly enforced (#42947); the related queue-length caching can be disabled with `RAY_SERVE_ENABLE_QUEUE_LENGTH_CACHE=0`.
- Ray Serve parameter renames:
  - `max_concurrent_queries` -> `max_ongoing_requests`
  - `target_num_ongoing_requests_per_replica` -> `target_ongoing_requests`
  - `downscale_smoothing_factor` -> `downscaling_factor`
  - `upscale_smoothing_factor` -> `upscaling_factor`
- Ray Serve default value changes: `max_ongoing_requests` will change from 100 to 5, and `target_ongoing_requests` will change from 1 to 2. A sketch of the renamed options follows this list.
- Ray Train: `ScalingConfig(accelerator_type)`.
- Ray Train: `XGBoostTrainer` and `LightGBMTrainer` no longer depend on `xgboost_ray` and `lightgbm_ray`. A new, more flexible API will be released in a future release.
- Ray Train/Tune: changes to `local_dir` and `RAY_AIR_LOCAL_CACHE_DIR` (see the Ray Train and Ray Tune sections below).
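To make the Serve migration concrete, here is a minimal, hypothetical sketch of a deployment written against the renamed options; it is not taken from the release notes, and the values are illustrative.

```python
# Hypothetical sketch, not from the release notes: a Ray Serve deployment
# using the option names introduced in Ray 2.10. Values are illustrative.
from ray import serve


@serve.deployment(
    num_replicas="auto",      # new in 2.10: let Serve manage the replica count
    max_ongoing_requests=5,   # formerly max_concurrent_queries
    max_queued_requests=100,  # new backpressure option (#42950)
)
class Echo:
    async def __call__(self, request):
        return "hello"


app = Echo.bind()
# serve.run(app)  # deploy onto a running Ray cluster
```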
Ray Libraries
Ray Data
🎉 New Features:
- `num_rows_per_file` parameter for file-based writes (#42694) (see the sketch after this list)
- `DataIterator.materialize` (#43210)
- `DataIterator.to_tf` when a `tf.TypeSpec` is provided (#42917)
- `Dataset.write_bigquery` (#42584)
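As a quick, hypothetical illustration of the new write parameter (not taken from the release notes; the path and row count are made up):

```python
# Hypothetical sketch: cap the number of rows written to each output file.
# Assumes num_rows_per_file is accepted by file-based writes such as
# Dataset.write_parquet in Ray 2.10.
import ray

ds = ray.data.range(10_000)

# Write roughly 1,000 rows into each Parquet file under the output directory.
ds.write_parquet("/tmp/ray_data_output", num_rows_per_file=1_000)
```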
💫 Enhancements:
- `ImageDatasource` now uses `Image.BILINEAR` as the default image resampling filter (#43484)
- `ray.data.from_huggingface` (#42599)
- `Stage` class and related usages (#42685)
🔨 Fixes:
- `OutputSplitter` (#43740)
- `OpBufferQueue` (#43015)
- `Limit` operators (#42958)
- `Dataset.streaming_split` job hanging (#42601)
📖 Documentation:
Ray Train
🎉 New Features:
- `ScalingConfig(accelerator_type)` for improved worker scheduling (#43090) (see the sketch after this list)
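A minimal, hypothetical sketch of requesting an accelerator type for Train workers (not from the release notes; the accelerator label and trainer wiring are illustrative):

```python
# Hypothetical sketch: schedule Train workers on nodes with a specific
# accelerator via ScalingConfig(accelerator_type=...). Values are illustrative.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_func():
    ...  # per-worker training loop goes here


trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(
        num_workers=4,
        use_gpu=True,
        accelerator_type="A100",  # request workers on nodes with this accelerator
    ),
)
# result = trainer.fit()
```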
💫 Enhancements:
- `train_func` for setup/teardown logic (#43209)
- `DEFAULT_NCCL_SOCKET_IFNAME` to simplify network configuration (#42808)
🔨 Fixes:
- `memory` resource requirements (#42999)
- Use `Path.as_posix` over `os.path.join` (#42037)
- `RayFSDPStrategy` (#43594)
- `RayTrainReportCallback` (#42751)
- `get_latest_checkpoint` returns None (#42953)
📖 Documentation:
- `train_loop_config` (#43691)
- Clarify in the `ray.train.report` docstring that it is not a barrier (#42422)
- `prepare_data_loader` shuffle behavior and `set_epoch` (#41807)
🏗 Architecture refactoring:
- `XGBoostTrainer` and `LightGBMTrainer` as `DataParallelTrainer`s. Removed dependency on `xgboost_ray` and `lightgbm_ray`. (#42111, #42767, #43244, #43424)
- `local_dir` and `RAY_AIR_LOCAL_CACHE_DIR`: add isolation between driver and distributed worker artifacts so that large files written by workers are not uploaded implicitly. Results are now only written to `storage_path`, rather than having another copy in the user's home directory (`~/ray_results`). (#43369, #43403, #43689) A sketch of configuring `storage_path` follows this list.
- Split `ray.train.torch.get_device` into another `get_devices` API for multi-GPU worker setup (#42314)
- `storage_path` (#42853, #43179)
- `SyncConfig` (#42909)
- Removed the `preprocessor` argument from Trainers (#43146, #43234)
- `MosaicTrainer` and remove `SklearnTrainer` (#42814)
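Following the `local_dir` / `RAY_AIR_LOCAL_CACHE_DIR` refactor described above, results are configured solely through `storage_path`. A minimal, hypothetical sketch (not from the release notes; the path and names are made up):

```python
# Hypothetical sketch: all results are written only to RunConfig.storage_path,
# with no extra copy under ~/ray_results. Path and experiment name are made up.
from ray.train import RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer


def train_func():
    ...  # per-worker training loop goes here


trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=2),
    run_config=RunConfig(
        name="my_experiment",
        storage_path="s3://my-bucket/ray-results",  # single destination for results
    ),
)
# result = trainer.fit()
```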
Ray Tune
💫 Enhancements:
- `TBXLogger` for logging images (#37822)
- `Experiment(config)` to handle RLlib `AlgorithmConfig` (#42816, #42116)
🔨 Fixes:
- `reuse_actors` error on actor cleanup for function trainables (#42951)
- `os.path.join` (#42037)
📖 Documentation:
🏗 Architecture refactoring:
- `local_dir` and `RAY_AIR_LOCAL_CACHE_DIR`: add isolation between driver and distributed worker artifacts so that large files written by workers are not uploaded implicitly. Results are now only written to `storage_path`, rather than having another copy in the user's home directory (`~/ray_results`). (#43369, #43403, #43689)
- `SyncConfig` and `chdir_to_trial_dir` (#42909)
- `storage_path` (#42853, #43179)
- `NevergradSearch` (#42305)
- `checkpoint_dir` and `reporter` deprecation notices (#42698)
Ray Serve
🎉 New Features:
- `max_queued_requests` (#42950)
- `num_replicas="auto"` (#42613)
🏗 API Changes:
- Renamed `max_concurrent_queries` to `max_ongoing_requests`
- Renamed `target_num_ongoing_requests_per_replica` to `target_ongoing_requests`
- Renamed `downscale_smoothing_factor` to `downscaling_factor`
- Renamed `upscale_smoothing_factor` to `upscaling_factor`
- The default value of `max_ongoing_requests` will change from 100 to 5; the default value of `target_ongoing_requests` will change from 1 to 2.
💫 Enhancements:
- `RAY_SERVE_LOG_ENCODING` env to set the global logging behavior for Serve (#42781).
- `max_ongoing_requests` (`max_concurrent_queries`) is also now strictly enforced (#42947); the related queue-length caching can be disabled with `RAY_SERVE_ENABLE_QUEUE_LENGTH_CACHE=0`.
- You can now set `max_ongoing_requests=1` for autoscaling deployments and still upscale properly, because requests queued at handles are properly taken into account for autoscaling (see the sketch after this list).
- `RAY_SERVE_COLLECT_AUTOSCALING_METRICS_ON_HANDLE=0`
- `RAY_SERVE_EAGERLY_START_REPLACEMENT_REPLICAS=0`
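As a rough, hypothetical illustration of the handle-queue-aware autoscaling described above (not from the release notes; option values are illustrative, and option names follow the renames listed earlier):

```python
# Hypothetical sketch: an autoscaling deployment with max_ongoing_requests=1.
# Because requests queued at handles now count toward autoscaling, a
# deployment like this can still scale up under load. Values are illustrative.
from ray import serve


@serve.deployment(
    max_ongoing_requests=1,  # each replica processes one request at a time
    autoscaling_config={
        "min_replicas": 1,
        "max_replicas": 10,
        "target_ongoing_requests": 1,  # renamed from target_num_ongoing_requests_per_replica
    },
)
class SlowModel:
    async def __call__(self, request):
        return "done"


# serve.run(SlowModel.bind())
```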
🔨 Fixes:
- `KeyError` on disconnects (#43713).
📖 Documentation:
- `max_replicas_per_node` (#42743).
RLlib
🎉 New Features:
💫 Enhancements:
- Moved `SampleBatch` column names (e.g. `SampleBatch.OBS`) into a new class, `Columns` (#43665) (see the sketch after this list)
- `OldAPIStack` decorator (#43657)
- `LearnerHyperparameters` replaced with `AlgorithmConfig` (#41296)
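A minimal, hypothetical sketch of using the new `Columns` constants instead of `SampleBatch` column names; the import path is an assumption and may differ in your Ray version:

```python
# Hypothetical sketch: build a training batch keyed by the new Columns
# constants rather than SampleBatch.OBS-style names. The module path
# ray.rllib.core.columns is an assumption; check the RLlib source for the
# actual location in your Ray version.
from ray.rllib.core.columns import Columns

# Previously this might have been written with SampleBatch.OBS / SampleBatch.ACTIONS.
batch = {
    Columns.OBS: [[0.1, 0.2], [0.3, 0.4]],
    Columns.ACTIONS: [0, 1],
}
print(sorted(batch.keys()))
```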
🔨 Fixes:
- `policy_to_train` logic (#41529); fix multi-GPU for PPO on the new API stack (#44001); Issue 40347 (#42090)
📖 Documentation:
Ray Core and Ray Clusters
Ray Core
🎉 New Features:
💫 Enhancements:
- `get_task()` now accepts ObjectRef (#43507) (see the sketch below)
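A small, hypothetical sketch of the state-API change noted above (not from the release notes; the workload is made up):

```python
# Hypothetical sketch: query task state by passing an ObjectRef directly to
# get_task(), which this release note says is now accepted. The returned
# fields depend on your Ray version.
import ray
from ray.util.state import get_task

ray.init()


@ray.remote
def square(x):
    return x * x


ref = square.remote(4)
ray.get(ref)

task_state = get_task(ref)  # previously required a task ID string
print(task_state)
```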
🔨 Fixes:
📖 Documentation:
Ray Clusters
💫 Enhancements:
- `heap_memory` param for the `setup_ray_cluster` API; changed the default value of the per-Ray-worker-node config and the default value of the Ray head node config for the global Ray cluster (#42604)
🔨 Fixes:
Thanks
Many thanks to all those who contributed to this release!
@ronyw7, @xsqian, @justinvyu, @matthewdeng, @sven1977, @thomasdesr, @veryhannibal, @klebster2, @can-anyscale, @simran-2797, @stephanie-wang, @simonsays1980, @kouroshHakha, @Zandew, @akshay-anyscale, @matschaffer-roblox, @WeichenXu123, @matthew29tang, @vitsai, @Hank0626, @anmyachev, @kira-lin, @ericl, @zcin, @sihanwang41, @peytondmurray, @raulchen, @aslonnie, @ruisearch42, @vszal, @pcmoritz, @rickyyx, @chrislevn, @brycehuang30, @alexeykudinkin, @vonsago, @shrekris-anyscale, @andrewsykim, @c21, @mattip, @hongchaodeng, @dabauxi, @fishbone, @scottjlee, @justina777, @surenyufuz, @robertnishihara, @nikitavemuri, @Yard1, @huchen2021, @shomilj, @architkulkarni, @liuxsh9, @Jocn2020, @liuyang-my, @rkooo567, @alanwguo, @KPostOffice, @woshiyyya, @n30111, @edoakes, @y-abe, @martinbomio, @jiwq, @arunppsg, @ArturNiederfahrenhorst, @kevin85421, @khluu, @JingChen23, @masariello, @angelinalg, @jjyao, @omatthew98, @jonathan-anyscale, @sjoshi6, @gaborgsomogyi, @rynewang, @ratnopamc, @chris-ray-zhang, @ijrsvt, @scottsun94, @raychen911, @franklsf95, @GeneDer, @madhuri-rai07, @scv119, @bveeramani, @anyscalesam, @zen-xu, @npuichigo
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Mend Renovate. View repository job log here.