Currently, the async_tree set of benchmarks uses `asyncio.gather`, but `asyncio.TaskGroup` is the newer, preferred API.
Recently I worked on a few asyncio optimizations and had to patch pyperformance (itamaro@fe365c8) to measure their impact on `TaskGroup` (in addition to the proposed patch in gh-279 to cover eager task execution).
As an aside, I was also able to compare `gather` vs. `TaskGroup` (by comparing the results with and without that patch), and found that `TaskGroup` is faster than `gather` across the board!
I was wondering what would be the best way to address this in the benchmarks suite:

1. Change the existing set to use `TaskGroup` instead of `gather`? (This would make it 3.11+ only, and less useful when comparing against older runs that used `gather`.)
2. Keep the existing set as is, but use `TaskGroup` if it's available, falling back to `gather` otherwise?
3. Don't bother; leave it with `gather`.
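Option 2 could be implemented with a simple feature check at import time. A minimal sketch, assuming a hypothetical `run_children` helper (the name and workload are illustrative, not the benchmark's actual structure):

```python
import asyncio

async def child(n: int) -> int:
    # Stand-in for one child workload in the benchmark tree.
    await asyncio.sleep(0)
    return n * 2

if hasattr(asyncio, "TaskGroup"):
    # Python 3.11+: use structured concurrency.
    async def run_children(args):
        async with asyncio.TaskGroup() as tg:
            tasks = [tg.create_task(child(a)) for a in args]
        return [t.result() for t in tasks]
else:
    # Older Pythons: fall back to asyncio.gather.
    async def run_children(args):
        return list(await asyncio.gather(*(child(a) for a in args)))

print(asyncio.run(run_children([1, 2, 3])))  # [2, 4, 6]
```

One caveat with this approach: the same benchmark name would then measure different code paths on different interpreter versions, which may or may not be acceptable for cross-version comparisons.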
I think it's up to @mdboom. In general fixing benchmarks that already exist is painful enough that we don't do it unless there's no other way (e.g. a feature or dependency becomes obsolete). I am fine with adding another benchmark in the asyncio tree family. Maybe @kumaraditya303 has an opinion too?