Refactor dispatcher to reduce run time and memory overhead #99676
Conversation
When we removed the last job/callable from the dict for a signal, we did not remove the dict for the signal itself, which meant it leaked.
```python
# less memory than a full closure since a partial shares
# the body of the function and we don't have to store
# many different copies of the same function
return partial(_async_remove_dispatcher, dispatchers, signal, target)
```
partial creates a new function wrapper with different arguments, but the underlying function body reference is the same (https://github.com/python/cpython/blob/cf19e8ea3a232086ff1e0d7d5e2a092d3d96fc7c/Modules/_functoolsmodule.c#L84), so we don't end up with a new function body per dispatcher connect in memory.
Memory used per closure: 408.7808
Memory used per partial: 250.01984
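The sharing can be checked directly. A quick sketch (using a stand-in function, not the dispatcher code) showing that every `functools.partial` wrapper holds a reference to the same underlying function object:

```python
from functools import partial

def use_for_partial(i, y, q):
    return i + y + q

# Each partial is a thin wrapper; `.func` on every wrapper points
# at the one shared function object, so the body is never duplicated.
p1 = partial(use_for_partial, 1, 6, 7)
p2 = partial(use_for_partial, 2, 6, 7)
assert p1.func is p2.func is use_for_partial
assert p1() == 14 and p2() == 15
```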
```python
from functools import partial

import psutil

process = psutil.Process()

before = process.memory_info().rss
closures = []

def gen_closure(i, y, q):
    def this_is_a_closure():
        z = i + 1
        print(z)
        z = y + 1
        print(z)
        z += q
        new_dict = {}
        new_dict["key"] = "value"
        return i

    return this_is_a_closure

for i in range(100000):
    closures.append(gen_closure(i, 6, 7))

after = process.memory_info().rss
print(f"Memory used per closure: {(after - before) / len(closures):,}")

before = process.memory_info().rss

def use_for_partial(i, y, q):
    z = i + 1
    print(z)
    z = y + 1
    print(z)
    z += q
    new_dict = {}
    new_dict["key"] = "value"
    return i

partials = []
for i in range(100000):
    partials.append(partial(use_for_partial, i, 6, 7))

after = process.memory_info().rss
print(f"Memory used per partial: {(after - before) / len(partials):,}")
```
```diff
-run: list[HassJob[..., None | Coroutine[Any, Any, None]]] = []
-for target, job in target_list.items():
+for target, job in list(target_list.items()):
```
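As a side note (my own illustration, not from the PR): snapshotting the items with `list()` also keeps iteration safe if a callback mutates the dict mid-send, e.g. by disconnecting itself.

```python
# Mutating a dict while iterating it directly raises RuntimeError;
# iterating over a list() snapshot of the items is safe.
d = {"a": 1, "b": 2}
try:
    for k, v in d.items():
        del d[k]  # mutating the dict we are iterating
except RuntimeError:
    print("dictionary changed size during iteration")

d = {"a": 1, "b": 2}
for k, v in list(d.items()):
    del d[k]  # safe: we iterate over a snapshot, not the dict
assert d == {}
```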
Better to just make a copy of the items instead of appending them all to a list one at a time, since the Python function overhead of calling `.append()` over and over is much worse.

thanks
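A rough sketch of the comparison being made (illustrative names, not the PR's code): building the run list one `.append()` at a time versus a single `list()` call.

```python
import timeit

target_list = {f"target-{i}": f"job-{i}" for i in range(1000)}

def one_at_a_time():
    run = []
    for target_job in target_list.items():
        run.append(target_job)  # one Python-level call per item
    return run

def single_call():
    return list(target_list.items())  # one C-level copy of all items

assert one_at_a_time() == single_call()

# Absolute timings vary by machine; the single list() call avoids
# the per-iteration interpreter overhead of the loop plus .append().
print("append loop:", timeit.timeit(one_at_a_time, number=1000))
print("list() call:", timeit.timeit(single_call, number=1000))
```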
Proposed change
Avoid calling `.append()` over and over when sending and instead do one `list()` call, since the Python overhead of calling `.append()` repeatedly was much more expensive than the single `list()` call.

Type of change
Additional information

Checklist
- The code has been formatted using Black (`black --fast homeassistant tests`)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: `python3 -m script.hassfest`.
- New dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`.
- Untested files have been added to `.coveragerc`.

To help with the load of incoming pull requests: