Improve performance #6
profile results:
I read their benchmark wrong: it's a per-task timeout (i.e., what timeout solves 20%, 40%, 60%, etc. of the test set). For T=3, they had

Depending on the # of inputs, my numbers are pretty close. With # inputs = 2, my numbers are actually faster.

With # inputs = 3, my numbers are much slower (max 100k nodes expanded).

I'm curious what the median number of nodes expanded in their datasets is. Also, I'm expanding roughly 10k nodes per second.
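A rough sketch of how the nodes-per-second figure could be measured: instrument the DFS loop with a wall-clock budget and count expansions. The `expand` callback and `start` node below are hypothetical stand-ins, not the actual search code.

```python
import time

def count_expansions(expand, start, budget_s=1.0):
    """Run a DFS for a fixed wall-clock budget and report nodes/sec.

    `expand` is a hypothetical callback mapping a node to its children;
    `start` is the root node of the search.
    """
    t0 = time.perf_counter()
    stack = [start]
    expanded = 0
    while stack and time.perf_counter() - t0 < budget_s:
        node = stack.pop()
        expanded += 1
        stack.extend(expand(node))
    elapsed = time.perf_counter() - t0
    return expanded, expanded / elapsed

# Toy example: a binary tree truncated at depth 20, 0.1 s budget.
expanded, rate = count_expansions(
    lambda n: [n + 1, n + 1] if n < 20 else [], 0, budget_s=0.1)
print(f"{expanded} nodes at {rate:.0f} nodes/sec")
```

Counting expansions this way also makes it easy to compare the median expansion count per task against theirs, once that number is known.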
The current implementation is too slow for the gap to be explained by Python alone.
T=3, 100 programs:

| search | their implementation | mine | slowdown |
| --- | --- | --- | --- |
| dfs | 500 us | 38 s | 76,000x |
| sort-and-add | 1 ms | 2640 s | 2,640,000x |

Want to get the slowdown into the ~100x range (i.e., attributable to Python alone).
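Before guessing at causes, a `cProfile` run over the enumeration entry point should show where the time actually goes. The `enumerate_programs` function below is a hypothetical stand-in for the real enumerator, just to illustrate the profiling harness:

```python
import cProfile
import io
import pstats

def enumerate_programs(depth):
    """Stand-in for the real enumerator: build all nested pairs to `depth`."""
    if depth == 0:
        return [(), (0,)]
    subs = enumerate_programs(depth - 1)
    return [(a, b) for a in subs for b in subs]

profiler = cProfile.Profile()
profiler.enable()
enumerate_programs(3)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries by cumulative time
print(stream.getvalue())
```

If most of the time lands in interpreter-level overhead (attribute lookups, small-object allocation) rather than a single hot function, that would point toward restructuring the inner loop rather than micro-optimizing one call site.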
possible reasons: