Multi_trial example with different search strategies #4421
Comments
Could solve the stopping problem for the evolutionary algorithm by tuning the
Sorry for the late response. You are right that evolution has its own control of population and cycles. If these are not set correctly, the experiment can end up in a state where it never ends. It's a known issue, but we haven't come up with a good solution yet. As for RL, it's a tianshou compatibility issue; please downgrade tianshou to v0.4.4 for now, or try installing NNI from source, where the issue should already be fixed.
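The "evolution has its own control of population and cycles" point can be made concrete with a generic sketch of aging (regularized) evolution, the algorithm family behind NNI's `RegularizedEvolution` strategy. Everything below (function names, the toy bit-string search space, parameter values) is illustrative rather than NNI's actual implementation; the point is that the loop decides for itself when to stop, based on `cycles`, independently of any outer experiment budget, so a large `cycles` value can make an experiment look like it never ends:

```python
import random
from collections import deque

def regularized_evolution(fitness, random_arch, mutate,
                          population_size=10, sample_size=3, cycles=200):
    """Aging-evolution sketch: runs until exactly `cycles` models have
    been evaluated, regardless of any outer experiment budget."""
    population = deque()
    history = []  # every model ever evaluated
    # Phase 1: seed the population with random architectures.
    while len(population) < population_size:
        arch = random_arch()
        score = fitness(arch)
        population.append((arch, score))
        history.append((arch, score))
    # Phase 2: tournament selection + mutation until the cycle budget is spent.
    while len(history) < cycles:
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda p: p[1])
        child = mutate(parent[0])
        score = fitness(child)
        population.append((child, score))
        history.append((child, score))
        population.popleft()  # "aging": always discard the oldest member
    return max(history, key=lambda p: p[1]), history

# Toy stand-in for a search space: 8-bit strings, fitness = number of ones.
N = 8
def random_arch():
    return tuple(random.randint(0, 1) for _ in range(N))

def mutate(arch):
    i = random.randrange(N)
    return arch[:i] + (1 - arch[i],) + arch[i + 1:]

def fitness(arch):
    return sum(arch)

(best, score), history = regularized_evolution(fitness, random_arch, mutate)
print(len(history))  # → 200: the evaluation count is set by `cycles` alone
```

In NNI, the analogous knobs are (if I recall correctly) `population_size`, `sample_size`, and `cycles` on the strategy constructor; if `cycles` is left at a large default while the example only expects a handful of trials, the search will not report its exported models until that whole budget is exhausted.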
Many thanks for your answer! Are there also similar restrictions for RL on how the
I think it should be similar to evolution.
Okay, but when I set, for example, the
Hmm, that could be a problem with concurrency. Could you try to set
I tried the same example. I observe that the experiment hangs instead of raising a ValueError. I need to investigate this further.
Thanks, perfect, this works for me!
I want to try the multi_trial example with a search strategy other than random, but it is causing me some trouble. For this I just removed the line
and replaced it with this for the evolutionary algorithm:
and with this for reinforcement learning:
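For reference, a strategy swap along these lines might look like the following configuration sketch. The class and parameter names are my assumptions about NNI's Retiarii strategy API at the time of this issue, and the numeric values are purely illustrative, not the ones from the example:

```python
# Configuration sketch only -- assumes NNI's Retiarii strategy API.
import nni.retiarii.strategy as strategy

# What the multi-trial example ships with:
search_strategy = strategy.Random(dedup=True)

# Evolutionary replacement; population_size/cycles also determine
# when the search itself considers its budget spent:
search_strategy = strategy.RegularizedEvolution(
    population_size=20,  # illustrative value
    cycles=100,          # illustrative value
)

# Reinforcement-learning replacement (needs a compatible tianshou version):
search_strategy = strategy.PolicyBasedRL()
```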
But when running the evolutionary algorithm, I encounter the problem that the search never ends. The output looks like this, but it never prints the exported models the way the multi_trial example does:
When using the reinforcement learning algorithm, I get the following error:
I also tried the current master branch, because I would love to use the `model_filter` for the evolutionary algorithm, but that doesn't work because of the error mentioned in this issue.