Incremental optimizer performance #54
Now I found that when I stopped the process (Ctrl+C) I get this output:

value:100, elapsed time(s):46.94204223199995
^C No solution can be found for problem XXXXXX.
Reason: canceled
total number of iterations: 2
value: 100
XXXXXXXXX satisfiability checked in 46.94s

(Private info substituted with XXX.)
Yes it is. Each time the incremental solver finds a value, it looks for another, better one. If the previous solution was already the optimum, then there is no better solution, and the solver may take a long time trying to find one or to check satisfiability.
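The strategy described above can be sketched as a simple loop. This is an illustration only, not the actual ProcessScheduler implementation; `is_satisfiable` is a hypothetical stand-in for an SMT satisfiability check (in ProcessScheduler this would ultimately be a z3 `Solver.check()` call):

```python
def incremental_maximize(is_satisfiable, start=0):
    """Repeatedly ask for a strictly better objective value.

    Stops when no better solution exists, i.e. when the last
    satisfiability check comes back negative -- exactly the
    (possibly slow) final check discussed in this thread.
    """
    best = None
    target = start
    while is_satisfiable(target):  # SAT: a solution with value >= target exists
        best = target
        target += 1                # ask for a strictly better value next time
    return best                    # None if even `start` was infeasible

# Toy feasibility check: utilization values up to 100 are achievable.
print(incremental_maximize(lambda v: v <= 100))  # -> 100
```

Note that the loop always ends with one unsatisfiable check (here, asking for 101), which is why the solver keeps working after the optimum has already been found.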
Although if it found a solution for 100 utilization in 46 seconds (output: XXXXXXXXX satisfiability checked in 46.94s), what is it doing after finding it that takes so long?
Well, I don't know; I have never faced this issue. Minutes, you say? How many exactly?
Right, sorry, I might have explained myself wrong. I meant to ask whether, in the current implementation, there is a reasonably long process after reaching 100. It's weird because it takes minutes after logging this:

value:100, elapsed time(s):46.94204223199995

I don't know exactly how many minutes because I stopped it early, but it had already taken around 5.
What is the max_time value for your solver?
I set it to 1200 so it could always solve the problem; might this be the issue?
Yes. The solver tries to find a solution for 101, but there is no such solution. I have to add another constraint stating that the resource utilization cannot exceed 100, so that the solver does not waste time trying to find a better value.
Oh, I didn't know that adding that constraint would make it stop there; my first thought was to stop the incremental optimizer loop when it targets values of 100 or 0. Nice though!
Having tried this constraint solution, I think it doesn't work as expected. Before, the optimizer was able to find the 100-utilization solution in 15-60 seconds, and it found the 100 solution directly. Now it does this:

It's still very good, but it's strange that it behaves differently, right?
Adding a bound to a z3 integer variable slightly changes the way the solver performs the computation (though the reason is a bit unclear to me). Did you try the "QF_IDL" logic? Another way to go would be to add a "stop_value" parameter to the incremental solver, which would break the loop as soon as the objective is reached. That way, you could have your 15-60 s solution. FYI, it's difficult for me to benchmark changes without any additional information. I recently added a benchmark/ folder (see https://github.com/tpaviot/ProcessScheduler/tree/master/benchmark); it is quite useful for measuring the impact of commits on computation time and memory footprint. If you can contribute a similar benchmark for your problem, that would be quite useful.
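The suggested "stop_value" parameter could work roughly as follows. This is a hedged sketch of the idea, not the actual ProcessScheduler code; as before, `is_satisfiable` is a hypothetical stand-in for the solver call:

```python
def incremental_maximize_with_stop(is_satisfiable, start=0, stop_value=None):
    """Incremental maximization with an optional early-exit target.

    When `stop_value` is given and reached, the loop exits without
    issuing the final unsatisfiable check -- avoiding the long wait
    after the optimum (100) is found.
    """
    best = None
    target = start
    while is_satisfiable(target):
        best = target
        if stop_value is not None and best >= stop_value:
            break          # objective reached: skip the slow UNSAT check
        target += 1
    return best

# Stops as soon as 100 is proven reachable; never asks the solver about 101.
print(incremental_maximize_with_stop(lambda v: v <= 100, stop_value=100))
```

The trade-off is that with a `stop_value` the result is only known to be optimal because the bound was supplied externally; without it, optimality is proven by the final failed check.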
Got this when I used the "QF_IDL" logic:
Yep, me too. Try "QF_UFIDL", that should work.
I looked at the benchmark file. By contributing a similar benchmark, do you mean implementing it in my problem and sending you the output, or making an example of my problem without private info and uploading a benchmark file for it to the repo?
The second.
I think I could do that.
Cool, after that I can automatically run the benchmark on the Azure cloud to check the effects of any change.
N = list(range(10, n, step))  # problem sizes from 10 up to n, in increments of `step`
for num_dev_teams in N:

Is this part of the dev_team problem, or is it to see how the problem scales?
Just to see how it scales, and what the complexity is: O(n), O(n^2), O(n^3), etc.
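A minimal timing harness for that scaling experiment might look like the following. The `solve` callable is a placeholder for building and solving one problem instance of size `n`; the lambda below is a toy workload, not a real scheduling problem:

```python
import time

def measure_scaling(solve, sizes):
    """Time one solve() call per problem size.

    Plotting (n, elapsed) pairs makes it easy to eyeball whether
    growth looks like O(n), O(n^2), O(n^3), etc.
    """
    timings = []
    for n in sizes:
        start = time.perf_counter()
        solve(n)
        timings.append((n, time.perf_counter() - start))
    return timings

# Placeholder workload standing in for a real scheduling problem.
results = measure_scaling(lambda n: sum(i * i for i in range(n * 1000)),
                          sizes=range(10, 50, 10))
for n, elapsed in results:
    print(f"n={n}: {elapsed:.4f}s")
```

For real benchmarks, averaging several runs per size reduces timing noise; the benchmark/ folder mentioned above is the place where such a harness would live.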
Ok. |
I have the following variables that might be interesting for seeing how the problem scales with respect to them. Could you tell me which ones you find interesting?

HORIZON = 60  # Horizon of the problem (multiple of 10 for simplicity)
PERIODS = [(10*i, 10*(i+1)) for i in range(int(HORIZON/10))]  # Periods of 10 slots from 0 to horizon
MAX_TASKS_PER_PERIOD = 2  # Maximum tasks that a worker can do in each period (workload constraint)
MAX_TASKS_IN_PROBLEM = 4  # Maximum tasks that a worker can do in the whole problem
NB_WORKERS = 10  # Number of workers
NB_TASKS_PER_WORKER = 5  # Number of optional tasks per worker
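For reference, a quick sanity check of the PERIODS construction above (pure Python, no solver needed):

```python
HORIZON = 60  # horizon of the problem (multiple of 10 for simplicity)
# Periods of 10 slots from 0 to the horizon, as in the parameters above.
PERIODS = [(10 * i, 10 * (i + 1)) for i in range(HORIZON // 10)]
print(PERIODS)  # -> [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 60)]
```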
Setting the
Okay!
Is it normal for the incremental optimizer to stay a long time (minutes) after finding the optimal solution?

value:100, elapsed time(s):46.94204223199995