
Support for Time Windows #113

Closed
K-Leon opened this issue May 8, 2018 · 46 comments

Comments


K-Leon commented May 8, 2018

I just looked into vroom - it's a really amazing project! The performance in particular is excellent.

I'm just wondering if there are any plans or existing "workarounds" to support Time Window constraints for Jobs?

Best Regards,
Leon

jcoupey (Collaborator) commented May 9, 2018

@K-Leon thanks for your feedback. Performance is achieved by using dedicated approaches for sub-problems like TSP and capacitated VRP (CVRP). The drawback is that there is no cheap hack to add TW support.

The next release will include solving CVRP with skills constraints (still work in progress). Adding TW would be next in line, so I'm flagging this issue as a feature request for further reference.


mikeppp commented Jun 5, 2018

Yes, great project, but it's kind of useless to us without time windows... almost all our jobs have time constraints. If you guys can figure this out, it will be quite amazing. Jsprit takes about 1300 ms to solve what this does in 50 ms (using the web nodejs wrapper). But again, without time windows it's unfortunately of little use... Will definitely be looking out for TW support!

jcoupey added this to the V1.3.0 milestone Jul 5, 2018
jcoupey added the VRPTW label Jul 5, 2018
jcoupey (Collaborator) commented Jul 18, 2018

Now that the CVRP feature is released, the good news is we'll probably be able to reuse most of the approach (and part of the code) to handle time-windows. I'd like to have:

  1. a good way to describe timing constraints, designed for fast validity evaluation
  2. a heuristic approach to build initial solutions and/or work on sub-problems at vehicle level
  3. an adjustment of the CVRP local search phase in order to bring in time validity constraints
  4. a spatio-temporal clustering scheme for quick computation of initial solutions

Points 1 to 3 are enough to have a decent VRPTW implementation running. Point 4 could come later, but would be great for computation speed-up and for scaling.
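The "fast validity evaluation" in point 1 can be illustrated with a naive sketch: simulate the route once, waiting on early arrivals and rejecting late ones. This is a hypothetical illustration (the names and data structures are not VROOM's), not the actual implementation:

```python
# Hypothetical sketch of a time-window feasibility check for a single route.
# All values are in seconds; names and data are illustrative, not VROOM's API.

def route_is_tw_feasible(route, travel, service, windows, start_time=0):
    """Simulate a route, waiting on early arrivals, rejecting late ones.

    route:   ordered list of job ids
    travel:  dict mapping (from_id, to_id) -> travel time
    service: dict mapping job id -> service duration
    windows: dict mapping job id -> (earliest, latest) service start
    """
    t = start_time
    prev = None
    for job in route:
        if prev is not None:
            t += travel[(prev, job)]
        earliest, latest = windows[job]
        t = max(t, earliest)      # wait if we arrive early
        if t > latest:            # too late: route is infeasible
            return False
        t += service[job]
        prev = job
    return True
```

A real implementation would cache partial timing data to avoid re-simulating whole routes during local search, but the single pass above conveys the constraint being checked.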

I want to keep this issue as a place for ideas and discussions on the approach outline, and will open dedicated tickets for required steps.

This was referenced Jul 18, 2018

mikeppp commented Jul 18, 2018

This is great news guys!!!

Any ROUGH idea on the timelines??

Thank you!!

@peterdn1

Great news!

jcoupey (Collaborator) commented Jul 19, 2018

Any ROUGH idea on the timelines??

I'd say a few months from now depending on available slots and priorities on my side. Any help to make it faster is of course appreciated. ;-)

@peterdn1

I can help in testing if that is in any way useful.

sashakh (Contributor) commented Jul 19, 2018

Hi @jcoupey!

+1

Any help to make it faster is of course appreciated. ;-)

I could help with development in my free time. )

jcoupey (Collaborator) commented Jul 20, 2018

@peterdn1 yes testing is definitely useful, but we need to get on with the development phase before. Do you have some specific use-cases/instances in mind?

jcoupey (Collaborator) commented Jul 20, 2018

@sashakh thanks! I've ticketed everything I have in mind so far for the time-windows part, let me know if you'd like to join for something in particular.

There are also a couple other technical issues added to the v1.3 milestone, and we'll need some work on benchmark instances formatting at some point (see VROOM-Project/vroom-scripts#1).

sashakh (Contributor) commented Jul 21, 2018

@jcoupey, OK.
Everything you have ticketed depends on #136. So I can start there if you wish (unless you've already started on it yourself), or alternatively I can start dealing with the benchmark formatting stuff.

BTW, probably you know this page already, but recently I found this research: http://homepages.dcc.ufmg.br/~rfsilva/tsptw. There are a lot of TSP+TW benchmarks and some C++ code implementing TSP+TW and running those benchmarks. It seems to be public domain, so I've published the code at https://github.com/sashakh/TSPTW. It could probably be useful as a reference.

If you wish to contact me directly for boring details please feel free to use my email sashakh at gmail dot com.

jcoupey (Collaborator) commented Jul 22, 2018

@sashakh I'm planning to start working on #136 and #137 in the coming week, we can probably find a way to get along.

Thanks for the TSPTW benchmark link, I did not know that source. This will probably be very handy while implementing #137.

Having the benchmarks all set up while starting on the TW work would definitely be great!

@peterdn1

We have a mix of windows. Some are all-day, some fixed and some fairly open: all-day windows, 1-hour, 2-hour and 4-hour windows. Shifts are typically 8-10 hours with floating breaks.
[screenshot]

jcoupey (Collaborator) commented Sep 26, 2018

For anyone following this ticket and interested in early tests: #138 (comment).

K-Leon (Author) commented Oct 4, 2018

I just let a stream of data run through vroom and I'm thrilled about the quality and the speed of the time window implementation. We're talking about a reduction of 40+ seconds for some queries.

One little hint would still be great: is there an easy way to support soft time windows? To put it that way: if a stop doesn't fit, it would still be planned, but in a way that the overall delay isn't that large.

jcoupey (Collaborator) commented Oct 5, 2018

Thanks @K-Leon, it's really useful to have some feedback on the overall behaviour in terms of quality/speed. Note that there is still work in progress to adjust/improve things at #166 and #139.

We're talking about a reduction of 40+ seconds for some queries.

I'd be really interested if you could provide a bit more context: total computing time magnitude, typical problem size, and which other solving approach you're comparing to?

Is there an easy way to support soft time windows?

Validity constraints enforced by time windows are "hard" constraints: serving a job is either valid if it fits the timing, or not considered at all. There is no support for "soft" constraints with some kind of penalty at the moment. The lazy answer right now is that if you're OK with delaying a stop past its time window, then the corresponding "hard" constraint was too tight in the first place. ;-)

K-Leon (Author) commented Oct 6, 2018

I just followed your tips and we implemented the poor man's soft time window algorithm client-side ;) (if it doesn't fit, drop all constraints except the most important one).
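That client-side fallback can be sketched roughly as below. Here `solve` stands for a hypothetical wrapper around a VROOM call, and the check assumes VROOM's output lists unserved jobs under `unassigned`; everything else is illustrative:

```python
# Rough sketch of the client-side "poor man's soft time windows" fallback:
# solve with all hard windows first; if jobs stay unassigned, retry keeping
# only the time windows of the most important jobs. `solve` is a hypothetical
# callable wrapping a VROOM run, not part of VROOM itself.
import copy

def solve_with_fallback(problem, solve, important_job_ids):
    solution = solve(problem)
    if not solution.get("unassigned"):
        return solution
    # Relax: drop time windows on all but the most important jobs.
    relaxed = copy.deepcopy(problem)
    for job in relaxed["jobs"]:
        if job["id"] not in important_job_ids:
            job.pop("time_windows", None)
    return solve(relaxed)
```

The deep copy keeps the original request intact so the relaxation never leaks back into the caller's data.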

But one thing we noticed with current master is that we get suboptimal solutions from time to time. It's not reproducible on the current demo server, so I guess it's related to the latest speed optimizations. I've added a screenshot from our testing app.
Stop 17 is mixed up with 18. The same request on the vroom demo server returns the expected results.

[screenshot]

Regarding the context: we usually handle quite simple problems. Only one vehicle, fixed, but time windows with soft constraints. Our previous solution with embedded soft time windows took a lot of time, mostly because it scaled very badly. The solution with vroom is way faster even when multiple rounds are necessary and not many stops are around. The usual query takes around 500-1000 ms with vroom.

jcoupey (Collaborator) commented Oct 8, 2018

@K-Leon thanks for the details. Yes scaling is a key factor in my opinion, and it plays a big part in the way the solving approach has been designed so far.

we have from time to time suboptimal solutions

There is no guarantee of optimality in the returned solution, but it should usually be impossible to improve it in a very obvious way. In the case of your example (back-and-forth 16 -> 17 -> 18 -> ...), I'd consider this a bug if there is no reason related to timing constraints for doing so. Could you check that or share your request?

Same Request on Vroom Demo Server returns expected results

This is only another hint that timing constraints might be responsible here. Since August, the demo server has been running on 3979d76, a version prior to any work on time windows. So time window constraints handed over were simply ignored until today. I just updated it to 96996ef to enable tests with time windows (you might still experience differences with a local instance, mostly related to using different OSM datasets or OSRM versions/profiles).

K-Leon (Author) commented Oct 8, 2018

@jcoupey Thanks for your hard work!

I just prepared the request. It's very hard to strip it down, because the "bug" seems to be triggered only under very specific conditions. I added a marking for the tricky stops: https://gist.github.com/K-Leon/19a0721329248aa65399e53df717f908

Edit: Like you said, the new version of the demo frontend returns a different result. Now it's getting interesting to find a repro. I'm open to anything that helps.
Edit 2: Locally I'm running the latest OSRM (5.19) with a dataset from Friday. That was actually my first guess.

jcoupey (Collaborator) commented Oct 8, 2018

To overcome the effect of different OSM/OSRM context and behaviour, you can run this script on your input file (make sure https://github.com/VROOM-Project/vroom-scripts/blob/master/src/utils/osrm.py#L5-L6 points to your own OSRM server).

python add_osrm_matrix.py input.json

It will handle the table request VROOM would normally send to your OSRM instance, and then embed the matrix in a "standalone" input file. Then we'd be looking at the exact same problem instance.
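For reference, a standalone instance produced this way looks roughly like the following (shown here as a Python dict for readability; field names follow VROOM's input format of the time, with a top-level durations matrix in seconds, but the values themselves are made up):

```python
# Illustrative shape of a "standalone" instance after embedding the matrix:
# no OSRM access is needed to solve it. Values are made up; field names are
# assumed to match VROOM's v1.x input format.
import json

standalone = {
    "vehicles": [{"id": 1, "start_index": 0, "end_index": 0}],
    "jobs": [
        {"id": 1, "location_index": 1},
        {"id": 2, "location_index": 2},
    ],
    "matrix": [  # matrix[i][j] = travel time from location i to location j
        [0, 300, 500],
        [310, 0, 220],
        [490, 230, 0],
    ],
}
print(json.dumps(standalone, indent=2))
```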

jcoupey (Collaborator) commented Oct 8, 2018

@K-Leon happy to look further into this if you can share an instance with an embedded matrix, but could you then open a dedicated ticket as we're drifting from the original scope here.

K-Leon (Author) commented Oct 8, 2018

I just created a new ticket containing the matrix request: #169. Thank you very much!

jcoupey (Collaborator) commented Nov 5, 2018

I just tagged a v1.3.0-rc.1 release candidate that fully implements time window constraints for vehicles and jobs, so it's a good time for testing!

Happy to have any feedback, feel free to comment here or open other tickets.

Edit: v1.3.0-rc.1 is broken when built with -DNDEBUG; use v1.3.0-rc.2.

jcoupey closed this as completed Nov 5, 2018

mikeppp commented Nov 6, 2018

Hi Julien!
This is great news!

Quick question..

I suppose work has to be done on vroom-express to add the new time_window_t data into the requests? Or is that automatic??

I would love to test it via HTTP REST api...

If you have any quick tips it would be appreciated! We could run some routing data (with job time windows) against JSprit and compare it with vroom...

We could then compare the results and see if its working!

Cheers,

One small point: I built rc2, and running it yields the message:
VROOM Copyright (C) 2015-2018, Julien Coupey
Version: 1.3.0-rc.1

jcoupey (Collaborator) commented Nov 6, 2018

I suppose work has to be done on vroom-express

Nothing to change to vroom-express, POST data is just passed through to the vroom system call here.
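So testing over the HTTP REST API amounts to POSTing the same JSON that VROOM takes on the command line. A hedged sketch, assuming a local vroom-express on its default port (3000); coordinates and ids are made up:

```python
# Hypothetical example of POSTing a VROOM problem to vroom-express.
# Host, port and coordinates are assumptions, not taken from this thread.
import json
import urllib.request

problem = {
    "vehicles": [{"id": 1, "start": [2.3522, 48.8566]}],
    "jobs": [{"id": 1, "location": [2.3488, 48.8534]}],
}

req = urllib.request.Request(
    "http://localhost:3000/",
    data=json.dumps(problem).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with a running vroom-express instance:
# with urllib.request.urlopen(req) as resp:
#     solution = json.load(resp)
#     print(solution["summary"]["cost"])
```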

We could then compare the results

I'd really appreciate any feedback you may have on real-life instances. For comparisons, do you have a way to make sure the underlying routing costs are identical (maybe you're running jsprit on top of OSRM)?

Also, note that the default -x value is now 5 (longer computing time for better results), you might want to tune it down depending on your instances size and the trade-off you want to pick.

Version: 1.3.0-rc.1

I cherry-picked a commit from master for rc.2 and just forgot to bump the version. Nothing to worry about as long as you're on tag v1.3.0-rc.2.


mikeppp commented Nov 6, 2018

Yes we use OSRM with JSprit...

I will try to get vroom-express up and running ...

Could you give me an example of the json with time windows??

jcoupey (Collaborator) commented Nov 7, 2018

Could you give me an example of the json with time windows??

The API description now contains a full example with input and response.
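The gist of that documented format, as far as this thread is concerned: vehicles take a single time_window pair and jobs take a time_windows array of pairs, all in seconds. A minimal sketch with made-up values (written as a Python dict for readability):

```python
# Minimal input with time windows, following VROOM's documented field names.
# Times are absolute seconds (e.g. seconds since midnight); all values below
# are made up for illustration.
problem = {
    "vehicles": [{
        "id": 1,
        "start": [2.35, 48.86],
        "end": [2.35, 48.86],
        "time_window": [28800, 43200],      # vehicle works 08:00-12:00
    }],
    "jobs": [{
        "id": 1,
        "location": [2.36, 48.85],
        "service": 300,                     # 5 minutes on site
        "time_windows": [[32400, 36000]],   # serve between 09:00 and 10:00
    }],
}
```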


mikeppp commented Nov 7, 2018

Awesome thank you!

jcoupey (Collaborator) commented Nov 23, 2018

For anyone following this ticket and interested in testing, you should update to v1.3.0-rc.3 (contains a fix for #180).

K-Leon (Author) commented Nov 23, 2018

Sounds great! I'll run my test data through it and let you know about the results.


mikeppp commented Nov 23, 2018

Perfect, we are starting our tests against jsprit now. I will post the solutions from both when complete...


mikeppp commented Nov 23, 2018

Also, note that the default -x value is now 5 (longer computing time for better results), you might want to tune it down depending on your instances size and the trade-off you want to pick.

How do we increase this? Sorry, not a C++ guy anymore... the -x values are 0 to 5; is there a way to increase this? Just for comparison, jsprit is using 2000 iterations, and I would like to use the same for a fair comparison!

jcoupey (Collaborator) commented Nov 25, 2018

the -x values are 0 to 5 is there a way to increase this??

The -x value is not a low-level gear like, say, a number of iterations. My view is that you should use the default (max value of 5) as long as you're satisfied with the computing times. If for some reason (like handling bigger instances) you want to get solutions faster, then you might want to tune it down.

use the same for a fair comparison

The notion of a "fair" comparison is somewhat hard to define. Is it comparing solution costs with equal computing time? Is it comparing computing times for similar solution costs? In practice probably neither will be doable.

The simplest step would be to use your jsprit setting (the one you're currently satisfied with) and compare it to VROOM with -x 5, by monitoring both solution costs and computing times, maybe cumulate those on a representative set of your instances.

If you want to go ahead and compare multiple settings, you could pick several tunings for jsprit (number of iterations or other parameters) and several values of -x for VROOM. Then looking at solution costs as a function of computing times will yield a Pareto front of the best available trade-offs.

Of course some parameters should be identical for fairness though:

  • run on the same machine
  • use the same underlying OSRM box
  • allocate the same number of threads to the solving engine

I don't know about multi-threading in jsprit. For VROOM, provided you have enough cores at hand, going from -t 4 to -t 8 will have a dramatic impact on computing time.
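The suggested trade-off analysis can be sketched as follows; `runs` would come from actual jsprit/VROOM executions, but here it is only illustrative:

```python
# Sketch of the benchmarking loop suggested above: collect
# (computing time, solution cost) pairs across settings, then keep the
# Pareto front of non-dominated trade-offs. The data is illustrative.

def pareto_front(runs):
    """Keep runs not dominated by any other run (lower time AND lower cost)."""
    front = []
    for t, c in runs:
        dominated = any(t2 <= t and c2 <= c and (t2, c2) != (t, c)
                        for t2, c2 in runs)
        if not dominated:
            front.append((t, c))
    return sorted(front)
```

Plotting the front (cost against time) then makes it easy to see which solver setting is preferable at a given computing-time budget.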


mikeppp commented Nov 26, 2018

Ok, so same machine/OSRM and threads (8) for both jsprit and vroom.

Note we are using the latest North America OSRM file...

The request is a realistic mix of orders with no windows and a few with windows. NOTE: if none have time windows, the current "regular" vroom produces an optimal result.

In this case the jsprit results are better (although slower). Note that vroom does not satisfy all the time window requirements while jsprit does. Did I get the request parameters wrong?

Let me know if you need any more info!

Cheers

jsprit_result.txt
vroom_request.txt
vroom_result.txt

jcoupey (Collaborator) commented Nov 27, 2018

vroom does not satisfy all the time windows requirements

I guess you mean jobs 19 and 3, whose arrival times are out of the initial time windows. This is caused by an early arrival at those jobs in the solution, resulting in a non-zero waiting_time. The expected time window constraint satisfaction is met by the service start, which is arrival + waiting_time.
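In other words, satisfaction is checked against the service start rather than the raw arrival. A small sketch following VROOM's output field names (the step values below are made up):

```python
# Check a solution step's time window against the service start
# (arrival + waiting_time), not the raw arrival. Field names follow
# VROOM's output format; the example values are made up.

def tw_satisfied(step, time_window):
    earliest, latest = time_window
    service_start = step["arrival"] + step.get("waiting_time", 0)
    return earliest <= service_start <= latest
```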

Could you run this script on your input request in order to provide a standalone version of your problem? This would embed the cost matrix in the request file so that I could reproduce the exact same problem locally without any access to your OSRM stack.

jcoupey (Collaborator) commented Nov 27, 2018

@mikeppp looks like the travel times used by the two solvers do not really allow a fair comparison. I've taken a look at the common edges from your two result files in order to compare travel times used by jsprit and VROOM.

| edge | travel time in jsprit result | travel time in VROOM result |
|---|---|---|
| 6 -> 12 | 300 | 373 |
| 12 -> 7 | 540 | 628 |
| 7 -> 1 | 720 | 883 |
| 16 -> 19 | 60 | 118 |
| 17 -> 18 | 180 | 201 |
| 4 -> 15 | 360 | 448 |

Looks like your jsprit instance uses travel times rounded to minutes that are consistently lower than the ones used by VROOM (OSRM table values rounded to the nearest second). Do you have a way to adjust that?


mikeppp commented Nov 28, 2018

@jcoupey I don't have a way, sorry... jsprit has everything in minutes whereas VROOM is all in seconds. I don't actually know the inner workings of jsprit either. I was trying to create a realistic scenario and then apply the problem to both systems. This isn't my specialty or area of expertise. To be honest, I just want something that works fast and accurately.

Do you get a similar solution from my input file??


mikeppp commented Nov 29, 2018

It seems you are correct in that VROOM travel times are more realistic. We are looking into the solution in more detail... Did you have a chance to run our problem yourself?

jcoupey (Collaborator) commented Nov 29, 2018

Maybe the rounding down to minutes comes from the way you interface jsprit with OSRM? You could either have jsprit use second-rounded times, or have VROOM use the same minute-rounding convention here.

Did you have a chance to run our problem yourself?

I can't solve your problem using another OSRM instance and expect to reproduce the results (different data, OSRM versions and profiles etc.). But I can take a look if you share a standalone problem.


mikeppp commented Nov 29, 2018

Hi Julien,
I tried to run the script but got:
```
Traceback (most recent call last):
  File "./vroom-scripts/src/add_osrm_matrix.py", line 5, in <module>
    from utils.osrm import table
  File "/home/mikep/osrm_python/vroom-scripts/src/utils/osrm.py", line 3, in <module>
    import requests
ImportError: No module named requests
```

Now, we are running OSRM inside Docker - is that the problem?

jcoupey (Collaborator) commented Nov 29, 2018

ImportError: No module named requests

You're just missing a Python module; pip install requests should do.


mikeppp commented Nov 29, 2018

Ok attached is the generated problem matrix file...
sl_problem_matrix.txt

jcoupey (Collaborator) commented Nov 30, 2018

Thanks for sharing the full instance. I'm getting the exact same solution as your initial vroom_result.txt file with total travel time cost 9780.

Using the travel times from the matrix confirms this cost for VROOM route (start -> 9 -> 21 -> 20 -> 13 -> 6 -> 12 -> 7 -> 1 -> 16 -> 19 -> 17 -> 18 -> 3 -> 4 -> 15 -> 5 -> 14 -> end).
Based on the same matrix, the cost for jsprit route (start -> 9 -> 20 -> 21 -> 6 -> 12 -> 7 -> 1 -> 17 -> 18 -> 16 -> 19 -> 13 -> 14 -> 5 -> 4 -> 15 -> 3 -> end) is 10435, so actually higher. Not really a fair comparison though if jsprit is using different costs internally...
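The cost recomputation above just sums matrix entries along consecutive stops; a minimal sketch with an illustrative matrix:

```python
# Recompute a route's travel-time cost from an embedded durations matrix.
# matrix[i][j] is the duration from location i to location j; the data
# below is illustrative, not the matrix from the shared instance.

def route_cost(route, matrix):
    return sum(matrix[a][b] for a, b in zip(route, route[1:]))

matrix = [
    [0, 5, 9],
    [5, 0, 3],
    [9, 3, 0],
]
```

Applying this to both orderings over the same embedded matrix is what makes the comparison meaningful despite jsprit using different costs internally.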


mikeppp commented Nov 30, 2018

Hi Julien,

Thank you for the analysis. We have dissected the route by hand and it does seem valid. This is good news actually!

I think you are definitely ready or close to ready to release then.

We will conduct some tests over the next little while. I now know how to create and share the instance!

Cheers

jcoupey (Collaborator) commented Dec 3, 2018

@mikeppp thanks for your feedback, happy to have more if you can. Do you have bigger instances to test on (like multiple vehicles or a higher number of jobs)?

FWIW, I've been feeding a few instances to a linear programming solver recently, and it turns out that for the instance you shared, the solution with cost 9780 is optimal (with regard to total travel time).


mikeppp commented Dec 3, 2018

@jcoupey Thank you!

Depending on time, yes it is our goal to do a much LARGER request with LOTS of orders and users. We will be working on that problem shortly...
