[Topological Prediction] wrong edge_ids, missing edges, and 0.0 traversability probability #215
Comments
Hi, just found the problem, will fix tomorrow morning. The probability 0 is a problem that I don't know how to handle; in theory it should be solved by exploration, but for now I can bootstrap the value.
Hi, #209 also includes a fix for this: now all unknown edges (those with no statistics) have a probability of 0.5 and a duration estimated from distance. The minimum probability is now 0.01 to avoid 0.
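A minimal sketch of the fallback described above, assuming a simple per-edge stats dict; `NOMINAL_SPEED`, the field names, and the function itself are illustrative, not the real interface:

```python
NOMINAL_SPEED = 0.5   # m/s, assumed robot speed for distance-based estimates
MIN_PROBABILITY = 0.01  # clamp so a probability is never exactly 0

def predict_edge(stats, distance):
    """Return (traversal_probability, expected_duration) for one edge.

    `stats` is None when the edge has never been traversed.
    """
    if stats is None:
        # Unknown edge: neutral 0.5 prior, duration estimated from geometry.
        return 0.5, distance / NOMINAL_SPEED
    # Known edge: use recorded statistics, but never let probability hit 0,
    # otherwise the planner would never try the edge again.
    probability = max(stats["probability"], MIN_PROBABILITY)
    return probability, stats["duration"]
```

With this clamp, an edge that failed every recorded attempt still keeps a small (0.01) chance of being retried.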
The edge_ids are still prefixed with the topological map_name:
Oops, sorry, I forgot to take that out, give me a second.
Now it's done, same PR.
I got this when I had everything fully integrated, but I only saw it once in a lot of calls:
Hmmm, that is very weird :/ Could it be some ROS timing issue?
It looks like there was some object that wasn't set somewhere.
The prediction is also a bottleneck in the whole process of getting expected travel times. Do you think it can be done faster?
Hmmm, not really, I depend on the fremenserver times.
Currently, one prediction takes ~25 ms. How long is a prediction supposed to take?
The actual calculation takes much less time; the problem is the communication overhead, because you are querying for each state separately.
The predictions are called quite a lot of times for each scheduling problem. We should be able to reduce the number of calls, as I think it's a bit brute force right now. We'll have to call it at least once per edge, though; getting something that queries all edges in one go would already be a nice improvement. Also, just to make sure: once the model is built (I assume we'll have a node that rebuilds the fremen models once in a while), the prediction times are constant, right?
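A sketch of the batching idea discussed above: instead of one service round-trip per (edge, query), fetch predictions for all edges in a single call per scheduling problem and answer repeated lookups from a cache. The `predict_all` callable and its signature are assumptions standing in for a batched fremenserver request; the real interface may differ.

```python
class BatchedPredictor:
    """Caches per-edge predictions fetched in one batched request."""

    def __init__(self, predict_all):
        # predict_all(edge_ids, epoch) -> {edge_id: (probability, duration)}
        # Hypothetical batched service call; one round-trip for all edges.
        self._predict_all = predict_all
        self._cache = {}

    def prime(self, edge_ids, epoch):
        """One round-trip fetches predictions for every edge at once."""
        self._cache = self._predict_all(edge_ids, epoch)

    def predict(self, edge_id):
        """Repeated lookups during scheduling become dictionary hits."""
        return self._cache[edge_id]
```

With ~25 ms of overhead per round-trip, collapsing N per-edge queries into one batched call turns the cost from roughly N x 25 ms into a single call plus cheap lookups.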
The fremen models are now built incrementally: every time you add an observation, the fremen model is updated.
Just to clarify, right now the models are built only when the node is started, but I am planning to add a service that rebuilds the models with the new info when called.
Hi, forecast reduces operation time greatly :) Only one question/suggestion though: the adaptive order doesn't work anymore, as the action server takes only one order. I'm going to use order 1, as it is the most chosen one, unless @gestom suggests otherwise.
Isn't the adaptive thing one of the interesting things about the model building? Why doesn't it work anymore?
Yes, but it is not vital. It doesn't work since there is a different order for each edge, but on the action server I can only pass one order for all edges.
Great!!! :) Sorry, I didn't know you were already working on this 👍
👍
The adaptive order works a bit differently now. In the old version, you would simply set the order to -1 and the system would take the model with the lowest reconstruction error. Now, you provide measured data and ask fremen, via the 'evaluate' action, how well the models fit these data. You get the prediction/reconstruction errors for the given orders and then select the order you want. This allows you to evaluate the predictive capability of different models on data that you explicitly choose, which is useful e.g. for cross-validation.
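The workflow above can be sketched as follows, assuming `evaluate` wraps the 'evaluate' action and returns one error per candidate order; the function and parameter names are illustrative, not the real action interface:

```python
def select_order(evaluate, candidate_orders, held_out_data):
    """Pick the model order with the lowest error on held-out data.

    evaluate(order, data) -> prediction/reconstruction error (lower is better),
    standing in for one round of fremen's 'evaluate' action.
    """
    errors = {order: evaluate(order, held_out_data) for order in candidate_orders}
    return min(errors, key=errors.get)
```

Choosing the held-out data yourself is what enables cross-validation: the selected order reflects predictive performance on unseen observations rather than just reconstruction of the training data.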
The new version in #209 does the prediction in 0.6 s.
I just tested the topological prediction in the following map:
When I run the service, I get this:
The edge_ids are prefixed with the topo map name, not all edges have predictions (I imagine because they were never executed, but we should use a distance-based duration then), and some edges have 0 probability, which will create problems, as the robot will never try to execute them again @Jailander