question on (example) usage #7
Thanks for the suggestion! The stats are now displayed when running `python main.py birrt maps/room1.png --save-output`, and by default it will save a timestamped csv.
Example output:
It saves one line per iteration.
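For reference, a per-iteration csv of this shape is straightforward to post-process with the standard library. This is a sketch only: the column names below are assumptions for illustration, not sbp-env's actual output schema.

```python
import csv
import io

# Hypothetical per-iteration stats csv; the real sbp-env columns may differ.
raw = """iteration,nodes,cost
1,10,inf
2,25,134.2
3,40,120.7
"""

# Each row becomes a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw)))

# e.g. pull the cost reported at the final iteration.
final_cost = float(rows[-1]["cost"])
print(len(rows), final_cost)  # → 3 120.7
```

The same loop works on the saved file by swapping `io.StringIO(raw)` for `open(path)`.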
Great, this looks like a very nice solution. However, it does not seem to work as intended on my side, and I'm not sure why. When I run the following command:
The result is an empty log file in a newly created folder. It might be relevant that I'm on Windows 10; if it works on Linux, I can try to investigate why it does not work on Windows. Finally, I was wondering whether you are planning to add this functionality to the quick-start part of the documentation. The documentation is complete as is, but in my opinion an example of how to use this functionality would be very valuable. Let me know if I can help troubleshoot by trying out specific things/fixes.
Thanks @OlgerSiebinga for your thorough details on the matter! I have found that the empty logs on Windows are due to the use of a colon in the timestamped file name; Windows does not allow colons in file names.
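A common fix for this class of bug is to keep colons out of the timestamp altogether. The sketch below shows the idea; the file-name pattern and function name are illustrative assumptions, not sbp-env's actual code.

```python
from datetime import datetime

def timestamped_csv_name(prefix="stats"):
    # Use dashes instead of colons in the time component: Windows
    # forbids ':' in file names, which silently breaks logging there,
    # while the dashed form is valid on Windows, Linux, and macOS.
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return f"{prefix}_{stamp}.csv"

name = timestamped_csv_name()
print(name)  # e.g. stats_2021-10-12_14-30-05.csv
```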
Great! This fixed the issue! The example in the documentation is also very helpful. |
According to the submitted paper, with sbp-env "one can quickly swap out different components to test novel ideas" and "validate ... hypothesis rapidly". However, from the examples in the documentation, it is unclear to me how I can obtain performance metrics on the planners when I run a test.
Is there a way to save such metrics to a file or print them when running planners in sbp-env? If not, this might be a nice feature to implement in a future version. Otherwise, you could consider adding an example to the documentation on how to compare different planners in the same scenario.
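As an illustration of the kind of comparison example that would help, one could batch the CLI invocation shown elsewhere in this thread (`python main.py <planner> <map> --save-output`) across several planners, letting each run write its own timestamped csv. The planner names below are assumptions for illustration and may not match sbp-env's registered planner ids.

```python
# Hypothetical batch comparison: build one CLI command per planner.
planners = ["rrt", "birrt", "rrdt"]  # assumed names; check sbp-env's docs
commands = [
    ["python", "main.py", planner, "maps/room1.png", "--save-output"]
    for planner in planners
]

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```

Comparing planners is then a matter of loading the resulting csv files side by side.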
(this question is part of the JOSS review openjournals/joss-reviews#3782)