MORSE remote viewer #623
We never really worked on this question, so there is no ready-made solution for it. However, MORSE provides a multi-node approach, where some nodes can be 'viewer-only' (minimal logic). This mode could probably be used to synchronise a computing MORSE instance with a remote Blender (maybe even Blend4Web, with the adequate code), but it would probably require some work to run out of the box in the cloud.
HLA IEEE1516-2010 defines an API for Web Services (WSDL), so it may be an interesting avenue to follow.
If I may suggest, the approach of the Gazebo web viewer is quite interesting, and it is also based on ROS.
Also, I like the multi-node approach, but is it possible to have a compute-only MORSE node?
If by "compute-only" you mean "without any graphical output", I'm afraid it is not possible for the moment, as there is no "background mode" for the Blender Game Engine. It is probably doable, but it would probably require a lot of work (and in any case, you need some rendering to simulate cameras).
@giodegas "compute-only" MORSE (more frequently called headless simulation) is possible, as long as you have an OpenGL stack available. I post here the explanation I gave a while ago on that matter:

Brief summary: I stream the pose of the robot with ROS at 60Hz with no display and no GPU. In the same conditions, I stream a camera at 2.7Hz.

First, let's clarify one thing: MORSE requires OpenGL; there is no way around it. So 'headless' MORSE still means that your OS of choice provides an OpenGL stack. OpenGL does not mandate a GPU, though. It is hence perfectly possible to run it with CPU-based (software) rendering. In the Linux world, the best option today for CPU-based 3D rendering is Mesa with the LLVMpipe driver. Grab the latest version of Mesa (tested with 9.2.2) and compile it with LLVMpipe enabled. This will result in a new software-rendering libGL; to run MORSE with this library, make sure the dynamic linker picks it up instead of the system one (for instance via LD_LIBRARY_PATH).
Some quick performance results with the default environment:

- 60 FPS with hardware acceleration (Intel HD4000, IvyBridge)
- 14 FPS with LLVMpipe on an Intel Core i7, 8GB RAM
- 54 FPS with LLVMpipe on an Intel Core i7, 8GB RAM, with FASTMODE=True (in fast mode you only get the wireframe of the models, which is fine if you do not do any vision-based sensing)

Last thing: on a headless server, you may want to run a virtual X server such as Xvfb.
A quick test shows that I can stream one camera with ROS at 2.7Hz :-/
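The recipe above (a software-rendering Mesa plus a virtual X server) can be sketched as a small launcher. This is only an illustration: the Mesa path, display number, and `morse run` invocation below are assumptions, not the exact commands from the original post.

```python
import os

def headless_morse_cmd(mesa_lib_dir, sim_script, display=":99"):
    """Build the environment and command line to launch MORSE with a
    software-rendering Mesa (LLVMpipe) under a virtual X server.

    mesa_lib_dir: directory containing the LLVMpipe libGL (hypothetical path)
    sim_script:   the MORSE builder script to run
    """
    env = dict(os.environ)
    # Make the dynamic linker pick up the software-rendering libGL first
    env["LD_LIBRARY_PATH"] = mesa_lib_dir + os.pathsep + env.get("LD_LIBRARY_PATH", "")
    # Point X clients at the virtual framebuffer, started separately,
    # e.g. with: Xvfb :99 -screen 0 1280x720x24
    env["DISPLAY"] = display
    cmd = ["morse", "run", sim_script]
    return cmd, env

cmd, env = headless_morse_cmd("/opt/mesa-llvmpipe/lib", "my_sim.py")
# subprocess.Popen(cmd, env=env)  # uncomment to actually launch MORSE
```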
Thank you. I am trying to "dockerize" MORSE. News: I made it (almost) work with X with this new Docker image: giodegas/morse-xvfb, resulting in an acceptable frame rate.
It is a problem related to how boot2docker is built. I escalated the issue there.
For those still interested in investigating a remote viewer setup for MORSE, I suggest having a look at the NASA virtual robotics simulator Experience Curiosity, based on Blend4Web.
A bit late to the party, but I had the same need as @giodegas recently, so I put together a quick and dirty web client for watching a remote MORSE simulation. It uses Three.js for rendering the simulation in the browser, and crossbar.io for dealing with the websockets. Here's the code: https://github.com/yarox/morseweb.
Looking forward to playing with it ASAP.
Thanks @yarox, it's always great to see new projects around MORSE 👍
Thanks @yarox. I see you use some custom services. Can you list what you need, so we can see if we can integrate them into base MORSE? I think some of it is already there, at least in 1.3, such as:
How is the translation between the Blender model and the "json" model done?
Thank you, guys! I forgot to tell you that I tested this on an Ubuntu 14.04 machine with MORSE 1.2.2 (the latest packaged version for Ubuntu). So maybe there are some discrepancies between the services I found and the ones provided in newer versions of MORSE. @adegroote, the services morseweb needs are:
The last two services are not really critical; they are just used for displaying information to the user. The translation between the Blender model and the json model was done using the Three.js Blender exporter. As discussed here, only one material per mesh is allowed in the models.
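The one-material-per-mesh constraint can be checked before feeding a model to the viewer. A minimal sketch, assuming a simplified export layout (a 'geometries' list whose entries carry a 'materials' list; the real Three.js export format differs in detail):

```python
def single_material_violations(model):
    """Return the names of meshes that use more than one material,
    which the viewer cannot handle (export layout is assumed)."""
    bad = []
    for geom in model.get("geometries", []):
        if len(geom.get("materials", [])) > 1:
            bad.append(geom.get("name", "<unnamed>"))
    return bad

# A made-up exported model: one compliant mesh, one violating mesh
model = {"geometries": [
    {"name": "chassis", "materials": ["grey"]},
    {"name": "wheel",   "materials": ["black", "rubber"]},
]}
print(single_material_violations(model))  # -> ['wheel']
```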
Cool work! It would probably be easy to export the required meshes 'on the fly', as a special final step of the builder: once the simulation is created, we iterate over all the objects and automatically export them to Three.js with the add-on (possibly using a simple caching mechanism so as not to export the same object twice). Should be fairly straightforward. Another option, though, is to generate the Three.js models for all the components during the installation of MORSE. That would actually make a lot of sense, I guess... (but we would still need a mechanism for custom scenes/components made by the user...)
Also, to efficiently send the state of the simulation to the external viewer, I suppose we should rely on the multi-node mode of MORSE, which is meant to distribute the simulation over several instances of MORSE: each instance simulates some robots, and gets the state of the other robots from the other instances. In the case of the remote viewer, the viewer could be a 'fake' instance that does not simulate anything, but gets the state of the simulation from another (headless) instance.
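As a rough sketch of what such a 'fake viewer' would do with each incoming update (the message layout is assumed from the ['node', {robot: [position, orientation]}] format discussed later in this thread):

```python
def parse_multinode_message(msg):
    """Split one multinode message, assumed to look like
    ['node_name', {'robot': [[x, y, z], [roll, pitch, yaw]], ...}],
    into the node name and a {robot: (position, orientation)} dict."""
    node_name, robots = msg
    poses = {name: (tuple(pos), tuple(rot))
             for name, (pos, rot) in robots.items()}
    return node_name, poses

node, poses = parse_multinode_message(
    ['morse_node', {'robot1': [[1.0, 2.0, 0.0], [0.0, 0.0, 1.57]]}])
print(node, poses['robot1'])  # -> morse_node ((1.0, 2.0, 0.0), (0.0, 0.0, 1.57))
```

A viewer would then simply apply each robot's pose to the corresponding Three.js object, without running any simulation logic itself.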
On 30/Jan - 04:11, Séverin Lemaignan wrote:
Yes, definitely. It allows us to get rid of the extra pose sensor.
I just implemented a script to export Blender models to Three.js. It's very basic, but it could be used as a starting point.
That's a good idea. Any pointers on how to follow that route? Thanks! |
The documentation for multinode is here: https://www.openrobots.org/morse/doc/latest/multinode.html. Basically, you need to run multinode_server in another terminal, and configure the main node to export all the robots. From morseweb, you need to connect to multinode_server and send messages; you should receive something like ['morse_node', {robots: pos, ...}]. Normally, you never need to kill multinode_server.
You can find an example here. Then, you can get poses using:

from pymorse.stream import StreamJSON, PollThread

node_stream = StreamJSON('localhost', 65000)
poll_thread = PollThread()
poll_thread.start()

# simulate node data
node_stream.publish(['node1', {'robot1': [[0, 0, 0], [0, 0, 0]]}])
node_stream.publish(['node1', {'robot2': [[0, 0, 1], [1, 0, 0]]}])
node_stream.publish(['node1', {'robot3': [[0, 1, 0], [0, 1, 0]]}])

# or: while node_stream.is_up():
for i in range(42):
    # multinode_server sends data after receiving a message
    node_stream.publish(['viewer', {}])
    # wait 1ms for incoming data, or return the last data received
    data = node_stream.get(timeout=.001) or node_stream.last()
    print(data)

node_stream.close()
poll_thread.syncstop()
Great! I'll take a look at those examples and try to come up with something. |
I took @pierriko's code example and reimplemented the pose tracking using multi-node. I let a simulation run for some time and it seemed to work fine. @adegroote, any updates on the services? A service to get the name of the environment would be really nice. Also, is there something like a service to get the simulation time? Do you guys see something else I need to address? Thanks.
I will try to work on this this afternoon. My idea is to extend the supervision details service to retrieve the element information. You can use the clock sensor to get the time, but I think I will expand the "multinode socket" protocol to add both real_time and simulated_time (it makes complete sense). I will propose a branch for future testing.
That'll be great! Let me know if you need help with any of that. |
Can you try https://github.com/adegroote/morse/tree/morseweb_support? If it handles your needs, I will propose a PR for these features.
Those changes worked perfectly, thank you! Just one more thing: I also need to know the time when the simulation started. With that last bit and the changes you already provided, it would be possible to get rid of the remaining custom services. Again, thanks a lot!
Do you have any other need for start_time than computing the relative time since the beginning of the simulation? Would a method Environment.use_relative_time() be sufficient for you?
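The arithmetic behind such a relative-time mode is trivial; a one-line sketch (the function name and timestamp value below are made up for illustration):

```python
def relative_time(simulated_time, start_time):
    """Elapsed simulated seconds since the simulation started: the value a
    relative-time mode would report directly instead of an absolute timestamp."""
    return simulated_time - start_time

# e.g. a simulation that started at an absolute timestamp of 1454089860.0 s
print(relative_time(1454089872.5, 1454089860.0))  # -> 12.5
```

With a relative-time mode enabled in the simulator itself, the client no longer needs to query start_time at all.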
Otherwise, I'm wondering what the benefits of Three.js are over Blend4Web (the latter would allow exporting richer models, at least according to the documentation).
@adegroote It would be appreciated to have an "agnostic" MORSE web simulator that supports Three.js and Blend4Web for now, and more to come in the future... Btw, you guys are doing a great job!
Fantastic! :-)
@adegroote, yes, I just need the start time to compute the elapsed time from the beginning of the simulation. If I can get that information that way, perfect. As for Blend4Web vs Three.js: I tried Blend4Web first, and I found it a bit cumbersome for what I wanted to do, which is copying the objects' positions from a MORSE instance. I think Blend4Web is more suited to creating models in Blender, exporting them, and then letting the browser handle the animation, physics, and all that. Since we already have Blender doing that for us, I thought using a library just for drawing the 3D stuff was a more straightforward approach. Also, the licensing was simpler. I agree that Blend4Web would allow for richer models, but it seems the people behind Three.js are constantly improving the exporter. @giodegas, that's great! If you use morseweb, please file any issues you have or any features you'd like to see!
@yarox, can you test my updated branch? You need to add env.use_relative_time(True) to enable the new behaviour.
@adegroote, I tested it and it works! Thank you! |
It has been merged into the master branch, which should soon become 1.4.
@yarox are you still working on this? |
Yes, this is cool. |
Is there any way to have MORSE running in the cloud and a local Blender instance doing the rendering?
Or any kind of WebGL remote viewer, like Blend4Web?