MORSE remote viewer #623

Closed
giodegas opened this issue May 26, 2015 · 36 comments

@giodegas

Is there any way to have MORSE running in the cloud and have a local Blender instance do the rendering?
Or any kind of WebGL remote viewer, like Blend4Web?

@adegroote
Contributor

We have never really worked on this question, so there is no ready-made solution for it. That said, MORSE provides a multi-node approach, where some nodes can be 'viewer-only' (minimal logic). This mode could probably be used to synchronise a computing MORSE node with a remote Blender instance (maybe even Blend4Web with adequate code), but it would likely require some work to run out of the box in the cloud.

@adegroote
Contributor

HLA IEEE 1516-2010 defines an API for Web Services (WSDL), so it may be an interesting avenue to follow.

@giodegas
Author

If I may suggest, the approach of the Gazebo web viewer is quite interesting, and it is also based on ROS.
Would it be possible to integrate it, or a simplified version of it?

@giodegas
Author

Also, I like the multi-node approach, but is it possible to have a compute-only MORSE node?

@adegroote
Contributor

If by "compute-only", you mean "without any graphical output", I'm afraid it is not possible for the moment, as there is no "background mode" for Blender Game Engine. It is probably doable, but it require probably a lot of work (and in any case, you need some rendering to simulate camera).

@severin-lemaignan
Contributor

@giodegas "compute-only" MORSE (more frequently called headless simulation) is possible, as long as you have a OpenGL stack available.

Here is the explanation I gave a while ago on that matter:


Brief summary

I stream the pose of the robot with ROS at 60Hz with no display/no GPU
at all on an Intel Core i7.

In the same conditions, I stream a camera at 2.7Hz.


First, let's clarify one thing: MORSE requires OpenGL. There is
currently no way around it, and, as a 3D simulator, we are likely to keep
this requirement for the foreseeable future.

So 'headless' MORSE still means that your OS of choice provides an
OpenGL implementation.

OpenGL does not mandate a GPU, though. It is hence perfectly possible
to run MORSE in the cloud, for instance on a cluster that does not
provide GPUs. Obviously, no GPU means no 3D hardware acceleration, but
depending on your application, it can still be perfectly ok. Read on for
some 'benchmarks'.

In the Linux world, the best option, today, for CPU-based 3D
acceleration is LLVMpipe. It is really easy to install (it took me ~5min
to compile and test MORSE on the CPU).

Grab the latest version of Mesa here (tested with 9.2.2):
ftp://ftp.freedesktop.org/pub/mesa/

Compile with:

$ scons build=release libgl-xlib

This will result in a new libGL.so that uses the CPU instead of the GPU.
Blender runs pretty well with it.

To run MORSE with this library:

LD_LIBRARY_PATH=<path to your Mesa>/build/linux-x86_64/gallium/targets/libgl-xlib morse run <env>

Some quick performance results with the default environment:

$ morse create test

(edit test/default.py and add env.show_framerate() at the end for the FPS)

Intel HD4000
$ morse run test

-> 60 FPS with hardware acceleration (Intel HD4000 IvyBridge)

LLVMpipe
$ LD_LIBRARY_PATH=<mesa>/build/linux-x86_64/gallium/targets/libgl-xlib morse run test

-> 14 FPS with LLVMpipe on an Intel Core i7, 8GB RAM

FASTMODE=True

(edit test/default.py and switch fastmode to True)

In fast mode, you only get the wireframe of the models: this is fine if you do not do any vision-based sensing.

$ LD_LIBRARY_PATH=<mesa>/build/linux-x86_64/gallium/targets/libgl-xlib morse run test

-> 54 FPS with LLVMpipe on an Intel Core i7, 8GB RAM
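
For reference, here is a minimal sketch of what the relevant lines of test/default.py could look like with both tweaks applied. The robot and environment names are whatever morse create generated (they vary between MORSE versions), so treat this as an illustration rather than the exact generated file:

from morse.builder import *

robot = ATRV()                        # robot created by 'morse create test' (may differ)

# fastmode=True renders wireframes only, skipping textures and materials
env = Environment('sandbox', fastmode=True)
env.show_framerate(True)              # display the framerate during the simulation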

Last thing: on a headless server, you may want to run Xvfb to create a
'fake' display to run MORSE. You need to create it with a depth of 24 bits:

$ Xvfb -screen 0 1024x768x24 :1 &
$ LD_LIBRARY_PATH=<LLVMpipe path> DISPLAY=:1 morse <...>

A quick test shows that I can stream one camera with ROS at 2.7Hz :-/,
but in fast mode, I stream the pose of the robot at 60Hz without issue.

@giodegas
Author

Thank you. I am trying to "dockerize" MORSE.
Please have a look at my latest addition to issue #601:

News: I made it (almost) work with X using this new Docker image: giodegas/morse-xvfb, with the command:

docker run -it  -e DISPLAY=$DISPLAY giodegas/morse-xvfb /bin/bash

resulting in an acceptable frame rate.
Just one more fix...
Please have a look at the morse output log and a morse screenshot.
Why is the background all black?

@giodegas
Author

giodegas commented Jun 1, 2015

It is a problem related to how boot2docker is built. I escalated the issue there.

@giodegas
Author

For those still interested in investigating a remote viewer setup for MORSE, I suggest having a look at the NASA virtual robotics simulator Experience Curiosity, based on Blend4Web.
It runs nicely in mobile Firefox with WebGL on my Android Lollipop 5.x smartphone.
Since WebGL is OpenGL in a browser, I guess the MORSE camera sensor should work...

@yarox

yarox commented Jan 29, 2016

A bit late to the party, but I had the same need as @giodegas recently, so I put together a quick and dirty web client for watching a remote MORSE simulation. It uses Three.js for rendering the simulation in the browser, and crossbar.io for dealing with the websockets.

Here's the code: https://github.com/yarox/morseweb.

@giodegas
Author

Looking forward to playing with it ASAP.
Thanks a lot!

@PierrickKoch
Member

Thanks @yarox, it's always great to see new projects around MORSE 👍

@adegroote
Contributor

Thanks @yarox. I see you use some custom services. Can you list what you need, so we can see if it can be integrated into base MORSE? I think some of it is already available, at least in 1.3.

How is the translation between the Blender model and the "json" model done?

@yarox

yarox commented Jan 29, 2016

Thank you, guys! I forgot to tell you that I tested this on an Ubuntu 14.04 machine with MORSE 1.2.2 (the latest packaged version for Ubuntu). So maybe there are some discrepancies between the services I found and the ones provided in newer versions of MORSE.

@adegroote, the services morseweb needs are:

  • supervision_services.details, to get the models of the robots in the simulation (this one is already provided in 1.2.2).
  • fakerobot.extra.get_camera_position, to get the camera position (this is provided in v1.3 as supervision_services.get_camarafp_position).
  • fakerobot.extra.get_environment, to get the name of the environment the simulation is running in.
  • fakerobot.extra.get_start_time, to get the time the simulation started.
  • I subscribed to the sensor's timestamp data field for computing the elapsed simulation time (equivalent to TimeServices.now), but I also needed to know the elapsed time in real time.

The last two services are not really critical; they are just used for displaying information to the user.

The translation between the Blender model and the json model was done using Three.js Blender Export. As discussed here, only one material per mesh is allowed in the models.

@severin-lemaignan
Contributor

Cool work! It would probably be easy to export the required meshes 'on the fly', as a special final step of the builder: once the simulation is created, we iterate over all the objects and automatically export them to Three.js with the add-on (possibly using a simple caching mechanism to avoid exporting the same object twice). Should be fairly straightforward; a rough sketch of that caching step is given below.

Another option, though, is to generate the THREE.js models for all the components during the installation of MORSE. That would actually make a lot of sense, I guess... (but we would still need a mechanism for custom scenes/components made by the user...)
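
To make the caching idea concrete, here is a rough sketch. export_to_threejs is a stand-in for whatever entry point the Three.js exporter add-on provides; nothing below is actual MORSE or add-on API, just an illustration of the builder post-step:

import hashlib
import os

import bpy  # this would run inside Blender, after the builder has created the scene

CACHE_DIR = '/tmp/morse_threejs_cache'  # hypothetical cache location


def export_scene(export_to_threejs):
    """Export every mesh once, skipping meshes already present in the cache."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    exported = {}
    for obj in bpy.data.objects:
        if obj.type != 'MESH':
            continue
        # Key the cache on the mesh datablock name, so objects sharing a mesh
        # are only exported once
        key = hashlib.sha1(obj.data.name.encode()).hexdigest()
        path = os.path.join(CACHE_DIR, key + '.json')
        if key not in exported and not os.path.exists(path):
            export_to_threejs(obj, path)  # placeholder for the real exporter call
        exported[key] = path
    return exported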

@severin-lemaignan
Contributor

Also, to efficiently send the state of the simulation to the external viewer, I suppose we should rely on the multi-node mode of MORSE, which is meant to distribute the simulation over several instances of MORSE: each instance simulates some robots, and gets the state of the other robots from the other instances. In the case of the remote viewer, the viewer could be a 'fake' instance that does not simulate anything, but gets the state of the simulation from another (headless) instance.

@adegroote
Contributor


Yes, definitely. It allows us to get rid of the extra pose sensor
needed on each robot, and it is probably a bit more efficient (or at least
can be).

@yarox

yarox commented Jan 30, 2016

It would be probably easy to export the required meshes 'on the fly', as a special final step of the builder

I just implemented a script to export Blender models to Three.js. It's very basic, but it could be used as a starting point.

Also, to efficiently send the state of the simulation to the external viewer, I suppose we should rely on the multi-node mode of MORSE, that is meant to distribute the simulation over several instances of MORSE

That's a good idea. Any pointers on how to follow that route?

Thanks!

@adegroote
Contributor

The documentation for multinode is here

https://www.openrobots.org/morse/doc/latest/multinode.html

Basically, you need to call multinode_server in another terminal, and configure the main node to export all the robots. From morseweb, you need to connect to multinode_server and send messages like
['morseweb', {}]

You should receive something like ['morse_node', {robots: pos, ...}].

Normally, you never need to kill multinode_server.

@PierrickKoch
Member

You can find an example here:
https://github.com/morse-simulator/morse/blob/master/examples/tutorials/multinode.py

Then, you can get pose using:

from pymorse.stream import StreamJSON, PollThread
node_stream = StreamJSON('localhost', 65000)
poll_thread = PollThread()
poll_thread.start()

# simulate node data
node_stream.publish(['node1', {'robot1': [[0, 0, 0], [0, 0, 0]]}])
node_stream.publish(['node1', {'robot2': [[0, 0, 1], [1, 0, 0]]}])
node_stream.publish(['node1', {'robot3': [[0, 1, 0], [0, 1, 0]]}])

# or while node_stream.is_up():
for i in range(42):
    # multinode_server sends data after receiving
    node_stream.publish(['viewer', {}])
    # Wait 1ms for incoming data, or return the last one received
    data = node_stream.get(timeout=.001) or node_stream.last()
    print(data)

node_stream.close()
poll_thread.syncstop()

@yarox

yarox commented Feb 1, 2016

Great! I'll take a look at those examples and try to come up with something.

@yarox

yarox commented Feb 2, 2016

I took @pierriko's code example and reimplemented the pose tracking using multi-node. I let a simulation run for some time and it seemed to work fine.

@adegroote, any updates on the services? A service to get the name of the environment would be really nice. Also, is there something like TimeServices.now, but one that I can subscribe to instead?

Do you guys see something else I need to address? Thanks.

@adegroote
Contributor

I will try to work on this this afternoon. My idea is to extend supervision details to retrieve the element information. You can use the clock sensor to get the time, but I think I will expand the "multinode socket" protocol to add both real_time and simulated_time (it makes complete sense). I will propose a branch for future testing.

@yarox

yarox commented Feb 3, 2016

That'll be great! Let me know if you need help with any of that.

@adegroote
Contributor

Can you try https://github.com/adegroote/morse/tree/morseweb_support?
You can extract environment information from supervision.details()['environment'].
Time is available in the multinode message, through the entry __time. It is a list containing simulation_time, simulation_dt, real_time.

If it covers your needs, I will propose a PR for these features.
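
For illustration, here is a minimal sketch of how a viewer could read both pieces of information, reusing the pymorse multinode example posted above. It assumes the branch behaves as described, that the supervision services are reachable through pymorse's rpc call, and that the multinode reply has the ['node_name', {...}] shape shown earlier:

import pymorse
from pymorse.stream import StreamJSON, PollThread

# Environment name, via the supervision services
with pymorse.Morse() as simu:
    details = simu.rpc('simulation', 'details')
    print(details['environment'])

# Simulation / real time, via the multinode message
node_stream = StreamJSON('localhost', 65000)
poll_thread = PollThread()
poll_thread.start()

node_stream.publish(['viewer', {}])
reply = node_stream.get(timeout=.1)
if reply and '__time' in reply[1]:
    simulation_time, simulation_dt, real_time = reply[1]['__time']
    print(simulation_time, simulation_dt, real_time)

node_stream.close()
poll_thread.syncstop()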

@yarox

yarox commented Feb 3, 2016

Those changes worked perfectly, thank you! Just one more thing: I also need to know the time when the simulation started, like fakerobot.extra.get_start_time.

With that last bit and the changes you already provided, it would be possible to get rid of the ExtraServices component.

Again, thanks a lot!

@adegroote
Contributor

Do you have any need for start_time other than computing the relative time since the beginning of the simulation? Would a method "Environment.use_relative_time()" be sufficient for you?

@adegroote
Contributor

Otherwise, I'm wondering what the benefits of Three.js over Blend4Web are (the latter would allow exporting richer models, at least according to the documentation).

@giodegas
Author

giodegas commented Feb 4, 2016

@adegroote It would be appreciated to have an "agnostic" MORSE web viewer, supporting Three.js and Blend4Web now, and more to come in the future...
;-)
How about that?

Btw, you guys are doing a great job!
I will soon start my new course "Intelligent Systems and Robotics Laboratory" at DISIM, University of L'Aquila, Italy, adopting MORSE and morseweb.
I will also invite my students to collaborate on the MORSE project in some way.
Thank you.

@severin-lemaignan
Contributor

I will soon start my new course "Intelligent Systems and Robotics Laboratory" at DISIM, University of L'Aquila, Italy, adopting MORSE and morseweb.
I will also invite my students to collaborate on the MORSE project in some way.

Fantastic! :-)



@yarox

yarox commented Feb 4, 2016

@adegroote, yes, I just need the start time to compute the elapsed time from the beginning of the simulation. If I can get that information with Environment.use_relative_time(), I'm ok with that; whatever is easier for you guys :)

As for Blend4Web vs Three.js: I tried Blend4Web first, and I found it a bit cumbersome for what I wanted to do, namely copy the objects' positions from a MORSE instance. I think Blend4Web is more suited to creating models in Blender, exporting them, and then letting the browser handle the animation, physics, and all that. Since we already have Blender doing that for us, I thought using a library for just drawing the 3D content was a more straightforward approach. Also, the licensing was simpler. I agree that Blend4Web would allow for richer models, but it seems that the people behind Three.js are constantly improving the exporter.

@giodegas, that's great! If you use morseweb, please file any issues you have or any features you'd like to see!

@adegroote
Contributor

@yarox, can you test my updated branch? You need to add env.use_relative_time(True) to enable the new behaviour.
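
(For clarity, a minimal sketch of where that call would sit in a builder script; the environment name is arbitrary:)

env = Environment('sandbox')
env.use_relative_time(True)   # timestamps counted from the simulation start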

@yarox

yarox commented Feb 4, 2016

@adegroote, I tested it and it works! Thank you!

@adegroote
Contributor

It has been merged into the master branch, which should soon become 1.4.

@nicolaje
Collaborator

@yarox are you still working on this?
You did a good job, which I only noticed just now; would you mind adding a tutorial to the documentation?

@dgerod
Contributor

dgerod commented Feb 2, 2018

Yes, this is cool.

@giodegas giodegas closed this as completed Mar 3, 2020