
Feedback from users #71

Open
markaren opened this issue Feb 15, 2019 · 8 comments

Comments

@markaren
Member

Hi,
I would very much like to hear from you, the person reading this.
Why are you interested in this project, have you tried it?
What is unclear, what can be improved?

Note that it is also possible to chat on gitter!

@chrbertsch

This is a very interesting project!
What is missing, and what could perhaps be the "killer application", would be if one could wrap the client itself as an FMU. The client could then be imported into any FMI-importing tool.
Use cases would be distributed co-simulation, or wrapping 32-bit FMUs as 64-bit FMUs ...

@markaren
Member Author

markaren commented Jun 14, 2019

Thanks for the input @chrbertsch. It's an interesting idea. While manageable, I think wrapping the client as an FMU is quite hard. Since the FMU is static, it must be compiled against a target FMU running on the server. I guess one could modify the XML to take the IP address as an input, so it would not be specific to a particular host machine.

Or perhaps we could supply an FMU without a modelDescription.xml, to be populated by the user.
I should think a little about this. Regardless, before I potentially act on this, I plan to add FMI export capabilities to FMI4j (NTNU-IHB/FMI4j#26). That module could then potentially be re-used for something like this.
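As a rough sketch of the "IP address as an input" idea, the remote endpoint could be exposed in the modelDescription.xml as an ordinary string parameter. The variable name and default value below are made up for illustration and are not part of fmu-proxy:

```xml
<!-- Hypothetical: expose the remote endpoint as a tunable string parameter,
     so the wrapped client FMU is not tied to a particular host machine. -->
<ScalarVariable name="proxy.endpoint" valueReference="0"
                causality="parameter" variability="tunable"
                description="host:port of the remote fmu-proxy server">
  <String start="127.0.0.1:9090"/>
</ScalarVariable>
```

The importing tool would then treat the endpoint like any other parameter, and the same wrapper binary could be pointed at different servers.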

@jnjaeschke

I would be interested to know the performance overhead of calling an FMU through FMU-Proxy compared to calling the FMU "natively" using FMI4cpp in a C++ app. I'm thinking of implementing a tool that could simulate connected ME/CS FMUs regardless of architecture and OS. I guess a system of connected ME FMUs that are called via gRPC does have a noticeable performance overhead?

Thank you,
Jan

@markaren
Member Author

markaren commented Jun 27, 2019

Yes, there is some overhead. I did some tests on a subset of the fmi-crosscheck FMUs. See below:

[image: benchmark results for a subset of the fmi-crosscheck FMUs]

Edit 1: JVM server and client. The client is a normal laptop with an i7; the server is a desktop with a two-year-old i7.
Edit 2: The baseline uses FMI4j, which is about 10-15% slower than, say, FMI4cpp.

@jnjaeschke

Wow, that was quick, thank you! :)

Is the number in the "no. calls" column the 'absolute' number of function calls (say, do step, get values, set values, ...) or just the number of steps (do step calls)?
If the set/get values rpc calls are included: is there a separate call for each variable/vr? Or are they bundled?

Actually I thought (and hoped) that gRPC was doing better... did you try the flatbuffer option?

I did notice that it is faster to send a few larger messages than many small ones, though. I'm not sure if that is due to the protobuf conversion or to establishing the actual call.

Do you have any experience with the streaming feature of gRPC? One could think of a bidirectional stream for controlling the simulation (sending do_step, getting values). That way you reduce the number of separate calls over the wire to one. On the other hand, you lose the "rpc" functionality of having dedicated methods.
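To make the idea concrete, a bidirectional control stream could be declared roughly like this in proto3. All service, message, and field names here are hypothetical sketches, not part of fmu-proxy:

```proto
// Hypothetical sketch: one long-lived stream per simulation
// instead of one RPC per step.
syntax = "proto3";

service FmuSim {
  rpc Control (stream ControlRequest) returns (stream ControlResponse);
}

message ControlRequest {
  double step_size = 1;               // requested do_step size
  map<uint32, double> set_reals = 2;  // inputs to apply before stepping
  repeated uint32 read_refs = 3;      // value references to read back
}

message ControlResponse {
  int32 status = 1;                   // FMI status of the step
  map<uint32, double> reals = 2;      // values for the requested refs
}
```

The trade-off is as described above: the wire overhead per step shrinks, but all operations are multiplexed through one generic stream instead of dedicated RPC methods.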

@markaren
Member Author

no. calls is just do_step, but I have been meaning to "upgrade" the RPC to combine set/get/do_step in a single call, so that should give similar performance.
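Such a combined call could stay a plain unary RPC, roughly like the sketch below (again, hypothetical names, not the actual fmu-proxy schema):

```proto
// Hypothetical sketch: set inputs, step, and read outputs in one round trip.
service FmuService {
  rpc SetStepRead (StepRequest) returns (StepResponse);
}

message StepRequest {
  string instance_id = 1;
  map<uint32, double> set_reals = 2;  // inputs applied before the step
  double step_size = 3;
  repeated uint32 read_refs = 4;      // outputs to return after the step
}

message StepResponse {
  int32 status = 1;                   // FMI status of the step
  map<uint32, double> reals = 2;      // requested output values
}
```

This keeps the dedicated-method RPC style while collapsing a set/step/get cycle from three network round trips into one.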

FlatBuffers does not exist for gRPC Java, and it can't be enabled in gRPC C++ as it would break compatibility with other languages.

I have tried the stream API before, but I found no use for it in this project. E.g., the list of ScalarVariables could be a stream, but I reckon using a stream is only beneficial when the amount of data to receive is huge.

@jnjaeschke

But were there calls to set/get during the benchmarking or was it just do_step?

I agree, for getting the list of variables etc. it's much better to use one single large message. But streaming could be useful when there are multiple messages that are not yet available at RPC call time, such as logging:

In a gRPC-based application I am working on, I use a server stream for logging (the stream is started on instantiate and runs in a separate thread until freeInstance; when the logging callback is invoked, a new message is pushed to the stream).
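The logging stream described above could be declared along these lines; the names are hypothetical and only meant to illustrate the shape of the API:

```proto
// Hypothetical sketch: server-streamed log messages for one FMU instance.
// The stream is opened at instantiate and closed by the server at freeInstance.
service FmuLogging {
  rpc StreamLogs (LogRequest) returns (stream LogMessage);
}

message LogRequest {
  string instance_id = 1;
}

message LogMessage {
  int32 status = 1;     // FMI status associated with the log entry
  string category = 2;  // FMI log category
  string message = 3;
}
```

Because the server pushes a message whenever the FMU's logging callback fires, the client receives log entries as they happen rather than polling for them.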

Thinking of that, this could also be useful for communication during simulation: start the stream at the start of the simulation, send "do_step requests" / receive data during the simulation, and stop it after the simulation has finished. But I guess for ModelExchange this still gets very messy...

@markaren
Member Author

Just do_step, unfortunately.

Using the stream for logging sounds like a good idea. fmu-proxy currently offers no way of retrieving log data.

3 participants