java support #359
Comments
I'm interested in this as well, as I would like to integrate a trained PyTorch model directly into a production Java application.
Might be related: @EmergentOrder is creating a Java binding to ONNX (#499).
@isaacmg @schlichtanders It currently lives in my own fork of JavaCPP Presets, but if it gets merged into the main repo it would be published to public Maven repos for easy access, like the rest of the presets.
Added a branch built against ONNX 1.1.0.
What is the status of Java support in ONNX? Could someone comment?
@hadim @schlichtanders @isaacmg @bddppq The Java binding (for the latest ONNX release, 1.2.2) has been merged into JavaCPP Presets master, meaning it will be released with JavaCPP Presets 1.4.3 and subsequently be available via public Maven repos.
Thanks @EmergentOrder for setting this up; we'll help maintain the ONNX preset bindings that were contributed. Replying to the initial issue: BigDL shouldn't be needed just to run basic Java production systems, and Spark is a heavy dependency :). That said, the Deeplearning4j project was waiting for ONNX to stabilize, and for some of the Python frameworks to get their export up to snuff, before integrating with this. We have a TF import/graph API that runs TF graphs now and will be doing the same for ONNX.
@EmergentOrder BTW, snapshot artifacts are already available: http://bytedeco.org/builds/
I've been building out Scala support, starting with a numerically generic, typeful Scala API at https://github.com/EmergentOrder/onnx-scala
JavaCPP Presets 1.4.3 has been released, with the ONNX 1.3.0 preset. 1.4.4 will include support for macOS and the ONNX IR, model checker, optimizer, version converter and shape inference. See this PR.
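With the release on Maven Central, pulling the preset into a build should look something like the snippet below. This is a sketch based on the usual JavaCPP Presets naming scheme (`groupId`, the `-platform` artifact that bundles native binaries, and the `libraryVersion-presetsVersion` version pattern); verify the exact coordinates in the presets README before using them.

```xml
<!-- Hypothetical coordinates following the usual JavaCPP Presets
     conventions; check the presets README for the exact values. -->
<dependency>
  <groupId>org.bytedeco.javacpp-presets</groupId>
  <artifactId>onnx-platform</artifactId>
  <version>1.3.0-1.4.3</version>
</dependency>
```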
Finally, there are options to run ONNX models on the JVM. I haven't tested it yet, but BIDMach recently added support for loading ONNX files. Varying degrees of support are also available in Vespa, Menoh-java, DIANNE and Hydro-serving. Lantern uses Scala, but compiles to C++ and currently can't be called from the JVM (see feiwang3311/Lantern#47). Then there is the nGraph JavaCPP Preset that I've been working on. Lastly, I'm going to build on the nGraph preset to create a backend for ONNX-Scala. I have the Relu op currently working, with others on the way (working towards SqueezeNet for a start).
@EmergentOrder how many of these do more than just load 5 off-the-shelf CNN models? Has anyone actually implemented the full spec?
ONNX 1.3.0 has 116 ops, 12 of which are experimental. https://github.com/onnx/backend-scoreboard is a good place to start.
- TensorFlow: 93/116 node tests passed, 8/8 model tests passed
- nGraph: 85 ops, 14 validated workloads

The rest are based on my quick perusal of the sources:
- Menoh/Menoh-Java: 19 ops
@EmergentOrder thanks a lot for the overview. We're not claiming support for it in DL4J until the full set of ops is there. Our TF import library is fairly far along now, and our ONNX efforts will be based on that. Good to know things are moving, but we definitely need to raise the standard here a bit for what "supported" means. We'll do our best to publish an update when ONNX import is far enough along to be considered functional.
ONNX-Scala now supports 24 ops and counting (out of 136 standard ops as of ONNX 1.5.0).
This question is in the other direction: does the JavaCPP ONNX preset support creating ONNX models? I am asking because I want to create ONNX models from Lantern (https://github.com/feiwang3311/Lantern), which now uses the JavaCPP ONNX preset for reading ONNX models.
You can create ONNX models in memory using the raw JavaCPP ONNX Preset, then export them to files as serialized protobufs.
Thanks. I am still not quite sure about the part that exports ONNX models via the JavaCPP ONNX preset. Say I have code that reads a model from a file, and just assume that I want to write the model back out afterwards.
You can take the ModelProto you have and serialize it back out to a file.
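A minimal sketch of that round trip, assuming the preset maps the C++ protobuf methods `ParseFromString`/`SerializeAsString` onto `ModelProto` the way JavaCPP usually maps `std::string`-based APIs (check the generated `org.bytedeco.onnx` javadoc for the exact signatures, which may differ):

```java
// Sketch only: ParseFromString/SerializeAsString are assumed from the
// usual JavaCPP mapping of the C++ protobuf API; verify against the
// generated org.bytedeco.onnx classes before relying on them.
import java.nio.file.Files;
import java.nio.file.Paths;

import org.bytedeco.javacpp.BytePointer;
import org.bytedeco.onnx.ModelProto;

public class OnnxRoundTrip {
    public static void main(String[] args) throws Exception {
        // Read the serialized model and parse it into a ModelProto.
        byte[] in = Files.readAllBytes(Paths.get("model.onnx"));
        ModelProto model = new ModelProto();
        model.ParseFromString(new BytePointer(in));

        // ... mutate the model in memory here ...

        // Serialize the (possibly modified) model back to disk.
        BytePointer serialized = model.SerializeAsString();
        byte[] out = new byte[(int) serialized.limit()];
        serialized.get(out);
        Files.write(Paths.get("model_out.onnx"), out);
    }
}
```

Note that this requires the ONNX preset jars (including the platform-specific native binaries) on the classpath.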
BigDL will soon support it. Also, BigDL is not strictly bound to Spark; it also supports pure Java.
Update on the status of backends (no change for those with Java support not re-listed from last time). ONNX 1.5.0 has 136 ops.
- nGraph: 98/136 ops (via the JavaCPP Preset), 17 validated workloads
- MXNet: 87/136 ops (not yet available via the MXNet JavaCPP Preset or the MXNet Scala or Java APIs, but it could be), 12 validated workloads
- ONNX Runtime: claims full coverage of ops for ONNX 1.2+ (no JVM API; would be nice to have)
@EmergentOrder ONNX Runtime is indeed planning to add a Java API. If you'd like to help, please join in at https://github.com/microsoft/onnxruntime
@prasanthpul Good to hear!
@wzhongyuan Likewise, good to hear!
@prasanthpul If you guys need help with JavaCPP, be sure to let me know!
Initial PR for the ONNX Runtime Java API is here: microsoft/onnxruntime#1723
There is another PR into the microsoft onnxruntime repo: microsoft/onnxruntime#2215
To easily bring learned models into production systems for scoring, classification, or the like, it would be very helpful to see some Java support.
Maybe Deeplearning4j or BigDL might be ways to integrate.