Does this run headlessly? #15
Yes, it does.
Actually, if you do try to run this headlessly, you hit a headless exception, java.awt.HeadlessException. Could this be fixed to run in headless mode?
Which version of the plugin are you using? Are you just running that command, or are you doing something else?
Version 1.0.1. (Update: same effect with 1.1.0.) I'm running that command in a Jython script, script.py:
Console:
out:
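For context, a headless run of such a script is usually launched via Fiji's launcher with its `--headless` flag. A minimal sketch (the launcher path and script name below are assumptions; adjust for your install, and note the actual launch is commented out so the snippet runs anywhere):

```python
# Sketch: building the headless Fiji invocation for a Jython script.
# "./ImageJ-linux64" and "script.py" are placeholders for your setup.
import subprocess

def run_headless(script="script.py", launcher="./ImageJ-linux64"):
    """Assemble Fiji's headless command line (--ij2 --headless --run)."""
    cmd = [launcher, "--ij2", "--headless", "--console", "--run", script]
    # subprocess.run(cmd, check=True)  # uncomment on a machine with Fiji installed
    return cmd

print(run_headless())
```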
I see.
I tried this, but it's still complaining about a headless exception.
Does the plugin work when you try to run it from the desktop ImageJ application? |
This does:
This doesn't:
And if you have imp.show(), that automatically crashes Fiji in headless mode:
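One way to keep the same script usable in both desktop and headless Fiji is to guard GUI calls behind the standard AWT headless check. A minimal sketch (the `GraphicsEnvironment` test is the usual Java-side check; the `except` branch is only there so the snippet also runs outside Fiji's Jython, and `IJ.openImage` is a hypothetical placeholder for however you obtain the image):

```python
# Guard GUI calls so one script works in desktop Fiji and in headless mode.
try:
    from java.awt import GraphicsEnvironment  # available inside Fiji's Jython
    headless = GraphicsEnvironment.isHeadless()
except ImportError:
    headless = True  # plain CPython has no AWT; behave as if headless

# imp = IJ.openImage(path)  # hypothetical: open the image, run the plugin on it
if not headless:
    pass  # only here would imp.show() be safe; headless runs should save to disk instead
print("headless:", headless)
```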
@ctr26 as an alternative, you could try running the N2V_Fiji plugin in your case. That plugin is based on CSBDeep and imagej-tensorflow and should not have issues with running headless, as it avoids using ImageJ1 structures. /cc @frauzufall @ctrueden
What I'm trying to do is set up a decent Docker image of deepimagej, because I'd like to be able to run lots of models on the varied data on our cluster. Having a Docker image of deepimagej would make it really powerful at scale.
Best,
Craig
Thank you for the detailed explanation, @ctr26.
For CARE and N2V trained in Python, the recommended way of running prediction on the cluster is using Python directly. This script will do exactly what you asked for, including proper normalization.

How are you doing normalization? The preprocessing normalization of the DeepImageJ Noise2Void model looks weird; @tibuch @alex-krull, is this even correct? @carlosuc3m, do you tell people somewhere how this is adjusted for other Noise2Void models? Won't this lead to wrong results?

If you don't use the pre/postprocessing and do normalization yourself, the recommended way of running N2V prediction in Java is CSBDeep. Have a look at this N2V training notebook; at the end there is this line:
.. which exports the model into a ZIP that you can directly plug into CSBDeep to run prediction in Java. It's just a zipped SavedModelBundle, so CSBDeep can also run other image-to-image networks. Here is a script which runs prediction on a whole folder of images: https://github.com/CSBDeep/CSBDeep_fiji/blob/master/script/CARE_generic.py It runs headless. If this does not work, please file an issue here. We are working on making the N2V Python export models also compatible with N2V Fiji, so that you can run prediction with proper normalization in Java using N2V for Fiji.
@ctr26 The issue you commented on should already be fixed. Please feel free to try the new version and let me know if anything is still wrong with it. Here is the link to the correct version of the plugin: https://github.com/deepimagej/deepimagej-plugin/releases/download/1.1.0/DeepImageJ_-1.1.0-SNAPSHOT.jar
Hello @carlosuc3m, I just looked into the DeepImageJ Noise2Void model linked by @frauzufall. How did you come up with the preprocessing macro? In Noise2Void we compute mean and standard deviation over all given input training data, for each channel independently. These values are then stored in the config.json.

Are you planning to use the modelzoo descriptions in the future as well?
Hi @tibuch and @frauzufall! Exactly! We took the information from the config.json file (in our case it looks like: {"mean": "0.18905598", "std": "0.18313955", "n_dim": 2, "axes": "YXC", ....}). Then, we reproduced the normalization performed in the N2V code. We are migrating to the modelzoo description, but it is true that there are no specific fields for this kind of parameter; it could be a new proposal for the .yaml file. On the other hand, we designed the pre- and post-processing macros to deal with this kind of problem in a way that is straightforward for the developer.
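As a concrete illustration, reproducing that normalization from the config.json amounts to a standardize/destandardize pair. A small sketch (the config values are the ones quoted above; the function names are my own, not part of N2V or DeepImageJ):

```python
import json

# Example config as quoted in this thread; "mean"/"std" are stored as strings.
config = json.loads('{"mean": "0.18905598", "std": "0.18313955", "n_dim": 2, "axes": "YXC"}')
mean, std = float(config["mean"]), float(config["std"])

def normalize(x):
    """N2V-style standardization with the stored training-set statistics."""
    return (x - mean) / std

def denormalize(x):
    """Inverse transform, applied to the network output."""
    return x * std + mean

pixel = 0.5
print(normalize(pixel), denormalize(normalize(pixel)))
```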
Hi @esgomezm, I see, but I don't understand why you normalize to [0,1] before you normalize with the stored mean and std.
Additionally, this model should only be applied to data which has similar structures and the same noise statistics as the data used for training, which is impossible to determine with your current format. @ctr26 please make sure that the model you are using is trained on appropriate data! We address this issue with the modelzoo format.

Furthermore, the modelzoo yaml file is meant to be language agnostic. The big advantage of a language-agnostic model description is that we are not bound to a single programming language, which means that everyone can contribute models and everyone can make use of them. This is the current state of the yaml file as it gets exported by the fiji/n2v plugin:
As you can see, we are not there yet. This model is missing some crucial information for reproducibility as well as safe application.

I thought that we want to converge on the modelzoo format. For us Java users this means that we have to write some additional code which can parse yaml files and look up the correct methods. @frauzufall has already done a lot of work in this direction. But we also need a plugin which asks the user after training to provide the necessary information to write up a proper yaml file. This includes the name of the user, among other things.

I think it would be fantastic if we could combine our efforts in bringing deep learning to ImageJ. @frauzufall has a lot of experience with ImageJ2, imagej-tensorflow and CSBDeep as a core execution engine for TensorFlow models in Java, and you have a nice user interface.
To be honest, I was surprised that it's possible to run N2V using a pre-trained model, as my assumption was that a) it wasn't general in that way and b) you had to train a new N2V model for every new noisy image (or maybe every new noisy environment). A standardised, cross-platform model format in a centralised repository would be very useful for this project, not least because these models are stored on a Google Drive and that's hard to access from Docker files.
Hi @tibuch,
This is because the training image was normalized between 0 and 1 before training. The image used for the training and the given example image is the same one in this case. However, as @ctr26 said, this specific model is not provided to process new data, but as an example of models that can be loaded in Fiji using DeepImageJ.
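To make the distinction concrete, here is a small numeric sketch (the pixel values are made up; only the mean/std come from the config.json quoted earlier) of how standardizing raw intensities differs from first rescaling to [0, 1]:

```python
# Sketch of the concern discussed above: standardizing raw values vs. first
# rescaling to [0, 1]. Unless training used the same rescale, the network
# sees very different inputs. Pixel values below are invented for illustration.
mean, std = 0.18905598, 0.18313955  # stats quoted from the model's config.json

raw = [10.0, 100.0, 200.0]                       # hypothetical raw intensities
lo, hi = min(raw), max(raw)
rescaled = [(v - lo) / (hi - lo) for v in raw]   # min-max rescale to [0, 1]

direct = [(v - mean) / std for v in raw]         # standardize the raw values
chained = [(v - mean) / std for v in rescaled]   # rescale first, then standardize

print(direct[0], chained[0])
```

So a model whose training images were rescaled to [0, 1] first only behaves sensibly at prediction time if the same rescale is applied before the mean/std step, which is exactly the situation described above.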
Yes, we developed the
Definitely! Let's keep discussing it in imagej-modelzoo issue #1 |
This is not what he said. He assumed, since the model was on DeepImageJ, that he could run it on his data on the cluster, and wondered how this works, since the authors of N2V say you have to train yourself. The existence of the model on DeepImageJ made him think otherwise (@ctr26, correct me if I'm wrong), and no one from your side told him he should not run this on his data when he posted this issue. Don't you think the model should clearly indicate what you said about it, on the website and in Fiji?
In general, when using any of the published models to process new data, the user should check the obtained results and verify that they are valid. As with any other plugin in ImageJ, users are expected to read the information provided by the authors of each model. I think this is general for any bioimage workflow, not only for deep learning models. We have added a specific note for users in this direction, which I hope serves to avoid this kind of confusion.
Can you pass it images to process from an .ijm script, for instance?