Error in evaluating trained model --test #1135
Comments
Hi Leroy, thank you for reaching out! As a first step, would you be able to share the config file you used for training/testing? Also, could you send a screenshot of the content of the `pred_masks` folder? Thanks!
Hi Marie, below is the screenshot of the `pred_masks` folder, and here are the config file and bids_dataframe.csv:
Thanks @llasanudin for the additional information. The error comes from the way the filenames are parsed. In your case, the derivative filenames use an underscore before "manual". A quick fix is to replace the underscore before "manual" by a dash; the predictions will then be written with the correct filenames. Note that you do not need to re-train the model: once the files are renamed and the config file fixed, the already-trained model should work, but I recommend you use the corrected dataset going forward. I will follow up shortly with more details for the ivadomed team so we can work on a more permanent fix.
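For anyone landing here with the same problem, the rename can be scripted. Below is a minimal sketch assuming a typical BIDS derivatives layout; the `demo/` paths and the filename are placeholders, not the actual dataset:

```shell
# Demo setup: a placeholder file mimicking the problematic underscore
# before "manual" (adjust the paths to your own derivatives folder).
mkdir -p demo/derivatives/labels/sub-01
touch demo/derivatives/labels/sub-01/sub-01_seg-axon_manual.png

# The fix: rename every file so the underscore before "manual"
# becomes a dash.
find demo/derivatives -name '*_manual*' | while read -r f; do
  mv "$f" "$(echo "$f" | sed 's/_manual/-manual/')"
done
```

After running this, `sub-01_seg-axon_manual.png` becomes `sub-01_seg-axon-manual.png`; point `find` at your real derivatives folder instead of `demo/`.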
Additional details on the issue for the dev team:
A potential fix would be to change how the filename is split so that both syntaxes are handled. In any case, I think it would be good to look into this. AFAIK, this is the only place that "limits" the supported syntax of the derivative filenames.
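To illustrate the failure mode for the dev team, here is a hypothetical sketch (not ivadomed's actual parser) of BIDS-style entity parsing: a token without a dash yields a `None` value, and calling `.lower()` on that value raises exactly the `AttributeError: 'NoneType' object has no attribute 'lower'` reported in this issue.

```python
# Hypothetical entity parser: underscore-separated tokens, each
# expected to be a "key-value" pair joined by a dash.
def parse_entities(stem):
    entities = {}
    for token in stem.split("_"):
        key, sep, value = token.partition("-")
        # A token with no dash (like "manual") has no value -> None.
        entities[key] = value if sep else None
    return entities

ok = parse_entities("sub-01_seg-axon-manual")   # dash before "manual"
bad = parse_entities("sub-01_seg-axon_manual")  # underscore before "manual"

print(ok["seg"])      # prints: axon-manual
print(bad["manual"])  # prints: None -> calling .lower() on it would raise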
Thank you very much for the help, but I do have some follow-up questions:
Here are some files that may be useful: config and evaluation metrics.zip. Many thanks,
In the "testing" phase, we usually compute evaluation metrics for the testing set only, i.e. the images that were not seen by the model during training. If you want to output the segmentations of all the images in your dataset, you can run ivadomed with the segmentation command instead.
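If I understand the ivadomed config layout correctly, switching the run mode would look something like the fragment below; the `command` key is my assumption based on the documented config structure, and `path_output` is a placeholder:

```json
{
    "command": "segment",
    "path_output": "path/to/output_folder"
}
```

The rest of the config (loader, model, and transform parameters) would stay as in your training configuration.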
Unfortunately, I was not able to reproduce this error with the files you provided.
Thank you for your answer, that makes perfect sense. For the second question, I was trying to compare two models (both trained on the same dataset but with different depths). Here are some additional files (bids dataframe, evaluation metrics, and config file) from both models that may be useful:
Thanks for the additional information. I got a somewhat similar error to yours, and there may be a bug with one of the options. Are you able to run the script without that option?
Thanks @llasanudin, unfortunately I don't have an answer for you at the moment.
@llasanudin I guess you're running this on Google Colab... As a quick fix, I would suggest the following:
Hi everyone, thank you very much for the help! Your suggestions worked and I was able to generate the plots by not running from Colab. Just another quick question: is it possible to test the model on the entire dataset? I have already segmented all images from my dataset. Many thanks,
Hi @llasanudin, for your other question, as I mentioned earlier, it is unusual to test the model on images that were used in training, so ivadomed doesn't have an automatic way to do this. However, I can suggest a workaround for your specific case.
Let me know if any of these steps is not clear.
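A minimal sketch of such a split file, assuming the `fname_split` joblib layout matches the split files ivadomed writes itself (a dict with "train"/"valid"/"test" lists); the subject IDs and the output filename are placeholders:

```python
# Build a split file that sends every subject to the "test" set so the
# whole dataset is evaluated. The dict layout is an assumption based on
# the split files ivadomed saves during training.
import joblib

subjects = ["sub-01", "sub-02", "sub-03"]  # placeholder subject IDs
split = {"train": [], "valid": [], "test": subjects}
joblib.dump(split, "common_split_datasets.joblib")

# Sanity check: reload and inspect the split.
loaded = joblib.load("common_split_datasets.joblib")
print(loaded["test"])  # prints: ['sub-01', 'sub-02', 'sub-03']
```

You would then point the `fname_split` parameter of the config at this file so ivadomed uses your custom split instead of drawing a random one.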
Hello @mariehbourget, I am helping @llasanudin with the project under discussion in this thread, and I want to get a better understanding of how the joblib file is used. At the moment we want to be able to leave specific images out in the test set but still randomise the train and validation image sets. My understanding of how training normally works is that the train/validation sets are randomly drawn from the fraction of images not reserved for the test set. However, if we use the joblib file, do the train/validation steps still work like that? Best, Michael. PS: I can move this to a new issue if that is easier for you all.
Hi @GrimmSnark and @llasanudin, The procedure I wrote above with the joblib file was for the special case where you wanted to evaluate the images from the entire dataset after the training (even on images that were used for training).
The normal dataset splitting goes like this:
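The fraction-based random split can be pictured with a short sketch (illustrative only, not ivadomed's actual implementation; the fractions and seed are example values):

```python
import random

def split_dataset(subjects, train_fraction=0.6, test_fraction=0.2, seed=6):
    """Shuffle the subject list with a fixed seed, then slice it into
    train/valid/test according to the configured fractions."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_fraction)
    n_test = int(len(shuffled) * test_fraction)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    valid = shuffled[n_train + n_test:]  # remainder goes to validation
    return train, valid, test

train, valid, test = split_dataset([f"sub-{i:02d}" for i in range(1, 11)])
print(len(train), len(valid), len(test))  # prints: 6 2 2
```

The key point for the question above: the shuffle happens once per run (controlled by the seed), not at every training iteration.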
For that specific case, i.e. choosing what goes in the test set while keeping the train/valid sets random, we have a specific parameter that works better for your situation. Example:
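Here is a hedged example of what that `split_dataset` block could look like; the subject IDs, seed, and fractions are placeholders, so check the ivadomed configuration documentation for the exact schema:

```json
"split_dataset": {
    "fname_split": null,
    "random_seed": 6,
    "split_method": "participant_id",
    "data_testing": {
        "data_type": "participant_id",
        "data_value": ["sub-01", "sub-02"]
    },
    "train_fraction": 0.6,
    "test_fraction": 0.2
}
```

With `data_testing` listing the chosen subjects, those images are pinned to the test set while the remaining subjects are split randomly between train and validation.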
Let me know if I interpreted your question correctly and if you have any other questions.
Hello @mariehbourget, thank you again for your extremely fast reply! Looking at your response, the data_testing parameter seems to be exactly what we want to use. Michael
Hi,
I am training a segmentation model of axon and myelin from microscopy images using axondeepseg. The whole process up to creating the trained model worked just fine. However, when I try to test the trained model, it keeps producing this error: AttributeError: 'NoneType' object has no attribute 'lower'. I was wondering if you could point out where I went wrong (the full output is shown below).
Many thanks,
Leroy