wrong input_dims when use_img #2
Comments
Hi, thanks for your interest. Can you tell me which dataset you are using, and what dimensions you have changed them to? It looks a bit strange that the ARI after the major training is even lower than after pretraining.
I was using spatialLIBD/151673. Because I trained the model with MAE features, several layers had input dimensions different from the gene-expression-only case. For details, please see: compare. Comparing their results, I found that pretraining alone reached an ARI of 0.499, which was higher than the 0.439 from pretraining plus major training. I am wondering if I have set something wrong. Thank you!
It's a bit difficult for me to tell from this code directly, but I suggest debugging it in two steps.

First, can you obtain similar results without using image features? For spatialLIBD/151673 there is a slight improvement when using image features from MAE, but not much, because, as you can see from the histology images, the image patches look similar across spots. So if you can't reproduce similar results at this step, there is probably something wrong with your initial settings, environment, etc. From your result, I suspect this step is the problem, because I have never come across a situation where performance after the major training is worse than after pretraining.

If the first step is okay, then try reducing the input dimension of the image features from 748 to something lower, e.g. 100 with PCA (or even smaller), and see how the performance goes. By reducing the proportion of image features, you can check whether the image features are being extracted successfully by MAE.

P.S. Note that you may not get exactly the same results every time you run the experiment, due to the non-deterministic CUDA behavior of PyTorch Geometric.
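The suggested PCA step above can be sketched as follows. This is a minimal, hypothetical helper (not part of conST), using numpy's SVD rather than any particular library's PCA class; the 748/100 dimensions are the ones mentioned in the thread:

```python
import numpy as np

def reduce_img_features(img_feat: np.ndarray, n_components: int = 100) -> np.ndarray:
    """Project MAE image features onto their top principal components.

    img_feat: (n_spots, 748) array of per-spot MAE features.
    Returns an (n_spots, n_components) array.
    """
    # Center the features, as PCA requires.
    centered = img_feat - img_feat.mean(axis=0, keepdims=True)
    # Rows of vt are the principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

Feeding the reduced features into the model in place of the raw 748-dimensional ones shrinks the image features' share of the concatenated input, which is what makes the diagnostic in the reply possible.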
Hello,
I noticed that several layers had wrong input dimensions when I set use_img to True, and I have corrected them in my repo forked from yours: frickyinn/conST.
But with the MAE image features, the ARI was slightly lower than with gene expression alone. I think I may have used the wrong hyper-parameters when I changed the dimensions. Could you help me solve this?
Thank you!
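The mismatch described in this issue typically arises when image features are concatenated onto the gene-expression features but downstream layer sizes are not updated to match. A minimal sketch of the shape bookkeeping, with illustrative names and sizes (not conST's actual code; the 748 comes from the MAE features discussed in this thread):

```python
import numpy as np

gene_dim, img_dim = 300, 748   # e.g. reduced gene features + MAE image features
n_spots = 10

gene_feat = np.zeros((n_spots, gene_dim))
img_feat = np.zeros((n_spots, img_dim))

use_img = True
# With use_img, the model input is the concatenation of both feature sets.
x = np.concatenate([gene_feat, img_feat], axis=1) if use_img else gene_feat

# The first layer's input size must track the concatenated width;
# leaving it at gene_dim is exactly the wrong-input_dims bug.
input_dim = gene_dim + img_dim if use_img else gene_dim
W = np.zeros((input_dim, 64))  # first linear layer: input_dim -> 64
h = x @ W                      # shape-checks only if input_dim was updated
```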