Expose encoder output #65
Hey @maykcaldas, I am not really sure what you are trying to do with the tokenizer. If you want to access the encoder output, have a look at `DECIMER/Predictor_EfficientNet2.py`. In that file, add a function called
If you then call that function, you get the EfficientNet V2 encoder output. I hope this helps! Have a nice day! :)
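The function body from this comment appears to have been lost in extraction. As a rough illustration of the pattern only (the classes and names below are made up for this sketch, not DECIMER's real API), a helper that runs the encoder and stops before the decoder could look like:

```python
# Hypothetical sketch: a predictor that normally runs encoder + decoder,
# plus an extra function that returns only the encoder output.
# None of these names are guaranteed to match DECIMER's internals.

class FakeEncoder:
    def __call__(self, image):
        # Stand-in for the EfficientNet V2 backbone: returns "features".
        return [pixel * 2 for pixel in image]

class FakeDecoder:
    def __call__(self, features):
        # Stand-in for the transformer decoder: returns a "SMILES" string.
        return "C" * len(features)

encoder = FakeEncoder()
decoder = FakeDecoder()

def predict_smiles(image):
    # The normal prediction path: encoder, then decoder.
    return decoder(encoder(image))

def get_encoder_output(image):
    # The suggested addition: run only the encoder and return its output.
    return encoder(image)

print(predict_smiles([1, 2, 3]))      # "CCC"
print(get_encoder_output([1, 2, 3]))  # [2, 4, 6]
```

The point is simply that the encoder output already exists as an intermediate value inside the prediction path, so a second entry point can return it directly.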
Hey @OBrink, thanks for your suggestion! I tried it, but I ran into some problems:
After solving these three issues, I could run the `Predictor_EfficientNet2.py` script. It was missing the `tokenizer_Isomeric_SELFIES` and `max_length_Isomeric_SELFIES` pickle files, so I used the ones provided on Zenodo (https://zenodo.org/record/8093783/files/models.zip) and renamed them as needed.
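For readers following along, loading those two pickled assets is plain `pickle` usage; a minimal sketch, assuming the files were extracted from the Zenodo archive and renamed into a local `models/` directory (the directory name and exact filenames are assumptions, not documented paths):

```python
import pickle
from pathlib import Path

# Hypothetical layout: the renamed pickle files live in ./models.
MODEL_DIR = Path("models")

def load_pickle(name):
    """Load one of the pickled assets the predictor script expects."""
    with open(MODEL_DIR / name, "rb") as fh:
        return pickle.load(fh)

def load_assets():
    # The tokenizer and the maximum sequence length used at training time.
    tokenizer = load_pickle("tokenizer_Isomeric_SELFIES.pkl")
    max_length = load_pickle("max_length_Isomeric_SELFIES.pkl")
    return tokenizer, max_length
```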
Hey @maykcaldas & @smichtavy, I am sorry for the confusion; I will look into cleaning up a couple of things in the repository. I found an easier solution that saves us a lot of trouble, and I confirmed that it works:
If I run this on
Let us know if you have any further trouble! I'll wait until I hear from you to close this issue. Have a nice weekend! :)
Thank you very much! Feel free to close this issue :) Have a great weekend too!
I've reopened the issue since we have to update the Predictor code to use checkpoints. I will close it once the issue is solved.
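The planned change boils down to restoring trained weights into a freshly built model object rather than calling an opaque SavedModel. A toy illustration of that idea, stripped of TensorFlow (DECIMER would use the TF equivalent, e.g. `tf.train.Checkpoint`; the class, path, and file format here are made up):

```python
import os
import pickle
import tempfile

# Toy illustration of the checkpoint idea: save a weights dict to disk,
# rebuild the model object from its class, then load the weights back in.
# All names here are hypothetical; this is not DECIMER code.

class ToyModel:
    def __init__(self):
        self.weights = {"w": 0.0}

    def save_weights(self, path):
        with open(path, "wb") as fh:
            pickle.dump(self.weights, fh)

    def load_weights(self, path):
        with open(path, "rb") as fh:
            self.weights = pickle.load(fh)

ckpt_path = os.path.join(tempfile.gettempdir(), "toy_ckpt.pkl")

trained = ToyModel()
trained.weights["w"] = 3.14      # pretend this came from training
trained.save_weights(ckpt_path)

fresh = ToyModel()               # a fresh instance built from the class
fresh.load_weights(ckpt_path)    # restore weights instead of a SavedModel
```

Because `fresh` is a real instance of the class, all of its sub-components remain directly callable after the restore, which is exactly what a SavedModel's single `__call__` entry point does not give you.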
Issue Type
Questions
Source
PyPI
DECIMER Image Transformer Version
2.3.0
OS Platform and Distribution
MacBook Pro M1, 2020
Python version
3.10
Current Behaviour?
Hey!
Is there a way to access the encoder output using DECIMER's loaded model?
I'm interested in the embedded representation that is fed to the decoder, not the SMILES itself. I was wondering if it's possible to access it once the `Transformer` class calls the encoder and the decoder separately. I could reproduce the `predict_SMILES` function by loading the model from the checkpoint available on Zenodo, but since it's a TF model, I can only `__call__` it. Is there any possible way to load these weights into the `Transformer` class so I can call the `t_encoder` to access the `enc_output`? Having an argument in the call to expose the hidden states would also work fine.
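The "argument in the call" alternative is a common pattern: a forward pass that optionally returns the intermediate representation alongside the prediction. A minimal sketch, with made-up class and attribute names standing in for DECIMER's real ones:

```python
# Hypothetical sketch: a __call__ that can expose the encoder output on
# request. TinyTransformer, encode, and decode are illustrative stand-ins.

class TinyTransformer:
    def encode(self, image):
        # Stand-in for t_encoder: produce an "embedded representation".
        return [v + 0.5 for v in image]

    def decode(self, enc_output):
        # Stand-in for the decoder: produce a "SMILES" string.
        return "C" * len(enc_output)

    def __call__(self, image, return_hidden_states=False):
        enc_output = self.encode(image)
        smiles = self.decode(enc_output)
        if return_hidden_states:
            return smiles, enc_output
        return smiles

model = TinyTransformer()
smiles, enc_output = model([1, 2], return_hidden_states=True)
```

With the flag defaulting to `False`, existing callers of `__call__` keep working unchanged while new code can opt in to the hidden states.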
Any suggestion is welcome!
Thanks!
Mayk
Which images caused the issue? (This is mandatory for image-related issues)
No response
Standalone code to reproduce the issue
Relevant log output
No response
Code of Conduct