Need Help Regarding TabNetEncoder output and TabNetPretraining #446
Hello. The encoder outputs two things: the per-step outputs and an auxiliary value used to compute the loss: tabnet/pytorch_tabnet/tab_network.py Line 188 in bcae5f4
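To make the first output concrete: for supervised use, the list of per-step outputs is typically summed element-wise into a single representation (the library does this with `torch.sum(torch.stack(steps_output, dim=0), dim=0)`). A minimal dependency-free sketch, using plain Python lists as a stand-in for tensors:

```python
# Hedged sketch, not the library's exact code: TabNetEncoder.forward
# returns (steps_output, M_loss). The per-step outputs are combined
# into one representation by element-wise summation.

def aggregate_steps(steps_output):
    """Element-wise sum of per-step output vectors
    (plain-Python stand-in for torch.sum(torch.stack(...), dim=0))."""
    summed = [0.0] * len(steps_output[0])
    for step in steps_output:
        for i, value in enumerate(step):
            summed[i] += value
    return summed

# Example: two decision steps, each emitting a 3-dim representation.
steps = [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
]
representation = aggregate_steps(steps)  # -> [5.0, 7.0, 9.0]
```

So of the encoder's outputs, the summed step outputs are what you feed downstream; the auxiliary value only matters for the loss.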
You can see how the auxiliary value is used in the loss here: tabnet/pytorch_tabnet/abstract_model.py Line 509 in bcae5f4
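In spirit, that line scales the auxiliary sparsity term by a `lambda_sparse` coefficient and subtracts it from the task loss. A hedged sketch (the coefficient value here is illustrative, not necessarily the library default):

```python
# Hedged sketch of how the auxiliary M_loss enters training
# (cf. abstract_model.py near line 509 at bcae5f4): the sparsity
# term is scaled and subtracted from the supervised task loss.

def total_loss(task_loss, m_loss, lambda_sparse=1e-3):
    """Combine the supervised loss with the sparsity regularizer."""
    return task_loss - lambda_sparse * m_loss

combined = total_loss(task_loss=0.5, m_loss=-2.0)  # roughly 0.502
```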
forward_masks simply outputs the masks instead of the loss auxiliary, so that it can be used at inference time to get feature importances: see the explain method here: tabnet/pytorch_tabnet/abstract_model.py Line 303 in bcae5f4
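One simple way to turn per-step masks into a single importance vector is to sum each feature's mask values over the steps and normalize. This is a simplification of what explain does (the library's aggregation additionally weights each step), offered only as intuition:

```python
# Hedged, simplified sketch: per-step feature masks -> one normalized
# importance vector. Not the library's exact aggregation, which also
# weights each step's contribution.

def mask_importance(masks):
    """Sum masks over steps per feature, then normalize to sum to 1."""
    n_features = len(masks[0])
    totals = [sum(step[i] for step in masks) for i in range(n_features)]
    grand_total = sum(totals)
    return [t / grand_total for t in totals]

# Example: two steps, three features.
masks = [
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
]
importance = mask_importance(masks)  # -> [0.25, 0.5, 0.25]
```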
What you can do:
It does not make sense to load a state dict at each forward step; the first two lines should probably be inside the init method. I also don't think it's a good idea to reshape the output of TabNet's encoder into an 8x8 image. What kind of model is your NetRelu? You should probably use flattened representations from both models and concatenate them before passing them to a simple Linear head.
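The suggested fusion amounts to keeping both branch outputs as flat vectors and concatenating them (in PyTorch, `torch.cat([a, b], dim=1)` followed by an `nn.Linear` head). A dependency-free sketch of the concatenation step, with illustrative shapes only:

```python
# Hedged sketch of the suggested fusion: instead of reshaping the
# TabNet embedding into an 8x8 image, keep both representations flat
# and concatenate them before a single linear head
# (plain-Python stand-in for torch.cat(..., dim=1) + nn.Linear).

def fuse(tabnet_repr, other_repr):
    """Concatenate two flattened feature vectors for one sample."""
    return list(tabnet_repr) + list(other_repr)

# Example: a 2-dim TabNet embedding fused with a 3-dim branch output,
# giving a 5-dim input for the Linear head.
fused = fuse([0.1, 0.2], [0.3, 0.4, 0.5])
```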
Hey, Optimox. As you answered previously, I can use TabNetEncoder to produce custom embedding sizes. Upon testing the layer, I realised it produces three outputs, and I don't know which one is most important.
The paper that I tried to copy also used unsupervised training. The TabNetPretraining layer also produces three different outputs. I checked the outputs' meaning on GitHub, but I still don't know which one matters or how to pass it to TabNetEncoder.
The paper that I'm trying to duplicate is here (https://ieeexplore.ieee.org/document/9658729). Basically, the methods I need to duplicate from the paper are:
I already managed the transformer part, but I'm still stuck at the TabNet part.
Lastly, what is the difference between forward and forward_masks?
Thank you.