Great work, but I have a question about the MMILB class. In Line 173 of src/modules/encoders.py I found an encoder: `self.entropy_prj = nn.Sequential(.......)`.
In the forward method, when estimating the entropy of Y, the code does not use the input embeddings of Y directly. Instead, it first passes them through `self.entropy_prj` and uses the output to estimate the entropy. I couldn't find this encoder in the paper, so why is it used?
Hi. It seems (sorry, the work was done quite a while ago) that the projection is there to reduce the dimensionality so that the entropy calculation can proceed smoothly; without it, the computation sometimes produces NaN values.