Hello!
I am trying to visualize the architecture of MinD-Vis down to the individual layers for better understanding, but I am having trouble finding a description of the architecture.
The paper states that the encoder depth is 24. Does that mean it has 24 layers? If so, where can I look up the input and output size of each layer?
Do you have any general recommendations for understanding the exact architecture of MinD-Vis?
With regards, Alexander.
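For the shape question above, a depth of 24 in a ViT-style encoder conventionally means 24 stacked transformer blocks, each of which preserves the tensor shape (N, L, D). Below is a minimal sketch of how shapes could flow through such an encoder, assuming MinD-Vis follows the standard masked-autoencoder pattern (1D patch embedding over voxels, then depth transformer blocks). Only depth=24 comes from the paper; the `num_voxels`, `patch_size`, and `embed_dim` values are illustrative placeholders, not confirmed MinD-Vis hyperparameters.

```python
def encoder_shapes(num_voxels=4096, patch_size=16, embed_dim=1024, depth=24):
    """Return (layer_name, output_shape) pairs for batch size N=1.

    Assumes a ViT-style encoder: a patch-embedding layer splits the
    flattened fMRI signal into tokens, a [CLS] token is prepended, and
    each of the `depth` transformer blocks maps (N, L, D) -> (N, L, D).
    All hyperparameter values except depth are illustrative guesses.
    """
    shapes = []
    num_patches = num_voxels // patch_size
    shapes.append(("patch_embed", (1, num_patches, embed_dim)))
    seq_len = num_patches + 1  # +1 for the [CLS] token
    shapes.append(("add_cls_and_pos", (1, seq_len, embed_dim)))
    # Transformer blocks are shape-preserving, so every block reports
    # the same (N, L, D) output.
    for i in range(depth):
        shapes.append((f"block_{i}", (1, seq_len, embed_dim)))
    shapes.append(("final_norm", (1, seq_len, embed_dim)))
    return shapes

for name, shape in encoder_shapes():
    print(f"{name:16s} -> {shape}")
```

For the real model, the same information can be read off directly by iterating over `model.named_modules()` in PyTorch, or by registering forward hooks that print each layer's input and output tensor shapes during one dummy forward pass.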
Okay, it seems I missed the "Implementation" section of the paper. Sorry for bothering you. Still, if you could share some general recommendations on understanding MinD-Vis for a beginner in machine learning, I would appreciate it a lot.