Thanks for your research; I got a lot of inspiration from it.
I'm trying to adapt the PC softmax strategy to a different domain (1-D signals), but it's not working as well as I expected.
My question is that I'm not sure whether my implementation of the PC softmax you proposed in the paper is correct.
For example, suppose a model outputs logits. During the training phase, we just use the vanilla softmax. Then, at inference time, we post-compensate (i.e., correct) the logits: first apply softmax to the model's logits, then subtract the log of the source-data prior and add the log of the target-data prior.
Described as pseudo-code (at inference time): output = argmax( softmax(logit) - logS + logT )
Now I'm implementing it with this code, and strangely, the other losses work well (even LADE), but only PC softmax does not seem to work.
If you could check this implementation, it would be a great help to me.
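For concreteness, here is a literal PyTorch rendering of the pseudo-code above; this is a sketch only, and logits, p_s, and p_t are illustrative names, not taken from the repo:

```python
import torch

def pc_softmax_predict_as_described(logits: torch.Tensor,
                                    p_s: torch.Tensor,
                                    p_t: torch.Tensor) -> torch.Tensor:
    # Literal reading of the pseudo-code: apply softmax first, then add
    # the log-prior correction to the resulting probabilities.
    probs = torch.softmax(logits, dim=1)                # (N, C)
    adjusted = probs - torch.log(p_s) + torch.log(p_t)  # priors broadcast over (C,)
    return adjusted.argmax(dim=1)
```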
I believe it should be: output = argmax( softmax(logit - logS + logT) )
which is equivalent to output = argmax( logit - logS + logT ), since softmax preserves the ordering of the logits within each sample,
where S and T are p_s(y) and p_t(y), I presume.
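If that is the intended formulation, a minimal corrected sketch (same illustrative names as above, assuming p_s and p_t are the source and target class-prior vectors) would be:

```python
import torch

def pc_softmax_predict(logits: torch.Tensor,
                       p_s: torch.Tensor,
                       p_t: torch.Tensor) -> torch.Tensor:
    # The log-prior correction is applied to the raw logits, inside the
    # (implicit) softmax. Since softmax preserves the per-row ordering,
    # the argmax can be taken on the adjusted logits directly.
    adjusted = logits - torch.log(p_s) + torch.log(p_t)
    return adjusted.argmax(dim=1)
```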