Came across the recent bioRxiv posting of this work - awesome stuff! I should probably comment through the official Disqus thread, but I'll post here instead.
It was interesting to see how you pared down the input data and implemented a couple of regularization strategies to improve the accuracy of the tandem-CNN model. I'm curious whether you've experimented with different loss functions? Even though the probability vectors being predicted are short, some probability theory can still be applied, I think. See this for some insight on choosing a loss function. If you know, or can reason out, the distribution of the noise, a tailored loss function may improve the accuracy.
Thanks for your input! I never changed the loss function; I stuck with `cross_entropy_with_logits` throughout.
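For anyone following along, a minimal numpy sketch of what softmax cross-entropy over logits computes (this is an illustration of the standard formula, not the project's actual TensorFlow code):

```python
import numpy as np

def softmax_cross_entropy_with_logits(labels, logits):
    """Cross-entropy between a target probability vector and the
    softmax of raw network outputs (logits), computed stably in
    log space along the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -(labels * log_probs).sum(axis=-1)

# A uniform target against uniform logits gives loss = log(2):
loss = softmax_cross_entropy_with_logits(np.array([0.5, 0.5]),
                                         np.array([0.0, 0.0]))
```

When the logits strongly favor the correct class, the loss approaches zero, which is the behavior the training relies on.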
By selecting only expert solvers to train on, we significantly decreased the amount of noise in the data. Also, in the results section of the paper, we tested accuracy when training on half of the experts and predicting on the other half, and found that the accuracies (0.38 and 0.11) were significantly better than random guessing, though they did not match our best accuracies (0.51 and 0.34). So I think there is some variation in solving strategies, but not a drastic one.
Some other off-the-shelf loss functions: https://www.tensorflow.org/api_docs/python/tf/losses.
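To illustrate the "tailor the loss to the noise" idea: if the label noise is heavy-tailed rather than Gaussian, a robust loss such as Huber (one of the off-the-shelf options in that list) down-weights outliers compared to plain MSE. A small numpy sketch of the Huber formula, purely for illustration (`delta` is the usual transition point, not a parameter from the paper):

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small errors, linear in the tails,
    so single noisy labels pull on the fit less than under MSE."""
    err = np.abs(y_true - y_pred)
    quad = np.minimum(err, delta)   # quadratic part, capped at delta
    lin = err - quad                # remaining linear part
    return np.mean(0.5 * quad ** 2 + delta * lin)
```

An error of 0.5 falls in the quadratic region (loss 0.125), while an error of 3.0 is penalized linearly past `delta`, illustrating the reduced sensitivity to outliers.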
Best of luck!