It is difficult to train on a large dataset. #14

I use 80,000 samples of size 32×32 to train the joint net. But after I finish the first CNN update, it is difficult to run the next step: the code seems to spend a very large amount of computation on building the 'Affinity' matrix. How can I solve this problem?
Comments
Hi, yes, it needs a lot of computation to update the affinity. I think one way to address this problem is to compute only a partial affinity matrix instead of the full affinity matrix.
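For illustration, here is a minimal NumPy sketch of that idea (not this repo's actual code; the Gaussian kernel, the block size, and the bandwidth sigma are all placeholder choices). It computes the affinity one block of rows at a time, so the full N×N matrix never has to be held in memory at once:

```python
import numpy as np

def partial_affinity(features, block_size=1000, sigma=1.0):
    """Yield (start, end, block), where `block` holds rows start..end of the
    Gaussian affinity exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    n = features.shape[0]
    sq_norms = (features ** 2).sum(axis=1)
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        # Squared Euclidean distances from this block of rows to all samples.
        d2 = (sq_norms[start:end, None]
              - 2.0 * features[start:end] @ features.T
              + sq_norms[None, :])
        np.maximum(d2, 0.0, out=d2)  # clip tiny negatives from round-off
        yield start, end, np.exp(-d2 / (2.0 * sigma ** 2))
```

Each yielded block can be consumed (for example, keeping only each row's strongest entries) and discarded before the next block is computed.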
Thanks for your advice.
Hi, I think one way to solve this is to use a fast k-NN algorithm to build connections between close samples, and then compute the affinity only for those close samples.
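A minimal sketch of that approach, assuming scikit-learn and SciPy are available (the Gaussian edge weights and the choice of k = 20 are illustrative assumptions, not this repo's code):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

def knn_affinity(features, k=20, sigma=1.0):
    """Sparse affinity: each sample is connected only to its k nearest
    neighbours, with Gaussian weights on the edges."""
    n = features.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, idx = nn.kneighbors(features)   # column 0 is the point itself
    dist, idx = dist[:, 1:], idx[:, 1:]   # drop the self-neighbour
    weights = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    rows = np.repeat(np.arange(n), k)
    aff = csr_matrix((weights.ravel(), (rows, idx.ravel())), shape=(n, n))
    return aff.maximum(aff.T)             # symmetrize the graph
```

This stores roughly N·k entries instead of N², which is what makes the 80,000-sample case tractable.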