```
2018-04-30 16:19:57.630351: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.71GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
```
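As the message itself says, this BFC allocator warning is not an error: TensorFlow simply could not find a contiguous 2.71 GiB block and fell back to a slower path. If memory pressure is suspected, one common mitigation in TF 1.x era code is to let the GPU allocator grow on demand instead of reserving nearly all memory up front. This is only a sketch, assuming the tutorial uses a plain `tf.Session` (the config name is the real TF 1.x API; how it plugs into the tutorial's own training code is an assumption):

```python
import tensorflow as tf

# TF 1.x-style session config (hedged: assumes the tutorial builds its own
# tf.Session). allow_growth tells the BFC allocator to claim GPU memory
# incrementally, which can reduce fragmentation and warnings like the above.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # run the tutorial's training ops here
    pass
```

This only changes how memory is reserved; if the model genuinely needs more than the K80's memory, reducing the batch size is the usual next step.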
I'm following this tutorial for detecting atrial fibrillation, but when I run
Training pipeline
it takes a very long time. I'm using a Tesla K80 and left it running all night, more than 7 hours, but it is still running.
In this block it runs 1000 epochs:
Do you think something is wrong here?
Or does the framework have some way to indicate that it is making progress, for example printing the number of the current epoch?
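If the tutorial's code does not print anything per epoch, one option is to wrap its training loop yourself. This is a minimal sketch, assuming the tutorial exposes (or can be refactored into) a per-epoch training function; `train_one_epoch` is a hypothetical name, not part of the tutorial:

```python
import time

def train_with_progress(train_one_epoch, num_epochs=1000, log_every=10):
    """Wrap a per-epoch training callable (hypothetical `train_one_epoch`,
    returning that epoch's loss) so a long run reports progress and a rough
    ETA instead of appearing to hang."""
    start = time.time()
    for epoch in range(1, num_epochs + 1):
        loss = train_one_epoch(epoch)
        if epoch == 1 or epoch % log_every == 0:
            elapsed = time.time() - start
            eta = elapsed / epoch * (num_epochs - epoch)
            print("epoch %d/%d  loss=%.4f  elapsed=%.0fs  eta=%.0fs"
                  % (epoch, num_epochs, loss, elapsed, eta))
```

Even a crude ETA like this makes it obvious within minutes whether a 7-hour run is normal for 1000 epochs or whether the job is stuck.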
Also, FYI, when I run it I see the allocator warning shown above in the terminal.