Issue summary
pyCaffe rewrites the contents of the data layer when calling net.forward().
Therefore, calling net.forward() multiple times without changing the data produces different results, and the input data has no effect on the output loss.
Steps to reproduce
1. Load a network through pycaffe in the training phase.
2. Set an image and its label into the input blobs:
   net.blobs['data'].data[...] = image
   net.blobs['label'].data[...] = label
3. Run a forward pass:
   out = net.forward()
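For reference, repeating net.forward() on unchanged blobs should be deterministic. A minimal sketch of that expectation, using a hypothetical FakeNet stand-in (not real pycaffe, which is where the bug manifests) so the check runs without Caffe installed:

```python
import numpy as np

# Hedged sketch: FakeNet is a hypothetical stand-in for caffe.Net. A
# well-behaved forward pass reads the input blobs without mutating them.
class Blob:
    def __init__(self, shape):
        self.data = np.zeros(shape)

class FakeNet:
    def __init__(self):
        self.blobs = {'data': Blob((1, 3, 8, 8)), 'label': Blob((1,))}
    def forward(self):
        # The loss depends only on the current blob contents.
        return {'loss': float(self.blobs['data'].data.sum())}

net = FakeNet()
net.blobs['data'].data[...] = 1.0   # set a constant "image"
net.blobs['label'].data[...] = 0.0
out1 = net.forward()
out2 = net.forward()                # nothing changed between the two calls
print(out1 == out2)                 # True: unchanged input, identical loss
```

With the real pycaffe Net, the report says this equality does not hold, because the forward call rewrites the data blob.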
Tried solutions
Tried to debug the Python code, but was unsuccessful. The change occurs in pycaffe.py at line 131:
self._forward(start_ind, end_ind)
Found this by checking the value of self.blobs['data'].data[...] before and after this function call.
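The before/after check described above can be sketched like this, with a hypothetical _forward() that deliberately overwrites its input blob to mimic the reported behavior (the real self._forward lives in pycaffe.py and takes layer indices, not a blob):

```python
import numpy as np

# Hedged sketch: _forward here is a stand-in that refills the blob with
# random values, mimicking what the report says the data layer does.
def _forward(blob):
    blob[...] = np.random.rand(*blob.shape)

data = np.ones((1, 3, 4, 4))
snapshot = data.copy()   # value of net.blobs['data'].data before the call
_forward(data)
changed = not np.array_equal(snapshot, data)
print(changed)           # True: the blob was rewritten by the forward call
```

Taking a copy before the call is important: holding a plain reference to the array would see the same mutation and make the comparison vacuous.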
System configuration
Operating system: Ubuntu Linux 16.04
Compiler: 5.4.0
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 7.0
BLAS: OpenBLAS
Python version (if using pycaffe): Python 3.6
MATLAB version (if using matcaffe): Not Applicable
Issue checklist
[x] read the guidelines and removed the first paragraph
[x] written a short summary and detailed steps to reproduce
[-] explained how solutions to related problems failed (tick if found none)
[x] filled system configuration
[-] attached relevant logs/config files (tick if not applicable)
What about the subsequent layers? Are you sure those aren't initialized to random values every time you run the test? I'm 99% sure this is the case - please look into this. If you're still convinced Caffe behaves this way, please provide a minimal working example that demonstrates this so we could reproduce the bug.