Added opencv vector<Mat> to memory data layer with tests #1416
Conversation
mtamburrano commented Nov 7, 2014
- AddMatVector to load OpenCV cv::Mat images into MemoryDataLayer
- MemoryDataLayer accepts blobs with a dynamic batch size, especially useful when you need to predict a number of images that is not known a priori
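The semantics the PR describes can be sketched in plain C++ (no Caffe or OpenCV; names here are illustrative, not Caffe's actual API): images are queued into the layer, and batch_size can be set to however many images arrived, so one forward pass consumes them all.

```cpp
#include <vector>

// Hedged sketch of the queue-and-consume behavior described above.
// The ints stand in for queued cv::Mat images.
struct MemoryQueue {
  std::vector<int> queue;
  int batch_size = 1;

  void AddImages(const std::vector<int>& imgs) {
    queue.insert(queue.end(), imgs.begin(), imgs.end());
  }

  // Each forward pass consumes exactly batch_size items from the queue.
  std::vector<int> Forward() {
    std::vector<int> batch(queue.begin(), queue.begin() + batch_size);
    queue.erase(queue.begin(), queue.begin() + batch_size);
    return batch;
  }
};
```

Setting `batch_size` to the number of queued images lets a single `Forward()` predict all of them, which is the "not known a priori" use case above.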
Thank you, very useful and works fine.
@pleasurelong dataset.hpp is in caffe/include/caffe/dataset.hpp. I don't know why you don't have it; are you sure you have not accidentally deleted it?
@mtamburrano This is OK, I do not need this file now. Thanks for your reply ^ ^
@bhack my 2 cents here: this short PR is very simple and works well against the current dev tree. It seems to be of interest to several users. To me it would make sense not to delay it, and to update it in other PRs as needed.
@sguada Can you take a pass through here?
Maybe there was a mistaken approach with … The idea is that the MemoryData layer can hold more data than needed by batch_size, and that forward passes consume part of the queued data. So let's rename … If you want to add dynamic …
So, what about when added_data_ (or data_queue_) has size < batch_size? Should the forward pass do nothing until the queue is large enough, or should that be considered an error?
I think that should be an error: the size of data_queue_ should be a multiple of batch_size. So if one wants to add less data, one should first change batch_size.
Ok then, just to recap:
The forward pass already takes batch_size elements and moves the current position to the appropriate place. Currently, when it gets to the end, it loops back to the beginning. Replies:
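The cursor arithmetic described above can be sketched as follows (an assumption about the behavior, not Caffe's exact code): the position advances by batch_size each forward pass and wraps to the start when it reaches the end of the stored data.

```cpp
// Advance the read cursor by one batch, wrapping around at the end.
// pos: current position, n: total number of stored elements.
int advance(int pos, int batch_size, int n) {
  return (pos + batch_size) % n;  // loop back to the beginning
}
```

For example, with n = 6 and batch_size = 2, the cursor visits 0, 2, 4, then wraps back to 0.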
Since the user passes the data, they should know how many forward passes they have to do, but yeah, you could also add a bool var …
Yes, I meant what you wrote; I probably explained it badly. Additionally, line 49 …
@mtamburrano MemoryDataLayer was originally developed by @longjon to be able to pass data directly in memory by calling … This layer, when the data is added through …
@@ -175,6 +175,26 @@ void DataTransformer<Dtype>::Transform(const vector<Datum> & datum_vector,
   }
 }

 template<typename Dtype>
This code should be surrounded by #ifndef OSX
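The guard requested above follows the convention used elsewhere in Caffe at the time: OpenCV-dependent code is compiled out when the OSX macro is defined. A minimal self-contained sketch (the function name is illustrative, not Caffe's actual code):

```cpp
#include <cassert>
#include <cstring>

// Sketch of the requested preprocessor guard: when OSX is defined,
// the OpenCV-dependent path is replaced by a stub.
#ifndef OSX
inline const char* transform_backend() { return "opencv"; }
#else
inline const char* transform_backend() { return "stub"; }
#endif
```

On a build where OSX is not defined, the OpenCV path is compiled in; defining -DOSX switches to the stub.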
restructured MemoryDataLayer following @sguada's directions:
Can be merged now?
What could be an alternative for OS X?
@@ -270,10 +270,13 @@ class MemoryDataLayer : public BaseDataLayer<Dtype> {
   virtual inline int ExactNumTopBlobs() const { return 2; }
Define a Reshape method to do the reshaping of the tops, and use it in DataLayerSetUp. Remove the reshaping from Forward, since the Reshape method is called by the net before calling Forward.
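The contract @sguada describes can be sketched with minimal stand-in classes (not Caffe's real types): the net guarantees Reshape runs before Forward, so all sizing lives in Reshape and Forward only fills data.

```cpp
#include <cassert>
#include <vector>

// A stand-in for a top blob.
struct FakeTop { std::vector<float> data; };

// Sketch of the pattern: Reshape sizes the top, Forward assumes it is sized.
struct FakeLayer {
  int batch_size = 2, dim = 3;

  void Reshape(FakeTop& top) {            // all resizing happens here...
    top.data.resize(batch_size * dim);
  }

  void Forward(FakeTop& top) {            // ...never here; just fill data.
    for (float& v : top.data) v = 1.0f;
  }
};
```

Because the net calls Reshape before every Forward, Forward can stay free of any shape logic, which is exactly the review request above.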
I think this is not needed anymore because of #1313, right?
I haven't read this carefully, but since I've been asked about …
@longjon Thank you for the feedback. We did not claim an interaction with #1313 at the code level, only the need to maintain "semantic consistency" within the Caffe code, given divergent comments from core members in two different PRs. We are awaiting a final reply from @sguada so that we can allocate working time to the last actions needed for this to be merged.
"Can't change batch_size before all data haven't been consumed" | ||
<< " by the upper layers"; | ||
batch_size_ = new_size; | ||
added_data_.Reshape(batch_size_, channels_, height_, width_); |
ChangeBatchSize should be just a setter; it doesn't need to reshape anything else. That will happen in the corresponding methods.
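A sketch of the setter @sguada describes, combined with the "multiple of batch_size" rule agreed earlier in the thread (class and method names are assumptions, not Caffe's exact API): the setter only validates and records the new size; any reshaping is left to the other methods.

```cpp
#include <cassert>
#include <stdexcept>

// Hedged sketch: a pure setter for batch_size with the divisibility check.
class BatchSizeHolder {
 public:
  void set_batch_size(int new_size) {
    // Queued data must be a multiple of the new batch size.
    if (queued_ % new_size != 0)
      throw std::invalid_argument(
          "can't change batch_size until queued data is consumed");
    batch_size_ = new_size;  // just record it; no reshape here
  }
  int batch_size() const { return batch_size_; }
  void set_queued(int n) { queued_ = n; }

 private:
  int batch_size_ = 1;
  int queued_ = 0;
};
```

Keeping the setter free of reshaping matches the review note: the corresponding methods (e.g. Reshape) handle resizing when they run.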
@Nerei do you have any hints on how we could remove the ifdefs for OSX introduced by the inclusion of OpenCV in Caffe? This would also answer @StevenLobo2's question.
Edits done, I hope this time everything is fine ;)
@deepcnn I don't see where you are feeding the memory data layer. Please ask usage questions on the caffe-users mailing list. Thanks!
@shelhamer @mlapin Can we remove the OSX OpenCV ifdef from here now that #1236 is merged?
I think so, as long as you follow #1236 in removing OpenCV includes from …
@deepcnn Please use your thread on the mailing list; we are not discussing your issue here. Also, this PR currently cannot work on OS X because of the OSX ifdef wrapping.
@shelhamer I think this is ready; it has been reviewed multiple times by @sguada and partially by @longjon. We don't have OS X build infrastructure in our team or on Travis. Can we merge this? Could some core developers with OS X test the removal of the ifdefs?
@shelhamer I think that you have OS X. Can you brew this iced caffe, or can we merge this PR?
@bhack I'll take a look after the ICML deadline 02/06 -- in other news, CUDA 7 is at last compatible with libc++, so the OS X installation for 10.9 + 10.10 will be much simpler. I'll check how this works with pycaffe at the same time, so we can see about having an interactive solving example.
@shelhamer We can eventually allocate some working time for the week after 02/06 (we have been waiting on this since the CVPR deadline, and we also have other PRs in the review queue that are almost stalled). I really understand research deadlines, but it is becoming very hard for us to reserve working hours without minimal coordination of the BVLC review plan (monthly, bimonthly, or whatever you want).
It appears that the line void DataTransformer<Dtype>::Transform(const cv::Mat& cv_img, ...) is no longer surrounded by an #ifndef OSX guard. @mtamburrano, can you rebase this regardless? The PR currently breaks the build with:

src/caffe/data_transformer.cpp:195:2: error: #endif without #if
 #endif
 ^
src/caffe/data_transformer.cpp:197:2: error: unterminated conditional directive
 #ifndef OSX
Feed cv::Mats to MemoryDataLayer and set batch size on-the-fly.
I merged this with a little grooming of my own in 02d9170. I hope this is useful, as it has been requested, but I'm not entirely excited myself about the creeping dependence on OpenCV. While it is helpful in some environments, it is a heavy dependency in others, so I plan to split it off according to #1738 eventually.