Layer for Feature Extraction #13

Closed · ghost opened this issue Jun 20, 2016 · 7 comments
@ghost commented Jun 20, 2016

Hi, thanks for sharing this first of all.

Which layer is best for feature extraction? Have you run any tests on this?

Thanks.

@forresti (Owner)

What type of application/task are you trying to use the features for?

@ghost (Author) commented Jun 21, 2016

Thanks for your answer.

I have studied classifying objects with AlexNet. I use its FC7 layer to extract features; each feature is a 4096-dimensional vector, and cosine similarity is enough for the classification job.

I'm wondering whether I can do the same with SqueezeNet.
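A minimal sketch of that cosine-similarity comparison, assuming the features are already stored as std::vector<float> (the function name and vector type are illustrative assumptions, not from this repo):

#include <cmath>
#include <vector>

// Cosine similarity between two feature vectors (e.g. 4096-D FC7 outputs).
// Returns a value in [-1, 1]; higher means more similar.
float cosineSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
    float dot = 0.f, normA = 0.f, normB = 0.f;
    for (size_t i = 0; i < a.size(); ++i) {
        dot   += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (std::sqrt(normA) * std::sqrt(normB) + 1e-12f);
}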

@forresti (Owner)

One option would be...

Take the fire9/concat output, which is 13x13x512 and can be flattened into an 86528-dimensional vector (rather than 4096). Can your system handle such a big vector?

Or, you're welcome to put extra layers on the end and fine-tune. If you want an FC-style layer, you can put global average pooling (like pool_final) after fire9/concat and then put your new layers on the end. I think you'll likely be able to get reasonable accuracy with a little bit of fine-tuning of the pretrained model with your new layers. Or, we have provided the configuration files that you'll need to train from scratch.

The space of CNN architectures is enormous! Happy exploring. :)
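As an illustration of the global-average-pooling idea, here is a minimal sketch using OpenCV's DNN module (the same toolkit used later in this thread); it averages fire9/concat over its 13x13 grid to get a compact 512-D descriptor instead of the 86528-D flattened one. The model paths and preprocessing constants are assumptions, not part of this repo:

#include <opencv2/dnn.hpp>
#include <vector>

// Forward an image to fire9/concat and global-average-pool it into a
// 512-D descriptor. The net is assumed to be loaded with, e.g.,
// cv::dnn::readNetFromCaffe("deploy.prototxt", "squeezenet.caffemodel").
std::vector<float> squeezeNetDescriptor(const cv::Mat& img, cv::dnn::Net& net) {
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(227, 227),
                                          cv::Scalar(104, 117, 123));
    net.setInput(blob, "data");
    cv::Mat out = net.forward("fire9/concat");    // shape: 1 x 512 x 13 x 13

    const int channels = out.size[1];
    const int spatial  = out.size[2] * out.size[3];
    std::vector<float> desc(channels);
    for (int c = 0; c < channels; ++c) {
        const float* p = out.ptr<float>(0, c);
        float sum = 0.f;
        for (int i = 0; i < spatial; ++i) sum += p[i];
        desc[c] = sum / spatial;                   // mean over the 13x13 grid
    }
    return desc;
}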


@ghost (Author) commented Jun 28, 2016

Yes, I had used the fire9/concat layer, but as you said it is a big vector, so testing is very slow.

Yes, exploring the architectures occupies my days. :) Thanks for your suggestions.

@emsi commented Mar 1, 2017

> Take the fire9/concat output, which is 13x13x512 and can be flattened into an 86528-dimensional vector (rather than 4096). Can your system handle such a big vector?

Those are huge and take a really long time just to shuffle and send to the GPU for processing (my dataset is ~24000x13x13x512). My GPU is idling 60% of the time or more.

Another approach is to use the output of fire9/relu_squeeze1x1, which is 13x13x64, a more reasonable 10816-D, at the cost of losing the highest-level features (which might even be desirable, depending on how different your particular dataset is from ImageNet).
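A hedged sketch of that variant, reusing the OpenCV DNN setup from the earlier sketch (the layer name comes from the SqueezeNet prototxt; the helper itself is illustrative):

#include <opencv2/dnn.hpp>

// Flatten fire9/relu_squeeze1x1 (1 x 64 x 13 x 13) into a 10816-D row vector.
// Assumes the net's input has already been set, as in the earlier sketch.
cv::Mat squeezeFeature(cv::dnn::Net& net) {
    cv::Mat out = net.forward("fire9/relu_squeeze1x1");
    // Wrap the blob's data in a 1 x N header, then clone so the feature
    // owns its memory after `out` is released.
    cv::Mat flat(1, static_cast<int>(out.total()), CV_32F, out.ptr<float>());
    return flat.clone();
}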

@amir-sha

I took the fire9/concat output and used it as a feature vector for 26500 patches across 22 classes. Then I clustered them using the k-means algorithm, but the final result is not satisfying at all!
1. Most of the feature vector contains -0. Is that normal for the fire9/concat result?
2. k-means only clusters my patches into 5 classes; the other classes are empty!

Here is part of my code:

#include <mutex>
#include <string>
#include <vector>
#include <opencv2/dnn.hpp>

using namespace std;
using namespace cv;

static mutex mtx;                       // guards the shared net and ftvector
static vector<vector<float>> ftvector;  // one feature row per patch

// Get the output of layer fire9/concat (deepNetwork wraps the loaded net).
void deepNetwork::imageClassifier(Mat img, Mat& netRes) {
    setInput(img);                        // fills the member inputBlob
    net.setInput(inputBlob, "data");
    netRes = net.forward("fire9/concat"); // shape: 1 x 512 x 13 x 13
}

// Extract features from one patch and store them in the shared vector.
void caffeModel_thread(Mat patch, deepNetwork dnn, string name)
{
    Mat output;
    {
        lock_guard<mutex> ownlock(mtx);
        dnn.imageClassifier(patch, output);
    }

    vector<float> patchfec;
    patchfec.push_back(stof(name));       // first element is the patch label

    // Wrap each 13x13 channel in a Mat header (no copy)...
    vector<Mat> matVec;
    for (int p = 0; p < output.size[1]; ++p)
        matVec.push_back(Mat(output.size[2], output.size[3], CV_32F,
                             output.ptr<float>(0, p)));

    // ...then flatten channel by channel into one long feature vector.
    for (size_t k = 0; k < matVec.size(); ++k)
        for (int i = 0; i < matVec[k].rows; ++i)
            for (int j = 0; j < matVec[k].cols; ++j)
                patchfec.push_back(matVec[k].at<float>(i, j));

    {
        lock_guard<mutex> ownlock(mtx);
        ftvector.push_back(patchfec);
    }
}



@WSCZou commented Apr 9, 2019

I want to use a pre-trained SqueezeNet for feature extraction and feed the features into a kNN classifier, but I don't know how to achieve it.
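One way to do this, sketched with OpenCV's ml module (reusing the squeezeNetDescriptor helper from the earlier sketch; all names and parameters here are illustrative assumptions, not from this repo):

#include <opencv2/ml.hpp>

// Train a kNN classifier on SqueezeNet descriptors and label one query.
// trainData: N x D CV_32F, one descriptor per row (e.g. from
// squeezeNetDescriptor above); labels: N x 1 CV_32S class ids;
// query: 1 x D CV_32F descriptor of the image to classify.
int knnClassify(const cv::Mat& trainData, const cv::Mat& labels,
                const cv::Mat& query, int k = 5) {
    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->train(trainData, cv::ml::ROW_SAMPLE, labels);
    cv::Mat response;                      // findNearest writes CV_32F labels
    knn->findNearest(query, k, response);
    return static_cast<int>(response.at<float>(0, 0));
}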
