
Polymorphism with classifiers #44

Closed
lumyx2 opened this issue Jan 27, 2016 · 3 comments

lumyx2 commented Jan 27, 2016

Hello Nick,

Since the forum is down, I thought it best to message you here.

I was planning on using several different classifiers at the same time. For this purpose I defined an array of Classifiers and assigned a different subtype to each index. However, when trying to load the stored models I realised that the function being called resides in MLBase and not in each of the appropriate classifiers. Upon inspection I noticed that this function is not virtual and it just returns false. I guess it wasn't intended to be used in that way. Would you mind letting me know how I could achieve polymorphism? Could it be that I need to use an array of pipelines? I just want to use the library the way it was intended to be used, and don't want to go against its design principles.
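Roughly, this is the pattern I was attempting (a minimal sketch; the concrete classifier choices and the model filename here are just examples):

#include <GRT/GRT.h>
using namespace GRT;

int main(){
    // An array of base-class pointers, with a different subtype at each index
    std::vector< Classifier* > classifiers;
    classifiers.push_back( new KNN() );
    classifiers.push_back( new ANBC() );

    // Loading the stored models through the base-class pointer is where it
    // breaks down: the call resolves to MLBase and simply returns false
    for( size_t i = 0; i < classifiers.size(); i++ ){
        bool loaded = classifiers[i]->loadModelFromFile( "model.grt" );
        // loaded ends up false here
    }

    for( size_t i = 0; i < classifiers.size(); i++ ) delete classifiers[i];
    return 0;
}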

Thanks a lot.
M

ngillian-google (Contributor) commented Jan 28, 2016

In what way are you trying to run the classifiers at the same time?

For example, are you trying to create an ensemble, where all the classifiers try to recognize the same gestures and you combine them to improve the overall accuracy? Or are you using different classifiers at the same time to recognize different types of gestures, e.g. a basic classifier to detect that a user's hands are in a specific interaction area (such as in front of their torso) and a more complex classifier to recognize the specific gestures that occur in that interaction area?


lumyx2 commented Jan 28, 2016

Hello Nick,

They are all trying to recognise the same gesture. I am very much at a testing/building stage, so I am not sure what performs better. The idea was to have an array of Classifiers, go through all of them, and output the predictions to compare accuracies while in use with live data.


nickgillian (Owner) commented:

Ahh, so if you are trying to run multiple classifiers in parallel for testing purposes, then I would advise setting up a vector of pipelines, with each pipeline holding one of the classifiers. This also gives you the option to add slightly different preprocessing/feature-extraction modules or settings for each pipeline if you need them.
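For example, the setup could look something like this (the specific classifiers and the optional preprocessing module are just placeholders):

// One pipeline per classifier you want to compare
std::vector< GestureRecognitionPipeline > pipelines( 3 );
pipelines[0].setClassifier( ANBC() );
pipelines[1].setClassifier( KNN() );
pipelines[2].setClassifier( SVM() );

// Each pipeline can also get its own preprocessing if you need it,
// e.g. a moving-average filter on the first one
pipelines[0].addPreProcessingModule( MovingAverageFilter( 5, 1 ) );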

Then for each pipeline you would call:

for( size_t i = 0; i < pipelines.size(); i++ ){
    pipelines[i].train( trainingData );
}

and

for( size_t i = 0; i < pipelines.size(); i++ ){
    pipelines[i].predict( inputData );
}

for training and predicting.
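Then, to compare the classifiers, you can test each pipeline against the same held-out test set and read back its accuracy. A sketch, assuming testData is a ClassificationData set you have already loaded:

// Evaluate every pipeline on the same test data and print its accuracy
for( size_t i = 0; i < pipelines.size(); i++ ){
    if( pipelines[i].test( testData ) ){
        std::cout << "pipeline " << i << " accuracy: " << pipelines[i].getTestAccuracy() << std::endl;
    }
}

// For live data, call predict(...) on each pipeline and compare
// pipelines[i].getPredictedClassLabel() across the classifiers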
