feature dispatching in base for CLARS #4350
Conversation
switch (data->get_feature_type()) \
{ \
case F_DREAL: \
	return train_templated_name<float64_t>(data->as<CDenseFeatures<float64_t>>()); \
I guess we could get away with static casting here as well, but I am always more comfortable using a dynamic cast, as done in as.
@@ -315,7 +329,6 @@ class CMachine : public CStoppableSGObject

	virtual const char* get_name() const { return "Machine"; }

protected:
you want train_machine to be public? why?
this is a mistake... no need for public ;)
		result=train_with_string(data);
		break;
	default:
		SG_ERROR("train_machine is not yet implemented for %s!\n",
the error message should change and say that, e.g.:
"Training %s with %s is not supported.\n", get_name(), data->get_name()
			get_name());
	bool result=false;

	switch (data->get_feature_class())
keep in mind data might be nullptr.
This actually raises a problem: we cannot dispatch in that case.
I guess we need to add a "default" option for training when features are not provided. Messy.
Or we enforce that the user passes features from now on (an API change that would have an impact all over shogun).
@vigsterkr @lisitsyn @iglesias thoughts?
@shubham808 I think for now you can add a "train_without_features" that could be overloaded by subclasses. We can then remove that later
a default option would also cover feature classes not yet in the switch cases, but train_without_features makes more sense
	}
	else
		REQUIRE(data->get_feature_class() == C_DENSE,
			"Feature-class must be of type C_DENSE (%d)\n", data->get_feature_class(), C_DENSE)
nice, so we shaved off all this code in every class that dispatches the templated type for training
Looks good to me.
Why don't you add LDA as well, so we get a feeling for how this works there.
@@ -25,6 +25,20 @@
namespace shogun
{

#define TRAIN_DENSE_DISPATHER(train_templated_name) \
	virtual bool train_with_dense(CFeatures* data) \
	{ \
As of now, I am quite puzzled how we could combine this idea with IterativeMachine, since making the iteration method virtual and templated is not an option, but neither is doing the dispatching inside that method :(
Could we make the macro in a way that we can use it to overload train_with_dense to dispatch the template type, but also use it to overload iterate_dense?
i think we will need a new macro for iterate_dense
the code is essentially the same, so why not try to create one macro for both:
- accept the to-be-overloaded name as a parameter, here train_with_dense
- accept a variable number of parameters that are passed on to the definition of the overloaded method and the call of the typed implementation

Then DENSE_DISPATHER(train_with_dense, train_templated) would expand to
virtual void train_with_dense(CFeatures* data) { ... train_templated(data); ... }
and DENSE_DISPATHER(iteration, iteration_templated) would expand to
virtual void iteration(CFeatures* data) { ... iteration_templated(data); ... }
i think iteration need not be implemented everywhere we have train_dense, or? (since the dispatcher will be used e.g. in LDA, which is not an IterativeMachine)
nope, but it would be cool to use the same macro
@@ -25,6 +25,20 @@
namespace shogun
{

#define TRAIN_DENSE_DISPATHER(train_templated_name) \
typo: should be "DISPATCHER"
protected:

	virtual bool train_with_dense(CFeatures* data)
maybe an easier name would be train_dense
		return train_templated_name<float32_t>(data->as<CDenseFeatures<float32_t>>()); \
	case F_LONGREAL: \
		return train_templated_name<floatmax_t>(data->as<CDenseFeatures<floatmax_t>>()); \
	} \
needs a default case
I'll close this one, as we solved this using CRTP instead of macros.
[WIP] This is a very minimal example of how we want dispatching to work, based on the gist by @karlnapf. Not all feature classes and types are covered yet.