
feature dispatching in base for CLARS #4350

Conversation

@shubham808 (Contributor) commented Jun 29, 2018

[WIP] This is a very minimal example of how we want dispatching to work, based on the gist by @karlnapf. Not all feature classes and types are covered yet.

switch (data->get_feature_type()) \
{ \
case F_DREAL: \
return train_templated_name<float64_t>(data->as<CDenseFeatures<float64_t>>()); \
Member:

I guess we could get away with a static cast here as well, but I am always more comfortable using a dynamic cast, as is done in `as`.
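The cast discussion can be sketched in a few lines. The classes below are simplified stand-ins for shogun's CFeatures / CDenseFeatures<T> and its `as` helper; the Machine class, the two-type switch, and the sizeof-based type tag are illustrative assumptions, not the PR's actual code:

```cpp
#include <stdexcept>

// Stand-ins for shogun's feature hierarchy; names and layout are assumptions.
enum EFeatureType { F_SHORTREAL, F_DREAL };

struct CFeatures
{
    virtual ~CFeatures() = default;
    virtual EFeatureType get_feature_type() const = 0;

    // Minimal analogue of `as`: a dynamic_cast that fails loudly instead of
    // silently returning a mistyped pointer, which is the argument for
    // preferring it over a static_cast in the dispatcher.
    template <class T>
    T* as()
    {
        auto* casted = dynamic_cast<T*>(this);
        if (!casted)
            throw std::runtime_error("as(): features have the wrong type");
        return casted;
    }
};

template <class T>
struct CDenseFeatures : CFeatures
{
    EFeatureType get_feature_type() const override
    {
        // Simplified runtime tag for the sketch.
        return sizeof(T) == sizeof(double) ? F_DREAL : F_SHORTREAL;
    }
};

struct Machine
{
    // The typed implementation the dispatcher forwards to.
    template <class T>
    bool train_templated(CDenseFeatures<T>*) { return true; }

    // Runtime tag -> template parameter dispatch, as in the macro body above.
    bool train_dense(CFeatures* data)
    {
        switch (data->get_feature_type())
        {
        case F_DREAL:
            return train_templated<double>(data->as<CDenseFeatures<double>>());
        case F_SHORTREAL:
            return train_templated<float>(data->as<CDenseFeatures<float>>());
        }
        return false;
    }
};
```

A mismatched `as` call throws instead of handing the typed implementation a corrupt pointer, which is what a bare static_cast would risk.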

@@ -315,7 +329,6 @@ class CMachine : public CStoppableSGObject

virtual const char* get_name() const { return "Machine"; }

protected:
Member:

you want train_machine to be public? why?

Contributor Author:

this is a mistake... no need for public ;)

result=train_with_string(data);
break;
default:
SG_ERROR("train_machine is not yet implemented for %s!\n",
Member:

the error message should change and say that.
"Training %s with %s is not supported.\n", get_name(), data->get_name()

get_name());
bool result=false;

switch (data->get_feature_class())
@karlnapf (Member) Jun 30, 2018:

keep in mind data might be nullptr.
This actually raises a problem, as we cannot dispatch in that case...
I guess we need to add a "default" option for training if features are not provided.
Messy.
Or we just enforce the user to pass features from now on (an API change that will have an impact all over shogun).
@vigsterkr @lisitsyn @iglesias thoughts?

Member:

@shubham808 I think for now you can add a "train_without_features" that could be overloaded by subclasses. We can then remove that later

Contributor Author:

a default option will also work for feature classes not yet in the switch cases; train_without_features makes more sense
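The fallback hook discussed above might look roughly like this. Apart from the train_without_features name, which the comments themselves propose, everything here (the train/train_dispatched split, class layout) is a hypothetical stand-in, not shogun's actual CMachine interface:

```cpp
// Hypothetical sketch of a "train_without_features" fallback.
struct CFeatures
{
    virtual ~CFeatures() = default;
};

struct CMachine
{
    virtual ~CMachine() = default;

    bool train(CFeatures* data)
    {
        // With data == nullptr there is no feature type to switch on, so
        // fall back to a virtual hook that subclasses may overload.
        if (!data)
            return train_without_features();
        return train_dispatched(data);
    }

protected:
    // Default: training without explicitly passed features is unsupported.
    virtual bool train_without_features() { return false; }

    // The feature-type dispatch discussed in this PR would live here.
    virtual bool train_dispatched(CFeatures*) { return true; }
};

// A machine that stores its own features (the pre-API-change style) can
// overload the hook until an API change enforces passed-in features.
struct LegacyMachine : CMachine
{
    bool trained_without = false;

protected:
    bool train_without_features() override
    {
        trained_without = true; // would train on internally stored features
        return true;
    }
};
```

This keeps the dispatcher free of nullptr handling while letting subclasses opt in to feature-less training, and the hook can be deleted once the API change lands.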

}
else
REQUIRE(data->get_feature_class() == C_DENSE,
"Feature class (%d) must be C_DENSE (%d)\n", data->get_feature_class(), C_DENSE)
Member:

nice, so we shaved off all this code in every class that dispatches the templated type for training

@karlnapf (Member) left a comment:

Looks good to me.
Why don't you add LDA as well, so we get a feeling for how this works there.

@@ -25,6 +25,20 @@
namespace shogun
{

#define TRAIN_DENSE_DISPATHER(train_templated_name) \
virtual bool train_with_dense(CFeatures* data) \
{ \
Member:

As of now, I am quite puzzled about how we could combine this idea with IterativeMachine, since making the iteration method virtual and templated is not an option, but neither is doing the dispatching inside that method :(

@karlnapf (Member) Jul 2, 2018:

Could we make the macro in a way that we can use it to overload train_with_dense to dispatch the template type, but also to overload iterate_dense?

Contributor Author:

i think we will need a new macro for iterate_dense

@karlnapf (Member) Jul 2, 2018:

the code is essentially the same, so why not try to create one macro for both:

  • accept the to-be-overloaded name as a parameter, here train_with_dense
  • accept a variable number of parameters that are passed on to the definition of the overloaded method and to the call of the typed implementation

Then DENSE_DISPATHER(train_with_dense, train_templated) would expand to
virtual void train_with_dense(CFeatures* data) { ... train_templated(data); ... }

and DENSE_DISPATHER(iteration, iteration_templated) would expand to
virtual void iteration(CFeatures* data) { ... iteration_templated(data); ... }

@shubham808 (Contributor Author) Jul 2, 2018:

i think iteration need not be implemented everywhere we have train_dense, or? (since the dispatcher will be used in e.g. LDA, which is not an IterativeMachine)

Member:

nope, but it would be cool to use the same macro

#define TRAIN_DENSE_DISPATHER(train_templated_name) \
Member:

typo: "DISPATHER" should be "DISPATCHER"


protected:

virtual bool train_with_dense(CFeatures* data)
Member:

maybe an easier name would be train_dense

return train_templated_name<float32_t>(data->as<CDenseFeatures<float32_t>>()); \
case F_LONGREAL: \
return train_templated_name<floatmax_t>(data->as<CDenseFeatures<floatmax_t>>()); \
} \
Member:

needs a default case

@karlnapf (Member) commented Aug 6, 2018:

I'll close this one as we solved this using CRTP instead of macros
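For reference, a CRTP-based dispatcher along the lines mentioned here might look as follows. The base class does the runtime-tag switch once and calls into the derived class through its static type, so no per-class macro is needed. All names and the two-type switch are illustrative assumptions, not shogun's final implementation:

```cpp
// Stand-in feature hierarchy; names and the sizeof-based tag are assumptions.
enum EFeatureType { F_SHORTREAL, F_DREAL };

struct CFeatures
{
    virtual ~CFeatures() = default;
    virtual EFeatureType get_feature_type() const = 0;
};

template <class T>
struct CDenseFeatures : CFeatures
{
    EFeatureType get_feature_type() const override
    {
        return sizeof(T) == sizeof(double) ? F_DREAL : F_SHORTREAL;
    }
};

// CRTP base: Derived must provide a templated train_machine_templated.
// The dispatch switch lives here once, instead of in a macro per class.
template <class Derived>
struct DenseDispatching
{
    bool train_dense(CFeatures* data)
    {
        auto* self = static_cast<Derived*>(this);
        switch (data->get_feature_type())
        {
        case F_DREAL:
            return self->template train_machine_templated<double>(
                static_cast<CDenseFeatures<double>*>(data));
        case F_SHORTREAL:
            return self->template train_machine_templated<float>(
                static_cast<CDenseFeatures<float>*>(data));
        }
        return false;
    }
};

// A machine opts in to dispatching simply by inheriting with itself as
// the template argument and providing the typed implementation.
struct LDA : DenseDispatching<LDA>
{
    template <class T>
    bool train_machine_templated(CDenseFeatures<T>*) { return true; }
};
```

Compared to the macro, the CRTP base is a normal template class: it can be debugged, documented, and extended (e.g. for an iteration hook) without preprocessor tricks.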

@karlnapf karlnapf closed this Aug 6, 2018