
added lightning.KernelSVC (binary only) #176

Merged
merged 6 commits into from Mar 16, 2020

Conversation

@StrikerRUS (Member) commented Mar 8, 2020

Refer to #172 (comment). KernelSVC is the last model from the lightning library that can be added to m2cgen.

Adding only binary classification to keep the PR from getting too huge.

Comment on lines +196 to +201
feature_norm = ast.SqrtExpr(
    utils.apply_op_to_expressions(
        ast.BinNumOpType.ADD,
        *[utils.mul(ast.FeatureRef(i), ast.FeatureRef(i))
          for i in range(len(support_vector))]),
    to_reuse=True)
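For readers unfamiliar with m2cgen's AST, the snippet above builds the L2 norm of the input feature vector. In plain Python the generated expression is roughly equivalent to the following sketch (function name is illustrative, not from the PR):

```python
import math

def feature_norm(features):
    # Plain-Python equivalent of the AST above: the L2 norm of the
    # feature vector, sqrt(x0*x0 + x1*x1 + ...).
    return math.sqrt(sum(x * x for x in features))
```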
Member Author (@StrikerRUS):

I have no idea how to avoid a zero norm 🙁

Member:

What is lightning's behavior for a zero feature vector norm? Can we get away with the same logic that we use for the support vector norm?

Member:

If I understand the logic correctly, it does the same thing (falls back to 1.0). Shall we do the same here?
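The fallback under discussion can be sketched in plain Python (hypothetical helper, not code from the PR): compute the norm, then substitute 1.0 when it is zero so the later division stays safe.

```python
import math

def safe_feature_norm(features):
    # Sketch of the proposed fallback: compute the L2 norm, then
    # substitute 1.0 when it is zero, mirroring the logic used for
    # the support vector norm.
    norm = math.sqrt(sum(x * x for x in features))
    return norm if norm != 0.0 else 1.0
```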

Member Author (@StrikerRUS):

To be honest, I'm not sure that complicating the generated code in such a way is worth it...

Member:

I'd say we should always prefer correctness even if it means slightly more complex code. Btw, does it really become that complex? Especially considering that we reuse the expression (`to_reuse=True`) here. The AST doesn't look bad at all. Thanks for implementing this! 👍

Member Author (@StrikerRUS):

we should always prefer correctness even if it means slightly more complex code.

Yeah, no doubt!

In languages which implement `to_reuse=True` it looks OK, but if they don't... Though, there are many other places where the code looks terrible without caching.

I thought about a more elegant solution, but didn't come up with one.

Member:

I thought about a more elegant solution, but didn't come up with one.

Well, TBH, the existing solution looks more than good to me, at least at this point 👍

@coveralls commented Mar 8, 2020

Coverage Status

Coverage decreased (-0.3%) to 95.522% when pulling 10a7e46 on kernelsvc into c342ca1 on master.

m2cgen/assemblers/svm.py (outdated, resolved)
m2cgen/ast.py (resolved)
README.md (outdated, resolved)

@izeigerman (Member) left a comment:

Fantastic addition to the catalog, thank you 👍 A few minor comments

@izeigerman (Member):

Let's add a fallback logic for the feature norm and this is good to go. Thanks for addressing comments 👍

@StrikerRUS StrikerRUS force-pushed the kernelsvc branch 2 times, most recently from be4fde2 to 91d0c3f Compare March 15, 2020 20:05
ast.NumVal(0.0),
ast.CompOpType.EQ),
ast.NumVal(1.0),
ast.SqrtExpr(
Member:

[Nitpick] This part can be extracted into a separate variable and cached, making the expected AST a little simpler:

feature_norm = ast.SqrtExpr(
    ast.BinNumExpr(
        ast.FeatureRef(0),
        ast.FeatureRef(0),
        ast.BinNumOpType.MUL),
    to_reuse=True)

...
ast.IfExpr(
    ast.CompExpr(
        feature_norm,
        ast.NumVal(0.0),
        ast.CompOpType.EQ),
    ast.NumVal(1.0),
    feature_norm)
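In generated Python, the cached subexpression suggested above would render roughly as follows (a sketch, not output from m2cgen; a single-feature model is assumed):

```python
import math

def normalized_feature_norm(x0):
    # Rough rendering of the suggested AST: the norm subexpression is
    # computed once (to_reuse=True) and used both in the comparison
    # and as the else-branch value.
    feature_norm = math.sqrt(x0 * x0)
    return 1.0 if feature_norm == 0.0 else feature_norm
```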

Member Author (@StrikerRUS):

Sure, will do this in a separate PR.

@izeigerman izeigerman merged commit b363f62 into master Mar 16, 2020
@izeigerman izeigerman deleted the kernelsvc branch March 16, 2020 00:55