
[TMVA] Add new Evaluation Metric ( meanAbsoluteError between two matrices ) #2376

Merged
merged 1 commit into root-project:master from ravikiran0606:Evaluation-Metric on Aug 2, 2018

Conversation

ravikiran0606
Contributor

@ravikiran0606 ravikiran0606 commented Jul 27, 2018

Need:

A new evaluation metric is needed for testing the convergence of the optimizer. The existing metric, maximumRelativeError() between two matrices, takes the maximum of the relative errors between their individual elements. But the relative error between two elements depends on their magnitudes, i.e.

Relative error between a and b = abs(a-b)/(abs(a)+abs(b)). Let us consider two cases:

case a) If the two values are a = 0.0001, b = 0.0002, the relative error is 0.3333
case b) If the two values are a = 10.0001, b = 10.0002, the relative error is 4.99992e-6

The unit tests for the optimizers are written so that a sample 3-layer DNN learns the function Y = K * X. If X = I (the identity matrix), then Y = K * I = K, so K should match the output of the trained DNN when I is fed as input. Let Y' be the output of the trained DNN. I then need to compare the matrices K and Y' for approximate equality within a certain threshold.

So if I use maximumRelativeError to compare two matrices for approximate equality, then even though the absolute difference is the same in both cases, the relative errors differ significantly. Hence the need for a new evaluation metric.

Goal:

The goal of this PR is to implement a new evaluation metric, meanAbsoluteError(), between two matrices, which takes the mean of the absolute errors of the individual elements.

Absolute error between a and b = abs(a-b).

Both cases described above then have the same absolute error, so I propose this as a better metric for comparing two matrices for approximate equality, as needed for testing the optimizers.

@phsft-bot
Collaborator

Can one of the admins verify this patch?

@ravikiran0606
Contributor Author

@lmoneta @stwunsch Can you review this PR?

//______________________________________________________________________________
template <typename Matrix1, typename Matrix2>
Contributor


Is it needed to template the type of both matrices? Shouldn't they be of the same type?

Contributor


I see that the implementation of maximumRelativeError has done the same… Nevertheless, is it needed?

Contributor Author


Yeah, I guess we need the template arguments, because in certain cases the matrices can be of different types, such as TMatrixT, TCpuMatrix, or TCudaMatrix.

Contributor


Alright, fine then.

@stwunsch
Contributor

@phsft-bot build

@phsft-bot
Collaborator

Starting build on slc6/gcc48, slc6-i686/gcc49, centos7/clang39, centos7/gcc62, centos7/gcc7, fedora28/native, ubuntu16/native, mac1013/native, windows10/vc15 with flags -Dimt=ON -Dccache=ON

@lmoneta
Member

lmoneta commented Aug 2, 2018

These changes are fine with me. I can merge it.

@lmoneta lmoneta merged commit 78e3f0c into root-project:master Aug 2, 2018
@ravikiran0606 ravikiran0606 deleted the Evaluation-Metric branch August 2, 2018 12:54