
Adding the concept of training/eval mode to the VM #491

Merged: 6 commits, Oct 10, 2021

Conversation


@dcu dcu commented Jun 14, 2021

I'm submitting this to start the discussion and to see whether it is feasible.
Without this, I think the only option for disabling the Dropout op in eval mode,
for example, is recreating the graph, which is hard when you want to build a
higher-level framework while still letting people use gorgonia directly.
@codecov

codecov bot commented Jun 15, 2021

Codecov Report

Merging #491 (dedfbf0) into master (0e17857) will increase coverage by 0.09%.
The diff coverage is 70.89%.


@@            Coverage Diff             @@
##           master     #491      +/-   ##
==========================================
+ Coverage   55.57%   55.66%   +0.09%     
==========================================
  Files          73       74       +1     
  Lines       11966    12076     +110     
==========================================
+ Hits         6650     6722      +72     
- Misses       4443     4476      +33     
- Partials      873      878       +5     
Flag Coverage Δ
unittests 55.66% <70.89%> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
node.go 65.47% <ø> (ø)
op.go 65.27% <ø> (ø)
vm_genera.go 50.13% <0.00%> (-0.28%) ⬇️
vm.go 52.30% <50.00%> (-0.16%) ⬇️
op_dropout.go 71.30% <71.30%> (ø)
nn.go 47.69% <100.00%> (-0.76%) ⬇️
op_nn.go 77.11% <100.00%> (-0.63%) ⬇️
vm_tape.go 44.54% <100.00%> (+0.32%) ⬆️
weights.go 45.95% <0.00%> (-0.51%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0e17857...dedfbf0. Read the comment docs.

@dcu dcu requested a review from chewxy June 18, 2021 01:00

chewxy commented Jun 22, 2021

I'm back. Still sick and my sinuses are still heavily infected, but I think this is a good idea, though the interface should be a single method:

type EvalModeOp interface {
    SetTraining(isTraining bool) error
}
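The proposed one-method interface can be sketched in isolation: a VM could walk its ops and flip any that implement it, ignoring the rest. This is a minimal standalone sketch; everything except the `EvalModeOp` name (`dropoutish`, `setVMTraining`) is hypothetical and not Gorgonia's actual API.

```go
package main

import "fmt"

// The single-method interface proposed above.
type EvalModeOp interface {
	SetTraining(isTraining bool) error
}

// A toy op whose behaviour would depend on the mode.
type dropoutish struct {
	training bool
}

func (d *dropoutish) SetTraining(isTraining bool) error {
	d.training = isTraining
	return nil
}

// A VM-like container walks its ops and toggles any that care
// about the mode; ops that don't implement the interface are skipped.
func setVMTraining(ops []interface{}, isTraining bool) error {
	for _, op := range ops {
		if e, ok := op.(EvalModeOp); ok {
			if err := e.SetTraining(isTraining); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	d := &dropoutish{training: true}
	ops := []interface{}{d, "not-an-op"} // mixed bag, as a real op list would be
	if err := setVMTraining(ops, false); err != nil {
		panic(err)
	}
	fmt.Println(d.training) // prints false: switched to eval mode
}
```

The type assertion inside the loop is what keeps this backwards compatible: existing ops that never heard of the interface are simply left alone.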

@owulveryck (Member) commented

Hi!

Good idea indeed.
I've got a question. Why is the ExprGraph carrying the evalMode flag (besides convenience)?
My understanding is that evalMode is a runtime-only option, but it somehow alters the graph structure, which is counterintuitive to me.


dcu commented Jun 22, 2021

> Hi!
>
> Good idea indeed.
> I've got a question. Why is the ExprGraph carrying the evalMode flag (besides convenience)?
> My understanding is that evalMode is a runtime-only option, but it somehow alters the graph structure, which is counterintuitive to me.

I was thinking about dropout: since it is not a full Op, the decision can't be delayed until runtime. But you're totally right that eval mode is a runtime thing, so it won't work anyway. I guess we should make dropout an Op.

dcu added 2 commits June 22, 2021 09:15

Since eval mode is only a runtime concept it doesn't make sense to have it
in the graph itself.

… mode

This one replaces the previous interface and unifies the methods to change
the Op mode.

dcu commented Jun 22, 2021

> I'm back. Still sick and my sinuses are still heavily infected, but I think this is a good idea, though the interface should be a single method:
>
> type EvalModeOp interface {
>     SetTraining(isTraining bool) error
> }

I made this change; please take a look again.
Should I go ahead and make a dropout Op?

This implementation should be faster than the previous one and also
supports the new TrainModeOp interface, which disables it when it is
not training.

dcu commented Jun 26, 2021

I just implemented Dropout as an Op; it should be faster than the previous implementation.

@dcu dcu marked this pull request as ready for review October 9, 2021 22:51
@chewxy chewxy left a comment


LGTM...

@chewxy chewxy merged commit c11c38c into master Oct 10, 2021