
[Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight @open sesame 05/03 15:14 #2563

Open

wants to merge 5 commits into base: main
Conversation

@jijoongmoon (Collaborator) commented May 2, 2024

In this PR

We will add a Var32 tensor if the variable weight is not full
precision (FP32). This enables the weight update with full precision,
and only the apply-gradient process uses this tensor. Therefore, the
lifespan of this tensor should be "ApplyGradient".

Also, modify TensorPool to generate weights considering mixed precision.
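A hedged sketch of the mechanism described above (illustrative names, not nntrainer's actual classes): when the weight variable is not FP32, a full-precision "var32" master copy is kept, the gradient is applied to it, and the result is written back to the storage tensor. Only the apply-gradient step touches var32, which is why its lifespan can be limited to "ApplyGradient".

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a mixed-precision weight. In the real code the
// storage tensor would be FP16; plain float is used here for simplicity.
struct MixedWeightSketch {
  std::vector<float> var;   // stands in for the reduced-precision weight
  std::vector<float> var32; // FP32 master copy, alive during ApplyGradient
  std::vector<float> grad;  // gradient, assumed already in FP32
};

// Apply-gradient step: plain SGD update in full precision, then write the
// result back to the storage tensor.
inline void applyGradient(MixedWeightSketch &w, float lr) {
  for (std::size_t i = 0; i < w.var32.size(); ++i) {
    w.var32[i] -= lr * w.grad[i]; // full-precision accumulation
    w.var[i] = w.var32[i];        // would be cast down to FP16 in practice
  }
}
```

This avoids the precision loss of accumulating many small updates directly into a reduced-precision tensor.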

Self evaluation:

  1. Build test: [X]Passed [ ]Failed [ ]Skipped
  2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon jijoong.moon@samsung.com

@taos-ci (Collaborator) commented May 2, 2024

📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2563. Please follow the one-commit-per-PR policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before reviewers start a review. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.

@jijoongmoon jijoongmoon changed the title [Wait for 2562 ] [ Weight ] Add Var32 Tensor in Weight [Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight May 2, 2024
@taos-ci (Collaborator) commented May 2, 2024

:octocat: cibot: @jijoongmoon, the builder check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405022049330.26662302017212-69f3534d52f32fdab88ce58bdce83b8705f66bb1/.

@jijoongmoon jijoongmoon changed the title [Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight [Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight @open sesame 05/02 22:07 May 2, 2024
jihochu and others added 4 commits May 2, 2024 22:10
Add the loss scale property as a model common property.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
This PR enables the loss scale factor in Weight.
. Change the WeightSpec to include the loss factor
. Add the LossScaleForMixed property as a layer common property, so that
  it can set the scale factor in initContext.
. Add loss scale in initContext
. Set the LossScaleForMixed property when there is a LossScale model
  property

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
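A minimal sketch of the loss-scaling idea this commit wires into Weight (illustrative names, not nntrainer's API): the loss is multiplied by a scale factor before backpropagation so small reduced-precision gradients do not underflow, and each gradient is divided by the same factor before the weight update.

```cpp
#include <cassert>
#include <vector>

// Applied before computing gradients, so every gradient is scaled up too.
inline float scaleLoss(float loss, float loss_scale) {
  return loss * loss_scale;
}

// Undo the scale on the gradients before the apply-gradient step.
inline void unscaleGrads(std::vector<float> &grads, float loss_scale) {
  for (float &g : grads)
    g /= loss_scale;
}
```

Powers of two (e.g. 128) are the usual choice for the scale factor because multiplying and dividing by them is exact in floating point.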
This PR splits the variable and gradient dims in Var_Grad and Weight.
This way we can set different variable and gradient types in Weight.
. add dim_g for the gradient in WeightSpec.
. update the manager to support the new WeightSpec.
. create tensors according to dim_v and dim_g
. change Weight creation in Weight.h

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
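A sketch of the split described in the commit above (hypothetical names): the weight spec now carries separate dims for the variable and the gradient, so each can be created with its own data type.

```cpp
#include <cassert>

enum class DType { FP16, FP32 };

// Hypothetical stand-in for a tensor dimension carrying a data type.
struct DimSketch {
  unsigned len;
  DType dtype;
};

// WeightSpec-like record with independent variable/gradient dims.
struct WeightSpecSketch {
  DimSketch dim_v; // variable dim, e.g. FP16 storage
  DimSketch dim_g; // gradient dim, may differ, e.g. FP32
};
```

With a single shared dim, the gradient would be forced into the variable's data type; splitting the two is what allows an FP16 weight to pair with an FP32 gradient.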
Loss scale is more like a rigid property of the model, rather than a flexible
property.

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
@taos-ci (Collaborator) commented May 2, 2024

:octocat: cibot: @jijoongmoon, the builder check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405022211050.57468700408936-eefbe653350644e0bf0e43e3276384706241e941/.

@taos-ci (Collaborator) commented May 2, 2024

:octocat: cibot: @jijoongmoon, the builder check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405030740520.23194599151611-705f655f1a469bab837b732dd8b13b88f498ccdb/.

@taos-ci (Collaborator) commented May 2, 2024

:octocat: cibot: @jijoongmoon, the builder check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405030812070.23390698432922-2165c364e068493e5d5cbf356fdf8dbb5fcf8583/.

@taos-ci (Collaborator) commented May 3, 2024

:octocat: cibot: @jijoongmoon, nntrainer/tensor/weight.h does not include Doxygen tags such as @file, @brief, @author, and @bug. You must include the Doxygen tags in the source code. Please refer to the Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md

1 similar comment

@taos-ci (Collaborator) left a comment

@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

@jijoongmoon jijoongmoon changed the title [Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight @open sesame 05/02 22:07 [Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight @open sesame 05/03 15:14 May 3, 2024
@taos-ci (Collaborator) commented May 3, 2024

:octocat: cibot: @jijoongmoon, nntrainer/tensor/weight.h does not include Doxygen tags such as @file, @brief, @author, and @bug. You must include the Doxygen tags in the source code. Please refer to the Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md

@taos-ci (Collaborator) left a comment

@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

We will add a Var32 tensor if the variable weight is not full
precision (FP32). This enables the weight update with full precision,
and only the apply-gradient process uses this tensor. Therefore, the
lifespan of this tensor should be "ApplyGradient".

. Modify TensorPool to generate weights considering mixed precision.

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
@taos-ci (Collaborator) left a comment

@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

@DonghakPark (Member) left a comment

LGTM! (need rebase)

@skykongkong8 (Member) left a comment

LGTM

Comment on lines +436 to +437
TensorDim var32_dim(dim_v);
var32_dim.setDataType(ml::train::TensorDim::DataType::FP32);
A Contributor left a comment

would it be convenient for you to have a new TensorDim constructor that takes data type?
(e.g., TensorDim var32_dim(dim_v, ml::train::TensorDim::DataType::FP32))

If so, I'll create a new PR with the change.
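A sketch of what the suggested constructor could look like (hypothetical; the real TensorDim lives in nntrainer and may differ): a constructor that copies an existing dim but overrides its data type, collapsing the two-line pattern quoted above into a single expression.

```cpp
#include <cassert>

enum class DataType { FP16, FP32 };

// Hypothetical stand-in for nntrainer's TensorDim.
struct TensorDimSketch {
  unsigned batch = 1, channel = 1, height = 1, width = 1;
  DataType dtype = DataType::FP16;

  TensorDimSketch() = default;
  // Proposed convenience constructor: copy the dims, override the dtype.
  TensorDimSketch(const TensorDimSketch &other, DataType t)
    : TensorDimSketch(other) { // delegate to the implicit copy constructor
    dtype = t;
  }
};
```

Usage then becomes a single declaration, e.g. `TensorDimSketch var32_dim(dim_v, DataType::FP32);`.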

TensorDim var32_dim(dim_v);
var32_dim.setDataType(ml::train::TensorDim::DataType::FP32);
std::string var32_suffix = ":fp32";
std::string var32_name = name + var32_suffix;
A Contributor left a comment

duplicated with lines 78-79?

* @return false otherwise
*/
bool isMixedPrecision() const {
return var->getDataType() == ml::train::TensorDim::DataType::FP32;
A Contributor left a comment

I'm quite confused here. if this function returns true when the variable is not in full precision, shouldn't it be != instead of ==?

A Contributor left a comment

fixed in #2566
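As the reviewer points out, the quoted predicate is inverted. A sketch of the corrected check (the actual fix landed in #2566; `DType` here is an illustrative stand-in for `ml::train::TensorDim::DataType`):

```cpp
#include <cassert>

enum class DType { FP16, FP32 };

// A weight is mixed precision when its variable is NOT stored in full
// precision, so the comparison must be != rather than ==.
inline bool isMixedPrecision(DType var_type) {
  return var_type != DType::FP32;
}
```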

@djeong20 (Contributor) left a comment

LGTM

@EunjuYang (Contributor) left a comment

LGTM

@myungjoo (Member) commented

@jijoongmoon Let's rebase and move on.

8 participants