
fix network precision under specific situation #1391

Merged — 10 commits merged into deepmodeling:devel on Dec 31, 2021

Conversation

njzjz (Member) commented Dec 30, 2021

No description provided.

codecov-commenter commented Dec 30, 2021

Codecov Report

Merging #1391 (506d003) into devel (1a51b5d) will increase coverage by 0.09%.
The diff coverage is 94.54%.


@@            Coverage Diff             @@
##            devel    #1391      +/-   ##
==========================================
+ Coverage   75.53%   75.63%   +0.09%     
==========================================
  Files          91       92       +1     
  Lines        7506     7531      +25     
==========================================
+ Hits         5670     5696      +26     
+ Misses       1836     1835       -1     
Impacted Files Coverage Δ
deepmd/descriptor/se_r.py 91.54% <60.00%> (+0.04%) ⬆️
deepmd/descriptor/se_a.py 94.15% <80.00%> (+0.28%) ⬆️
deepmd/common.py 81.57% <100.00%> (+1.57%) ⬆️
deepmd/descriptor/se.py 72.72% <100.00%> (+4.30%) ⬆️
deepmd/descriptor/se_t.py 97.35% <100.00%> (+0.01%) ⬆️
deepmd/fit/dipole.py 93.24% <100.00%> (+0.18%) ⬆️
deepmd/fit/ener.py 91.70% <100.00%> (ø)
deepmd/fit/fitting.py 100.00% <100.00%> (ø)
deepmd/fit/polar.py 49.27% <100.00%> (+0.49%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@njzjz njzjz marked this pull request as draft December 30, 2021 06:06
@njzjz njzjz changed the title fix filter network precision fix network precision Dec 30, 2021
@njzjz njzjz changed the title fix network precision fix network precision under specific situation Dec 30, 2021
@njzjz njzjz marked this pull request as ready for review December 30, 2021 06:38
@njzjz njzjz requested a review from denghuilu December 30, 2021 06:38
@@ -736,7 +736,7 @@ def _filter_lower(
             if (not self.uniform_seed) and (self.seed is not None): self.seed += self.seed_shift
         else:
             # we can safely return the final xyz_scatter filled with zero directly
-            return tf.cast(tf.fill((natom, 4, outputs_size[-1]), 0.), GLOBAL_TF_FLOAT_PRECISION)
+            return tf.cast(tf.fill((natom, 4, outputs_size[-1]), 0.), self.filter_precision)
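
The point of the change above is that the zero-filled tensor returned by the empty branch must have the same dtype as the tensors produced by the normal network branch, otherwise downstream concatenation or addition mixes precisions. The following NumPy sketch illustrates the idea only; `GLOBAL_PRECISION`, `filter_precision`, and `filter_branch` are illustrative stand-ins, not the actual deepmd-kit API (which operates on TensorFlow tensors).

```python
import numpy as np

GLOBAL_PRECISION = np.float64   # stand-in for GLOBAL_TF_FLOAT_PRECISION (assumed name)
filter_precision = np.float32   # stand-in for self.filter_precision (assumed name)

def filter_branch(natom, out_size, has_neighbors):
    """Mimic _filter_lower: one branch runs the network, the other returns zeros."""
    if has_neighbors:
        # normal branch: the embedding network computes in filter precision
        return np.random.rand(natom, 4, out_size).astype(filter_precision)
    # fixed branch: fill zeros in the SAME precision as the network branch,
    # so later concatenation/addition sees one consistent dtype
    return np.zeros((natom, 4, out_size), dtype=filter_precision)

a = filter_branch(2, 3, True)
b = filter_branch(2, 3, False)
assert a.dtype == b.dtype == filter_precision
```

With the pre-fix behavior (zeros filled in the global precision), `a.dtype` and `b.dtype` would differ whenever `filter_precision` is not the global precision, which is exactly the "specific situation" the PR title refers to.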
Member: We cast the result back to global precision before return.

njzjz (Member, Author): I don't see any cast in line 722 or line 747.

njzjz (Member, Author): Is there something wrong?

Collaborator:

> I don't see any cast in line 722 or line 747

747 should be cast back...

Why do we need to cast at L722?

njzjz (Member, Author): Instead of _filter_lower, I think we should cast back at line 839, before _filter returns.

Collaborator: I agree with you.

Collaborator: Why not write a decorator for this? It would cast the inputs of _filter to filter_precision and cast back to global precision when _filter returns.

njzjz (Member, Author): Good idea!

Comment on lines +520 to +521:

    """A decorator that casts and casts back the input
    and output tensor of a method.

Collaborator: Could you please write more on the logic behind cast_precision?

  • It casts input tensors from GLOBAL_TF_FLOAT_PRECISION to the precision defined by the property precision.
  • It casts output tensors from precision back to GLOBAL_TF_FLOAT_PRECISION.
  • It checks the input/output list and only casts when an input or output is a tensor whose dtype matches GLOBAL_TF_FLOAT_PRECISION or precision, respectively.
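The three bullets above can be sketched as a decorator. This is a minimal NumPy illustration of the pattern, not the actual deepmd-kit `cast_precision` (which handles TensorFlow tensors and GLOBAL_TF_FLOAT_PRECISION); all names here (`GLOBAL_NP_FLOAT_PRECISION`, `Filter`, `_filter`) are hypothetical stand-ins.

```python
import numpy as np
from functools import wraps

GLOBAL_NP_FLOAT_PRECISION = np.float64  # stand-in for GLOBAL_TF_FLOAT_PRECISION

def cast_precision(func):
    """Cast array inputs to self.precision on entry and back on return.

    Only arrays whose dtype matches the global precision (inputs) or
    self.precision (outputs) are cast; everything else passes through untouched.
    """
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        def cast_in(x):
            if isinstance(x, np.ndarray) and x.dtype == GLOBAL_NP_FLOAT_PRECISION:
                return x.astype(self.precision)
            return x
        args = [cast_in(a) for a in args]
        kwargs = {k: cast_in(v) for k, v in kwargs.items()}
        out = func(self, *args, **kwargs)
        if isinstance(out, np.ndarray) and out.dtype == self.precision:
            return out.astype(GLOBAL_NP_FLOAT_PRECISION)
        return out
    return wrapper

class Filter:
    precision = np.float32  # stand-in for self.filter_precision

    @cast_precision
    def _filter(self, inputs):
        # inside the method, inputs have already been cast to self.precision
        assert inputs.dtype == self.precision
        return inputs * 2

x = np.ones(4, dtype=GLOBAL_NP_FLOAT_PRECISION)
y = Filter()._filter(x)
assert y.dtype == GLOBAL_NP_FLOAT_PRECISION
```

The design advantage discussed in the thread: the cast logic lives in one place instead of being scattered across every branch of `_filter` and `_filter_lower`, so a branch that forgets to cast (like the zero-fill branch this PR fixes) can no longer leak the wrong precision.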

@wanghan-iapcm wanghan-iapcm enabled auto-merge (squash) December 31, 2021 04:09
@wanghan-iapcm wanghan-iapcm merged commit 082a199 into deepmodeling:devel Dec 31, 2021
@njzjz njzjz deleted the fix-precision branch December 31, 2021 04:37
@@ -392,15 +392,15 @@ def _pass_filter(self,
                 [ 0, start_index* self.ndescrpt],
                 [-1, natoms[2+type_i]* self.ndescrpt] )
             inputs_i = tf.reshape(inputs_i, [-1, self.ndescrpt])
-            layer = self._filter_r(tf.cast(inputs_i, self.filter_precision), type_i, name='filter_type_'+str(type_i)+suffix, natoms=natoms, reuse=reuse, trainable = trainable, activation_fn = self.filter_activation_fn)
+            layer = self._filter_r(self.filter_precision, type_i, name='filter_type_'+str(type_i)+suffix, natoms=natoms, reuse=reuse, trainable = trainable, activation_fn = self.filter_activation_fn)
Contributor: The first argument should be "inputs_i" instead of "self.filter_precision". It couldn't pass UT, but it was merged into devel anyway.

njzjz (Member, Author): In fact, this line is not covered by UT.

Contributor: Yes, it's covered by source/tests/test_model_compression_se_r.py, a UT just added in PR #1361.

liangadam added a commit to liangadam/deepmd-kit that referenced this pull request Jan 11, 2022
wanghan-iapcm pushed a commit that referenced this pull request Jan 11, 2022
* Finish model compression for se_r descriptor!

* Improve error type.

* Update compress.md

* Improve error type.

* Improve error type.

* Improve exception handling and unittest.

* Add gtest for tabulate_fusion se_r.

* Fix variable mistake from #1391

Co-authored-by: huangliang <huangla@dp.tech>
5 participants