
Elementwise pow op #1133

Merged: 20 commits merged into tracel-ai:main on Jan 24, 2024

Conversation

@skewballfox (Contributor) commented Jan 11, 2024

This is still in a draft state; I'll fill out the checklist once it's further along.

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Provide links to relevant issues and dependent PRs.

Changes

In-progress addition of the pow function. Assumes that when rhs has a different type than lhs/output, the type conversion is applied to rhs prior to the operation.
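To make that assumption concrete, here is a minimal standalone sketch (a hypothetical helper, not the PR's actual code) of converting rhs to the lhs element type before applying pow:

    // Hypothetical sketch of the conversion rule described above: when rhs
    // has a different element type than lhs/output, rhs is cast first and
    // the operation then runs in the lhs/output type.
    fn pow_with_cast(lhs: &[f32], rhs: &[i32]) -> Vec<f32> {
        lhs.iter()
            .zip(rhs.iter())
            .map(|(x, e)| x.powf(*e as f32)) // rhs converted prior to the op
            .collect()
    }

For example, pow_with_cast(&[2.0, 3.0], &[2, 3]) yields [4.0, 27.0].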

Testing

I've created the Python files, but I haven't yet generated the ONNX models. If someone beats me to it: there is a slight difference in spec between PyTorch and ONNX regarding pow. PyTorch seems to always output a float tensor, so pow_int needs to convert the output back after the op.
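In tch terms, a hedged sketch of that post-op conversion (assuming tch's Tensor::pow and Tensor::to_kind; this is not the PR's actual code):

    use tch::{Kind, Tensor};

    // Sketch only: if the torch backend promotes integer inputs to a float
    // tensor during pow (as described above), cast the result back to int.
    fn pow_int_sketch(lhs: &Tensor, rhs: &Tensor) -> Tensor {
        let out = lhs.pow(rhs); // result may come back as a float kind
        out.to_kind(Kind::Int64) // convert the output post-op
    }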

@skewballfox skewballfox changed the title Draft: Elementwise pow op [Draft]: Elementwise pow op Jan 11, 2024
@skewballfox skewballfox marked this pull request as draft January 11, 2024 15:05
@skewballfox skewballfox changed the title [Draft]: Elementwise pow op Elementwise pow op Jan 11, 2024
@nathanielsimard (Member) left a comment:


See my comment below.

burn-tensor/src/tensor/ops/int_tensor.rs (review thread, outdated/resolved):
    tensor,
    exponent,
    |lhs, rhs| lhs.f_pow_tensor_(rhs).unwrap(),
    |lhs, rhs| rhs.f_pow_tensor_(lhs).unwrap(),
Review comment (Member):

I don't think it's equivalent (pow is not commutative, so rhs.f_pow_tensor_(lhs) computes rhs ^ lhs rather than lhs ^ rhs); we should use the read-only version in this case.

Reply from @skewballfox (Contributor, Author):

I think I resolved it, but just to clarify: you mean switching to f_pow (which takes &self rather than &mut self), right?
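For clarity, a sketch of what the resolved call sites might look like, assuming tch's read-only f_pow; the surrounding helper is a burn-tch internal, so treat the exact shape as an assumption:

    tensor,
    exponent,
    // in-place path: lhs = lhs ^ rhs
    |lhs, rhs| lhs.f_pow_tensor_(rhs).unwrap(),
    // read-only f_pow (&self) instead of mutating rhs in place, since
    // rhs.f_pow_tensor_(lhs) would compute rhs ^ lhs, not lhs ^ rhs
    |lhs, rhs| lhs.f_pow(rhs).unwrap(),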

@@ -1589,6 +1589,50 @@ where
/// For calculating abs of the elements of a tensor, users should prefer the [Tensor::abs](Tensor::abs) function,
/// which is more high-level and designed for public use.
fn abs<const D: usize>(tensor: Self::Primitive<D>) -> Self::Primitive<D>;

Review comment (Member):
In this file, you should actually add the user-facing API operation implementations, where the static dispatch is done. This isn't inside a trait; rather, the implementation uses the Kind to dispatch to the right implementation.

Example with powf_scalar:

    /// Applies element-wise powf operation with a scalar.
    ///
    /// `y = x ^ s`
    pub fn powf_scalar<E: ElementConversion>(self, other: E) -> Self {
        Self::new(K::powf_scalar(self.primitive, other))
    }
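As a hedged usage sketch (assuming a float tensor named tensor is in scope once this API lands):

    // Hypothetical call to the user-facing API described above:
    let squared = tensor.powf_scalar(2.0); // element-wise x ^ 2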

@skewballfox skewballfox marked this pull request as ready for review January 23, 2024 18:03
@nathanielsimard (Member) left a comment:

Thanks a lot for this PR. Just some minor comments, but should be good to go pretty soon. Tagging @louisfd to review as well.

Resolved review threads (outdated):
  • burn-autodiff/src/utils.rs
  • burn-import/src/onnx/to_burn.rs
  • burn-tensor/src/tensor/api/numeric.rs
  • burn-autodiff/src/tests/pow.rs
@louisfd (Member) left a comment:

I double-checked the backward pass and it's well done!
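For reference (standard calculus, not quoted from the PR), the gradients an elementwise pow backward pass must produce are:

    \frac{\partial}{\partial x}\, x^{y} = y\, x^{y-1},
    \qquad
    \frac{\partial}{\partial y}\, x^{y} = x^{y} \ln x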


codecov bot commented Jan 23, 2024

Codecov Report

Attention: 76 lines in your changes are missing coverage. Please review.

Comparison: base (c9cc132) at 86.01% vs. head (e6ffa25) at 85.96%.
Report is 3 commits behind head on main.

Files                                      Patch %   Missing lines
burn-tensor/src/tensor/api/numeric.rs       60.00%   24
burn-tensor/src/tensor/ops/int_tensor.rs     0.00%   15
burn-tch/src/ops/base.rs                     0.00%   12
burn-ndarray/src/ops/int_tensor.rs          66.66%    6
burn-tch/src/ops/tensor.rs                  14.28%    6
burn-tensor/src/tensor/ops/tensor.rs         0.00%    6
burn-import/src/onnx/to_burn.rs             81.25%    3
burn-import/src/burn/node/binary.rs         93.75%    2
burn-autodiff/src/ops/tensor.rs             98.80%    1
burn-core/src/optim/rmsprop.rs              66.66%    1
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1133      +/-   ##
==========================================
- Coverage   86.01%   85.96%   -0.06%     
==========================================
  Files         522      522              
  Lines       58674    59179     +505     
==========================================
+ Hits        50469    50873     +404     
- Misses       8205     8306     +101     


@nathanielsimard nathanielsimard merged commit 3b7d9fe into tracel-ai:main Jan 24, 2024
14 checks passed