Behavior mismatch between PyTorch GroupNorm and ggml_group_norm #803
Try this:

```cpp
struct ggml_tensor * a = ggml_reshape_3d(ctx0, model.a, model.a->ne[0], 1, model.a->ne[1]);
struct ggml_tensor * result = ggml_group_norm(ctx0, a, 32);
```
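For anyone landing here later, a short sketch of why the reshape fixes the mismatch, assuming ggml's convention that a tensor is laid out as ne[0] (innermost) through ne[3], and that ggml_group_norm forms its groups along ne[2], the channel axis of a (W, H, C, N) tensor, while torch.nn.GroupNorm groups along dim 1 of an (N, C, ...) input:

```cpp
// Annotated version of the snippet above (a sketch, not the exact tester
// code). For a 2D ggml tensor the channel axis sits in ne[1], but
// ggml_group_norm takes its groups from ne[2], so the statistics are
// computed over the wrong axis. Inserting a singleton middle dimension
// moves the channel axis into ne[2]:
//
//   before: ne = [ne0, C]      // channels in ne[1]
//   after:  ne = [ne0, 1, C]   // channels in ne[2], split into 32 groups
struct ggml_tensor * a = ggml_reshape_3d(ctx0, model.a,
        model.a->ne[0],   // innermost axis, unchanged
        1,                // singleton "height"
        model.a->ne[1]);  // former ne[1] becomes the ne[2] channel axis
struct ggml_tensor * result = ggml_group_norm(ctx0, a, 32); // 32 groups, as in PyTorch
```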
OMG, it seems to work perfectly! I am so grateful, and your response time was amazing also!
Would you be interested in a PR adding a comment explaining the normalization behavior in ggml.c?
Sure, I think it would be good to document these things, but ultimately that's up to @ggerganov.
Yes, a comment in ggml.c would be good.
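To make that concrete, here is one possible shape for such a comment. The wording, and whether it belongs in ggml.h or ggml.c, are only a sketch for the PR, not agreed text; the declaration below is assumed to match the ggml revision referenced in this issue:

```cpp
// Sketch of a possible doc comment; wording and placement are not settled.

// Group-normalizes a tensor interpreted as (W, H, C, N), with the channel
// axis along ne[2]. ne[2] is split into n_groups groups, and each group is
// normalized over its ne[0]*ne[1]*(ne[2]/n_groups) elements. To match
// torch.nn.GroupNorm on an (N, C, ...) input, reshape so the channel axis
// lands in ne[2], e.g. for a 2D tensor:
//   a = ggml_reshape_3d(ctx, a, a->ne[0], 1, a->ne[1]);
GGML_API struct ggml_tensor * ggml_group_norm(
        struct ggml_context * ctx,
        struct ggml_tensor  * a,
        int                   n_groups);
```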
Here are my two tester files, which perform what should be the same operation on the same tensor with PyTorch and ggml. The same tensor is used in both cases, saved as a literal in each tester file. It has dimension 43 × 1024. This is using the CUDA backend.
PyTorch example:
https://github.com/balisujohn/ggml_pytorch_groupnorm_mismatch/blob/master/tester2.py
ggml example:
https://github.com/balisujohn/ggml_pytorch_groupnorm_mismatch/blob/master/examples/simple/simple-backend.cpp
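For readers who don't want to open the links, a condensed, hypothetical version of the ggml side of the comparison (identifiers such as model.a and ctx0 follow the simple-backend example; the exact graph setup in the linked file may differ):

```cpp
// Condensed sketch of the mismatching ggml graph (see simple-backend.cpp
// above for the real code). model.a holds the 43 x 1024 tensor saved as a
// literal; ctx0 is the graph-building context.
struct ggml_cgraph * gf = ggml_new_graph(ctx0);

// Calling ggml_group_norm directly on the 2D tensor: the groups are then
// taken along ne[2] (which is 1 for a 2D tensor), so the normalization
// statistics do not line up with PyTorch's GroupNorm on the same data.
struct ggml_tensor * out = ggml_group_norm(ctx0, model.a, 32);

ggml_build_forward_expand(gf, out);
// ... the tester then runs the graph on the CUDA backend and prints the
// first and last entries of `out` for comparison with the PyTorch script.
```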
PyTorch output:
ggml output (DATA shows the first 3 and last 3 entries of the tensor in row-major order):
torch version: 2.1.0
ggml version: e1daebb (most recent as of time of posting)
CUDA version: 12.0