feat: add black/flake8/isort refactoring + pip installable #69
Conversation
FWIW, pytest has been rerun on #72 this morning, which includes all small modifications since the last pytest run in this PR.
Optimized kernels for `transformer` models.

## Install dependencies

**IMPORTANT**: This package requires `pytorch` to be installed.
I think we should also mention that we need Python 3.9+.
3.9 is 2 years old, but OK to add it if you think it makes sense.
done
setup.py
Outdated
try:
    import torch

    assert torch.__version__ >= "1.11.0"
In requirements we ask for: torch>=1.12.0
done
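Beyond the mismatch flagged above (asserting on `"1.11.0"` while requirements pin `torch>=1.12.0`), note that comparing version strings lexicographically is fragile: `"1.9.0" >= "1.12.0"` is True as strings. A minimal sketch of a tuple-based check instead; the helper names are hypothetical, not from this PR:

```python
def parse_major_minor(version_string):
    """Extract (major, minor) from a version like '1.12.1+cu117'.

    The '+cu117' local-build suffix is dropped before splitting on dots.
    """
    return tuple(int(part) for part in version_string.split("+")[0].split(".")[:2])


def meets_minimum(version_string, minimum=(1, 12)):
    """True if the installed version is at least the required floor."""
    return parse_major_minor(version_string) >= minimum
```

Comparing integer tuples avoids the string-ordering pitfall entirely, and stripping the `+cuXXX` suffix keeps the parse working for CUDA builds of torch.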
functorch
torchdynamo
transformers
onnxruntime
onnxruntime-gpu (not sure)?
Here we list import namespaces, and even for ORT GPU the namespace is still `onnxruntime`.
test/models/ort_utils.py
Outdated
@@ -1,20 +1,36 @@
"""
Should it be after the license?
done
import ctypes as C
from ctypes.util import find_library
from typing import Dict, Optional, Tuple

import cupy as cp
not present in the requirements
Already present: cupy-cuda117
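The two points above (`onnxruntime-gpu` vs. `onnxruntime`, `cupy-cuda117` vs. `cupy`) are both cases where the PyPI distribution name differs from the import namespace. A small mapping for illustration:

```python
# PyPI distribution name -> import namespace. The GPU/CUDA-specific builds
# install the same module as the generic package, so imports never change.
PIP_TO_IMPORT = {
    "onnxruntime": "onnxruntime",
    "onnxruntime-gpu": "onnxruntime",  # GPU build, identical namespace
    "cupy-cuda117": "cupy",            # CuPy wheel built for CUDA 11.7
}
```

This is why switching a requirements file between the CPU and GPU builds of ONNX Runtime, or between CuPy wheels for different CUDA versions, needs no source changes.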
# Conflicts:
#   optimizer/dynamo_backend.py
#   src/nucle/optimizer/normalizer.py
src/nucle
+ add setup.py

Only unrelated change: removed the layernorm 8K length test, as it is not needed.
No other modifications have been performed, to ease the review as much as possible.
fix #42
fix #65
fix #70