[WIP] Support export of Llama with DynamicCache and transformers>=4.48 #24291
base: main
Conversation
@@ -7,6 +7,7 @@

import numpy as np
import torch
import transformers
Check notice: Code scanning / CodeQL
Module is imported with 'import' and 'import from' (Note)
Module 'onnxruntime.test.python.transformers' is imported with both 'import' and 'import from'.
Copilot Autofix (AI, 7 days ago)
To fix the problem, we should remove the `from transformers import AutoConfig, AutoTokenizer` statement and access these components directly from the `transformers` module. This will make the code more consistent and easier to understand.

- Remove the `from transformers import AutoConfig, AutoTokenizer` statement.
- Update the references to `AutoConfig` and `AutoTokenizer` to use `transformers.AutoConfig` and `transformers.AutoTokenizer`, respectively.
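For illustration, a minimal, self-contained sketch (not code from this PR) of the import style CodeQL prefers, keeping a single module-level import and qualifying the names:

```python
import transformers


def load_config_and_tokenizer(model_name: str):
    # Access the classes through the module so `transformers` is imported
    # exactly once and always referenced the same way.
    config = transformers.AutoConfig.from_pretrained(model_name)
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
    return config, tokenizer
```

The suggested change below applies the same idea to `get_sample_inputs` and `get_sample_with_past_kv_inputs`.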
@@ -10,3 +10,2 @@
 import transformers
-from transformers import AutoConfig, AutoTokenizer
 
@@ -32,3 +31,3 @@
 def get_sample_inputs(
-    config: AutoConfig,
+    config: transformers.AutoConfig,
     device: torch.device,
@@ -67,3 +66,3 @@
 def get_sample_with_past_kv_inputs(
-    config: AutoConfig,
+    config: transformers.AutoConfig,
     device: torch.device,
import torch
import transformers
Check notice: Code scanning / CodeQL
Module is imported with 'import' and 'import from' (Note)
Module 'onnxruntime.test.python.transformers' is imported with both 'import' and 'import from'.
Copilot Autofix (AI, 7 days ago)
To fix the problem, we should remove the `from transformers import AutoConfig` statement and access `AutoConfig` through the `transformers` module instead. This will ensure that the module is only imported once and will make the code more consistent and easier to understand.

- Remove the `from transformers import AutoConfig` statement.
- Replace all instances of `AutoConfig` with `transformers.AutoConfig`.
@@ -28,3 +28,2 @@
 from models.torch_export_patches.cache_helper import make_dynamic_cache
-from transformers import AutoConfig
 
@@ -35,3 +34,3 @@
 
-def get_sequence_lengths(args: argparse.Namespace, config: AutoConfig):
+def get_sequence_lengths(args: argparse.Namespace, config: transformers.AutoConfig):
     past_sequence_length, curr_sequence_length = (8, 1) if args.use_past_kv else (0, 8)
@@ -41,3 +40,3 @@
 
-def get_inputs(args: argparse.Namespace, config: AutoConfig):
+def get_inputs(args: argparse.Namespace, config: transformers.AutoConfig):
     # Dummy values for parity
@@ -104,3 +103,3 @@
     pytorch_model: None | torch.nn.Module = None,
-    config: None | AutoConfig = None,
+    config: None | transformers.AutoConfig = None,
 ):
onnxruntime/python/tools/transformers/models/torch_export_patches/onnx_export_errors.py (Fixed)
def _catch_produce_guards_and_solve_constraints(
    previous_function: Callable,
    fake_mode: "FakeTensorMode",
    gm: "torch.fx.GraphModule",
    dynamic_shapes: dict[str, Any] | tuple[Any] | list[Any] | None,
    equalities_inputs: "EqualityConstraint",  # noqa: F821
    original_signature: inspect.Signature,
    _is_torch_jit_trace: bool = False,
    verbose: int = 0,
):
Check notice: Code scanning / CodeQL
Explicit returns mixed with implicit (fall through) returns (Note)
Copilot Autofix (AI, 7 days ago)
To fix the problem, we need to add an explicit return statement at the end of the `_catch_produce_guards_and_solve_constraints` function. This will ensure that the function's return value is always clear, even when an exception is caught and the `if` conditions are not met. The explicit return statement should return `None`, as this is the implicit return value when no return statement is present.
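As a generic illustration of the pattern this rule flags (not code from this PR), here is a function that mixes an explicit return with an implicit fall-through, and its fixed form:

```python
from collections.abc import Callable


def try_call(fn: Callable[[], int]) -> int | None:
    # Flagged: the try branch returns explicitly, the except branch falls
    # through and returns None implicitly.
    try:
        return fn()
    except RuntimeError as exc:
        print(f"call failed: {exc}")


def try_call_fixed(fn: Callable[[], int]) -> int | None:
    try:
        return fn()
    except RuntimeError as exc:
        print(f"call failed: {exc}")
    # Fixed: the fall-through is now an explicit return.
    return None
```

The suggested change below applies the same fix by adding `return None` at the end of `_catch_produce_guards_and_solve_constraints`.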
@@ -43,3 +43,3 @@
     )
-
+    return None
def patch__check_input_constraints_for_graph(
    previous_function: Callable,
    input_placeholders: list[torch.fx.Node],
    flat_args_with_path,
    range_constraints,
    verbose: int = 0,
) -> None:
Check notice: Code scanning / CodeQL
Explicit returns mixed with implicit (fall through) returns (Note)
Copilot Autofix (AI, 7 days ago)
To fix the problem, we need to add an explicit return statement at the end of the function `patch__check_input_constraints_for_graph`. This ensures that the function always returns `None` explicitly when no exception is raised, making the code easier to read and understand.

- Add an explicit `return None` statement at the end of the function `patch__check_input_constraints_for_graph`.
- This change should be made in the file `onnxruntime/python/tools/transformers/models/torch_export_patches/patches/patch_torch.py`.
@@ -66,3 +66,3 @@
     )
-
+    return None
# if config.print_specializations:
#     self.log.warning(
#         "Specializing %s to %s", self.var_to_sources[a][0].name(), tgt
Check notice: Code scanning / CodeQL
Commented-out code (Note)
Copilot Autofix (AI, 7 days ago)
To fix the problem, we should remove the commented-out code entirely. This will clean up the code and eliminate any potential confusion for future developers. The specific lines to be removed are 304-308.
@@ -303,7 +303,7 @@
 
-# if config.print_specializations:
-#     self.log.warning(
-#         "Specializing %s to %s", self.var_to_sources[a][0].name(), tgt
-#     )
-#     self.log.debug("SPECIALIZATION", stack_info=True)
+
+
+
+
+
 assert msg != "range_refined_to_singleton", (
# if input_ids.shape[1] == 0:
#     inputs_embeds = inputs_embeds[:, -cache_position.shape[0] :]
# else:
#     if cache_position[-1] >= input_ids.shape[1]:
#         input_ids = input_ids[:, -cache_position.shape[0] :]
#     else:
#         if input_ids.shape[1] != cache_position.shape[0]:
#             input_ids = input_ids[:, cache_position]
Check notice: Code scanning / CodeQL
Commented-out code (Note)
Copilot Autofix (AI, 7 days ago)
The best way to fix the problem is to remove the commented-out code entirely. This will improve the readability of the code and eliminate any confusion for future developers. If the commented-out code is needed for reference, it should be documented separately or included in a way that clearly indicates its purpose.
- Remove the commented-out code block from lines 281 to 288.
- Ensure that the remaining code is properly formatted and functional.
@@ -279,11 +279,3 @@
 else:
     # This is the code we need to implemented with torch.cond.
-    # if input_ids.shape[1] == 0:
-    #     inputs_embeds = inputs_embeds[:, -cache_position.shape[0] :]
-    # else:
-    #     if cache_position[-1] >= input_ids.shape[1]:
-    #         input_ids = input_ids[:, -cache_position.shape[0] :]
-    #     else:
-    #         if input_ids.shape[1] != cache_position.shape[0]:
-    #             input_ids = input_ids[:, cache_position]
 
     def branch_1(inputs_embeds, cache_position):
You can commit the suggested changes from lintrunner.
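For context on the `torch.cond` rewrite referenced in the suggested change above, here is a minimal, generic sketch (not the PR's actual patch) of how a data-dependent branch is expressed so the exporter can trace both paths; it assumes a recent PyTorch where `torch.cond` is available:

```python
import torch


def double(x: torch.Tensor) -> torch.Tensor:
    # Branch taken when the predicate is true.
    return x * 2


def negate(x: torch.Tensor) -> torch.Tensor:
    # Branch taken otherwise; both branches must return matching tensor metadata.
    return -x


class BranchyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A plain Python `if` on a data-dependent value would break tracing;
        # torch.cond records both branches in the exported graph instead.
        pred = x.sum() > 0
        return torch.cond(pred, double, negate, (x,))


exported = torch.export.export(BranchyModel(), (torch.randn(4),))
print(exported)
```

In the patched modeling code, the removed `if`/`else` chain appears to be split into branch functions (such as `branch_1` in the diff above) that `torch.cond` selects between.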
onnxruntime/python/tools/transformers/models/llama/convert_to_onnx.py (Outdated)
onnxruntime/python/tools/transformers/models/torch_export_patches/patch_inputs.py (Outdated)
onnxruntime/python/tools/transformers/models/torch_export_patches/patch_inputs.py (Fixed)
You can commit the suggested changes from lintrunner.
Description
Replaces #24231.
LLMs cannot be exported with transformers>=4.48 through the TorchScript exporter; they require the new exporter (`torch.onnx.export(..., dynamo=True)`). This PR implements patches to work around the pieces of code in transformers that do not export, and converts `dynamic_axes` into dynamic shapes.
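To illustrate the `dynamic_axes`-to-dynamic-shapes conversion mentioned above, here is a minimal sketch with a toy module (not the PR's Llama export script); it assumes a recent PyTorch where `torch.onnx.export(..., dynamo=True)` accepts `dynamic_shapes`:

```python
import torch


class TinyModel(torch.nn.Module):
    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return input_ids.float().mean(dim=-1)


model = TinyModel()
example_inputs = (torch.randint(0, 100, (2, 8)),)

# Legacy TorchScript exporter: dynamic dimensions are named via dynamic_axes.
torch.onnx.export(
    model,
    example_inputs,
    "tiny_legacy.onnx",
    input_names=["input_ids"],
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"}},
)

# Dynamo-based exporter: the same intent expressed as torch.export dynamic shapes.
batch = torch.export.Dim("batch")
sequence = torch.export.Dim("sequence")
onnx_program = torch.onnx.export(
    model,
    example_inputs,
    dynamo=True,
    dynamic_shapes={"input_ids": {0: batch, 1: sequence}},
)
onnx_program.save("tiny_dynamo.onnx")
```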
The modifications were tested with a tiny LLM: