🐛 Describe the bug
What changes were introduced between versions 1.0 and 1.1/1.2 that could explain why some models exported with XNNPACK delegates in ExecuTorch 1.1/1.2 consume nearly 3× more runtime memory compared to the same models exported with version 1.0?
I encountered this while exporting Kokoro modules using these scripts: first and second.
When profiling memory usage on Android (arm64-v8a), the combined runtime usage for both models is ~800 MB when exported with ExecuTorch 1.0, but jumps to ~2.2 GB when exported with versions 1.1 or 1.2, with no changes to the models themselves.
For reference, the export configurations used are:
python -m export.export_duration_predictor --bundled=true --input-size=inp-16 --max-tokens=32,64,128 --pad-lstm=true --dynamic=true --dtype=fp32
python -m export.export_synthesizer --input-size=inp-16 --dynamic=true --max-tokens=128 --max-duration=296 --dtype=fp32
Both models are exported with dynamic shapes enabled and include the LSTM padding fix (which pads LSTMs to a static sequence length).
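For context, the "LSTM padding fix" referenced above amounts to padding the token sequence to a static length before it reaches the LSTM, so the exported graph sees a fixed sequence dimension. A minimal illustrative sketch (function and constant names are hypothetical, not the actual export script code):

```python
import torch

# Hypothetical sketch of the LSTM padding fix: right-pad the (batch, seq)
# token tensor with zeros up to a static max length (e.g. --max-tokens=128)
# so the LSTM runs on a fixed sequence dimension after export.
MAX_TOKENS = 128

def pad_to_static_length(tokens: torch.Tensor, max_len: int = MAX_TOKENS) -> torch.Tensor:
    """Right-pad (or truncate) a (batch, seq) tensor to a static seq length."""
    batch, seq = tokens.shape
    if seq >= max_len:
        return tokens[:, :max_len]
    pad = torch.zeros(batch, max_len - seq, dtype=tokens.dtype)
    return torch.cat([tokens, pad], dim=1)

x = torch.ones(1, 40, dtype=torch.long)
print(pad_to_static_length(x).shape)  # torch.Size([1, 128])
```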
Versions
Collecting environment information...
PyTorch version: 2.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 26.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.3.19.1)
CMake version: version 3.31.10
Libc version: N/A
Python version: 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] (64-bit runtime)
Python platform: macOS-26.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] executorch==1.2.0a0+0b0e2c5
[pip3] numpy==2.0.0
[pip3] pytorch_tokenizers==1.2.0
[pip3] torch==2.11.0
[pip3] torchao==0.17.0+git02105d46c
[pip3] torchaudio==2.11.0
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.0.0
[pip3] torchvision==0.26.0
[conda] Could not collect
cc @GregoryComer @digantdesai @cbilgin