Arm backend: Install TOSA and VGF tooling from pip #18840
Install tosa-tools and the ML SDK model-converter/VGF packages from pip instead of cloning tosa-tools from source.

Update the VGF runtime to the 0.9 decoder API and export VK_LAYER_PATH for the pip-installed emulation layer so the Vulkan ML layers are discovered at runtime.

Signed-off-by: per.held@arm.com
Change-Id: I2d3c2acd21a1bbe587d9af0b44479407afdab82d
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18840
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: There is 1 currently active SEV. If your PR is affected, please view it below.
❌ 6 New Failures, 4 Unrelated Failures as of commit 5e21dc4 with merge base 2eaa16c:
NEW FAILURES - The following jobs have failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were present on the merge base: 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
This PR updates the Arm backend setup to install TOSA tooling and Arm ML SDK VGF/emulation-layer tooling from PyPI instead of cloning/building from source, and updates the VGF runtime integration to match the 0.9 decoder API.
Changes:
- Replace the `tosa-tools` source checkout/build with the `tosa-tools` PyPI installation.
- Update VGF runtime decoding to use the 0.9 decoder API (size-aware decoders).
- Export/propagate `VK_LAYER_PATH` for the pip-installed Vulkan ML emulation layer and bump ML SDK package pins to 0.9.0.
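The Vulkan loader discovers explicit layers through `VK_LAYER_PATH`, which must point at the directory containing the layer's JSON manifest. A minimal sketch of the propagation step, assuming the pip-installed package unpacks its manifest under site-packages (the `emulation_layer/share/vulkan/explicit_layer.d` path is hypothetical, not the real package layout):

```shell
#!/bin/sh
# Locate the active environment's site-packages directory.
site_pkgs="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"

# Hypothetical manifest location inside the pip-installed emulation layer;
# the real package layout may differ.
layer_dir="${site_pkgs}/emulation_layer/share/vulkan/explicit_layer.d"

# Prepend to any existing VK_LAYER_PATH so the Vulkan loader finds the layer.
export VK_LAYER_PATH="${layer_dir}${VK_LAYER_PATH:+:${VK_LAYER_PATH}}"
echo "VK_LAYER_PATH=${VK_LAYER_PATH}"
```

Prepending rather than overwriting keeps any layers the user has already configured discoverable.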
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| examples/arm/setup.sh | Removes tosa-tools git clone/build and installs TOSA tooling via a requirements file. |
| backends/arm/scripts/mlsdk_utils.sh | Adds propagation of VK_LAYER_PATH from pip-installed emulation layer tooling. |
| backends/arm/runtime/VGFSetup.h | Updates process_vgf signature to accept the VGF blob size. |
| backends/arm/runtime/VGFSetup.cpp | Updates VGF decoder construction to the 0.9 size-aware API. |
| backends/arm/runtime/VGFBackend.cpp | Passes VGF blob size into process_vgf. |
| backends/arm/requirements-arm-vgf.txt | Pins ML SDK VGF/emulation-layer packages to 0.9.0. |
| backends/arm/requirements-arm-tosa.txt | Adds tosa-tools PyPI package pin and removes now-unneeded items. |
Comments suppressed due to low confidence (1)
backends/arm/runtime/VGFSetup.cpp:364
- Typo in error message: "internalsr" looks unintended and makes logs harder to search/understand. Consider changing to "internals" or similar.
if (not(header_decoder && module_decoder && sequence_decoder &&
resource_decoder && constant_decoder && header_decoder->IsValid() &&
header_decoder->CheckVersion())) {
ET_LOG(Error, "Failed to process VGF file internalsr");
return false;
```
@@ -335,36 +335,6 @@ if [[ $is_script_sourced -eq 0 ]]; then
     CMAKE_POLICY_VERSION_MINIMUM=3.5 \
     pip install --no-dependencies -r "$et_dir/backends/arm/requirements-arm-tosa.txt"
```
`pip install --no-dependencies` will skip transitive deps for the newly added tosa-tools PyPI package. Either remove `--no-dependencies` for this install or explicitly list all tosa-tools runtime/build dependencies in requirements-arm-tosa.txt to avoid incomplete or fragile installs.
Suggested change:
```
- pip install --no-dependencies -r "$et_dir/backends/arm/requirements-arm-tosa.txt"
+ pip install -r "$et_dir/backends/arm/requirements-arm-tosa.txt"
```
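One way to catch an incomplete environment produced by a `--no-dependencies` install is pip's own consistency checker; `pip check` exits non-zero when any installed package's declared requirements are unmet. A general sketch (not part of this PR):

```shell
#!/bin/sh
# After a --no-dependencies install, `pip check` reports any package whose
# declared requirements are not satisfied in the current environment.
if pip check >/dev/null 2>&1; then
  echo "dependency set consistent"
else
  echo "missing or conflicting dependencies detected" >&2
fi
```

Running this at the end of setup.sh would fail fast instead of surfacing an ImportError at first use.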
```
  // Prepare temporary decoders
  unique_ptr<vgflib::HeaderDecoder> header_decoder =
-     vgflib::CreateHeaderDecoder(vgf_data);
+     vgflib::CreateHeaderDecoder(vgf_data, vgflib::HeaderSize(), vgf_size);
  if (!header_decoder) {
    ET_LOG(Error, "Failed to create VGF header decoder");
    return false;
  }

  unique_ptr<vgflib::ModelSequenceTableDecoder> sequence_decoder =
      vgflib::CreateModelSequenceTableDecoder(
-         vgf_data + header_decoder->GetModelSequenceTableOffset());
+         vgf_data + header_decoder->GetModelSequenceTableOffset(),
+         header_decoder->GetModelSequenceTableSize());
  unique_ptr<vgflib::ModuleTableDecoder> module_decoder =
      vgflib::CreateModuleTableDecoder(
-         vgf_data + header_decoder->GetModuleTableOffset());
+         vgf_data + header_decoder->GetModuleTableOffset(),
+         header_decoder->GetModuleTableSize());
```
header_decoder offsets/sizes are used to compute pointers and create the other decoders before verifying header_decoder->IsValid() / CheckVersion(). If the blob header is malformed, this can lead to out-of-bounds pointer arithmetic/reads. Validate the header (and ideally bounds-check offset+size against vgf_size) before constructing the table/constant decoders.
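The validate-before-derive pattern the comment asks for can be illustrated with a self-contained sketch. The `BlobHeader` struct, constants, and `checked_table` helper below are hypothetical, not the vgflib API: the point is only that the magic/version check and an overflow-safe `offset + size` bounds check happen before any derived pointer is formed.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical blob header; vgflib's real layout differs.
struct BlobHeader {
  uint32_t magic;         // expected file-magic constant
  uint32_t version;       // expected format version
  uint64_t table_offset;  // offset of a sub-table within the blob
  uint64_t table_size;    // size of that sub-table
};

constexpr uint32_t kMagic = 0x56474621;  // hypothetical
constexpr uint32_t kVersion = 9;         // hypothetical

// Returns a pointer to the sub-table, or nullptr if the header is
// malformed or the table region falls outside [blob, blob + blob_size).
const uint8_t* checked_table(const uint8_t* blob, size_t blob_size) {
  if (blob_size < sizeof(BlobHeader)) return nullptr;
  BlobHeader h;
  std::memcpy(&h, blob, sizeof h);  // memcpy avoids alignment issues
  if (h.magic != kMagic || h.version != kVersion) return nullptr;
  // Overflow-safe bounds check: offset + size must fit in the blob.
  if (h.table_offset > blob_size || h.table_size > blob_size - h.table_offset)
    return nullptr;
  return blob + h.table_offset;  // safe to derive only after validation
}
```

Writing the bound as `h.table_size > blob_size - h.table_offset` (after checking `h.table_offset <= blob_size`) avoids the integer overflow that a naive `h.table_offset + h.table_size > blob_size` comparison can hit with hostile inputs.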
To add the ciflow label: this helps ensure we don't trigger CI on this PR until it is actually authorized to do so. Please ping one of the reviewers if you do not have access to approve and run workflows.
Error seems unrelated.
cc @digantdesai @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell