[AMD] Add sd15 support for VitisAI. #2359
Conversation
@microsoft-github-policy-service agree company="AMD"
@liujij can you please add unit tests for this pass and fix the format issue? We are planning to release a new Olive version this Friday, and this PR will be included in the new release.
Pull request overview
Adds a new Olive ONNX pass to generate AMD Vitis AI NPU-ready Stable Diffusion (SD 1.5) submodels from an ONNX input, and registers the pass in the package pass registry so it can be invoked via standard Olive workflows.
Changes:
- Introduce `VitisGenerateModelSD` pass to run Vitis `model_generate` in `sd` mode and produce an ONNX artifact for downstream passes.
- Add config parameters for `model_type` and optional `resolutions`.
- Register `VitisGenerateModelSD` in `olive_config.json` so Olive can import and run it.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| olive/passes/onnx/vitis_ai/vitis_generate_model_sd.py | New Vitis AI pass that wraps `model_generate` for SD submodel generation and normalizes output to `model.onnx`. |
| olive/olive_config.json | Registers the new pass module in Olive's built-in pass catalog. |
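For reviewers less familiar with Olive's pass catalog, a registration entry for this pass would look roughly like the sketch below. The key name and module path are inferred from the file names in this PR; the exact schema of `olive_config.json` may differ from this reconstruction.

```json
{
  "passes": {
    "VitisGenerateModelSD": {
      "module_path": "olive.passes.onnx.vitis_ai.vitis_generate_model_sd.VitisGenerateModelSD"
    }
  }
}
```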
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
@devang-ml can you please review this?
Hi @devang-ml / @xieofxie / @xiaoyu-work, this PR adds optimized Stable Diffusion model generation support for the Vitis AI Execution Provider. I've resolved all format issues and added unit tests. Could you please help review this PR? If everything looks good, could you kindly help merge it? Thanks!
@@ -0,0 +1,252 @@
# -------------------------------------------------------------------------
saved_mods = {k: sys.modules.pop(k) for k in list(sys.modules) if k == "model_generate" or k.startswith("model_generate.")}
real_import = builtins.__import__

def guarded_import(name, globals=None, locals=None, fromlist=(), level=0):
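The `guarded_import` snippet above appears to isolate the unit tests from the real `model_generate` package. A self-contained sketch of that pattern follows; the function name `isolate_model_generate` and the stub/restore mechanics are my reconstruction for illustration, not the PR's exact code.

```python
import builtins
import sys


def isolate_model_generate(stub_module):
    """Temporarily replace the "model_generate" package with a stub.

    Removes any cached model_generate modules from sys.modules and
    intercepts __import__ so importing model_generate yields the stub.
    Returns a restore() callable that undoes everything.
    """
    # Evict any already-imported model_generate modules, keeping them aside.
    saved_mods = {
        k: sys.modules.pop(k)
        for k in list(sys.modules)
        if k == "model_generate" or k.startswith("model_generate.")
    }
    real_import = builtins.__import__

    def guarded_import(name, globals=None, locals=None, fromlist=(), level=0):
        # Redirect model_generate imports to the stub; everything else
        # falls through to the real import machinery.
        if name == "model_generate" or name.startswith("model_generate."):
            sys.modules.setdefault("model_generate", stub_module)
            return stub_module
        return real_import(name, globals, locals, fromlist, level)

    builtins.__import__ = guarded_import

    def restore():
        builtins.__import__ = real_import
        for k in [
            m for m in sys.modules
            if m == "model_generate" or m.startswith("model_generate.")
        ]:
            del sys.modules[k]
        sys.modules.update(saved_mods)

    return restore
```

This keeps the pass module importable in CI environments where the real Vitis AI dependency is not installed, as long as `restore()` is called (e.g. in a `finally` block or pytest fixture teardown).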
(tmp_path / "b.onnx").write_bytes(b"y")
p = _make_pass()
h = SimpleNamespace(model_path=str(tmp_path))
with pytest.raises(ValueError, match="Multiple .onnx model files found"):
    p._resolve_onnx_input_path(h)
def test_resolve_onnx_input_path_dir_no_onnx_raises(tmp_path):
    p = _make_pass()
    h = SimpleNamespace(model_path=str(tmp_path))
    with pytest.raises(FileNotFoundError, match="No .onnx file found"):
        p._resolve_onnx_input_path(h)
missing = tmp_path / "nope"
h = SimpleNamespace(model_path=str(missing))
with pytest.raises(FileNotFoundError, match="Model path does not exist"):
    p._resolve_onnx_input_path(h)
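For context on what these tests exercise, here is a minimal sketch of the resolution logic `_resolve_onnx_input_path` presumably implements, reconstructed only from the error messages the tests assert. The real implementation in vitis_generate_model_sd.py may differ in details.

```python
from pathlib import Path


def resolve_onnx_input_path(model_path: str) -> Path:
    """Resolve a model path (file or directory) to a single .onnx file.

    Hypothetical reconstruction: error messages mirror those asserted in
    the PR's unit tests, but the actual method is a private helper on the
    pass class.
    """
    path = Path(model_path)
    if not path.exists():
        raise FileNotFoundError(f"Model path does not exist: {path}")
    if path.is_file():
        return path
    # Directory input: require exactly one .onnx file inside.
    onnx_files = sorted(path.glob("*.onnx"))
    if not onnx_files:
        raise FileNotFoundError(f"No .onnx file found in directory: {path}")
    if len(onnx_files) > 1:
        raise ValueError(f"Multiple .onnx model files found in {path}: {onnx_files}")
    return onnx_files[0]
```

Note that `pytest.raises(..., match=...)` treats the match string as a regex, so the literal `.` in ".onnx" matches any character; the tests above still pass against these messages, but escaping (`\.onnx`) would be stricter.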
Describe your changes
This PR adds optimized Stable Diffusion model generation support for the Vitis AI Execution Provider.
Checklist before requesting a review
- `lintrunner -a`

(Optional) Issue link