Add MAUI example for mobile targets #128
Conversation
mobile/examples/Maui/MauiVisionSample/MauiVisionSample/Models/Mobilenet/MobilenetSample.cs
    {
        await AwaitLastTaskAsync().ConfigureAwait(false);

        return await OnProcessImageAsync(image);
why is _prevAsyncTask not set here?
We use `_prevAsyncTask` when we're (re)creating the InferenceSession. As that can be slow, we don't actually await that operation here (we just save its Task in `_prevAsyncTask` and return) so it can continue in the background.
We do await `OnProcessImageAsync` here, so I believe it's treated differently because of that.
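For illustration, the pattern being discussed (save the slow session-creation Task instead of awaiting it, then await it before the next use) can be sketched in Python asyncio. The class and method names below are hypothetical analogues of the C# sample, not its actual code.

```python
import asyncio

class VisionSampleSketch:
    """Hypothetical Python analogue of the C# sample's task handling."""

    def __init__(self):
        self._prev_task = None  # analogue of _prevAsyncTask

    def initialize(self):
        # Session creation can be slow, so it is NOT awaited here:
        # save the task and let it continue in the background.
        self._prev_task = asyncio.ensure_future(self._create_session())

    async def _create_session(self):
        await asyncio.sleep(0.01)  # stand-in for slow InferenceSession creation
        self._session = "session"

    async def _await_last_task(self):
        # Analogue of AwaitLastTaskAsync: ensure any background
        # work has finished before relying on the session.
        if self._prev_task is not None:
            task, self._prev_task = self._prev_task, None
            await task

    async def process_image(self, image):
        # Processing is awaited directly, so no task is saved here.
        await self._await_last_task()
        return f"processed {image} with {self._session}"

async def main():
    sample = VisionSampleSketch()
    sample.initialize()
    return await sample.process_image("photo.png")

result = asyncio.run(main())
print(result)  # processed photo.png with session
```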
Are there instructions for how to build it? Also, maybe not in this PR, but it would be good to have a CI build that builds this sample.
Add a lot more comments/explanations. Simplify a few things.
Not sure how feasible that would be. It's a separate repo from our usual CIs, we'd probably need AppCenter to be able to test, building the iOS app requires a Mac, and it also requires a pre-release VS which is not available in the standard CI images.
> ## Image acquisition
>
> There are 3 potential ways to acquire the image to process in the sample app.
> There are 3 potential ways to acquire the image

nit: from the info below, it seems like there are 2 ways to acquire the image? (and 3 stages for image processing)
The 3 I'm referring to are using the included sample image (GetSampleImageAsync), picking an existing image from the device (PickPhotoAsync), or taking a photo (TakePhotoAsync).
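As a sketch, those three acquisition options amount to picking one of three async entry points. The Python names below are hypothetical stand-ins for the C# sample's GetSampleImageAsync, PickPhotoAsync and TakePhotoAsync, just to illustrate the dispatch.

```python
import asyncio

# Hypothetical stand-ins for the three acquisition paths in the sample.
async def get_sample_image():
    return "bundled sample image"

async def pick_photo():
    return "photo picked from device"

async def take_photo():
    return "photo taken with camera"

# One entry per acquisition option described above.
ACQUIRERS = {
    "sample": get_sample_image,
    "pick": pick_photo,
    "take": take_photo,
}

async def acquire_image(source: str):
    # Dispatch to the chosen acquisition method and await its result.
    return await ACQUIRERS[source]()

results = [asyncio.run(acquire_image(s)) for s in ("sample", "pick", "take")]
```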
> This model is included in the repository, but has been updated using the onnxruntime python package tools to:
> - remove unused initializers,
> - `python -m onnxruntime.tools.optimize_onnx_model --opt_level basic <model>.onnx <updated_model>.onnx`
> - make the initializers constant
It's not obvious to me how the initializers are made constant with that command; I thought it updated the ONNX opset to 14?
Sorry - I included the wrong command. It was a previous opset update that caused the issue. Updated with the correct info to fix it.
> ## Overview
> The app enables you to take/pick a photo on the device or use a sample image to explore the following models.
>
> ### [Mobilenet](https://github.com/onnx/models/tree/main/vision/classification/mobilenet)
nit: should we use a permalink here too, like the Ultraface link?
I went the other way and changed the link for Ultraface so that if new versions of the model are added (e.g. one with the fixes we are manually making) they will show up.
The cost is that if the page moves completely the link will break, but that seems less likely; e.g. for Mobilenet, new versions were added relatively recently.
Maybe we can consider it when it doesn't require a pre-release VS. We do have some existing CI builds in this repo.
* bug fix in OVEP csharp sample
* Samples updated
* cpp sample update
* Improve the SNPE EP sample with command line option to switch SNPE backend (#120)
  * Improve the sample with a command line option to switch the SNPE backend and set the input file path. Fix an issue for the Android build: libc++_shared.so from the SNPE SDK needs to be used.
  * Update the API call according to the API change in ORT (SessionOptionsAppendExecutionProvider_SNPE -> SessionOptionsAppendExecutionProvider); format update
* Add table of contents to Python samples (#115)
* Update doc for SNPE EP to reflect the API change (#122)
* Set default format to QuantFormat.QDQ (#123)
* Add MAUI example for mobile targets (#128)
* Add short-term workaround to an issue with iOS publish where the CoreML framework is not added to the link list; pending a real fix from the MAUI folks (#131). Also update ORT to 1.12.1, which has a better Android build.
* Quantization tool example bug fix (#133). In ResNet50DataReader, an onnx session is used to obtain the model input shape, but a made-up model name was passed to it, resulting in a file-not-found error. This change provides the original float model path to the data reader.
* Sample notebooks for yolov4 and tiny-yoloV2 (#136); folder restructuring for the notebooks
* Update MauiVisionSample SkiaSharp dependency version to 2.88.1 (#135). Includes this fix in SkiaSharp: mono/SkiaSharp#2198
* Add qdq debugging example (#134): adds example run_qdq_debug.py
* Add quantization example for gpt-2 medium (#140): gpt2 qdq example
* Remove deprecated API usage (#144)

Co-authored-by: nmaajidk <n.maajid.khan@intel.com>
Co-authored-by: Hector Li <hecli@microsoft.com>
Co-authored-by: Nat Kershaw (MSFT) <nakersha@microsoft.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Chen Fu <1316708+chenfucn@users.noreply.github.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: krishnendukx <krishnendux.kar@intel.com>
Co-authored-by: krishnendukx <111554749+krishnendukx@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Largely replicates the Xamarin example with Windows, iOS and Android targets using MAUI.
Note: currently requires Visual Studio 2022 Preview for .NET 6 support.