
Lack of support for UWP apps in ML.NET #2252

Open
CESARDELATORRE opened this issue Jan 26, 2019 · 11 comments
Labels
P2 (Priority of the issue for triage purpose: Needs to be fixed at some point), UWP (Bugs related to UWP)

Comments

@CESARDELATORRE
Contributor

CESARDELATORRE commented Jan 26, 2019

The current versions (0.9 and 0.10) don't support UWP apps properly.

See additional details in this Blog Post:
https://xamlbrewer.wordpress.com/2019/01/25/machine-learning-with-ml-net-in-uwp-clustering/

Related issues:
#1736
#1595

SUGGESTED APPROACH:

Usually, the common scenario for UWP apps (visual desktop applications) is just scoring a model.
The same common scenario applies to ARM-based platforms such as Xamarin on iOS and Android.
(ARM has nothing to do with this #2252 issue; it just happens that we don't support ARM either.)

The suggested approach would be to split ML.NET components/NuGet packages so the scoring components are segregated from the rest of ML.NET.

Achieving support in UWP and ARM for just the "scoring part" of ML.NET should be easier and require less development and testing work than supporting the whole of ML.NET (the model training/testing area).
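For illustration, here is a minimal sketch of what the scoring-only scenario looks like against the later v1.x API (the ModelInput/ModelOutput classes and the model path are hypothetical; MLContext, Model.Load and CreatePredictionEngine are the real ML.NET API surface):

    using Microsoft.ML;
    using Microsoft.ML.Data;

    // Hypothetical input/output types for a model trained elsewhere.
    public class ModelInput
    {
        public string Text { get; set; }
    }

    public class ModelOutput
    {
        [ColumnName("PredictedLabel")]
        public bool Prediction { get; set; }
    }

    public static class ScoringOnly
    {
        public static ModelOutput Score(string modelPath, ModelInput input)
        {
            var mlContext = new MLContext();

            // Load a model that was trained and saved elsewhere (e.g. on a desktop or build server).
            ITransformer model = mlContext.Model.Load(modelPath, out var inputSchema);

            // Score a single example; no trainers or training data are involved.
            var engine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
            return engine.Predict(input);
        }
    }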

@TomFinley
Contributor

Hi @CESARDELATORRE this has been a longstanding desire. As you observe, the predicting logic is trivial compared to the training logic, in terms of complexity along pretty much every conceivable dimension -- e.g., it takes thousands upon thousands of lines of code to write a good linear predictor, but about two lines of code to write the dot product necessary to predict with it. 😄 So this seems like a great idea, just a fair amount of work. We'd want to not only separate trainers from their associated model parameters, but also estimators from their associated transformers, etc., etc.
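To make that asymmetry concrete, here is a minimal sketch (not ML.NET code; the weights, bias, and feature values are placeholders) of what scoring a linear model amounts to:

    // Scoring a linear model: a dot product of weights and features, plus a bias term.
    static float ScoreLinear(float[] weights, float bias, float[] features)
    {
        float score = bias;
        for (int i = 0; i < weights.Length; i++)
            score += weights[i] * features[i];
        return score;
    }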

Considering that it is so much work, I wonder whether it can be delayed a bit. One question, though: would this be considered a breaking change in the API? Let's imagine you have two classes A and B in a single assembly and NuGet package, and you decide in a later version, "I am going to put A in NuGet X and B in a separate NuGet Y" -- is that considered just fine from a .NET perspective?

Just wondering if this is something we'd need to prioritize somehow as something that must happen prior to v1, and how completely. (E.g., if we could do it in stages and it is not considered a breaking change, we could perhaps do the most popular things first before moving to everything else, etc.)

@markusweimer
Member

An alternative approach to consider is to emit prediction pipelines as code. @interesaaat leads a research effort in that direction called Pretzel. It might be time to act on that research?
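Purely as an illustration of that idea (the class name and all constants below are invented, not actual Pretzel output): a prediction pipeline emitted as code could be an ordinary method with the learned parameters baked in as constants, so scoring needs no reflection or dynamic code generation at runtime.

    // Illustrative shape of an "emitted" pipeline: feature normalization and a
    // linear scorer with the learned parameters generated as constants.
    public static class GeneratedScorer
    {
        static readonly float[] Mean = { 0.5f, 1.2f };
        static readonly float[] Scale = { 2.0f, 0.8f };
        static readonly float[] Weights = { 0.3f, -1.1f };
        const float Bias = 0.05f;

        public static float Score(float[] features)
        {
            float score = Bias;
            for (int i = 0; i < features.Length; i++)
                score += Weights[i] * ((features[i] - Mean[i]) * Scale[i]);
            return score;
        }
    }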

@CESARDELATORRE
Contributor Author

@TomFinley - Since it would break compilation, I think it can be considered a breaking change.
Splitting the "scoring" assemblies from the training bits should be done as soon as possible, in my opinion. It will give us a lot more flexibility when incrementally updating/improving areas like UWP, Unity, and ARM support.

@sharpwood

When will ML.NET officially support UWP? UWP is Microsoft's latest development framework, yet this project doesn't even support it? @TomFinley

@XamlBrewer

The ML.NET v0.11.0 NuGet package largely supports UWP (and that is just awesome).

There's still one big issue: compiling a Release build fails (ilc.exe error code 1200).

Reproduction steps:

  • create a UWP app in Visual Studio 2017,
  • add the NuGet package,
  • switch to Release mode,
  • compile.

@ianier

ianier commented Apr 28, 2019

Hi @TomFinley, I agree with @sharpwood that UWP should be treated as a first-class citizen. In my opinion, this issue should be given much higher priority and addressed before RTM.

@codemzs
Member

codemzs commented Apr 28, 2019

CC: @abetaha

@wschin wschin added the P0 Priority of the issue for triage purpose: IMPORTANT, needs to be fixed right away. label May 21, 2019
@codemzs codemzs added P1 Priority of the issue for triage purpose: Needs to be fixed soon. and removed P0 Priority of the issue for triage purpose: IMPORTANT, needs to be fixed right away. labels May 22, 2019
@MattWhilden

@XamlBrewer I'm able to build with the latest set of tools for UWP. Here's my setup:

  • VS 2019 (I don't think that matters)
  • New UWP project
  • Minimum OS version of at least 16299 (the latest .NET Native needs this)
  • Update Microsoft.NETCore.UniversalWindowsPlatform to 6.2.9
  • Add Microsoft.ML v1.3.1
  • Set to Release, x86, and build

It may work on older tool versions, but I haven't chased it all the way back. I also don't have any experience with the ML tools, so I can't say with any certainty whether it works when you actually run it.
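As a quick runtime check (a hedged sketch; the Point class and values are made up), constructing an MLContext and loading a small in-memory collection exercises more than just the compile step:

    using System.Collections.Generic;
    using Microsoft.ML;

    public class Point
    {
        public float X { get; set; }
        public float Y { get; set; }
    }

    public static class SmokeTest
    {
        public static void Run()
        {
            var mlContext = new MLContext();

            // Loading an in-memory collection touches the runtime code paths,
            // not just the .NET Native compilation.
            IDataView data = mlContext.Data.LoadFromEnumerable(new List<Point>
            {
                new Point { X = 1f, Y = 2f },
                new Point { X = 3f, Y = 4f },
            });

            var preview = data.Preview();
        }
    }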

@XamlBrewer

Yes @MattWhilden, I confirm that it does compile after upgrading Microsoft.NETCore.UniversalWindowsPlatform to v6.2.9. There are still a lot of runtime exceptions, however.

@MattWhilden

@XamlBrewer If you can provide more information, I'd love to take a look for you. You can also reach me at dotnetnative@microsoft.com if the details of the project are more sensitive than you're comfortable sharing publicly.

The fastest thing for us to work with is ilcRepro files, but there are lots of options.

@XamlBrewer

XamlBrewer commented Sep 25, 2019

I upgraded the sample project to the latest stable UWP and ML.NET. I have the impression that all trainers are operational in UWP; AutoML works like a charm (although a LOT slower in x86 than in x64 mode).

I see two major show stoppers:

  1. MLContext.Data.LoadFromEnumerable(), MLContext.Data.CreateEnumerable(), and MLContext.Data.Cache() throw System.Reflection.TargetInvocationException: 'Exception has been thrown by the target of an invocation.' with inner PlatformNotSupportedException: 'Dynamic code generation is not supported on this platform.' (See the repro sketch after this comment.)

  2. Saving a model crashes with System.ArgumentException: 'The path is empty.' Here's the code I used to save a model:
    // Build a path under the app's local data folder and save the trained model.
    var storageFolder = ApplicationData.Current.LocalFolder;
    string modelPath = Path.Combine(storageFolder.Path, modelName);
    _mlContext.Model.Save(Model, inputSchema: null, filePath: modelPath); // throws ArgumentException in a Release build

All code works in Debug mode. In Release mode the exceptions are identical in x86 and x64 mode.

On calling CreatePredictionEngine() for an LbfgsMaximumEntropy transformer, I get System.Reflection.TargetInvocationException: 'Exception has been thrown by the target of an invocation.' with the same inner PlatformNotSupportedException: 'Dynamic code generation is not supported on this platform.'
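To tie the failures above together, here is a minimal repro sketch (the Row/Prediction classes and the pipeline are hypothetical; the API calls are the ones named in the comment above):

    using System.Collections.Generic;
    using Microsoft.ML;

    public class Row
    {
        public string Label { get; set; }
        public float Feature { get; set; }
    }

    public class Prediction
    {
        public string PredictedLabel { get; set; }
    }

    public static class Repro
    {
        public static void Run()
        {
            var mlContext = new MLContext();

            // Show stopper 1: under .NET Native this throws PlatformNotSupportedException
            // ("Dynamic code generation is not supported on this platform").
            IDataView data = mlContext.Data.LoadFromEnumerable(new List<Row>
            {
                new Row { Label = "A", Feature = 1f },
                new Row { Label = "B", Feature = 2f },
            });

            // Hypothetical multiclass pipeline ending in LbfgsMaximumEntropy.
            var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label")
                .Append(mlContext.Transforms.Concatenate("Features", "Feature"))
                .Append(mlContext.MulticlassClassification.Trainers.LbfgsMaximumEntropy())
                .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

            var model = pipeline.Fit(data);

            // Also reported to throw the same PlatformNotSupportedException in Release mode.
            var engine = mlContext.Model.CreatePredictionEngine<Row, Prediction>(model);
        }
    }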

@harishsk harishsk added P2 Priority of the issue for triage purpose: Needs to be fixed at some point. and removed P1 Priority of the issue for triage purpose: Needs to be fixed soon. labels Jan 10, 2020