backport CoreML features to macos < 14 #3255
base: master
…s those are exclusive to macos 14
completionHandler:(void (^)(id<MLFeatureProvider> output, NSError * error)) completionHandler {
    [NSOperationQueue.mainQueue addOperationWithBlock:^{
        NSError *error = nil;
        id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];
Should the prediction/inference perhaps be run on a background thread rather than on the main thread, as is being done here?
That was my first impulse as well, but the original documentation doesn't state anything about parallelism/threading. Then it occurred to me that Core ML itself decides where to run (CPU or GPU) [0], and the completionHandler variants exist only to enable Swift's async/await [1].
That is why I decided against active concurrency management in these methods. async-marked methods in Swift actually get their handler-based equivalents auto-generated; a difference may be that those use dispatch queues instead of operations, though.
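To make the bridging point concrete, here is a sketch of the handler-based shape being backported and how the Swift importer surfaces it. The exact nullability annotations and the Swift rendering are approximations, not copied from the SDK headers:

```objc
// Handler-based declaration, as on macOS 14+ and as backported in this PR:
- (void)predictionFromFeatures:(id<MLFeatureProvider>)input
             completionHandler:(void (^)(id<MLFeatureProvider> output,
                                         NSError *error))completionHandler;

// The Swift importer surfaces such a completion-handler method as an async
// throwing function, roughly:
//
//   func prediction(from input: MLFeatureProvider) async throws -> MLFeatureProvider
//
// i.e. the handler variant mainly serves as the bridging point for Swift's
// async/await, as argued above.
```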
I'm just a little concerned that doing the inference on the main thread might lock/stall an application while the inference is running, which could be substantial. Would you be able to try something like the following and see if it works:
[NSOperationQueue.new addOperationWithBlock:^{
    NSError *error = nil;
    id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        completionHandler(prediction, error);
    });
}];
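A variant of this suggestion could use GCD directly instead of creating a fresh NSOperationQueue per call. This is only a sketch assuming the same surrounding method; the QoS class chosen here is an assumption, not something either party specified:

```objc
// Run the (potentially long) inference off the main thread with an explicit
// QoS, then hop back to the main queue to deliver the callback.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSError *error = nil;
    id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        completionHandler(prediction, error);
    });
});
```

Dispatching the handler back to the main queue keeps the observable behavior close to the current main-queue implementation, while moving only the expensive prediction off the main thread.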
completionHandler:(void (^)(id<MLFeatureProvider> output, NSError * error)) completionHandler {
    [NSOperationQueue.mainQueue addOperationWithBlock:^{
        NSError *error = nil;
        id<MLFeatureProvider> prediction = [self predictionFromFeatures:input options:options error:&error];
Should the prediction/inference perhaps be run on a background thread rather than on the main thread, as is being done here?
I'm on 12.7, and the
[MLModel predictionFromFeatures:]
versions with a completion handler are only available on macOS 14, which is a little too cutting edge for me. Before this I'd get: