
backport CoreML features to macos < 14 #3255


Open · wants to merge 1 commit into master

Conversation

@glaszig commented Jun 16, 2025

I'm on macOS 12.7, and the `[MLModel predictionFromFeatures:]` variants with a completion handler are only available on macOS 14, which is a little too cutting edge for me.

Before this patch I'd get:

whisper-encoder-impl.m:183:17: error: no visible @interface for 'MLModel'
declares the selector 'predictionFromFeatures:completionHandler:'
    [self.model predictionFromFeatures:input completionHandler:^(id<MLFeatureProvider> prediction, NSError *predictionError) {
     ~~~~~~~~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

completionHandler:(void (^)(id<MLFeatureProvider> output, NSError *error))completionHandler {
    [NSOperationQueue.mainQueue addOperationWithBlock:^{
        NSError *error = nil;
        id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];
Collaborator

Should the prediction/inference perhaps be run on a background thread rather than the main thread, as it is being done here?

@glaszig (Author) Jun 16, 2025

That was my first impulse as well, but the original documentation doesn't say anything about parallelism/threading. Then it occurred to me that CoreML itself decides where to run (CPU or GPU) [0], and the completionHandler variants exist only to enable Swift's async/await [1].

That's why I decided against active concurrency management in these methods. async-marked methods in Swift actually get their handler equivalents auto-generated; a difference may be that those use dispatch queues instead of operation queues, though.

Collaborator

I'm just a little concerned that doing the inference on the main thread might lock/stall an application while the inference is running, which could take substantial time. Would you be able to try something like the following and see if it works:

[NSOperationQueue.new addOperationWithBlock:^{
    NSError *error = nil;
    id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];

    dispatch_async(dispatch_get_main_queue(), ^{
        completionHandler(prediction, error);
    });
}];
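For context, a complete wrapper method following this suggestion would look roughly like the sketch below. This is an illustration, not the PR's actual diff (which only shows fragments); the enclosing interface/category declaration and method name are assumed to match the fragments above:

```objc
// Sketch (assumed shape): backported completion-handler wrapper around the
// synchronous prediction API available before macOS 14. Inference runs on a
// fresh background operation queue; the result is delivered on the main queue.
- (void)predictionFromFeatures:(id<MLFeatureProvider>)input
             completionHandler:(void (^)(id<MLFeatureProvider> output, NSError *error))completionHandler {
    [NSOperationQueue.new addOperationWithBlock:^{
        NSError *error = nil;
        // Synchronous CoreML call; CoreML itself decides CPU/GPU placement.
        id<MLFeatureProvider> prediction = [self predictionFromFeatures:input error:&error];
        dispatch_async(dispatch_get_main_queue(), ^{
            completionHandler(prediction, error);
        });
    }];
}
```

Delivering the callback on the main queue mirrors what callers of the macOS 14 API would typically expect when updating UI state from the handler.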

completionHandler:(void (^)(id<MLFeatureProvider> output, NSError *error))completionHandler {
    [NSOperationQueue.mainQueue addOperationWithBlock:^{
        NSError *error = nil;
        id<MLFeatureProvider> prediction = [self predictionFromFeatures:input options:options error:&error];
Collaborator

Should the prediction/inference perhaps be run on a background thread rather than the main thread, as it is being done here?
