
Adds option to specify graph entry point name #1663

Closed
wants to merge 7 commits

Conversation

kernhanda
Contributor

@kernhanda kernhanda commented Sep 1, 2022

This changes the behavior when a single entry point is specified. Prior to this change, the entry point name was ignored when there was only a single entry point, and run_main_graph was emitted instead. After this change, the entry point name is honored even if there is only one entry point.
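
For illustration, here is a minimal sketch (not the actual onnx-mlir source) of the before/after naming logic described above. staticEntryPointFuncName and DEFAULT_DYN_ENTRY_POINT mirror the snippet quoted later in this conversation; the helper function names are hypothetical.

#include <string>

static const std::string DEFAULT_DYN_ENTRY_POINT = "run_main_graph";

// Before: a single entry point was always exposed as run_main_graph,
// regardless of its static name.
std::string dynEntryPointNameBefore(
    const std::string &staticEntryPointFuncName, bool singleEntryPoint) {
  if (singleEntryPoint)
    return DEFAULT_DYN_ENTRY_POINT;
  return "run_" + staticEntryPointFuncName;
}

// After: the static entry point name is always honored, e.g. a graph named
// "first_entry" is exposed as "run_first_entry".
std::string dynEntryPointNameAfter(
    const std::string &staticEntryPointFuncName) {
  return "run_" + staticEntryPointFuncName;
}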

@jenkins-droid
Collaborator

Can one of the admins verify this patch?

1 similar comment
@jenkins-droid
Collaborator

Can one of the admins verify this patch?

@tungld
Collaborator

tungld commented Sep 1, 2022

@jenkins-droid test this please

@tungld
Collaborator

tungld commented Sep 6, 2022

@kernhanda thank you for the patch! Could you please look at why the lit tests failed? You can check them locally by running make check-onnx-lit. Thanks!

Signed-off-by: Kern Handa <kern.handa@gmail.com>
Signed-off-by: Kern Handa <kern.handa@gmail.com>
Signed-off-by: Kern Handa <kern.handa@gmail.com>
@@ -13,7 +11,7 @@ module {
// CHECK: llvm.mlir.global external constant @_entry_point_0_in_sig("[in_sig]\00")
// CHECK: llvm.mlir.global external constant @_entry_point_0_out_sig("[out_sig]\00")

-// CHECK-LABEL: llvm.func @run_main_graph
+// CHECK-LABEL: llvm.func @run_first_entry
Collaborator

Shouldn't the default continue to be generated as run_main_graph?

Also, we should have at least one test to validate providing a different name.

Contributor Author

@kernhanda kernhanda Sep 6, 2022

Shouldn't the default continue to be generated as run_main_graph?

This is open for debate, IMO. I'm fine if the answer is "yes", but I'd like some clarification.

If run_main_graph continued to be emitted by default, then a user who provides their own graph name for an ONNX model (overriding main_graph) would still get run_main_graph as the entry point, somewhat defeating the purpose. I could update the behavior such that run_main_graph is emitted if the user didn't override the default graph name.

The other scenario is where an existing MLIR file is being compiled, where there is only one kernel entry point specified. In this case, the current behavior is to disregard the name of the entry point and always emit run_main_graph. TBH, this doesn't make sense to me, but if there are reasons/dependencies that rely upon this behavior, I can make the change more selective.

Also, we should have at least one test to validate providing a different name.

I'll add an ONNX model graph test, then.

Collaborator

So in the source, main_graph is still the default sometimes. There should be a test for this and if it's never the default, it should not be in the source. It sounds like we need at least 3 tests for the entry points.

Contributor Author

Updated the tests to cover the default scenario as well as a scenario where the single entry point has a non-default name.

Collaborator

So the three tests I would expect to see are:

  1. The default "main_graph". My understanding is that if no name is specified, this is the name picked. Currently both tests specify a name (though one of them specifies "main_graph").
  2. An entry point with a name in the graph.
  3. Using the command-line option to specify a name other than "main_graph".

Contributor Author

Are there any existing tests that import a basic ONNX model? I see that the repository has a mnist.onnx in the docs directory, but AFAICT, there are no tests that act on an ONNX model.

Collaborator

We don't have tests that import an ONNX model directly, but @sorenlassen recently added tests that create a small ONNX model in JSON and pass it to the onnx-mlir command line. One example is: https://github.com/onnx/onnx-mlir/blob/main/test/mlir/onnx/parse/test_abs.json. Hope it helps.

@jenkins-droid
Collaborator

Can one of the admins verify this patch?

1 similar comment
@jenkins-droid
Collaborator

Can one of the admins verify this patch?

Signed-off-by: Kern Handa <kern.handa@gmail.com>
Signed-off-by: Kern Handa <kern.handa@gmail.com>
@jenkins-droid
Collaborator

Can one of the admins verify this patch?

@jenkins-droid
Collaborator

Can one of the admins verify this patch?

@kernhanda
Contributor Author

@jenkins-droid test this please

@jenkins-droid
Collaborator

Can one of the admins verify this patch?

@jenkins-droid
Collaborator

Can one of the admins verify this patch?

@tungld
Collaborator

tungld commented Sep 7, 2022

@jenkins-droid test this please

@tungld
Collaborator

tungld commented Sep 7, 2022

I see a big advantage of specifying a custom entry point name: users may have two (single-entry) ONNX models and want to run them in the same Python/C program. In this case, the entry points should be different (see the sketch at the end of this comment).

I could update the behavior such that run_main_graph is emitted if the user didn't override the default graph name.

I prefer this. Since we say that the default graph name is main_graph when importing an ONNX model, it's clear to users. Also, the Java and C interfaces use run_main_graph as the default entry point.

The other scenario is where an existing MLIR file is being compiled, where there is only one kernel entry point specified. In this case, the current behavior is to disregard the name of the entry point and always emit run_main_graph. TBH, this doesn't make sense to me, but if there are reasons/dependencies that rely upon this behavior, I can make the change more selective.

I am not so convinced that onnx-mlir users will compile an MLIR file as input; that's mainly done by onnx-mlir developers. So it looks fine to use the entry point name as it is.
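
A hedged sketch of the two-model use case mentioned at the top of this comment: the entry point names run_encoder and run_decoder are hypothetical and assume each model was compiled with a distinct entry point name, so that the generated symbols do not both collide on run_main_graph. The OMTensorList-based signature follows the onnx-mlir runtime API (OnnxMlirRuntime.h).

#include "OnnxMlirRuntime.h"

// Entry points generated by onnx-mlir for two separately compiled models.
extern "C" OMTensorList *run_encoder(OMTensorList *);
extern "C" OMTensorList *run_decoder(OMTensorList *);

// Run both models in the same program; this would not link if both models
// exposed the same run_main_graph symbol.
OMTensorList *runPipeline(OMTensorList *encoderInputs) {
  OMTensorList *hidden = run_encoder(encoderInputs);
  return run_decoder(hidden);
}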

@kernhanda
Contributor Author

I see a big advantage of specifying a custom entry point name: users may have two (single-entry) ONNX models and want to run them in the same Python/C program. In this case, the entry points should be different.

Yes, that's exactly the motivation here.

I prefer this. Since we say that the default graph name is main_graph when importing an ONNX model, it's clear to users. Also, the Java and C interfaces use run_main_graph as the default entry point.

With this change, if you import a model and don't specify the graph name, it does default to main_graph.

I am not so convinced that onnx-mlir users will compile an MLIR file as input; that's mainly done by onnx-mlir developers. So it looks fine to use the entry point name as it is.

Do you think it's necessary to make the change so that if an MLIR file has a single entry point named something other than main_graph, it should still result in an entry point named run_main_graph?

^ This is the only question at the moment as it's the only difference in existing functionality. Otherwise, if you don't specify the graph name option, the behavior is the same as it is without this change.

@tungld
Collaborator

tungld commented Sep 20, 2022

Do you think it's necessary to make the change so that if an MLIR file has a single entry point named something other than main_graph, it should still result in an entry point named run_main_graph?

If users specify an entry point in an MLIR file via onnx.EntryPoint, we should respect that and not change it to main_graph. In summary, we only allow users to set an entry point name (main_graph by default) when importing an ONNX model, which does not have a concept of an entry point. What do you think?

@kernhanda
Contributor Author

Do you think it's necessary to make the change so that if an MLIR file has a single entry point named something other than main_graph, it should still result in an entry point named run_main_graph?

If users specify an entry point in an MLIR file via onnx.EntryPoint, we should respect that and not change it to main_graph. In summary, we only allow users to set an entry point name (main_graph by default) when importing an ONNX model, which does not have a concept of an entry point. What do you think?

I agree! This PR matches that functionality then 😄

@sstamenova requested tests that exercise the model import scenario, but I couldn't find any existing tests that do this. Any guidance on how to do this, or on whether it's necessary, would be appreciated.

// Name of the generated dynamic entry point. When there is a single entry
// point, the name is overridden with DEFAULT_DYN_ENTRY_POINT ("run_main_graph").
std::string dynEntryPointName = "run_" + staticEntryPointFuncName.str();
if (singleEntryPoint)
  dynEntryPointName = DEFAULT_DYN_ENTRY_POINT;
Collaborator

@kernhanda because we don't use singleEntryPoint anymore, could you please remove it elsewhere, e.g. in function arguments? Thanks!

@tungld
Collaborator

tungld commented Nov 1, 2022

@kernhanda any update on this?

@tungld
Collaborator

tungld commented Jun 29, 2023

@kernhanda we would like to have this functionality, so do you have time to update this PR? If not, I can take it over and create a new PR. Thanks!

@kernhanda kernhanda closed this Jul 16, 2024
@jenkins-droid
Collaborator

Can one of the admins verify this patch?
