
[Good First Issue][TF FE]: Support MatrixInverse operation for TensorFlow #22957

Closed
rkazants opened this issue Feb 20, 2024 · 13 comments · Fixed by #23881
Assignees
Labels
category: TF FE OpenVINO TensorFlow FrontEnd good first issue Good for newcomers no_stale Do not mark as stale
Milestone

Comments

@rkazants
Contributor

rkazants commented Feb 20, 2024

Context

The OpenVINO component responsible for supporting TensorFlow models is called the TensorFlow Frontend (TF FE). TF FE converts a model represented in the TensorFlow opset to a model in the OpenVINO opset.

In order to infer TensorFlow models containing the MatrixInverse operation with OpenVINO, TF FE needs to be extended with support for this operation.
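As background (a minimal NumPy sketch, not OpenVINO or TF FE code): TF MatrixInverse takes a tensor of shape `[..., M, M]` and inverts each innermost square matrix, so its behavior can be mirrored with `np.linalg.inv`:

```python
import numpy as np

# Illustration only: MatrixInverse inverts each innermost 2-D square
# matrix of a (possibly batched) input, matching np.linalg.inv.
batch = np.array([
    [[4.0, 7.0],
     [2.0, 6.0]],
    [[1.0, 0.0],
     [0.0, 2.0]],
])

inverses = np.linalg.inv(batch)

# Multiplying each matrix by its inverse yields the identity matrix.
for a, a_inv in zip(batch, inverses):
    assert np.allclose(a @ a_inv, np.eye(2))
```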

What needs to be done?

For MatrixInverse operation support, you need to implement the corresponding loader in the TF FE op directory and register it in the dictionary of loaders. One loader is responsible for the conversion (or decomposition) of one type of TensorFlow operation.

Here is an example of a loader implementation for the TensorFlow Einsum operation:

```cpp
OutputVector translate_einsum_op(const NodeContext& node) {
    auto op_type = node.get_op_type();
    TENSORFLOW_OP_VALIDATION(node, op_type == "Einsum", "Internal error: incorrect usage of translate_einsum_op.");
    auto equation = node.get_attribute<std::string>("equation");

    OutputVector inputs;
    for (size_t input_ind = 0; input_ind < node.get_input_size(); ++input_ind) {
        inputs.push_back(node.get_input(input_ind));
    }

    auto einsum = make_shared<Einsum>(inputs, equation);
    set_node_name(node.get_name(), einsum);
    return {einsum};
}
```

In this example, translate_einsum_op converts TF Einsum into OV Einsum. The NodeContext object passed into the loader packs all information about the inputs and attributes of the Einsum operation. The loader retrieves the equation attribute using the NodeContext::get_attribute() method, prepares the input vector, creates an Einsum operation from the OV opset, and returns a vector of outputs.

The responsibility of a loader is to parse operation attributes, prepare inputs, and express the TF operation via a sub-graph of OV operations. The Einsum example demonstrates a resulting sub-graph with one operation. In PR #19007 you can see an operation decomposed into a multi-node sub-graph.
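As a rough sketch of what such a loader could look like (this is NOT the merged implementation; it mirrors the Einsum loader above, assumes the usual TF FE helpers such as default_op_checks and set_node_name, assumes the opset-14 Inverse constructor accepts an adjoint flag matching the TF attribute of the same name, and ignores complex-valued inputs):

```cpp
// Hypothetical minimal translator for TF MatrixInverse -> OV Inverse.
OutputVector translate_matrix_inverse_op(const NodeContext& node) {
    // Validate the op type and the expected number of inputs.
    default_op_checks(node, 1, {"MatrixInverse"});
    auto input = node.get_input(0);

    // TF MatrixInverse carries an optional boolean "adjoint" attribute.
    auto adjoint = node.get_attribute<bool>("adjoint", false);

    auto inverse = make_shared<v14::Inverse>(input, adjoint);
    set_node_name(node.get_name(), inverse);
    return {inverse};
}
```

The loader would also need an entry in the dictionary of loaders, for example a line along the lines of `{"MatrixInverse", CreatorFunction(translate_matrix_inverse_op)}` in the TF FE op table (exact file and macro names may differ).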

Once you are done with the implementation of the translator, you need to implement the corresponding layer test test_tf_MatrixInverse.py and put it into the layer_tests/tensorflow_tests directory. Example of how to run a layer test:

```sh
export TEST_DEVICE=CPU
cd openvino/tests/layer_tests/tensorflow_tests
pytest test_tf_Shape.py
```
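The real layer tests build a small TensorFlow model and compare OpenVINO's output against TensorFlow's. As a loose, framework-free illustration of that reference-comparison pattern (the helper below is hypothetical; NumPy stands in for both frameworks):

```python
import numpy as np

def naive_inverse_2x2(m):
    """Reference inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

def test_matrix_inverse():
    # In a real layer test, the "tested" result would come from the
    # converted OpenVINO model; here np.linalg.inv plays that role.
    rng = np.random.default_rng(0)
    for _ in range(5):
        # Diagonally dominant input keeps the matrix well-conditioned.
        m = rng.uniform(1.0, 2.0, (2, 2)) + 3.0 * np.eye(2)
        assert np.allclose(naive_inverse_2x2(m), np.linalg.inv(m))

test_matrix_inverse()
```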

Hint

Use the newly added Inverse operation from OV opset-14.

Example Pull Requests

Resources

Contact points

  • @openvinotoolkit/openvino-tf-frontend-maintainers
  • @rkazants on GitHub
  • rkazants on Discord

Ticket

No response

@rkazants rkazants added no_stale Do not mark as stale category: TF FE OpenVINO TensorFlow FrontEnd labels Feb 20, 2024
@mlukasze mlukasze added the good first issue Good for newcomers label Feb 20, 2024
@koookieee

.take

Contributor

Thank you for looking into this issue! Please let us know if you have any questions or require any help.


koookieee commented Feb 25, 2024

Hello @rkazants, I will be working on this issue. Let me know if you have any comments or suggestions!

@rkazants
Contributor Author

rkazants commented Mar 8, 2024

Hi @koookieee, any update on the task?

Best regards,
Roman

@dyogaharshitha

.take

Contributor

Thanks for being interested in this issue. It looks like this ticket is already assigned to a contributor. Please communicate with the assigned contributor to confirm the status of the issue.

@rkazants
Contributor Author

There has been no update for a long time, so I have to release this task.

Best regards,
Roman

@hongbo-wei
Contributor

.take

Contributor

Thank you for looking into this issue! Please let us know if you have any questions or require any help.

@hongbo-wei
Contributor

hongbo-wei commented Mar 18, 2024

Hello @rkazants, I've built OpenVINO locally. Does this mean I can start coding and contributing? OpenVINO is a large software toolkit; compared to simpler applications, it's hard to get a direct view of what is going on inside, which makes it harder for a developer to know what to do next. So I wonder if I'm on the right track :)


@rkazants
Contributor Author

rkazants commented Mar 24, 2024


Hi @hongbo-wei,

Now you need to implement a translator from TF MatrixInverse to the OV opset Inverse operation. Please check: https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/operation-specs/matrix/Inverse_14.html

It should be a simple translator. Check the translators for other operations and the description above.

Best regards,
Roman

@hongbo-wei
Contributor

hongbo-wei commented Apr 6, 2024

Hello, Roman @rkazants, I want to build the OpenVINO runtime to test my contribution, and I followed the build instructions for Mac (ARM).

In the second-to-last step, I ran cmake -G "Ninja Multi-Config" -DENABLE_SYSTEM_PUGIXML=ON -DENABLE_SYSTEM_SNAPPY=ON -DENABLE_SYSTEM_PROTOBUF=ON .., and was told that Protobuf couldn't be found even though it is installed via brew. I therefore added -DPROTOBUF_LIBRARY=~/opt/homebrew/opt/protobuf before .. in the command and ran it. No error.

In the last step I encountered an error that says 'protobuf::libprotobuf-NOTFOUND':

```
ninja: error: 'protobuf::libprotobuf-NOTFOUND', needed by '/Users/hongbo_wei/downloads/GitHub/openvino/bin/arm64/Release/libopenvino_tensorflow_frontend.2024.2.0.dylib', missing and no known rule to make it
```

I then set the environment paths for CMake using the right path to protobuf, but the build system still cannot find it:

```sh
export CMAKE_PREFIX_PATH=/opt/homebrew/opt/protobuf
export CMAKE_LIBRARY_PATH=/opt/homebrew/opt/protobuf
```

I also asked in the Intel Community.

I'm so happy I'm moving closer to complete my Good First Issue.

@hongbo-wei
Contributor

hongbo-wei commented Apr 8, 2024

Mac (ARM)

I found and fixed the correct path to Protobuf, but now encounter an error when building:

```
/opt/homebrew/include/absl/functional/internal/any_invocable.h:380:28: error: no member named 'in_place_type_t' in namespace 'absl'
struct IsInPlaceType<absl::in_place_type_t<T>> : std::true_type {};
                     ~~~~~~^
/opt/homebrew/include/absl/functional/internal/any_invocable.h:380:44: error: 'T' does not refer to a value
struct IsInPlaceType<absl::in_place_type_t<T>> : std::true_type {};
                                           ^
/opt/homebrew/include/absl/functional/internal/any_invocable.h:379:17: note: declared here
template <class T>
                ^
/opt/homebrew/include/absl/functional/internal/any_invocable.h:380:46: error: expected unqualified-id
struct IsInPlaceType<absl::in_place_type_t<T>> : std::true_type {};
                                             ^
/opt/homebrew/include/absl/functional/internal/any_invocable.h:476:27: error: no template named 'in_place_type_t' in namespace 'absl'
  explicit CoreImpl(absl::in_place_type_t<QualTRef>, Args&&... args) {
                    ~~~~~~^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
ninja: build stopped: subcommand failed.
```

github-merge-queue bot pushed a commit that referenced this issue Apr 13, 2024
This pull request adds support for the TensorFlow "**MatrixInverse**"
operation to OpenVINO's TensorFlow Frontend (TF FE). It implements a new
loader function translate_matrix_inverse_op that translates the
TensorFlow operation into the equivalent OpenVINO Inverse operation
(from opset-14) for both real-valued and complex-valued inputs.

Addresses issue:
#22957

Hi Roman @rkazants, could you please review my work, thank you!

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
@mlukasze mlukasze added this to the 2024.2 milestone Apr 15, 2024
alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024