
Sketching out a "types and shapes" developer document. #7108

Closed
wants to merge 3 commits

Conversation

ScottTodd
Collaborator

Progress on #5223

This is light on details right now, but gives us a place to fill in more over time.

@ScottTodd ScottTodd added documentation ✏️ Improvements or additions to documentation compiler/dialects Relating to the IREE compiler dialects (flow, hal, vm) labels Sep 20, 2021
@google-cla google-cla bot added the cla: yes label Sep 20, 2021

### Conversion process

IREE lowers programs from representations produced by high-level frontends down
Contributor

It's probably worth mentioning that we don't ingest TensorFlow directly; we ingest TOSA instead. Our TOSA path does not require fully defined shapes, and we run shape inference after import. It will infer as much as it can; however, not all cases on all ops are possible. E.g., transpose conv needs to know its shape at construction.
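To illustrate the idea (this snippet is hypothetical, not taken from the PR; the op and types were chosen for simplicity), shape inference after import refines unranked result types produced by the frontend into ranked ones where the operand shapes allow it:

```mlir
// Before shape inference: the frontend import leaves the result unranked.
%0 = "tosa.add"(%arg0, %arg1)
    : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<*xf32>

// After shape inference: the result shape is refined from the operand shapes.
%0 = "tosa.add"(%arg0, %arg1)
    : (tensor<4x8xf32>, tensor<4x8xf32>) -> tensor<4x8xf32>
```

Ops whose output shape cannot be derived from their inputs alone (e.g. transpose conv, as mentioned above) are the cases where inference cannot help and the shape must be supplied at construction.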

Collaborator Author

I mentioned that briefly in the shapes section, thanks. I haven't yet gone into detail on each item down in this section of the doc.

Contributor

Note: +1 on the TF side. We infer shapes as far as we can after importing the model into MLIR.

This is the pass doing shape inference: https://github.com/google/iree/blob/94a2168c63ac5e075be132fe1b97dd8427c03733/integrations/tensorflow/iree_tf_compiler/TF/Passes.cpp#L46

Comment on lines +31 to +34
Backend references:

* Vulkan: [buffer and image formats](https://www.khronos.org/registry/vulkan/specs/1.0/html/vkspec.html#formats)
* SPIR-V: [types](https://www.khronos.org/registry/SPIR-V/specs/1.0/SPIRV.html#_types) and [capabilities](https://www.khronos.org/registry/SPIR-V/specs/1.0/SPIRV.html#_a_id_capability_a_capability)
Contributor

Do we want to split this into codegen backends and drivers?

| Backend  | Driver     |
| -------- | ---------- |
| llvm-aot | dylib      |
| llvm-aot | dylib-sync |
| spir-v   | vulkan     |
| cuda     | cuda       |

Collaborator Author

I'd prefer for the start of the document to give enough general context: we target different hardware devices and APIs, and some of those have very strict requirements, so the layers can be pretty opinionated.

The HAL section down below can go into specifics for each compiler target / device configuration.



import dialects (`iree`, `tensor`, `linalg`, etc.)
Contributor

The IREE dialect was renamed to the Util dialect. Maybe delete it, or use the standard or math dialect?

Collaborator Author

Hm, the new architecture diagram still puts "iree" next to "tensor". I can just change it to `standard` here.


#### Requirements for host code generation

#### Requirements for device code generation
Contributor

IIRC, we don't want to generate (big?) memory allocation code on the device side either.


#### Requirements for device code generation

TODO: LLVM / SPIR-V emulation of types?
Contributor

Do you aim to describe how codegen backends handle non-native types? E.g., if i8 is not available in the Vulkan target environment, how do we emulate i8 loads/stores?

Collaborator Author

I have the "Strategies for converting between types" section up top for general information. My hope was for you / others more familiar with the specifics of each layer to fill in the requirements or preferred strategy for that layer in this section of the doc (e.g. maybe LLVM has efficient support for emulating f64 but SPIR-V does not, so SPIR-V prefers to truncate f64 down to f32).

@benvanik
Collaborator

benvanik commented Jul 1, 2022

Closing as obsolete. It would still be good to have some documentation about this topic, but it's probably worth doing in the frontend-specific documentation instead, as frontend dialects have different restrictions themselves.
