
[TOPI] Conv2d Added and Optimized for Intel HD Graphics #1290

Merged: 23 commits merged into apache:master on Jun 22, 2018

Conversation

@Laurawly (Contributor) commented Jun 15, 2018

This PR optimizes convolutions specifically for Intel Graphics. Compared with other OpenCL-based backends, it adds Intel-specific intel_sub_group operations, and it supports consistent data packing (thanks to @yzhliu's implementation) for end-to-end networks. Note that conv2d is currently most optimized for ResNet workloads. We compared against state-of-the-art deep learning libraries for Intel GPUs; the following are end-to-end benchmarks for Intel® Processor Graphics Gen9 using opt_level 3.

| Network   | TVM    | clDNN (Intel's deep learning library) |
|-----------|--------|---------------------------------------|
| ResNet-18 | 74 ms  | 70 ms                                 |
| ResNet-50 | 262 ms | 208.4 ms                              |
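
As a point of reference for how such numbers might be reproduced, below is a minimal sketch of an end-to-end build against the new target using the NNVM compiler API of that era; it is not code from this PR, and the workload helper, input shape, and variable names are illustrative assumptions.

```python
import tvm
import nnvm.compiler
import nnvm.testing

# Illustrative workload: ResNet-18 from nnvm.testing (assumed helper of that era).
net, params = nnvm.testing.resnet.get_workload(num_layers=18, batch_size=1)

# Target helper added/renamed in this PR (intel_gpu -> intel_graphics).
target = tvm.target.intel_graphics()

# opt_level 3, matching the benchmark setting above.
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, compiled_params = nnvm.compiler.build(
        net, target, shape={"data": (1, 3, 224, 224)}, params=params)
```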

@yidawang (Contributor) left a comment

In the code we probably want to change "intel gpu" to "intel graphics"

@Laurawly changed the title from "[TOPI] Conv2d Added and Optimized for Intel Graphics Cards" to "[TOPI] Conv2d Added and Optimized for Intel HD Graphics" on Jun 15, 2018
@tqchen (Member) commented Jun 15, 2018

@yidawang Why not intel_gpu? intel_graphics seems a bit too long for an abbreviation.

@Laurawly (Contributor, Author) commented Jun 15, 2018

Please feel free to review and comment @yzhliu @Huyuwei @adityaatluri

@yidawang (Contributor)

@tqchen I feel that "Intel GPU" is a coined name; Intel normally calls it "Intel Graphics". Also, Intel plans to launch its first discrete GPU, which will be the real "Intel GPU", in 2020, so I think it is helpful to avoid the confusion from the beginning.

@kevinthesun (Contributor)

It would be nice if we could later parameterize the schedules, like conv2d for x86.

@liangfu (Member) commented Jun 16, 2018

How about 'Iris'? Intel brands its high-end graphics under the name "Iris Plus".

@merrymercy (Member)

We will rewrite all performance-bottleneck operators (e.g., conv2d) in an autotuning fashion after merging the auto-tuner. For now it is good to follow @kevinthesun's comment.
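
For context, here is a minimal sketch of the kind of schedule parameterization @kevinthesun and @merrymercy refer to; the names and tiling factors below are illustrative assumptions, not code from the x86 schedules or from this PR. The idea is that per-workload tuning knobs live in a small config object instead of being hard-coded inside the schedule function.

```python
from collections import namedtuple

# Hypothetical schedule-parameter record (illustrative names).
Conv2dConfig = namedtuple("Conv2dConfig", ["block_w", "block_h", "unroll"])

# Hypothetical table keyed by workload; in practice the key would encode
# batch size, channels, kernel size, strides, and so on.
_WORKLOAD_CONFIGS = {
    "resnet18_conv1": Conv2dConfig(block_w=4, block_h=4, unroll=True),
}

def _get_config(workload_key):
    """Look up per-workload schedule parameters, with a generic fallback."""
    return _WORKLOAD_CONFIGS.get(
        workload_key, Conv2dConfig(block_w=2, block_h=2, unroll=False))
```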

s[tensor].bind(xi, thread_x)
return xi, thread_z, thread_y, thread_x

@conv2d_alter_layout.register(["intel_gpu"])

Review comment (Member): consider making the keys consistent, as per @yidawang's suggestion.

@generic.schedule_conv2d_nchw.register(["intel_gpu"])
def schedule_conv2d_nchw(outs):
    """Schedule for conv2d_nchw for Intel GPU
    Parameters

Review comment (Member): there needs to be a new line between "Parameters" and usage.

See https://docs.tvm.ai/contribute/document.html#document-python
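
For reference, a minimal sketch of the docstring layout the linked guide describes (numpydoc style, with the underlined "Parameters" section separated from the summary by a blank line); the wording is illustrative and the body is omitted:

```python
def schedule_conv2d_nchw(outs):
    """Schedule for conv2d_nchw for Intel Graphics.

    Parameters
    ----------
    outs : Array of Tensor
        The computation graph description of conv2d_nchw
        in the format of an array of tensors.

    Returns
    -------
    s : Schedule
        The computation schedule for conv2d_nchw.
    """
    raise NotImplementedError("body omitted; only the docstring layout matters here")
```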

@tqchen added the "status: need update" label (need update based on feedbacks) on Jun 17, 2018
@tqchen (Member) commented Jun 18, 2018

@Laurawly please act on the comments and let us bring the changes in

@Laurawly (Contributor, Author) commented Jun 18, 2018

@tqchen So we should change all intel_gpu to intel_graphics, including the target name in TVM, right? Otherwise I get a "no such target" error.

-/*! \return A target for Intel GPU */
-EXPORT Target intel_gpu(const std::vector<std::string>& options =
+/*! \return A target for Intel Graphics */
+EXPORT Target intel_graphics(const std::vector<std::string>& options =
    std::vector<std::string>());

Review comment (Member): fix the alignment of the continuation line.

@tqchen (Member) commented Jun 21, 2018

@yidawang please review again and approve or request more changes

@yidawang (Contributor)

@tqchen I agree with @kevinthesun and @merrymercy about parameterizing the schedules like https://github.com/dmlc/tvm/blob/master/topi/python/topi/x86/conv2d.py#L15, but I think we can do that in a separate PR later.

Other than that, LGTM

@tqchen (Member) commented Jun 21, 2018

@tqchen merged commit 5fefedc into apache:master on Jun 22, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request on Jul 6, 2018
mnuyens pushed a commit to mnuyens/tvm that referenced this pull request on Jul 10, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request on Aug 8, 2018