dnn: add attention layer #24476
Conversation
Force-pushed 304584b to 5671dec.
Force-pushed 32293fd to 816c331.
Benchmark results are added.
PR is good, thanks a lot. The only concern is the potential regression/fallback in backends due to the transition from separate ops to a new fused layer. However, there is no alternative for this problem for now, and I recommend keeping only the default CPU implementation in this PR. Please also take a look at this comment: opencv/opencv_extra#1128 (comment)
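For context on what the fused layer replaces, the attention subgraph boils down to softmax(Q·Kᵀ/√d)·V over the separate MatMul/scale/Softmax ops. Below is a minimal single-head CPU reference, a sketch assuming row-major [seq × d] matrices; it only illustrates the computation and is not the PR's actual implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Single-head scaled dot-product attention: softmax(Q*K^T / sqrt(d)) * V.
// Q, K, V are row-major [seq x d]; this is a reference sketch of what the
// fused layer computes, not OpenCV's implementation.
std::vector<float> attention(const std::vector<float>& Q,
                             const std::vector<float>& K,
                             const std::vector<float>& V,
                             int seq, int d) {
    std::vector<float> scores(seq * seq), out(seq * d, 0.f);
    const float scale = 1.f / std::sqrt(static_cast<float>(d));
    for (int i = 0; i < seq; ++i) {
        // Scaled Q*K^T row, tracking the row max for a stable softmax.
        float maxv = -1e30f;
        for (int j = 0; j < seq; ++j) {
            float s = 0.f;
            for (int k = 0; k < d; ++k)
                s += Q[i * d + k] * K[j * d + k];
            scores[i * seq + j] = s * scale;
            maxv = std::max(maxv, scores[i * seq + j]);
        }
        // Numerically stable softmax over the row.
        float denom = 0.f;
        for (int j = 0; j < seq; ++j) {
            scores[i * seq + j] = std::exp(scores[i * seq + j] - maxv);
            denom += scores[i * seq + j];
        }
        // Weighted sum of V rows.
        for (int j = 0; j < seq; ++j) {
            const float w = scores[i * seq + j] / denom;
            for (int k = 0; k < d; ++k)
                out[i * d + k] += w * V[j * d + k];
        }
    }
    return out;
}
```

Fusing these steps into one layer avoids materializing intermediate tensors between ops, which is where the CPU speedup in the benchmarks below comes from.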
```cpp
net.setInput(bias, input_names[2]);
net.setPreferableBackend(backendId);
net.setPreferableTarget(targetId);
```
- I will benchmark on CUDA and with OpenVINO
Current numbers are a bit confusing. Let's wait for #24476 (comment) because it fails.
input shape: [1, 320, 48]
model from opencv/opencv_extra#1128
CPU: 12th Gen Intel(R) Core(TM) i9-12900
backend | 4.x | PR |
---|---|---|
OpenCV, CPU | 18.46ms | 11.90ms (x0.64) |
OpenVINO, CPU | 0.25ms | 11.91ms |
Well, the accuracy test has a reduced scale so that it does not take a long time to run. For the benchmark, I use [1, 197, 768] (I have two attention.onnx models, one with input [1, 197, 768], the other one with [1, 320, 48]).
For OpenVINO, since we do not have backend-specific graph fusion, we can recreate this subgraph in initNgraph.
Updated results. Please ignore the table above:
input: 1x197x768
CPU: 12th Gen Intel(R) Core(TM) i9-12900
backend | 4.x | PR |
---|---|---|
OpenCV, CPU | 16.93ms | 2.37ms (x7.14) |
OpenVINO, CPU | 1.54ms | 2.33ms (x0.66) |
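The speedup factors in the updated table above are simply the ratio of the 4.x time to the PR time (greater than 1 means the PR is faster). A small sanity check, using a hypothetical helper and the numbers reported above:

```cpp
#include <cassert>
#include <cmath>

// Ratio of baseline (4.x) time to PR time; > 1 means the PR is faster.
// Hypothetical helper, only to show how the (xN) columns are derived.
double speedup(double baseline_ms, double pr_ms) {
    return baseline_ms / pr_ms;
}
```

For example, speedup(16.93, 2.37) ≈ 7.14 for the OpenCV CPU row, and speedup(1.54, 2.33) ≈ 0.66 for the OpenVINO fallback row.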
So there is a degradation in OpenVINO performance in case of fallback to the OpenCV layer.
I will try to implement different backends for this layer in another pull request, just to reduce the review scope so we can merge this ASAP.
Force-pushed 669b503 to b3068a3.
@dkurt @WanliZhong Friendly reminder.
modules/dnn/perf/perf_net.cpp (outdated)
| vit_b_32 | 89.92 | 116.22 | 30.36 |
| vit_l_16 | 1593.32 | 1730.74 | 419.92 |
| vit_l_32 | 468.11 | 577.41 | 134.12 |
| VitTrack | 3.80 | 3.87 | 2.25 |
Please remove the results from the source code - it's enough to add them to the PR's description.
Running these models in perf_net.cpp takes too much time for now. How about we comment out PERF_TEST_P_(DNNTestNetwork, VIT_B_16) and such until we get close to, or even better, inference speed than ORT? In that case, these performance results should be deleted from the code and kept in the first comment of this PR.
You may add a corresponding skip exception:

```cpp
applyTestTag(CV_TEST_TAG_LONG, CV_TEST_TAG_DEBUG_LONG);
```
There is no issue if we specify 4 inputs for "Slice". So no need to add new logic with optional inputs:
Track #24609 and proposal #24609 (comment).
I don't think the proposal is good enough. We actually have other models that have
@fengyuentau, got it. So can you please do a simpler workaround in this PR, like a parameter with the number of Slice inputs? This problem should be solved separately.

```cpp
class AttentionSubGraph : public Subgraph {
public:
    AttentionSubGraph(int numSliceInps) {
        std::vector<std::string> inps(1 + numSliceInps, att_add);
        for (int i = 0; i < numSliceInps; ++i)
            inps[i + 1] = addNodeToMatch("");
        slice_v = addNodeToMatch("Slice", inps);
    }
};
```
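To make the input-list construction in the sketch above concrete: the pattern keeps the matched Add node's id at position 0 and fills one wildcard slot per Slice parameter input. A standalone illustration, where the string "*" stands in for the wildcard that addNodeToMatch("") would return (an assumption for demonstration only):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Standalone illustration of the input-list construction in the sketch above.
// att_add is the id of the matched Add node; "*" is a placeholder for the
// wildcard node id that addNodeToMatch("") would return.
std::vector<std::string> buildSliceInputs(int numSliceInps,
                                          const std::string& att_add) {
    std::vector<std::string> inps(1 + numSliceInps, att_add);
    for (int i = 0; i < numSliceInps; ++i)
        inps[i + 1] = "*";  // wildcard, matches any input node
    return inps;
}
```

With numSliceInps = 4, this yields a 5-element list whose first entry is the data input and whose remaining four entries match Slice's starts/ends/axes/steps regardless of where they come from.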
…; add test for attention subgraph fusion
…t is matched; clean comments
Force-pushed 1bc226e to 846237d.
CUDA:
@opencv-alalek Please update test data on Buildbot.
I was also looking into this issue. It is so weird that it only fails on CUDA_FP16 with almost double the threshold value. Could we apply a skip tag?
CV_TEST_TAG_DNN_SKIP_CUDA_FP16 - sure.
👍
Resolves #24609
Merge with: opencv/opencv_extra#1128.
Attention operator spec from onnxruntime: https://github.com/microsoft/onnxruntime/blob/v1.16.1/docs/ContribOperators.md#com.microsoft.Attention.
TODO:
Benchmarks
Platform: Macbook Air M1.
Attention Subgraph
Input shape: [1, 197, 768].
ViTs
All data are in milliseconds (ms).
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.