
Update ENABLE_OPENVINO_DEBUG macro #25473

Merged: 16 commits merged into openvinotoolkit:master from the cvs_143185 branch on Jul 11, 2024

Conversation

evkotov (Contributor) commented Jul 9, 2024

Details:

According to testing, this PR reduces the load time of LLMs (llama2, llama3, qwen) by 7-16% (an illustrative sketch of the macro style follows the ticket list below):

  • update the OPENVINO_DEBUG, _ERR, _WARN, _INFO macros
  • fix OPENVINO_DEBUG, _ERR, _WARN, _INFO macro usages across all of OpenVINO

Tickets:

  • 143185
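
For context, here is a minimal, hypothetical sketch of a function-call-style debug macro of the kind this PR moves to: when ENABLE_OPENVINO_DEBUG is defined, the arguments are folded into a single message; when it is not, the macro expands to nothing and the arguments are never evaluated. This is an illustration under stated assumptions, not the actual OpenVINO macro definitions (the helper ov_debug_print is invented for the sketch).

// Hypothetical sketch (not the real OpenVINO definitions) of a
// function-call-style debug macro.
#include <iostream>
#include <sstream>

#ifdef ENABLE_OPENVINO_DEBUG
// Debug enabled: fold all arguments into one string and emit it with a
// single stream operation.
template <typename... Args>
void ov_debug_print(Args&&... args) {
    std::ostringstream ss;
    (ss << ... << args);  // C++17 fold expression
    std::cerr << "[DEBUG] " << ss.str() << '\n';
}
#    define OPENVINO_DEBUG(...) ov_debug_print(__VA_ARGS__)
#else
// Debug disabled: the macro expands to nothing, so the arguments are
// never evaluated and no temporary stream objects are constructed.
#    define OPENVINO_DEBUG(...)
#endif

int main() {
    // Matches the comma-separated call style used in the diff further down.
    OPENVINO_DEBUG("[ threading ] stream_processors count: ", 8);
    return 0;
}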

@evkotov evkotov added this to the 2024.3 milestone Jul 9, 2024
@evkotov evkotov self-assigned this Jul 9, 2024
@evkotov evkotov requested review from a team as code owners July 9, 2024 16:30
@evkotov evkotov requested a review from cavusmustafa July 9, 2024 16:30
@github-actions github-actions bot added labels on Jul 9, 2024: category: inference (OpenVINO Runtime library - Inference), category: Core (OpenVINO Core, aka ngraph), category: IE Tests (OpenVINO Test: plugins and common), category: CPU (OpenVINO CPU plugin), category: Python API (OpenVINO Python bindings), category: transformations (OpenVINO Runtime library - Transformations), category: LP transformations (OpenVINO Low Precision transformations), category: ONNX FE (OpenVINO ONNX FrontEnd), category: TF FE (OpenVINO TensorFlow FrontEnd), category: PyTorch FE (OpenVINO PyTorch Frontend)
@dorloff dorloff changed the title from "Update ENABLE_OPENVINO_DEBUG macroses" to "Update ENABLE_OPENVINO_DEBUG macros" on Jul 9, 2024
Comment on lines +415 to +431 (stream-style lines removed, call-style lines added):

# ifdef ENABLE_OPENVINO_DEBUG
    OPENVINO_DEBUG("[ threading ] stream_processors:");
    for (size_t i = 0; i < stream_processors.size(); i++) {
-       OPENVINO_DEBUG << "{";
+       OPENVINO_DEBUG("{");
        for (size_t j = 0; j < stream_processors[i].size(); j++) {
-           OPENVINO_DEBUG << stream_processors[i][j] << ",";
+           OPENVINO_DEBUG(stream_processors[i][j], ",");
        }
-       OPENVINO_DEBUG << "},";
+       OPENVINO_DEBUG("},");
    }
# endif
Contributor:

There's an unnecessary tab after '#'

Contributor Author:

This was done by clang formatter
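
For context, the extra indentation after '#' in nested preprocessor directives is characteristic of clang-format's IndentPPDirectives option. Below is a minimal .clang-format fragment that would produce this layout; it is an assumption for illustration, not a verified copy of OpenVINO's actual formatting configuration.

# Hypothetical .clang-format fragment (assumed, not OpenVINO's verified config).
# AfterHash indents nested preprocessor directives after the '#',
# producing lines such as "#    ifdef ..." inside an enclosing #if block.
IndentPPDirectives: AfterHash
IndentWidth: 4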

@evkotov evkotov requested a review from a team as a code owner July 10, 2024 13:54
@github-actions github-actions bot added the category: JAX FE OpenVINO JAX FrontEnd label Jul 10, 2024
@evkotov evkotov changed the title from "Update ENABLE_OPENVINO_DEBUG macros" to "Update ENABLE_OPENVINO_DEBUG macro" on Jul 10, 2024
@itikhono itikhono added this pull request to the merge queue Jul 11, 2024
Merged via the queue into openvinotoolkit:master with commit 70a8ab6 Jul 11, 2024
123 checks passed
@evkotov evkotov deleted the cvs_143185 branch July 11, 2024 07:44
spran180 pushed a commit to spran180/openvino that referenced this pull request Jul 27, 2024
Labels
category: Core (OpenVINO Core, aka ngraph), category: CPU (OpenVINO CPU plugin), category: IE Tests (OpenVINO Test: plugins and common), category: inference (OpenVINO Runtime library - Inference), category: JAX FE (OpenVINO JAX FrontEnd), category: LP transformations (OpenVINO Low Precision transformations), category: ONNX FE (OpenVINO ONNX FrontEnd), category: Python API (OpenVINO Python bindings), category: PyTorch FE (OpenVINO PyTorch Frontend), category: TF FE (OpenVINO TensorFlow FrontEnd), category: transformations (OpenVINO Runtime library - Transformations), Code Freeze
3 participants