
Is there a specific implementation limit? (multitasking models, cascaded models, or large models) #27

Closed
BICHENG opened this issue Oct 23, 2022 · 6 comments

Comments


BICHENG commented Oct 23, 2022

Hi, I have not yet applied for a developer zone account (will it be difficult to get full access?).
I wonder whether the Hailo-8 chip can run several large models at the same time. Or can you tell me what the implementation limits are? E.g.:

  1. Operator compatibility: what is the highest ONNX opset version supported? Is coverage comparable to, or better than, OpenVINO's in general?
  2. How much on-chip memory is available for storing/computing tensors? Can the chip run super-resolution models with higher output resolutions? Can it run very wide fully connected layers?
  3. Hailo-8 can multitask. If I run keypoint detection, ReID, and depth estimation at the same time with frame skipping, will the chip's compute or memory capacity be overloaded? How can I spot or estimate where the overload occurs?
  4. Has your team considered "chaining" multiple Hailo-8 chips together to run demanding tasks? That would be super cool.
@nadaved1

Hi @BICHENG,
It should be straightforward to get full access if you're an existing customer.

  1. We officially support opset8 and opset11, though newer opsets generally work as well. Is there a specific opset you want to use?
  2. The chip can handle larger output resolutions and very wide FC layers.
  3. We have an offline profiler as well as an online htop-like tool to monitor the load on the Hailo device and give you insights into how to improve the performance of the pipeline you execute.
  4. Yes :)


BICHENG commented Oct 23, 2022

Thanks, I will buy it from the official Hailo website.
It might be best to publish a list of the currently supported operators. I want to use remap operators such as GridSample (opset 16) to handle some of the effects introduced by the lens model.

@BICHENG BICHENG closed this as completed Oct 27, 2022

MustafaYounes1 commented Aug 15, 2023

Hello @nadaved1

> We have an offline profiler as well as an online htop-like tool to monitor the load on the Hailo device and give you insights into how to improve the performance of the pipeline you execute.

May I ask what the online htop-like tool is that one can use to monitor the load on the Hailo device?

I have the Hailo M.2 acceleration module, and I'm wondering how to monitor the AI load on it.


nadaved1 commented Aug 15, 2023 via email

@MustafaYounes1

Hi @nadaved1

Thanks for the quick reply!

I tried to use the hailortcli monitor command on one screen window while running inference using PyHailoRT on another window, but unfortunately, it keeps throwing this warning:

Monitor did not retrieve any files. This occurs when there is no application currently running.
If this is not the case, verify that environment variable 'HAILO_MONITOR' is set to 1.

The mentioned environment variable was set successfully to the mentioned value:

$ screen -S <session_id> -X setenv HAILO_MONITOR 1   # in both the monitor and inference windows
$ echo $HAILO_MONITOR
1

OS: Ubuntu 22.04.2 LTS
HailoRT-CLI version: 4.14.0
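One thing worth checking (an assumption on my part, not something confirmed in this thread): `screen -X setenv` only affects windows created *after* the command runs, so the variable may not actually be in the environment of the already-running inference process. A minimal sketch of setting it directly in the launching shell, where `my_inference.py` is a placeholder for your PyHailoRT script:

```shell
# Sketch, not a verified fix: HAILO_MONITOR must be present in the
# environment of the inference process itself; exporting it in the
# shell before launching guarantees the child process inherits it.
export HAILO_MONITOR=1

# Terminal 1 (with the variable exported): run the inference app
#   python3 my_inference.py     # placeholder for your PyHailoRT script

# Terminal 2 (also with HAILO_MONITOR=1 exported): start the monitor
#   hailortcli monitor
```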


nadaved1 commented Aug 16, 2023 via email
