
[elg_demo error] ValueError: could not broadcast input array from shape (216,360,3) into shape (216,180,3) #34

Closed
keishatsai opened this issue Jul 19, 2019 · 6 comments

Comments

@keishatsai

Hi all,
Has anybody encountered the following error when trying to run elg_demo.py?
I have struggled with it for a few days; could anyone tell me how to fix this issue?

My environment:
Windows 10
CUDA 10.0
cuDNN 7.6
tensorflow-gpu 1.14.0
opencv-python 4.1.0.25
Python 3.6

Exception in thread visualization:
Traceback (most recent call last):
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "elg_demo.py", line 234, in _visualize_output
    bgr[v0:v1, u0:u1] = eye_image_raw
ValueError: could not broadcast input array from shape (216,360,3) into shape (216,180,3)

2019-07-19 13:32:50.978486: W tensorflow/core/kernels/queue_base.cc:277] _0_Video/fifo_queue: Skipping cancelled enqueue attempt with queue not closed

Thank you.

@WuZhuoran

The error shows that eye_image_raw has a different width (360 pixels) than the destination slice u0:u1 (180 pixels).
Did you modify any of the code?

@keishatsai
Author

@WuZhuoran Thanks for replying.
I didn't edit any of the demo code; I just tried to run it directly.
Could it be a threading issue, or something else?

@WuZhuoran

WuZhuoran commented Jul 30, 2019

@keishatsai you can try commenting out these lines to run:

bgr[v0:v1, u0:u1] = eye_image_raw
bgr[v1:v2, u0:u1] = eye_image_annotated

These two lines just paste the two eye images into the top-left corner of the frame. I am not sure whether your u0, u1 are wrong or the eye image size is wrong.
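
Instead of commenting the lines out, another option is to resize the eye image to fit the destination slice before pasting it. This is a minimal, untested sketch, assuming the variable names used around line 234 of elg_demo.py (bgr, eye_image_raw, v0, v1, u0, u1); the real fix depends on why the widths differ in the first place:

import cv2

dst_h, dst_w = v1 - v0, u1 - u0
if eye_image_raw.shape[:2] != (dst_h, dst_w):
    # cv2.resize expects the target size as (width, height)
    eye_image_raw = cv2.resize(eye_image_raw, (dst_w, dst_h))
bgr[v0:v1, u0:u1] = eye_image_raw

The same guard can be applied to eye_image_annotated on the following line.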

@keishatsai
Author

keishatsai commented Jul 31, 2019

@WuZhuoran
Thank you for the reply. I skipped past this error, but the program hangs at the end.
My input video is about 1536×2048, mp4.

I0731 10:40:13.458220 23460 model.py:192] ------------------------------
I0731 10:40:13.458220 23460 model.py:193]  Approximate Model Statistics
I0731 10:40:13.459215 23460 model.py:194] ------------------------------
I0731 10:40:13.459215 23460 model.py:195] FLOPS per input: 1,006,288,359.0
I0731 10:40:13.464202 23460 model.py:198] Trainable Parameters: 712,527
I0731 10:40:13.464202 23460 model.py:201] ------------------------------
W0731 10:40:13.470187 23460 deprecation_wrapper.py:119] From D:\GazeML\src\core\checkpoint_manager.py:45: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'

        OpenH264 Video Codec provided by Cisco Systems, Inc.

00000060 [38 FPS] read: 10ms, preproc: 52ms, infer: 27ms, vis: 1ms, proc: 13609ms, latency: 13620ms
00000120 [34 FPS] read: 11ms, preproc: 56ms, infer: 29ms, vis: 1ms, proc: 13547ms, latency: 13559ms
00000180 [33 FPS] read: 13ms, preproc: 59ms, infer: 27ms, vis: 1ms, proc: 13516ms, latency: 13530ms
00000240 [33 FPS] read: 13ms, preproc: 61ms, infer: 28ms, vis: 2ms, proc: 13415ms, latency: 13428ms
00000300 [34 FPS] read: 11ms, preproc: 58ms, infer: 33ms, vis: 2ms, proc: 13545ms, latency: 13557ms
Exception in thread record:
Traceback (most recent call last):
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "elg_demo.py", line 113, in _record_frame
    assert frame_index in data_source._frames
AssertionError

00000360 [34 FPS] read: 9ms, preproc: 57ms, infer: 26ms, vis: 1ms, proc: 13847ms, latency: 13857ms
Video "E:\test.mp4" closed.

@WuZhuoran

It seems that you already detected a few frames, so loading the weights worked.

Your problem occurs at:

OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'

I believe this is a common issue with OpenCV and mp4. How about trying the avi or webm format?
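
If you prefer to stay in Python rather than converting with an external tool, the clip can also be re-encoded with OpenCV itself. This is a rough sketch, not part of GazeML; the input/output paths and the XVID codec are assumptions:

import cv2

cap = cv2.VideoCapture('input.mp4')       # hypothetical source file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreadable
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (w, h))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)
cap.release()
out.release()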

@keishatsai
Author

@WuZhuoran
It was indeed caused by the video format, so I used ffmpeg to convert it.
Now I can run it to the end, but I still get the assertion error after detecting a few frames.
Anyhow, thanks for all the help.
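
For readers who still hit the AssertionError: the assert at line 113 of elg_demo.py fires when a frame index is missing from data_source._frames (for example, a frame that was dropped before it could be recorded). A possible workaround, purely an untested sketch since it depends on how _record_frame iterates over frames, is to skip such frames instead of asserting:

# Untested sketch: in _record_frame (elg_demo.py, line 113), replace
#     assert frame_index in data_source._frames
# with a skip so missing frames do not kill the recording thread:
if frame_index not in data_source._frames:
    continue  # or `return`, depending on the surrounding loop structure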

This issue was closed.