
Frame loss when acquiring at high FPS #131

Closed
bobcorn opened this issue Mar 25, 2020 · 56 comments
bobcorn commented Mar 25, 2020

Introduction
Hi @kazunarikudo, first of all I really want to thank you for this amazing work, which makes it much easier for us to work with industrial cameras. Thank you, sincerely.

Moving on to my question: I have an issue (which I've been trying to solve for months, without success) when acquiring at high FPS, resulting in frame loss. Unfortunately, this problem is crucial for my project, since it aims at tracking objects that move very fast, and the frame loss inevitably causes the tracking to fail.

Describe the bug
Using the script provided in the next section to record short videos, I found that setting a high frame rate (160 fps, for example) causes frame loss. By frame loss I mean that, every few frames, the next frame is skipped and the one after it is acquired instead. This happens very frequently at the maximum frame rate (163 fps), and becomes progressively less frequent as the frame rate decreases, disappearing below roughly 120 fps (a threshold found empirically).

I noticed this for the first time when, after recording a video, the tracking of my objects ran into problems. I then verified that frames were actually being skipped by storing the timestamp of each acquired frame in an array (I omitted this portion of the code from the provided script for brevity) and computing the differences: the time elapsed between consecutive recorded frames was sometimes doubled (meaning a frame had been skipped).

For example, when recording at 163 fps, the time elapsed between consecutive recorded frames should be 1/163 = 0.00613496932 seconds, but I observe something like:

...
Frame 3
0.00613496932 s
Frame 4
0.00613496932 s
Frame 5
0.01226993865 s -> (time is double the expected value; a frame was skipped)
Frame 6
0.00613496932 s
Frame 7
0.01226993865 s -> (time is double the expected value; a frame was skipped)
Frame 8
...
And so on.

This also happens in real time, without writing any video; I verified it by printing the timestamps.

To Reproduce
To reproduce this behaviour, I used the following script for recording short videos.

As you can see, I store the frames in an array first (in RAM); then, once all frames are safely captured, I write them to disk. I do this to avoid the hard-disk write-speed bottleneck.

I apologise if the code may not be optimal, but I hope the concept is clear.

from harvesters.core import Harvester
import cv2
import os
import sys
import numpy as np

WIDTH = 1920  # Image buffer width
HEIGHT = 1200  # Image buffer height
PIXEL_FORMAT = "BayerRG8"  # Camera pixel format
SDK_CTI_PATH = "C:/Program Files/Common Files/Sentech/GenTL/StGenTL.cti"  # Input camera SDK .cti file
CAMERA_MODEL = "STC-MCS241U3V"  # Camera product model

# -------------------------------------------------------------------------------
# Functions
# -------------------------------------------------------------------------------

def init_camera(fps):
    ia = None

    h = Harvester()
    h.add_cti_file(SDK_CTI_PATH)
    h.update_device_info_list()

    try:
        ia = h.create_image_acquirer(model=CAMERA_MODEL)
    except Exception:
        print("[ERROR] Camera with specified model \"" + CAMERA_MODEL + "\" is busy or not connected.")
        sys.exit(1)

    ia.remote_device.node_map.Width.value = WIDTH
    ia.remote_device.node_map.Height.value = HEIGHT
    ia.remote_device.node_map.PixelFormat.value = PIXEL_FORMAT
    ia.remote_device.node_map.AcquisitionFrameRate.value = fps

    ia.start_image_acquisition()

    return h, ia

def shutdown_camera(image_acquirer, harvester):
    image_acquirer.stop_image_acquisition()
    image_acquirer.destroy()
    harvester.reset()

# -------------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------------

def run():
    file_name = input("\n[INPUT] Enter video name (without extension): ")
    n_frames = int(input("\n[INPUT] Enter how many frames to record: "))
    fps = int(input("\n[INPUT] Enter how many FPS: "))

    # Preallocate an array that will temporarily store frames
    frames = np.zeros([n_frames, 1200, 1920, 3], dtype=np.uint8)

    print("\n[INFO] Connecting to camera, please wait...\n")
    h, ia = init_camera(fps)

    # Store frames in RAM
    for i in range(n_frames):
        with ia.fetch_buffer() as buffer:
            np.copyto(frames[i], cv2.cvtColor(
                buffer.payload.components[0].data.reshape(buffer.payload.components[0].height,
                                                          buffer.payload.components[0].width), cv2.COLOR_BAYER_RG2RGB))

    out = cv2.VideoWriter(file_name + ".avi", cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), fps, (WIDTH, HEIGHT))

    # After all frames have been grabbed safely, write output video
    for i in range(len(frames)):
        out.write(frames[i])

    out.release()  # finalise the video file before reporting success
    path = os.getcwd() + "\\" + file_name + ".avi"

    print("\n\n[INFO] Video written successfully!\n")
    print("[INFO] Path: " + path)

    shutdown_camera(ia, h)

    sys.exit(0)


if __name__ == "__main__":
    run()

Note that the extent of the problem varies if the operations inside the fetch loop are changed. This may sound obvious when more intensive operations are added (since the processing time gets longer), but the curious thing is that the problem also gets worse when some operations are removed (like the cv2.cvtColor demosaicing). I am unable to explain this.

Expected behaviour
I expect all frames to be recorded correctly at a constant frame rate, without frame loss even at maximum fps (163 fps).

Screenshots
-- No screenshots provided --

Desktop (please complete the following information):

  • OS: Windows 10, 64 bit
  • Python: Intel Distribution for Python 3.7.4 and standard Python 3.8 (I tried both)
  • Harvester: 1.0.2
  • GenTL Producer: Omron SENTECH (I used the official Omron SENTECH SDK and the provided official .cti file)
  • Camera: STC-MCS241U3V

Additional context
I decided to ask for your help since I have tried to figure out this problem on my own for months, without success. I wonder whether the problem may be caused by my incorrect use of the Harvester library, or by some bottleneck in Python itself when trying to save data too quickly.
I am somewhat doubtful about the first explanation, since acquisition works well up to a certain threshold (roughly 120 fps).

I hope you can help me figure this out.
Thank you, sincerely, for your patience in reading this.

@kazunarikudo (Member)

@bobcorn Hi, Marco. Thank you for your kind words. It means a lot to me. Concerning the issue you're facing, could you tell me how you get the timestamp? Is it coming from the fetched buffer? If so, I can tell you that the timestamp is stamped by the GenTL Producer when it acquires an image. If an interval has a longer value, it implies there was a drop or some other technical issue before the image became ready to be fetched by Harvester. Such a drop can be caused by an issue on the transport layer in general; some are caused by the cable or the USB3 host controller IC. As a quick diagnosis, could you try a GenTL Producer from MATRIX VISION instead of Sentech's? You can download a Windows installer from their website. You should be able to find their CTI file in the installation directory. I do not mean that Harvester is not causing the issue; I just would like to know what's happening. If MV's GenTL Producer resolves the issue, there might be something wrong in the consumer-producer relationship with Sentech's. Please feel free to update me when you can; I will check what I can as much as possible so that I can suggest something to test on your side. /Kazunari
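
A minimal sketch of pointing Harvester at a different Producer's CTI file (the mvGenTLProducer.cti path below is illustrative only; the actual file name and location depend on where mvIMPACT Acquire is installed):

from harvesters.core import Harvester

h = Harvester()
# Illustrative path: locate mvGenTLProducer.cti inside your mvIMPACT Acquire installation
h.add_cti_file("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGenTLProducer.cti")
h.update_device_info_list()
print(h.device_info_list)  # the camera should be listed here once MV's U3V driver is installed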

@kazunarikudo (Member)

Note that you will have to install MV's U3V driver for the target camera so that MV's GenTL Producer can handle it. The installation can be done in the Device Manager. (I could be wrong, but MV installs a utility GUI app which helps the user install the U3V driver.)

@kazunarikudo (Member)

@bobcorn One more thing, as a second trial: if you are using Harvester version 1.0, I would like to suggest you try version 1.1.0. If the issue is caused by Harvester's performance, the improvement made for ticket #120 could resolve it. Summarizing the suggestions: could you (1) try MV's GenTL Producer, and (2) try Harvester version 1.1.0, please?

bobcorn (Author) commented Mar 26, 2020

@kazunarikudo Thank you so much for your detailed replies. I took some time to carefully put your suggestions into practice, in order to reply with as much information as possible. Unfortunately, the problem persists, but I am now able to provide further details.

About the timestamp
Concerning the timestamp, I confirm that I get it from the fetched buffer, using "buffer.timestamp_ns" inside the fetching loop, like this:

for i in range(n_frames):
    with ia.fetch_buffer() as buffer:
        np.copyto(frames[i], cv2.cvtColor(
            buffer.payload.components[0].data.reshape(buffer.payload.components[0].height,
                                                      buffer.payload.components[0].width), cv2.COLOR_BAYER_RG2RGB))

        timestamp = buffer.timestamp_ns

I store every timestamp retrieved this way in an array, and then simply compute the difference between each timestamp and the previous one (converting to seconds). This is how I compute and observe the inconsistent time deltas between frames.
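
A minimal sketch of this post-processing, assuming the timestamps were collected into a list named timestamps_ns (a name used here purely for illustration):

import numpy as np

# timestamps_ns: the buffer.timestamp_ns values collected in the fetch loop
timestamps = np.asarray(timestamps_ns, dtype=np.float64) / 1e9  # nanoseconds -> seconds
deltas = np.diff(timestamps)                                    # elapsed time between consecutive frames

expected = 1.0 / 163.0                # ideal delta at 163 fps
suspicious = deltas > 1.5 * expected  # intervals noticeably longer than the ideal spacing
print(f"{suspicious.sum()} of {len(deltas)} intervals look like dropped frames")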

About the GenTL Producer
Concerning your suggestions, I tried the MATRIX VISION GenTL Producer and, after many tests, I did not experience any improvement or worsening in performance. On average, the number of dropped frames is the same, so both GenTL Producers seem to work and perform the same way.

About Harvester 1.1.0
Unfortunately, the same is not true for the latest version of Harvester, 1.1.0. I was previously using version 1.0.2, with which I experienced the problems described in the first post. After upgrading to 1.1.0, I experienced a severe worsening in performance. I then downgraded to 1.0.5, which I can confirm performs as well as 1.0.2.

To give an idea of the performance difference while trying to record 1000 frames at 163 fps, this is an average of some tests I made:

  • With Harvester 1.0.5 using the latest Intel Python Distribution (3.7):
    frames dropped: 5-10
    frames grabbed correctly: 990-995

  • With Harvester 1.0.5 using standard Python 3.7:
    frames dropped: 50
    frames grabbed correctly: 950

  • With Harvester 1.1.0 using the latest Intel Python Distribution (3.7):
    frames dropped: 100-110
    frames grabbed correctly: 890-900

  • With Harvester 1.1.0 using standard Python 3.7:
    frames dropped: 200
    frames grabbed correctly: 800

The Intel Python Distribution performs better than the standard one (as far as I know, it provides optimisations that exploit the Intel processor), while Harvester 1.1.0 causes a large drop in overall performance. Version 1.1.0 also causes frame loss even at lower frame rates (20-40 fps), while 1.0.5 doesn't.

About the transport layer
In order to make sure that the hardware (cable, port, the camera itself, etc.) wasn't the cause of the problem, I observed the performance of the official acquisition software provided by Omron, "StViewer". This software captures frames through a GUI, relying on C++ code under the hood. Using it, I observed that it was able to capture more than 10000 frames without losing any of them, maintaining a constant maximum frame rate (163 fps):

[Screenshot: StViewer acquiring at a constant 163 fps with no dropped frames]

This test led me to think that the hardware performs as expected and should not be the bottleneck causing the issue.

Some reasoning
Putting it all together, the information I was able to gather led me to make some reflections:

  • The GenTL Producer does not seem to be the cause of the problem (both the Omron SENTECH and MATRIX VISION Producers seem to perform the same way).
  • The hardware involved does not seem to be the cause/bottleneck of the problem, since the expected performance can be achieved with it (albeit using software that relies on C++ code).
  • Version 1.1.0 of Harvester doesn't solve the problem, but makes it worse. This, although obviously negative, suggests that the library code plays some role in all of this: in some way, the changes from 1.0.5 to 1.1.0 affected acquisition performance.
  • The speed-up brought by the Intel Python Distribution (which exploits some kind of parallelism under the hood) seems to play a positive role for this task.

As a final remark, I want to emphasize that these are only personal assumptions and may therefore be wrong, and I would really appreciate hearing your opinion about them. Harvester may be the cause of the issue, or it may be fine and the cause may be that I am using it incorrectly. If you feel I made some mistakes in acquiring frames, or you have some script that could help test the acquisition frame rate, I will gladly do my best to investigate this issue further.

I remain at your complete disposal. Thank you again for your time and patience, sincerely.

@kazunarikudo added the "bug (Something isn't working)" and "improvement required" labels and removed the "bug (Something isn't working)" label on Mar 26, 2020
@kazunarikudo (Member)

@bobcorn Hi, thank you so much for taking the time to provide such valuable information. From your description above, I also suspect that something is wrong with Harvester, as you do. With that in mind, I reviewed the design and noticed that Harvester has an image acquisition layer that seems redundant and perhaps drags down its performance; I had overlooked a feature defined by the GenTL SFNC and had implemented a similar feature myself. Anyway, I am not sure yet, but I am very curious to see what happens with an optimized design. However, as you may know, Harvester is my hobby project and I cannot work on it until I finish my regular job. I will try to have a prototype ready and hope I can resume the discussion next week. Would that be okay for you? Sorry for keeping you in this inconvenient situation. /Kazunari

bobcorn (Author) commented Mar 26, 2020

@kazunarikudo Thank you so much for taking the time to review my analysis.

I realize that my use case may have revealed a problem that normally goes unnoticed in most scenarios, since my object tracking project requires that no frames are lost. If that's the case, however, I believe this could be a great opportunity to further improve this amazing project (for which I am sincerely grateful to you), making it applicable in all scenarios.

I know that Python, as a high-level language, offers lower performance than low-level languages such as C++ (which is used by Omron's official software). I don't know whether this matters, but if it were possible to manage acquisition just as efficiently with an entirely Python-based solution, that would be really amazing.

To be honest, I'm also unsure how I could handle acquisition differently without considering porting my software to another language.

Of course, I will remain available for anything you need, and stay tuned for any update.
Thanks again for your time and patience in reviewing this.

@kazunarikudo (Member)

@bobcorn Hi. I have just created a development branch named _issue_130 and committed a changeset as the first prototype. The major change is that you can now directly fetch a buffer from the associated GenTL Producer by calling the fetch_buffer method. In the previous implementation, an ImageAcquirer object was fetching buffers in the background, and that is at least one place that could introduce latency. To be honest, I am not sure whether it will improve performance, but it is the most straightforward way to bridge you and the GenTL Producer. I would highly appreciate it if you could give me your feedback when you have a chance to try that prototype. I will try to improve Harvester further if needed. /Kazunari

@kazunarikudo (Member)

One more thing: with that version, you can use short names for some methods. See #127 for details. You can still use the original names, but I will deprecate them in the near future.

@kazunarikudo (Member)

It's a truly annoying and depressing mood due to COVID-19 these days, but I hope we can keep our brains active to keep our minds healthy, so that we can make our world/lives beautiful again as they were before.

bobcorn (Author) commented Mar 28, 2020

Hi @kazunarikudo. I completely agree about the depressing mood these days, but I also agree we should do our best to stay active. Thank you for your quick reply and your attempt to fix this issue.

Again, I took some time to carefully put your update into practice and test acquisitions, in order to reply with as detailed information as possible. Unfortunately, the problem does not seem to be solved, at least in my tests. I will report my results below and describe them, trying to extract useful information.

My tests
The results below were obtained by running my recording script, which records N frames (specified by the user) at X fps (specified by the user). The script first stores all captured frames in an array, and only afterwards writes them to disk, to avoid the hard-disk write-speed bottleneck.

Frames and timestamps are retrieved using the following loop:

frames = np.zeros([n_frames, 1200, 1920], dtype=np.uint8)

for i in range(n_frames):
	with ia.fetch_buffer() as buffer:
		np.copyto(frames[i], buffer.payload.components[0].data.reshape(
			buffer.payload.components[0].height,
			buffer.payload.components[0].width))

		timestamp = buffer.timestamp_ns

Timestamps are also stored into an array, and then post-processed in order to compute inconsistent deltas and the actual average recording speed (in fps).

For the new prototype, besides the new short names (which I appreciated a lot), I changed the following line by adding the argument "enable_callback=True", as I thought it was necessary:

ia.start_image_acquisition(enable_callback=True)

The following tests captured 200 frames at 60 fps (an additional 200 frames were initially discarded to warm up the camera). For brevity, I only pasted 10 of the inconsistent deltas.

  • Test using standard Python 3.7 with the latest prototype:

[INFO] Video captured successfully! (actual FPS: 8.55 instead of 60 FPS)
[INFO] The following 173 intervals out of 200 do not match the goal
[INFO] At 60 FPS delta should be: 0.016667 s
...
0.416728 (+/- 0.400061 s)
0.133353 (+/- 0.116686 s)
0.033338 (+/- 0.016672 s)
0.083346 (+/- 0.066679 s)
0.233368 (+/- 0.216701 s)
0.333382 (+/- 0.316716 s)
0.050007 (+/- 0.033341 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
...

  • Test using standard Python 3.7 with Harvester version 1.0.5:

[INFO] Video captured successfully! (actual FPS: 32.71 instead of 60 FPS)
[INFO] The following 166 intervals out of 200 do not match the goal
[INFO] At 60 FPS delta should be: 0.016667 s
...
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
...

  • Test using Intel Python Distribution (3.7) with the latest prototype:

[INFO] Video captured successfully! (actual FPS: 29.70 instead of 60 FPS)
[INFO] The following 113 intervals out of 200 do not match the goal
[INFO] At 60 FPS delta should be: 0.016667 s
...
0.066676 (+/- 0.050010 s)
0.183360 (+/- 0.166694 s)
0.066676 (+/- 0.050010 s)
0.083346 (+/- 0.066679 s)
0.066676 (+/- 0.050010 s)
0.150022 (+/- 0.133355 s)
0.166691 (+/- 0.150025 s)
0.033338 (+/- 0.016672 s)
0.050007 (+/- 0.033341 s)
0.233368 (+/- 0.216701 s)
...

  • Test using Intel Python Distribution (3.7) with Harvester version 1.0.5:

[INFO] Video captured successfully! (actual FPS: 32.09 instead of 60 FPS)
[INFO] The following 173 intervals out of 200 do not match the goal
[INFO] At 60 FPS delta should be: 0.016667 s
...
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
0.033338 (+/- 0.016672 s)
...

  • Bonus test:

It's really curious how, by replacing the previously described acquisition loop with the following one:

frames = np.zeros([n_frames, 1200, 1920, 3], dtype=np.uint8)

for i in range(n_frames):
	with ia.fetch_buffer() as buffer:
		np.copyto(frames[i], cv2.cvtColor(
			buffer.payload.components[0].data.reshape(
				buffer.payload.components[0].height,
				buffer.payload.components[0].width),
			cv2.COLOR_BAYER_RG2RGB))

		timestamp = buffer.timestamp_ns

which introduces image demosaicing (conversion from Bayer to RGB) and thus adds overhead, performance paradoxically improves greatly with Harvester version 1.0.5, giving almost flawless acquisition at low FPS (only a few frames are lost now and then). I still haven't been able to explain this.

This, however, is not true for versions higher than 1.0.5 and the latest prototype, for which this change greatly worsens performance.

My observations
From what I've seen, unfortunately the issue persists, and it no longer seems to depend only on high fps but occurs at all frame rates. I previously thought the issue occurred only beyond a certain FPS threshold, but apparently I was focusing too much on the second (demosaicing) loop, which was masking the problem at lower frame rates.

From these tests, it seems that Harvester versions higher than 1.0.5 cause the deltas to become uneven. By this I mean that with versions <= 1.0.5 all inconsistent deltas were equal, and always double the correct one (meaning only one frame was lost each time). With higher versions they become completely different from each other, with very unusual durations (like 0.416728 s, which is 25 times the correct delta). Such long deltas make it look like something is getting stuck somewhere under the hood.

Of course, if you think I made some mistakes in performing these tests, don't hesitate to let me know. Also, if you feel the tests should be performed differently, or you have some code I should use, let me know. I will do my best to help solve this issue, which is really important for my project.

Thank you again for your time.

kazunarikudo (Member) commented Mar 28, 2020

@bobcorn Hi, thank you for the update. Excuse me, I have not fully read your report yet, but I have one thing to let you know: you should not pass enable_callback=True to the start_acquisition method. If you pass that, the behavior is the same as the original one. Could you try with just start_acquisition(), i.e., start_acquisition(enable_callback=False), please? (There is no need to spell it out that long; just call start_acquisition().)

@kazunarikudo (Member)

Sorry for not having mentioned the expected usage to you in advance. Please excuse me!

kazunarikudo (Member) commented Mar 28, 2020

Another thing I can tell you: I introduced the Python built-in Queue module to moderate CPU usage during the fetch_buffer call (see #120). It resolved that issue, but Queue objects involve a Python thread under the hood, and that could destabilize the deltas because of Python's global interpreter lock (GIL). Before that change, as you've seen with version 1.0.5, Harvester tried to fetch at the fastest possible speed; however, some people did not like it consuming that much CPU. The version you are trying supports not only the moderated acquisition but also a mode that directly asks the target GenTL Producer to fetch an available image. The latter had never been offered before, and I expect it to be faster and to reduce latency because it does not involve any Python thread in the acquisition path.
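
A rough conceptual sketch of the difference (this is not Harvester's actual code, only an illustration of the two acquisition paths; fetch_from_producer is a hypothetical stand-in for the blocking driver call):

import queue
import threading
import time

def fetch_from_producer():
    """Stand-in for the blocking driver call that returns the next frame (hypothetical)."""
    time.sleep(1 / 60)  # pretend the camera delivers at 60 fps
    return object()

# Path A (roughly what the Queue-based design does): a background thread fetches frames
# and hands them over through a Queue. The hand-off crosses the GIL and the thread
# scheduler, which can add jitter to the observed deltas.
frames_q = queue.Queue()
stop = threading.Event()

def pump():
    while not stop.is_set():
        frames_q.put(fetch_from_producer())

threading.Thread(target=pump, daemon=True).start()
frame = frames_q.get()  # the user thread waits for the hand-off

# Path B (the idea behind the new mode): the user thread blocks directly on the
# producer, with no intermediate Python thread in the acquisition path.
frame = fetch_from_producer()

stop.set()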

@kazunarikudo (Member)

Please feel free to give me an update anytime. I can work even on Sunday to try to resolve this issue.

bobcorn (Author) commented Mar 28, 2020

Don't worry @kazunarikudo, my bad for taking the initiative to modify the method call. Actually, I did it because, after upgrading to the latest prototype version, acquisition wasn't working, so I thought I was calling something the wrong way.

After some debugging, I found out that, when executing the recording script as you suggested, the program gets stuck when calling the "fetch_buffer()" method:

print("Before fetching")
buffer = ia.fetch_buffer()
print("After fetching")

In this example, "After fetching" never gets printed (I used the version without the "with" statement, with manual buffer re-queueing, to better highlight the point; a sketch of that variant is below).
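
For clarity, the manual variant looks roughly like this (if I read the API correctly, the buffer is handed back with buffer.queue() when the "with" statement is not used):

print("Before fetching")
buffer = ia.fetch_buffer()   # in my case execution never returns from this call
print("After fetching")      # never reached

# ... access buffer.payload.components[0] here ...

buffer.queue()               # re-queue the buffer for the acquisition engine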

I also tried with the MATRIX VISION .cti, without success.

Could it be the infinite loop mentioned in issue #120?

If I'm doing something wrong, don't hesitate to let me know.

Thank you so much.

kazunarikudo (Member) commented Mar 28, 2020

Could you pass timeout=0.003 to fetch_buffer and tell me what happens? If it does not change anything, can we chat over Skype? You should be able to find me as kazunari.kudo.

bobcorn (Author) commented Mar 28, 2020

Of course we can; I will write to you shortly if that's fine with you.

Meanwhile, I tried your suggestion: by passing timeout=0.003 I get a "TimeoutException".

@kazunarikudo (Member)

I get a "TimeoutException"

Could you tell me which line was raising that?

bobcorn (Author) commented Mar 28, 2020

Of course, the traceback is:

Traceback (most recent call last):
  File "issue.py", line 174, in <module>
    run()
  File "issue.py", line 84, in run
    buffer = ia.fetch_buffer(timeout=0.003)
  File "C:\Users\rossinim\AppData\Roaming\Python\Python37\site-packages\harvesters-_issue_130-py3.7.egg\harvesters\core.py", line 2178, in fetch_buffer
    raise TimeoutException
_gentl.TimeoutException

So it was raised by line:

buffer = ia.fetch_buffer(timeout=0.003)

and by line 2178 in "core.py".

@kazunarikudo (Member)

Okay, so could you change the value to timeout=1? This is just for testing, to intentionally avoid the TimeoutException being raised.

bobcorn (Author) commented Mar 28, 2020

Using timeout=1, I still get the same exception at the same lines. If I increase the timeout (to 100, for example), I get stuck instead, and at the end of the timeout the exception is raised again.

@kazunarikudo (Member)

Wow, let me think...

@kazunarikudo (Member)

So could you set ia.timeout_for_image_acquisition = 1000 # ms before starting image acquisition?

@kazunarikudo (Member)

But I feel it's a bit strange: we called fetch_buffer(timeout=1) and expected to acquire at least one image within 1 second, but couldn't.

bobcorn (Author) commented Mar 28, 2020

After setting ia.timeout_for_image_acquisition = 1000  # ms, an error is raised. The traceback is:

Traceback (most recent call last):
  File "issue.py", line 175, in <module>
    run()
  File "issue.py", line 77, in run
    h, ia = init_camera(fps)
  File "issue.py", line 52, in init_camera
    ia.timeout_for_image_acquisition = 1000 # ms
  File "C:\Users\rossinim\AppData\Roaming\Python\Python37\site-packages\harvesters-_issue_130-py3.7.egg\harvesters\core.py", line 1790, in timeout_for_image_acquisition
    with self.thread_image_acquisition:
AttributeError: __enter__

@kazunarikudo (Member)

Could you edit core.py yourself? I would like you to replace the code block that begins at line 1788 with the following:

    @timeout_for_image_acquisition.setter
    def timeout_for_image_acquisition(self, ms):
        self._timeout_for_image_acquisition = ms

bobcorn (Author) commented Mar 28, 2020

Sorry, maybe I didn't understand 100% what you would like me to change. Checking "core.py", I already have a setter that begins at line 1788; however, when I try to use it, an error is raised. Do you want me to edit the setter?

bobcorn (Author) commented Mar 28, 2020

By calling fetch_buffer(), I get stuck:

print("Before fetching")
buffer = ia.fetch_buffer() <--- stuck here
print("After fetching") <--- never reached

I waited a couple of minutes, but I feel like it will never reach the next instruction.

kazunarikudo (Member) commented Mar 28, 2020

I guess it's stuck because no image is delivered. Let me see... I do not understand why no image is delivered. As far as I tested with a simulator, it delivers images and does not get stuck.

kazunarikudo (Member) commented Mar 28, 2020

Well, have you called ia.start_acquisition() before entering the three lines above? I think you have, but this is just to make sure.

bobcorn (Author) commented Mar 28, 2020

Yes, I called it right after ia.timeout_for_image_acquisition = 1000.

I was wondering what this problem could be related to, for example some sort of incompatibility. However, I would tend to exclude that, because there have never been compatibility problems with previous versions of Harvester. I am led to think that some change in the prototype is causing the image not to be delivered.

@kazunarikudo (Member)

Agreed, that could happen, and I do not have any evidence that it works for your setup.

bobcorn (Author) commented Mar 28, 2020

I wonder if the removal of the redundant layer, which you described as bringing the acquisition closer to the manufacturer, could cause a loss of generality with respect to connecting to all kinds of cameras.

If you feel there is anything I can do to help you troubleshoot, don't hesitate to let me know.

@kazunarikudo (Member)

Thank you for your kind offer. I will ask you when needed...

bobcorn (Author) commented Mar 28, 2020

Thank you; I will stay tuned for any update.

@kazunarikudo (Member)

Excuse me, do you have another camera to try?

@kazunarikudo (Member)

Another question: could you run your script in debug mode from PyCharm so that we can see whether the Producer is really not delivering any image?

@kazunarikudo (Member)

One more question: could you show me the whole script, please, so that I can check each line?

@kazunarikudo (Member)

Wait, wait, wait. I have just reproduced the phenomenon. It's stuck. Let me check what's happening.

@kazunarikudo (Member)

I will try to give you an update by the time you start to work tomorrow. It's about 1:00 a.m. here so I need to go to bed. Please excuse me!

bobcorn (Author) commented Mar 28, 2020

Sorry for the slight delay in replying, but I was trying to provide as much information as possible (doing as many checks as possible).

Answering your questions: unfortunately this is the only camera I have for testing, since a camera of this kind was bought specifically for my project (if it helps, the camera model is Omron STC-MCS241U3V).

Concerning the script, of course I can provide it to you. I will attach it here: issue.txt. Please excuse the ugly code (especially the timestamp post-processing), but I hope the main logic of interest is clear and correct.

Concerning PyCharm, I stepped into the fetch_buffer() method when it gets stuck, and I can confirm that the program loops forever in the while _buffer is None: loop at line 2155. For some reason the condition is never satisfied.

Concerning the latest updates, I'm really glad you managed to reproduce the phenomenon. I hope this rules out a problem specific to my camera, and that it will be of great help in understanding what is happening.

Obviously, don't worry about the time at all. You are doing a lot to help me solve the problem, and I am sincerely grateful. If you have time to post an update tomorrow, that will be fantastic.

As always, I'll stay available and tuned.
Thank you, sincerely.

@kazunarikudo (Member)

@bobcorn Hi, I have just pushed a change at 87ae986. I'm not sure whether it works for you, but I could confirm it fixes the phenomenon that I reproduced myself.

bobcorn (Author) commented Mar 29, 2020

@kazunarikudo Great news! I did some testing with the new version and, from my findings so far, it seems to have completely solved the problem! The timestamps now seem fully precise, as they should be, so things look very promising.

However, since it's better to be safe than sorry, I would prefer to wait until tomorrow to give you the final confirmation, after performing some further tests to dispel any doubts. I will post an update here tomorrow (around 11/12 a.m. for me, which should be 7/8 p.m. for you).

For the moment, thank you really much for your help, sincerely.

kazunarikudo (Member) commented Mar 29, 2020

@bobcorn Hi. I have made another change at f2eb564. I don't think it affects your usage, so please keep testing that version with your applications and feel free to let me know your findings. On the other hand, even though that version may work for you, I have confirmed that Harvester GUI works with it but unfortunately crashes in one particular case. I will keep debugging the issue and will let you know if I need to make another change that would break your working copy. Thank you for your cooperation.

bobcorn (Author) commented Mar 29, 2020

@kazunarikudo Hi. I tested the latest change you made at f2eb564, and I can confirm (at least from the tests performed so far) that it works like a charm. Of course, I will continue to test it, since I need to use it for my project, so I will be able to tell you if any problems come up.

From my empirical point of view, you can consider this change safe to deploy, assuming you believe the changes are correct from a logical point of view (which I unfortunately cannot verify, as I don't know the source code).

I am very sorry that some problems emerged with Harvester GUI. I really hope it will be possible to fix them, since I think this improvement would be hugely important for every Harvester user. I realize that not losing any frame is crucial for my project and that many other users may not even notice the issue, but I think that respecting a constant required frame rate is fundamental from a correctness point of view (and being efficient is always preferable).

In the really unfortunate case that this cannot be fixed for Harvester GUI, do you think it would be a good idea to at least deploy it to Harvester, and keep the older, non-optimized version for Harvester GUI?

Having said that, I really want to thank you deeply for your quick and amazing help. In no time, you understood the problem and managed to solve it, saving my project. I truly believe you are a great professional. Thank you very, very much.

kazunarikudo (Member) commented Mar 29, 2020

@bobcorn

Hi Marco,

Thank you for your feedback. I'm glad to hear that it worked for you. Concerning the GUI issue I'm facing, as far as I have dug around so far, the issue seems to come from misuse of the GUI framework; I guess it must have been working by luck. I have not found the right solution yet, but I don't think it will require drastic changes to Harvester Core itself. Even if it did require a change on the Harvester Core side, it should be manageable with a few modifications.

Anyway, I'm planning to release the version you've tested sooner or later, so please do not worry. The Harvester GUI package on PyPI has pinned Harvester versions, so even if I release the version that fixes issue #130, it will not be downloaded by the pip command; people will still be able to use the good old Harvester Core.

Since I have not found the right timing to release the fixed version, could you keep using the one from the development branch for a while, please? I will let you know once I have uploaded it to PyPI.

Thank you for giving me a chance to improve Harvester. I do not know how many years this project will run, but I hope you and other people in the machine vision industry enjoy your work with Harvester.

PS. Your "thank you" should go to my one-year daughter because if she did not allow me to work this weekend I'd never made it to the end. :-)

/Kazunari.

bobcorn (Author) commented Mar 29, 2020

Hi @kazunarikudo.

Concerning the GUI, I hope you will be able to find the most suitable solution to the problem. Thank you for clarifying how development is managed between Harvester and Harvester GUI.

Concerning the upcoming release, of course I will use the one you provided in the development branch in the meantime. I will be extremely happy to switch to the official version when you are able to release it for everyone. I'll stay tuned for the update.

I'm very happy I could contribute, even to a small extent, to making Harvester even better. I really hope this project will be supported for many years to come.

Last but not least, I sincerely thank your daughter for giving you the chance to fix this issue. I hope I haven't taken away too much precious time from your family life, which is always the best time.

Thank you so much.
Marco

@kazunarikudo added this to the 1.2.0 milestone on Apr 1, 2020
@12Patrick

Hi @bobcorn and @kazunarikudo,
Thank you both for your efforts on this issue, because I ran into exactly the same problem. It also happened sporadically. Now, after several runs, there are no errors at all. Tested with f2eb564, with an Allied Vision Mako G-223C PoE camera and the GenTL Producers from Allied Vision and MATRIX VISION.

Special thanks again to you, @kazunarikudo. The module really helps a lot in building a machine vision system. Thanks for the time you invest in it!

@kazunarikudo added the "improvement (New feature or improvement)" label and removed the "improvement required" label on May 5, 2020
@Shorlaks

@kazunarikudo and @bobcorn, you guys are amazing.
I've had the same problem and have been trying to figure it out for about 4-5 months.
We have custom-made cameras and drivers built by our programming team, and I've been fighting with them so much about this, blaming them for poorly implementing the protocol.
Thank you very much.

@kazunarikudo (Member)

@Shorlaks Hi. If you are happy then I'm happy, too! Enjoy Harvester! /Kazunari
