
Displaying Video using OpenCV and Harvesters #234

Closed
jmdelahanty opened this issue Apr 16, 2021 · 17 comments

@jmdelahanty

Hello again fellow Harvesters!

I'm pretty sure I'm successfully grabbing frames with my camera now, but I'm stuck trying to display them in matplotlib.

I've gotten as far as this in the example in the readme:

ia = h.create_image_acquirer(0)

ia.start_acquisition()

buffer = ia.fetch_buffer()
buffer.queue()

payload = buffer.payload
component = payload.components[0]
width = component.width
height = component.height
data_format = component.data_format

# Reshape the image so that it can be drawn on the VisPy canvas:
if data_format in mono_location_formats:
    content = component.data.reshape(height, width)

else:
    print("Check camera type?")

x = input("Type s to stop me...")

if x == 's':
    ia.stop_acquisition()
    ia.destroy()
    h.reset()

But now that I have the content shaped correctly, I don't know how to display it so I can make sure harvesters is grabbing images.

Any advice?
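For anyone landing here with the same question: a minimal sketch of displaying a single Mono8 frame with matplotlib, assuming `content` is the reshaped 2D uint8 array from the code above (a random array stands in for it here):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch; drop this line to get a window
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical stand-in for the reshaped Mono8 frame (`content`) above
content = np.random.randint(0, 256, size=(1024, 1280), dtype=np.uint8)

# cmap="gray" with explicit limits keeps Mono8 data from being false-colored
plt.imshow(content, cmap="gray", vmin=0, vmax=255)
plt.title("Captured frame")
plt.show()
```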

@jmdelahanty
Author

Hello @kazunarikudo!

So I've been trying to get through what's wrong with my code and I've discovered that the component value returned by payload.components[0] is giving me a value of the following:

0 x 0, Mono8, 1310720 elements,
[0 0 0 ... 0 0 0]

Traceback (most recent call last):
  File "harvester_test.py", line 40, in <module>
    content = component.data.reshape(height, width)
ValueError: cannot reshape array of size 1310720 into shape (0,0)

In other words, I'm somehow getting values of 0 x 0 for the camera's height and width. The camera has a resolution of 1280 x 1024 according to the Sapera LT app I have. It seems like I'm definitely missing something for getting the correct information out of the camera's settings. I've tried looking at different issues and at the example, but I haven't been successful yet. Any advice?

@jmdelahanty
Author

I've gotten past this error! It turns out this needs to be done in the correct order. The components have the correct values after actually acquiring frames!

ia = h.create_image_acquirer(0)

ia.start_acquisition()

buffer = ia.fetch_buffer()
buffer.queue()

x = input("Type s to stop me...")

if x == 's':
    ia.stop_acquisition()

payload = buffer.payload
component = payload.components[0]
print(component)
width = component.width
height = component.height
data_format = component.data_format
print("done")
if data_format in mono_location_formats:
    content = component.data.reshape(height, width)
else:
    print("Check camera type?")
ia.destroy()
h.reset()

This gives me the correct height and width! Now just to display the frames that have been gathered... still need some advice there.

@jmdelahanty
Author

I've had some success showing an image! Here's how I got there:

# Harvester Routine

from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt


h = Harvester()

cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"

h.add_file(cti_file)


h.update()

print(h.device_info_list)

ia = h.create_image_acquirer(0)

ia.start_acquisition()

buffer = ia.fetch_buffer()
buffer.queue()

x = input("Type s to stop me...")

if x == 's':
    ia.stop_acquisition()

payload = buffer.payload
component = payload.components[0]
print(component)
width = component.width
height = component.height
data_format = component.data_format
print("done")
if data_format in mono_location_formats:
    content = component.data.reshape(height, width)

else:
    print("Check camera type?")

plt.imshow(content)
plt.show()


y = input("Type s to move forward...")

if y == 's':
    ia.destroy()
    h.reset()

print("Exiting...")
exit

My next step is to show how I can display all the frames that are taken during a video acquisition...

@jmdelahanty
Author

I'm discovering that the buffer object is only acquiring one image, I think. Any tips for making sure that what I'm recording is actually being saved over time? Or how to check whether I have multiple frames available in the data?

@jmdelahanty
Author

After looking through the different closed issues, I discovered that trying out a while True statement, or something similar, might be the solution to grabbing many frames. The camera appears to start trying to record, but doesn't seem to take multiple frames still. The light on the camera that normally flashes each time a frame is taken now just remains on when the program is running. Some progress, but not quite there yet.

@jmdelahanty
Author

Some success! It turns out I had to reset the camera, it had gotten stuck because of my program. Just replugging it in let me try again.

Using a while condition allowed me to do this for testing. #168 might be relevant. I discovered that if you don't queue the buffer and then wait for a few seconds, the height and width of the components are 0x0. Waiting a little bit brings in the correct values.

Here's what I've got so far that lets me grab many frames:

# Harvester Routine

from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt
import cv2
import os

img_array = [None]*500
x = 0

h = Harvester()
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
h.add_file(cti_file)
h.update()

print(h.device_info_list)

ia = h.create_image_acquirer(0)
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()

time.sleep(5)
while x < 500:
    payload = buffer.payload
    component = payload.components[0]
    print(component)
    width = component.width
    # width = 1280
    height = component.height
    # height = 1024
    data_format = component.data_format
    content = component.data.reshape(height, width)

    img_array[x] = content
    x += 1

type(img_array)
ia.stop_acquisition()
ia.destroy()
h.reset()

plt.imshow(img_array[-1])
plt.show()

print("Exiting...")
exit

@jmdelahanty
Author

The thing I'm currently trying to accomplish is relevant to #117 now that I can get multiple frames off the camera. I'm trying to only acquire video when a microscope sends a TTL pulse to the camera. I've confirmed that the TTL pulses are being sent, but I can't seem to get the camera to recognize them (in Harvesters or Sapera LT). I'm wondering if it's because I'm not able to set a particular parameter correctly in the remote_device node map. When I have the TriggerMode set to "On", the camera just hangs. Here's what I have so far:

from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt
import cv2
import os

img_array = [None]*20
x = 0

h = Harvester()
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
h.add_file(cti_file)
h.update()

print(h.device_info_list)

ia = h.create_image_acquirer(0)
n = ia.remote_device.node_map
n.TriggerSelector.value = "SingleFrameTrigger" # <--- set this value?
n.TriggerMode.value = "On"
n.TriggerActivation.value = "RisingEdge"
n.TriggerSource.value = "Line2"
n.LineSelector.value = "Line2"
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()

time.sleep(5)
while x <= 20:
    payload = buffer.payload
    component = payload.components[0]
    width = component.width
    # width = 1280
    height = component.height
    # height = 1024
    data_format = component.data_format
    content = component.data.reshape(height, width)

    img_array[x] = content
    x += 1

Sapera LT's CamExpert software shows me the TriggerSelector value and gives me the option of setting up a Single Frame Trigger. Unfortunately, typing this both with and without spaces gives me an error in my Python script. It says this:

Traceback (most recent call last):
  File "Documents\gitrepos\harvesters\harvester_multiframe.py", line 23, in <module>
    n.TriggerSelector.value = "SingleFrameTrigger"
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2484, in <lambda>
    __setattr__ = lambda self, name, value: _swig_setattr(self, IEnumeration, name, value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 101, in _swig_setattr
    return _swig_setattr_nondynamic(self, class_type, name, value, 0)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 93, in _swig_setattr_nondynamic
    object.__setattr__(self, name, value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2531, in _set_value
    self._primal_set_value(value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2522, in _primal_set_value
    return _genapi.IEnumeration__primal_set_value(self, entry, verify)
_genapi.InvalidArgumentException: Feature 'TriggerSelector' : cannot convert value 'SingleFrameTrigger', the value is invalid. : InvalidArgumentException thrown in node 'TriggerSelector' while calling 'TriggerSelector.FromString()' (file 'Enumeration.cpp', line 134)

I'm not even certain this is the problem because, even if I have this set up in the CamExpert software, I still don't get any confirmation that frames are being grabbed. I'm awaiting support from the manufacturer, but hoping for advice before then.

My efforts to get a video recorded after getting frames have also been unsuccessful for the day. I'm trying to use OpenCV to get it done, but I seem to be having codec problems, which I don't think are within the scope of this repo. If anyone here has advice anyway, I'd be happy to learn!

@jmdelahanty
Author

Last update for the day for anyone following along!

I'm still struggling to get the trigger recognized by the camera. Until I hear back from Teledyne, I think I'll have to wait.

I think I have started to solve the issue of recording videos correctly, though! Unfortunately, the only thing that's getting saved seems to be the first image of the video. The rest of the video is the same exact picture (I put my hand in front of the camera and move it around, but the video only shows the first image). Something must be wrong in how I'm ordering things... Here's my code:

# Harvesters GenIcam Routine for use with Bruker2P Setup
# Jeremy Delahanty Apr. 2021
# Harvesters written by Kazunari Kudo https://github.com/genicam/harvesters
# Genie Nano manufactured by Teledyne DALSA


#### Packages ####
# Harvesters for interfacing with Genie Nano
from harvesters.core import Harvester
# Import mono8 location format, our Genie Nano uses mono8 or mono10
from harvesters.util.pfnc import mono_location_formats
# Harvesters offloads images as numpy arrays, import numpy
import numpy as np
# Matplotlib for plotting an example image
import matplotlib.pyplot as plt
# Time.sleep required to allow camera to warm up
import time
# Import OpenCV2 to write images/videos to file
import cv2
# Import OS to change directories and write files to disk
import os

#### Create Variables ####
# Initialize list of None values for total number of frames to be gathered
# This is converted into a numpy array later
img_array = [None]*3000
# Start increment variable at 0
current_frame = 0

#### Setup Harvester ####
# Create harvester object as h
h = Harvester()
# Give path to GENTL producer
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
# Add GENTL producer to Harvester object
h.add_file(cti_file)
# Update Harvester object
h.update()
# Print device list to make sure camera is present
print(h.device_info_list)

#### Grab Camera, Change Settings ####
# Create image_acquirer object for Harvester, grab first (only) device
camera = h.create_image_acquirer(0)
# Gather node map to camera properties
n = camera.remote_device.node_map
# Change camera properties to listen for Bruker TTL triggers
# n.TriggerSelector.value = "SingleFrameTrigger" <-- currently not changeable...
n.TriggerMode.value = "Off"
n.TriggerActivation.value = "RisingEdge"
n.TriggerSource.value = "Line2"
n.LineSelector.value = "Line2"

#### Start Taking Frames ####
# Start the acquisition
print("Starting Acquisition")
camera.start_acquisition()
# Fetch buffer of camera
buffer = camera.fetch_buffer()
# Queue the buffer, cycles through buffer positions, destroys 'buffer' object
buffer.queue()
print("Buffer Queued")
# Tell program to sleep for 5 seconds, allow camera to warm up
print("Sleeping...")
time.sleep(5)
print("Go!")
# Create stop condition using current_frame TODO: While true, when experiment ends stop acquisition
# TODO: Need to get triggers to take an image, still stuck. Awaiting Sam.
while current_frame < 3000:
    # Payload includes camera properties and 1D numpy array of pixel values
    payload = buffer.payload
    # Get height and width components from camera, first value of payload
    component = payload.components[0]
    width = component.width # width = 1280
    height = component.height # height = 1024
    # Define incoming data format
    data_format = component.data_format # Mono8, defined above
    # Reshape data numpy array into correct height and width
    content = component.data.reshape(height, width)
    # Gather framerate for writing video later
    framerate = n.AcquisitionFrameRate.value
    # Replace None value with frame at current frame's position
    img_array[current_frame] = content
    # Increment current frame by 1
    current_frame += 1


#### Stopping Acquisition, Writing Video ####
# Stop camera
camera.stop_acquisition()
# Destroy camera object, frees resource
camera.destroy()
# Reset Harvester object and clear all settings
h.reset()

# Convert image array, currently a list of numpy arrays, into a numpy array
img_array = np.array(img_array)
# Show example image of video to user
# plt.imshow(img_array[-1])
# plt.show()


# Create file name for video TODO: Should be done before acquisition
filename = 'testvid.avi'
# State directory for storing the video TODO: Should be done before acquisition
directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
# Change directory to specified location
os.chdir(directory)

# State which video codec to use
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
# Create openCV video writer object
out = cv2.VideoWriter(filename, fourcc, framerate, (width, height), False)
# Write image array to file
for image in img_array:
    out.write(image)
# Destroy opencv writer
out.release()
# Exit the program
print("Exiting...")
exit

Hoping a fellow Harvester arrives to show me the way. Very exciting to get frames out of the camera, just not getting them all!

@kazunarikudo
Member

@jmdelahanty I have not reviewed the details but let me leave some comments:

  1. Do you mean SingleFrame? If so, the value must be set to the AcquisitionMode node.
  2. If you want to stimulate the camera by an external trigger, then you need to set On to the TriggerMode node. Any device will keep being stimulated by its internal trigger as long as the value is Off.
  3. I guess everything will be okay displaying images in the image acquisition loop. If you want to keep images on your side, then you will need to deep copy images from a buffer that the ImageAcquirer object returns to you. However, note that a deep copy operation drags down performance, so if a copy is not necessary then you should reconsider the implementation; in that case, you will follow a cycle that consists of (1) fetch, (2) display, and (3) queue it back.
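A minimal sketch of the fetch → copy → queue cycle in point 3; the `acquisition_loop` helper and its arguments are illustrative, not from the thread, and `ia` is assumed to be an already-started harvesters ImageAcquirer:

```python
import numpy as np

def acquisition_loop(ia, num_frames, keep_copies=True):
    """Fetch -> (copy/display) -> queue cycle.

    The buffer's memory is recycled once it is queued back, so any
    frame that must outlive the loop iteration needs a deep copy.
    """
    frames = []
    for _ in range(num_frames):
        buffer = ia.fetch_buffer()
        component = buffer.payload.components[0]
        image = component.data.reshape(component.height, component.width)
        if keep_copies:
            frames.append(image.copy())  # deep copy before the buffer is reused
        # ...display `image` here if desired...
        buffer.queue()  # hand the buffer back so acquisition can continue
    return frames
```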

I believe you may get it working by yourself, so I would like to suggest you moderate the posting pace so that you can leave some meaningful information, if you are willing to share it with others. It is great to share information, but there can also be cases where people copy and paste a wrong or irrelevant code block without enough consideration.

@kazunarikudo
Member

@jmdelahanty By the way, thank you for trying out Harvester!

@jmdelahanty
Author

Hey Kazunari, thanks for the reply!

Do you mean SingleFrame? If so, the value must be set to the AcquisitionMode node.

Great! I didn't know which node to set, I thought it was the TriggerSelector value. I can get the camera triggered properly in Sapera's CamExpert, but in Python after the first trigger I'm getting told that the resource is already in use.

If you want to stimulate the camera by an external trigger, then you need to set On to the TriggerMode node. Any device will keep being stimulated by its internal trigger as long as the value is Off.

I forgot to change that to "On" before posting the code here, my bad.

I guess everything will be okay displaying images in the image acquisition loop. If you want to keep images on your side, then you will need to deep copy images from a buffer that the ImageAcquirer object returns to you.

I need to keep the images and write them to disk, so I'll use deep copy to store them until they're written.

However, note that a deep copy operation drags down performance.

Is this overcome by having a faster processor on the computer? We'll be acquiring images from a microscope at the same time, so it's something I'm worried about...

So if a copy is not necessary then you should reconsider the implementation; in that case, you will follow a cycle that consists of (1) fetch, (2) display, and (3) queue it back.

I'll give this a try also just so I can learn how to implement it properly.

By the way, thank you for trying out Harvester!

Thank you for creating Harvester! It's amazing how you can use Python to control these cameras so nicely. I'm having a lot of fun learning how to use it properly.

@kazunarikudo
Member

@jmdelahanty Hi, perhaps your application may not need SingleFrame. Instead, I would recommend checking whether Continuous fits your application. SingleFrame or MultiFrame will require you to start image acquisition again because they stop the acquisition engine on the host side. Concerning the deep copy, if you need to save every image then it will be inevitable, but you should note that the display rate is usually 60 fps, so it does not make sense to show every frame when you can grab some hundreds of fps. In that case, you will be required to run two threads so that you can work on multiple tasks in parallel; one picks up a frame every 1/60 sec. for display and the other saves images. Threading is out of scope here and I highly recommend learning it by yourself; otherwise you will never be able to accomplish any practical application. Good luck!
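The two-thread split described above could be sketched as follows; `fetch_frame` and `save_frame` are hypothetical callables standing in for the camera fetch and the disk write:

```python
import queue
import threading

def acquire_and_save(fetch_frame, save_frame, num_frames):
    """Pull every frame off the camera in this thread and hand each one
    to a saver thread through a bounded queue, so that slow disk writes
    do not stall acquisition."""
    q = queue.Queue(maxsize=256)
    sentinel = object()  # marks the end of the stream

    def saver():
        while True:
            frame = q.get()
            if frame is sentinel:
                break
            save_frame(frame)

    worker = threading.Thread(target=saver, daemon=True)
    worker.start()
    for _ in range(num_frames):
        q.put(fetch_frame())  # blocks only if the saver falls far behind
    q.put(sentinel)
    worker.join()
```

A display thread sampling one frame every 1/60 s could read from a second queue in the same fashion.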

@kazunarikudo
Member

Ah, one more thing: the GenICam committee defines feature names and their behaviors. The standard is called the Standard Feature Naming Convention, SFNC for short. You can download a PDF copy here. That is the shortest path to collect the features you would need to build your application. You should be able to find other resources at our resource page when needed.

@jmdelahanty
Author

jmdelahanty commented Apr 27, 2021

perhaps your application may not need SingleFrame. Instead, I would recommend you checking if Continuous fits your application. SingleFrame or MultiFrame will require you to start image acquisition again because they stop the acquisition engine on the host side stop.

I had misunderstood what Continuous lets you do! I had thought that to use triggers to acquire a frame you needed to set the acquisition mode to SingleFrame. Doing it with this method gets triggers to work perfectly! Thanks Kazunari! Now I'm just struggling to get OpenCV to write files with the correct width/height parameters. It currently only writes to file if I have the height and width reversed for some reason... I'm using code that was written in issue #131 to save videos. It was a lot nicer than my setup, thank you for your work @bobcorn!

Concerning the deep copy, if you need to save every image then it will be inevitable but you should note that the display rate is usually 60 fps so it is ridiculous to show every frame that you can get at some hundreds of fps.

Thankfully I'm only acquiring frames at 30fps, so hopefully displaying video at that speed won't be too intensive for the computer. It's intended to be displayed only so the experimenter can set up the camera's view and focus correctly in the beginning and, once ready, log frames during the experiment. I'm hoping to avoid multithreading for this use case.
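For that setup/focus preview, a fixed-rate loop might look like this sketch; the names are illustrative, and in practice `display` would wrap `cv2.imshow` plus `cv2.waitKey(1)`:

```python
import time

def preview_loop(fetch_frame, display, fps=30.0, duration_s=5.0):
    """Fetch and display frames at roughly `fps` until `duration_s`
    elapses. `fetch_frame` and `display` are plain callbacks here."""
    period = 1.0 / fps
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        display(fetch_frame())
        # Sleep off whatever remains of this frame's time slot
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```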

The GenICam committee defines feature names and their behaviors. The standard called the Standard Feature Naming Convention, SFNC in short.

Thanks for the resource! This is very helpful.

@jmdelahanty
Author

jmdelahanty commented Apr 28, 2021

Now I'm just struggling to get openCV to write files with the correct width/height parameters. It currently only writes to file if I have the height and width reversed for some reason...

I've come across a solution for this, and the reason for it is at the link below, though I don't fully understand why it solves the issue. When using np.copyto(), there seems to be something wrong with how the numpy array is written. Creating the array with height then width as the axes, as well as reshaping the incoming data as height by width, allows for correct writing later with VideoWriter. Here's the updated code:

# Use Capture Images to Record from Camera
def capture_images():
    # Create filename TODO: make this an input or from setup function
    filename = 'testvid.avi'
    # Define filepath for video
    directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
    # Define number of frames to record TODO: Make this an input/from setup
    num_frames = 30
    # Preallocate an array in memory to temporarily store frames
    # Initialize np array as zeros for number of frames, height, width,
    # and 1 color channel.
    # USE HEIGHT THEN WIDTH, unsure why this order is needed...
    img_array = np.zeros([num_frames, 1024, 1280], dtype=np.uint8)

    os.chdir(directory)
    # Start the Camera
    h, camera = init_camera()
    # Get height and width values of frames
    # width = n.Width.value
    # height = n.Height.value
    # Store frames in RAM
    for i in range(num_frames):
        with camera.fetch_buffer() as buffer:
            np.copyto(img_array[i], buffer.payload.components[0].data.reshape(
            buffer.payload.components[0].height, buffer.payload.components[0].width
            ))
    plt.imshow(img_array[-1])
    plt.show()
    # Define which video codec to use
    fourcc = cv2.VideoWriter_fourcc(*'DIVX')
    # Only writes when height and width are reversed!
    out = cv2.VideoWriter(filename, fourcc, 30, (img_array.shape[2], img_array.shape[1]), 0)


    for i in range(len(img_array)):
        out.write(img_array[i])
    out.release()

    shutdown_camera(camera, h)

    sys.exit(0)

Edit: User crackwitz on the opencv forum explained to me what the correct dimensions are for numpy and opencv. Link below: https://forum.opencv.org/t/correct-width-and-height-gives-error-in-videowriter/3039/3
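To summarize the convention that caused the confusion (the values assume this thread's 1280 x 1024 Mono8 sensor):

```python
import numpy as np

width, height = 1280, 1024  # dimensions as the camera reports them

# NumPy arrays are indexed (row, column), i.e. (height, width)...
frame = np.zeros((height, width), dtype=np.uint8)
assert frame.shape == (1024, 1280)

# ...while cv2.VideoWriter expects its frame size as (width, height).
frame_size = (frame.shape[1], frame.shape[0])
assert frame_size == (width, height)  # -> (1280, 1024)
```

So the array is built height-first while the writer is told width-first; neither is "reversed", they just use opposite conventions.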

@barriebarry

Hi Jeremy, thanks for sharing your progress with Harvester. I'm going down the very same path, but a little behind you. Through your examples, I managed to capture and display my first image, and now I need to find a proper lens! About triggering the camera, I might be able to help in that regard, so I'll connect with you via email. I can't seem to find a means to PM you through my GitHub account. Barry

@jmdelahanty
Author

jmdelahanty commented May 3, 2021

Hey @barriebarry , here's some updated code that doesn't require you to initialize an empty array before capturing:

# Use Capture Images to Record from Camera
def capture_images():
    # Create filename TODO: make this an input or from setup function
    filename = 'testvid.avi'
    # Define filepath for video
    directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
    # Change to directory for writing the video
    os.chdir(directory)
    # Start the Camera
    h, camera, width, height = init_camera()
    # Define number of frames to record TODO: Make this an input/from setup
    num_frames = 30
    # Define video codec for writing images
    fourcc = cv2.VideoWriter_fourcc(*'DIVX')

    # Write file to disk
    # Create VideoWriter object: file, codec, framerate, dims, color value
    out = cv2.VideoWriter(filename, fourcc, 30, (width, height), isColor=False)
    for i in range(num_frames):
        # Use with statement to acquire buffer, payload, and data
        # Payload is a 1D numpy array; RESHAPE WITH HEIGHT THEN WIDTH
        # Numpy is row-major, so reshaping as height x width writes correctly
        with camera.fetch_buffer() as buffer:
            # Define frame content with buffer.payload
            content = buffer.payload.components[0].data.reshape(height, width)
            # Debugging statement: print content shape and frame number
            print(content.shape, i)
            out.write(content)
    # Release VideoWriter object
    out.release()
    # Shutdown the camera
    shutdown_camera(camera, h)
    # Exit the program
    print("Exiting...")
    sys.exit(0)

This will acquire the number of frames you specify and output them as a video .avi file. That way you can see the video you take! Just be sure to change the directory to something on your computer. ie r"C:\Users\barriebarry\foldername"
