Displaying Video using OpenCV and Harvesters #234
Hello @kazunarikudo! So I've been trying to work out what's wrong with my code, and I've discovered that printing the component gives:

```
0 x 0, Mono8, 1310720 elements,
[0 0 0 ... 0 0 0]
```

```
Traceback (most recent call last):
  File "harvester_test.py", line 40, in <module>
    content = component.data.reshape(height, width)
ValueError: cannot reshape array of size 1310720 into shape (0,0)
```

In other words, I'm somehow getting values of 0 x 0 for the camera's height and width. The camera has a resolution of 1280 x 1024 according to the Sapera LT app I have. It seems like I'm definitely missing something for getting the correct information out of the camera's settings. I've tried looking at different issues and at the example, but I haven't been successful yet. Any advice?
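As an aside for anyone hitting the same `ValueError`: the reshape can be guarded so a 0 x 0 component fails with a readable message instead of a cryptic numpy error. `reshape_component` below is a hypothetical helper, not Harvester API; it only assumes the `width`, `height`, and `data` attributes shown in the snippet above.

```python
import numpy as np

def reshape_component(data, width, height):
    """Reshape a flat pixel buffer into (height, width), validating first."""
    if width == 0 or height == 0:
        # 0 x 0 dimensions usually mean metadata was read too early
        raise RuntimeError("Component reports 0 x 0 width/height")
    if data.size != width * height:
        raise RuntimeError(
            f"Buffer has {data.size} elements, expected {width * height}")
    return data.reshape(height, width)
```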
I've gotten past this error! It turns out this needs to be done in the correct order: the components have the correct values only after actually acquiring frames!

```python
ia = h.create_image_acquirer(0)
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()
x = input("Type s to stop me...")
if x == 's':
    ia.stop_acquisition()
    payload = buffer.payload
    component = payload.components[0]
    print(component)
    width = component.width
    height = component.height
    data_format = component.data_format
    print("done")
    if data_format in mono_location_formats:
        content = component.data.reshape(height, width)
    else:
        print("Check camera type?")
    ia.destroy()
    h.reset()
```

This gives me the correct height and width! Now just to display the frames that have been gathered... still need some advice there.
I've had some success showing an image! Here's how I got there:

```python
# Harvester Routine
from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt

h = Harvester()
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
h.add_file(cti_file)
h.update()
print(h.device_info_list)
ia = h.create_image_acquirer(0)
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()
x = input("Type s to stop me...")
if x == 's':
    ia.stop_acquisition()
    payload = buffer.payload
    component = payload.components[0]
    print(component)
    width = component.width
    height = component.height
    data_format = component.data_format
    print("done")
    if data_format in mono_location_formats:
        content = component.data.reshape(height, width)
    else:
        print("Check camera type?")
    plt.imshow(content)
    plt.show()
    y = input("Type s to move forward...")
    if y == 's':
        ia.destroy()
        h.reset()
        print("Exiting...")
        exit()
```

My next step is to show how I can display all the frames that are taken during a video acquisition...
I'm discovering that the …

After looking through the different closed issues, I discovered that trying out a …
Some success! It turns out I had to reset the camera; it had gotten stuck because of my program. Just replugging it let me try again. Using a … Here's what I've got so far that lets me grab many frames:

```python
# Harvester Routine
from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt
import cv2
import os

img_array = [None] * 500
x = 0
h = Harvester()
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
h.add_file(cti_file)
h.update()
print(h.device_info_list)
ia = h.create_image_acquirer(0)
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()
time.sleep(5)
while x < 500:
    payload = buffer.payload
    component = payload.components[0]
    print(component)
    width = component.width  # width = 1280
    height = component.height  # height = 1024
    data_format = component.data_format
    content = component.data.reshape(height, width)
    img_array[x] = content
    x += 1
type(img_array)
ia.stop_acquisition()
ia.destroy()
h.reset()
plt.imshow(img_array[-1])
plt.show()
print("Exiting...")
exit()
```
The thing I'm currently trying to accomplish is relevant to #117 now that I can get multiple frames off the camera. I'm trying to acquire video only when a microscope sends a TTL pulse to the camera. I've confirmed that the TTL pulses are being sent, but I can't seem to get the camera to recognize them (in Harvesters or Sapera LT). I'm wondering if it's because I'm not able to set a particular parameter correctly in the …

```python
from harvesters.core import Harvester
from harvesters.util.pfnc import mono_location_formats
import numpy as np
import time
import matplotlib.pyplot as plt
import cv2
import os

img_array = [None] * 20
x = 0
h = Harvester()
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
h.add_file(cti_file)
h.update()
print(h.device_info_list)
ia = h.create_image_acquirer(0)
n = ia.remote_device.node_map
n.TriggerSelector.value = "SingleFrameTrigger"  # <--- set this value?
n.TriggerMode.value = "On"
n.TriggerActivation.value = "RisingEdge"
n.TriggerSource.value = "Line2"
n.LineSelector.value = "Line2"
ia.start_acquisition()
buffer = ia.fetch_buffer()
buffer.queue()
time.sleep(5)
while x < 20:  # was x <= 20, which would overrun the 20-element list
    payload = buffer.payload
    component = payload.components[0]
    width = component.width  # width = 1280
    height = component.height  # height = 1024
    data_format = component.data_format
    content = component.data.reshape(height, width)
    img_array[x] = content
    x += 1
```

Sapera LT's CamExpert software shows me the … Running the script gives this traceback:

```
Traceback (most recent call last):
  File "Documents\gitrepos\harvesters\harvester_multiframe.py", line 23, in <module>
    n.TriggerSelector.value = "SingleFrameTrigger"
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2484, in <lambda>
    __setattr__ = lambda self, name, value: _swig_setattr(self, IEnumeration, name, value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 101, in _swig_setattr
    return _swig_setattr_nondynamic(self, class_type, name, value, 0)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 93, in _swig_setattr_nondynamic
    object.__setattr__(self, name, value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2531, in _set_value
    self._primal_set_value(value)
  File "C:\Users\jdelahanty\.conda\envs\genicam\lib\site-packages\genicam\genapi.py", line 2522, in _primal_set_value
    return _genapi.IEnumeration__primal_set_value(self, entry, verify)
_genapi.InvalidArgumentException: Feature 'TriggerSelector' : cannot convert value 'SingleFrameTrigger', the value is invalid. : InvalidArgumentException thrown in node 'TriggerSelector' while calling 'TriggerSelector.FromString()' (file 'Enumeration.cpp', line 134)
```

I'm not even certain this is the problem because, even if I have this set up in the CamExpert software, I still don't get any confirmation that frames are being grabbed. I'm awaiting support from the manufacturer, but hoping for advice before then. My efforts to get a video recorded after grabbing frames have also been unsuccessful for the day. I'm trying to use …
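On the `InvalidArgumentException`: GenICam enumeration nodes expose their valid entries through a `symbolics` property, so you can ask the camera what `TriggerSelector` actually accepts before setting it (e.g. `print(n.TriggerSelector.symbolics)`). The wrapper below is a hypothetical convenience, not part of Harvester or genicam; it only assumes the `symbolics` and `value` attributes.

```python
def safe_set_enum(node, value):
    """Set an enumeration node only if the device actually offers the entry."""
    valid = tuple(node.symbolics)  # symbolic names the device supports
    if value not in valid:
        raise ValueError(f"{value!r} not available; device offers: {valid}")
    node.value = value
```

With a camera node map, `safe_set_enum(n.TriggerSelector, "SingleFrameTrigger")` would then report the supported entries up front instead of raising deep inside the SWIG layer.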
Last update for the day for anyone following along! I'm still struggling to get the trigger recognized by the camera. Until I hear back from Teledyne, I think I'll have to wait. I think I've started to solve the issue of recording videos correctly, though! Unfortunately, I think the only thing that's getting saved is the first image of the video. The rest of the video is the same exact picture (I put my hand in front of the camera and move it around, but the resulting video only shows the first image). Something must be wrong in how I'm ordering things... Here's my code:

```python
# Harvesters GenICam Routine for use with Bruker2P Setup
# Jeremy Delahanty Apr. 2021
# Harvesters written by Kazunari Kudo https://github.com/genicam/harvesters
# Genie Nano manufactured by Teledyne DALSA

#### Packages ####
# Harvesters for interfacing with Genie Nano
from harvesters.core import Harvester
# Import mono8 location format, our Genie Nano uses mono8 or mono10
from harvesters.util.pfnc import mono_location_formats
# Harvesters offloads images as numpy arrays, import numpy
import numpy as np
# Matplotlib for plotting an example image
import matplotlib.pyplot as plt
# time.sleep required to allow camera to warm up
import time
# Import OpenCV2 to write images/videos to file
import cv2
# Import OS to change directories and write files to disk
import os

#### Create Variables ####
# Initialize list of None values for total number of frames to be gathered
# This is converted into a numpy array later
img_array = [None] * 3000
# Start increment variable at 0
current_frame = 0

#### Setup Harvester ####
# Create harvester object as h
h = Harvester()
# Give path to GenTL producer
cti_file = "C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64/mvGENTLProducer.cti"
# Add GenTL producer to Harvester object
h.add_file(cti_file)
# Update Harvester object
h.update()
# Print device list to make sure camera is present
print(h.device_info_list)

#### Grab Camera, Change Settings ####
# Create image_acquirer object for Harvester, grab first (only) device
camera = h.create_image_acquirer(0)
# Gather node map to camera properties
n = camera.remote_device.node_map
# Change camera properties to listen for Bruker TTL triggers
# n.TriggerSelector.value = "SingleFrameTrigger" <-- currently not changeable...
n.TriggerMode.value = "Off"
n.TriggerActivation.value = "RisingEdge"
n.TriggerSource.value = "Line2"
n.LineSelector.value = "Line2"

#### Start Taking Frames ####
# Start the acquisition
print("Starting Acquisition")
camera.start_acquisition()
# Fetch buffer of camera
buffer = camera.fetch_buffer()
# Queue the buffer, cycles through buffer positions, destroys 'buffer' object
buffer.queue()
print("Buffer Queued")
# Tell program to sleep for 5 seconds, allow camera to warm up
print("Sleeping...")
time.sleep(5)
print("Go!")
# Create stop condition using current_frame TODO: While true, when experiment ends stop acquisition
# TODO: Need to get triggers to take an image, still stuck. Awaiting Sam.
while current_frame < 3000:
    # Payload includes camera properties and 1D numpy array of pixel values
    payload = buffer.payload
    # Get height and width components from camera, first value of payload
    component = payload.components[0]
    width = component.width  # width = 1280
    height = component.height  # height = 1024
    # Define incoming data format
    data_format = component.data_format  # Mono8, defined above
    # Reshape data numpy array into correct height and width
    content = component.data.reshape(height, width)
    # Gather framerate for writing video later
    framerate = n.AcquisitionFrameRate.value
    # Replace None value with frame at current frame's position
    img_array[current_frame] = content
    # Increment current frame by 1
    current_frame += 1

#### Stopping Acquisition, Writing Video ####
# Stop camera
camera.stop_acquisition()
# Destroy camera object, frees resource
camera.destroy()
# Reset Harvester object and clear all settings
h.reset()
# Convert image array, currently a list of numpy arrays, into a numpy array
img_array = np.array(img_array)
# Show example image of video to user
# plt.imshow(img_array[-1])
# plt.show()
# Create file name for video TODO: Should be done before acquisition
filename = 'testvid.avi'
# State directory for storing the video TODO: Should be done before acquisition
directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
# Change directory to specified location
os.chdir(directory)
# State which video codec to use
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
# Create OpenCV video writer object
out = cv2.VideoWriter(filename, fourcc, framerate, (width, height), False)
# Write image array to file
for image in img_array:
    out.write(image)
# Destroy OpenCV writer
out.release()
# Exit the program
print("Exiting...")
exit()
```

Hoping a fellow Harvester arrives to show me the way. Very exciting to get frames out of the camera, just not getting them all!
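For what it's worth, the "same exact picture" symptom is consistent with the loop reading one fetched buffer 3000 times: `fetch_buffer()` is called once before the loop, so `buffer.payload` never changes. A hedged sketch of fetching (and copying) a fresh buffer per frame follows; `grab_frames` is a made-up helper that only assumes `fetch_buffer()` works as a context manager that re-queues the buffer on exit, which matches how it is used later in this thread.

```python
import numpy as np

def grab_frames(ia, num_frames, height, width):
    """Fetch a fresh buffer per frame and copy pixels out before re-queueing."""
    frames = np.zeros((num_frames, height, width), dtype=np.uint8)
    for i in range(num_frames):
        with ia.fetch_buffer() as buffer:  # new buffer each iteration
            component = buffer.payload.components[0]
            # Copy now: once the buffer is re-queued, its memory is recycled
            np.copyto(frames[i], component.data.reshape(height, width))
    return frames
```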
@jmdelahanty I have not reviewed the details, but let me leave some comments:

I believe you may get it working by yourself, so I would like to suggest that you moderate the posting pace so that you can leave some meaningful information, if you are willing to share it with others. It is great to share information with others, but there can also be cases where some people copy and paste a wrong or irrelevant code block without enough consideration.
@jmdelahanty By the way, thank you for trying out Harvester!
Hey Kazunari, thanks for the reply!

Great! I didn't know which node to set; I thought it was the …

I forgot to change that to …

I need to keep the images and write them to disk, so I'll use a deep copy to store them until they're written.

Is this conquered by having faster processors on the computer? We'll be acquiring images from a microscope at the same time, so it's something I'm worried about...

I'll give this a try as well, just so I can learn how to implement it properly.

Thank you for creating Harvester! It's amazing how you can use Python to control these cameras so nicely. I'm having a lot of fun learning how to use it properly.
@jmdelahanty Hi, perhaps your application may not need …
Ah, one more thing: the GenICam committee defines feature names and their behaviors. The standard is called the Standard Feature Naming Convention, SFNC for short. You can download a PDF copy here. That is the shortest path to collecting the features you would need to build your application. You should be able to find other resources on our resource page when needed.
I had misunderstood what …

Thankfully I'm only acquiring frames at 30 fps, so hopefully displaying video at that speed won't be too intensive for the computer. It's intended to be displayed only so the experimenter can set up the camera's view and focus correctly in the beginning and, once ready, log frames during the experiment. I'm hoping to avoid multithreading for this use case.

Thanks for the resource! This is very helpful.
I've come across a solution for this; the reason for it is at the link below.

```python
# Use capture_images to record from camera
def capture_images():
    # Create filename TODO: make this an input or from setup function
    filename = 'testvid.avi'
    # Define filepath for video
    directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
    # Define number of frames to record TODO: Make this an input/from setup
    num_frames = 30
    # Preallocate an array in memory to temporarily store frames
    # Initialize np array as zeros for number of frames, height, width,
    # and 1 color channel.
    # USE HEIGHT THEN WIDTH, unsure why this order is needed...
    img_array = np.zeros([num_frames, 1024, 1280], dtype=np.uint8)
    os.chdir(directory)
    # Start the camera
    h, camera = init_camera()
    # Get height and width values of frames
    # width = n.Width.value
    # height = n.Height.value
    # Store frames in RAM
    for i in range(num_frames):
        with camera.fetch_buffer() as buffer:
            np.copyto(img_array[i], buffer.payload.components[0].data.reshape(
                buffer.payload.components[0].height,
                buffer.payload.components[0].width
            ))
    plt.imshow(img_array[-1])
    plt.show()
    # Define which video codec to use
    fourcc = cv2.VideoWriter_fourcc(*'DIVX')
    # Only writes when height and width are reversed!
    out = cv2.VideoWriter(filename, fourcc, 30,
                          (img_array.shape[2], img_array.shape[1]), 0)
    for i in range(len(img_array)):
        out.write(img_array[i])
    out.release()
    shutdown_camera(camera, h)
    sys.exit(0)
```

Edit: User crackwitz on the OpenCV forum explained to me what the correct dimensions are for numpy and OpenCV. Link below:
https://forum.opencv.org/t/correct-width-and-height-gives-error-in-videowriter/3039/3
Hi Jeremy,

Thanks for sharing your progress with Harvester. I'm going down the very same path, just a little behind you. Through your examples, I managed to capture and display my first image, and now I need to find a proper lens! About triggering the camera, I might be able to help in that regard, so I'll connect with you via email. I can't seem to find a means to PM you through my GitHub account.

Barry
Hey @barriebarry, here's some updated code that doesn't require you to initialize an empty array before capturing:

```python
# Use capture_images to record from camera
def capture_images():
    # Create filename TODO: make this an input or from setup function
    filename = 'testvid.avi'
    # Define filepath for video
    directory = r"C:\Users\jdelahanty\Documents\genie_nano_videos"
    # Change to directory for writing the video
    os.chdir(directory)
    # Start the camera
    h, camera, width, height = init_camera()
    # Define number of frames to record TODO: Make this an input/from setup
    num_frames = 30
    # Define video codec for writing images
    fourcc = cv2.VideoWriter_fourcc(*'DIVX')
    # Write file to disk
    # Create VideoWriter object: file, codec, framerate, dims, color value
    out = cv2.VideoWriter(filename, fourcc, 30, (width, height), isColor=False)
    for i in range(num_frames):
        # Use with statement to acquire buffer, payload, and data
        # Payload is a 1D numpy array; RESHAPE WITH HEIGHT THEN WIDTH
        # Numpy is backwards: reshaping as height x width writes correctly
        with camera.fetch_buffer() as buffer:
            # Define frame content with buffer.payload
            content = buffer.payload.components[0].data.reshape(height, width)
            # Debugging statement: print content shape and frame number
            print(content.shape, i)
            out.write(content)
    # Release VideoWriter object
    out.release()
    # Shutdown the camera
    shutdown_camera(camera, h)
    # Exit the program
    print("Exiting...")
    sys.exit(0)
```

This will acquire the number of frames you specify and output them as a video.
Hello again fellow Harvesters!

I'm pretty sure I'm successfully grabbing video with my camera now, but I'm stuck trying to display the frames in matplotlib.

I've gotten as far as this in the example in the readme: …

But now that I have the content shaped correctly, I don't know how to display it so I can make sure Harvesters is grabbing images.

Any advice?