Replies: 9 comments
-
I'm working towards this myself. The MediaPlayer object uses libav underneath, and I don't think there's any way for that to work "out of the box". My plan, which I don't have much time to commit to since this is just an evening hobby project, is to make my own class that supports the same interface.
-
Doesn't picamera2 expose a v4l2 interface?
-
Well, you do see devices come up as /dev/video*, but I had no luck just trying to use those; I don't recall the errors right now.
-
@jensbjorgensen I'm working on a side project involving a Raspberry Pi and aiortc as well. I've been considering implementing this feature too. I'm not very familiar with all the technical details yet, but I believe I can learn them. If you have a plan in mind for implementing this feature, I'd be excited to collaborate with you.
-
@Hadayo I'd love to do it, but unfortunately this is just hobby work, so no idea when I'll actually get started on it. The interface that's there basically matches up to PyAV, it seems. Without having looked at it deeply, my thinking was just to wrap picamera2 so that I can expose an interface matching what PyAV has.
-
OK, so if I understand your question correctly, you want to be able to somehow create a video track using picamera2 and then add it to the peer connection, right?

```python
class OverwrittenCamera(MediaStreamTrack):
    kind = "video"

    def __init__(self):
        super().__init__()  # don't forget this!
        self.cam = Picamera2()
        self.cam.configure(self.cam.create_preview_configuration())
        self.cam.start()

    async def recv(self):
        img = self.cam.capture_array()
        # Calculate the presentation timestamp in microseconds
        pts = time.time() * 1000000
        # Build a VideoFrame, preserving timing information
        new_frame = VideoFrame.from_ndarray(img, format="rgba")
        new_frame.pts = int(pts)
        new_frame.time_base = Fraction(1, 1000000)
        return new_frame
```

Then all you need to do is add it as a track by altering the "offer" function a little bit:

```python
cam = OverwrittenCamera()
pc.addTrack(cam)
```

Here is the full code, which I have tried and which works on a Raspberry Pi 4 Model B Rev 1.5 (Raspbian GNU/Linux 11) connected to a Camera Module 3 (12 MP):

```python
import json
import time
from fractions import Fraction

from aiohttp import web
from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription
from av import VideoFrame
from picamera2 import Picamera2

# Keep references to open peer connections so they aren't garbage-collected
pcs = set()


class OverwrittenCamera(MediaStreamTrack):
    kind = "video"

    def __init__(self):
        super().__init__()  # don't forget this!
        self.cam = Picamera2()
        self.cam.configure(self.cam.create_preview_configuration())
        self.cam.start()

    async def recv(self):
        img = self.cam.capture_array()
        # Calculate the presentation timestamp in microseconds
        pts = time.time() * 1000000
        # Build a VideoFrame, preserving timing information
        new_frame = VideoFrame.from_ndarray(img, format="rgba")
        new_frame.pts = int(pts)
        new_frame.time_base = Fraction(1, 1000000)
        return new_frame


async def offer(request):
    params = await request.json()
    offer = RTCSessionDescription(sdp=params["sdp"], type=params["type"])

    pc = RTCPeerConnection()
    pcs.add(pc)

    @pc.on("connectionstatechange")
    async def on_connectionstatechange():
        print("Connection state is %s" % pc.connectionState)
        if pc.connectionState == "failed":
            await pc.close()
            pcs.discard(pc)

    cam = OverwrittenCamera()
    pc.addTrack(cam)

    await pc.setRemoteDescription(offer)
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)

    return web.Response(
        content_type="application/json",
        text=json.dumps(
            {"sdp": pc.localDescription.sdp, "type": pc.localDescription.type}
        ),
    )
```
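The `pts`/`time_base` pairing in `recv()` above is worth spelling out: aiortc (via PyAV) interprets a frame's presentation time as `pts * time_base` seconds, so a microsecond wall-clock `pts` pairs with a time base of 1/1,000,000. A small stdlib-only sketch of that arithmetic (the numbers are illustrative):

```python
from fractions import Fraction

# Presentation time in seconds = pts * time_base.
# Microsecond timestamps pair with a time base of 1/1,000,000.
time_base = Fraction(1, 1000000)

# A fixed example wall-clock time, as recv() would compute it
pts = int(1_700_000_000.25 * 1000000)
seconds = pts * time_base
print(float(seconds))  # 1700000000.25

# Two frames captured 33 ms apart differ by 33,000 pts ticks (~30 fps)
delta = 33000 * time_base
print(float(delta))  # 0.033
```

Using `Fraction` for the time base keeps the conversion exact; floating-point time bases can drift over long streams.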
-
Nicely done @hackernese, that's pretty close to what I'm going for, except that since the device has a built-in hardware H.264 encoder, I'd like to pass hardware-compressed frames through directly. My hope is that it shouldn't take much more code than what you've done above. I definitely don't want to either a) stream uncompressed frames (too much data; I want to stream them over the internet and lack sufficient uplink speed) or b) do compression in software on the Pi.
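For what it's worth, the picamera2 side of this usually means passing an `H264Encoder` plus a custom output to `Picamera2.start_recording()`, so the hardware encoder delivers each encoded frame via an `outputframe()` callback. Below is a dependency-free sketch of just the buffering pattern; in a real setup the class would subclass `picamera2.outputs.Output` (and the exact `outputframe` signature should be checked against your installed picamera2 version), and `EncodedFrame`/`H264QueueOutput` are hypothetical names:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class EncodedFrame:
    data: bytes        # raw H.264 NAL units from the hardware encoder
    keyframe: bool
    timestamp_us: int  # microsecond timestamp


class H264QueueOutput:
    """Bridge between an encoder callback and an asyncio consumer.

    The camera/encoder thread calls outputframe(); an async consumer
    awaits recv_encoded() on the event loop.
    """

    def __init__(self, loop, maxsize=16):
        self._loop = loop
        self._queue = asyncio.Queue(maxsize=maxsize)

    def outputframe(self, frame, keyframe=True, timestamp=None):
        # Called from the camera thread: hop safely onto the event loop.
        item = EncodedFrame(bytes(frame), keyframe, timestamp or 0)
        self._loop.call_soon_threadsafe(self._try_put, item)

    def _try_put(self, item):
        # Drop the oldest frame rather than stall the encoder.
        if self._queue.full():
            self._queue.get_nowait()
        self._queue.put_nowait(item)

    async def recv_encoded(self):
        return await self._queue.get()
```

The non-trivial part of the goal is on the aiortc side: its `MediaStreamTrack.recv()` is expected to return raw `av.VideoFrame` objects which aiortc then re-encodes, so actually forwarding pre-encoded H.264 means working around aiortc's own encoder rather than just wrapping the camera.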
-
Well, personally I have never tried the Raspberry Pi H.264 hardware compression feature you mentioned, so I would appreciate it if you could link me some good sources on what I have been missing.
-
OK, I finally opened this up again and managed to get it working. As I mentioned above, my goal was not "can I get picamera video working" but more specifically "can I stream natively-H.264-compressed picamera video". Similarly to the approach @hackernese took above (nice work with that), the main thing is to derive from MediaStreamTrack, but instead we take the raw H.264 frames and hand those along. Here's the modified webcam.py script I have working:
-
Hi guys. Does anybody know, or have an idea of, how to use Picamera2 and put video data into a MediaStreamTrack?