How to get realtime frame data from GPUImageVideoCamera quickly without using GPUImageRawDataOutput? #2014
Comments
glFinish() is required in the GPUImageRawDataOutput to guarantee that rendering has completed up to the point where bytes are extracted in that output. Due to the deferred nature of the iOS GPUs, I wouldn't necessarily trust time profiling for CPU-side instructions to say that that instruction is what's taking 32 ms. That's just the point at which the processing will wait on the CPU until the GPU-side processing catches up. If you have a more complex video pipeline before that, glFinish() may only be pausing long enough for the rest of the GPU-side operations to complete. If you just need to get raw bytes from the camera, don't bother with this framework and instead go directly to the sample buffer returned from the AV Foundation camera instance.
@BradLarson Thank you for your kind reply. Actually, I just need to get raw bytes (RGB format) from GPUImageVideoCamera. In your FilterShowcase project, I found a face detection example, so I implemented the delegate method following your reply at http://stackoverflow.com/questions/10865100/ios-get-pixel-by-pixel-data-from-camera. But the result is wrong. Also, is the raw data obtained this way RGB or YUV?
By default, the GPUImageVideoCamera uses YUV planar data straight from the camera. I believe this makes the sample buffers you get back from that planar ones, so you might have to treat them slightly differently than standard RGB buffers. You can switch to an RGB input on the GPUImageVideoCamera by replacing the line `captureAsYUV = YES;` with `captureAsYUV = NO;` in GPUImageVideoCamera.m. I don't have this exposed as a property at present. That said, I didn't add the sample buffer capture, so I haven't checked to see if it still works right after the months of changes I've made in the framework since it was contributed. Again, if all you need is a raw sample buffer, and no further GPU-side processing from the framework, don't bother with GPUImage. Just use AV Foundation directly.
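For anyone who stays with the default YUV output and wants RGB on the CPU: a per-pixel conversion can be sketched as below, in plain C. The coefficients are the standard full-range BT.601 constants; whether they exactly match what GPUImage's shaders use should be checked against GPUImageVideoCamera.m, so treat them as an assumption.

```c
#include <stdint.h>

// Clamp an intermediate value into the 0-255 byte range.
static uint8_t clamp_byte(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

// Convert one full-range BT.601 YUV sample to RGB.
// (CPU-side sketch for illustration; GPUImage normally does this on the GPU.)
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t *r, uint8_t *g, uint8_t *b) {
    int c = y;
    int d = u - 128;  // chroma offsets are centered at 128
    int e = v - 128;
    *r = clamp_byte(c + (int)(1.402 * e));
    *g = clamp_byte(c - (int)(0.344136 * d) - (int)(0.714136 * e));
    *b = clamp_byte(c + (int)(1.772 * d));
}
```

For planar buffers, remember that the U and V planes are typically subsampled (4:2:0), so one chroma pair covers a 2x2 block of luma samples.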
@BradLarson What I want to do is preprocess each frame from the VideoCamera with my own code (like face detection), and then, according to the detection results, change the parameters of the GPUImageFilters that follow. They are all based on your framework, so I really need to get raw data from your framework. In your FilterShowcase, there is also a face detection example.
@BradLarson Thank you Brad, now I can get each realtime frame's data. But the problem is the output data's byte order. I suppose the output pixel data is ordered as in the following picture. So do you have any suggestion about how to change the output pixel order? Thank you so much!
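A note on the byte-order question above: GPUImage's raw output is commonly BGRA rather than RGBA (an assumption worth verifying for your configuration). If that is the mismatch, swapping the blue and red channels in place is enough; a minimal sketch in plain C:

```c
#include <stddef.h>
#include <stdint.h>

// Swap the blue and red channels of a 4-bytes-per-pixel buffer in place,
// converting BGRA to RGBA (the swap is its own inverse, so it also
// converts RGBA back to BGRA).
void swap_bgra_rgba(uint8_t *pixels, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t tmp = pixels[4 * i];        // byte 0: blue (BGRA) / red (RGBA)
        pixels[4 * i] = pixels[4 * i + 2];  // byte 2: red (BGRA) / blue (RGBA)
        pixels[4 * i + 2] = tmp;
        // bytes 1 (green) and 3 (alpha) stay where they are
    }
}
```

Alternatively, if the bytes end up in Core Graphics anyway, the reordering can often be expressed through the CGBitmapInfo flags instead of touching the buffer.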
@sjtujulian did you come up with a solution? |
@BradLarson hi, I use GPUImageRawDataOutput to get "rawBytesForImage", but when I convert it to a UIImage, only a few of the rawBytes work. Code:

```objc
+ (UIImage *)imageFromGlubyte:(GLubyte *)bytes width:(NSInteger)width height:(NSInteger)height {
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bytes, width * height * 4, NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpaceRef,
                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *newImage = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newImage;
}
```

I call this function in rawDataOutput's newFrameAvailableBlock. Did I miss something?
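One likely culprit for the broken conversion above (not confirmed in this thread): GPUImageRawDataOutput can pad its rows, so its `bytesPerRowInOutput` may be larger than `width * 4`, in which case passing `width * 4` as the stride to CGImageCreate shears the image. The buffer behind `rawBytesForImage` is also reused for later frames. Copying each row into a tightly packed buffer that you own sidesteps both problems; a minimal stride-aware copy in plain C (the function name is illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Copy a possibly row-padded BGRA frame (src_stride bytes per row) into a
// freshly allocated, tightly packed buffer (width * 4 bytes per row).
// The caller owns the returned buffer and must free() it.
uint8_t *copy_packed(const uint8_t *src, size_t width, size_t height,
                     size_t src_stride) {
    size_t row_bytes = width * 4;
    uint8_t *dst = malloc(row_bytes * height);
    if (dst == NULL) return NULL;
    for (size_t y = 0; y < height; y++) {
        // Skip the padding at the end of each source row.
        memcpy(dst + y * row_bytes, src + y * src_stride, row_bytes);
    }
    return dst;
}
```

With a packed copy, `width * 4` is then the correct bytesPerRow for CGImageCreate, and the CGDataProvider no longer points at memory GPUImage may overwrite.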
@BradLarson is it OK to replace glFinish() with glFlush()?
I set captureAsYUV = NO in the GPUImageVideoCamera.m file and ran the build script... got a freshly built framework, hooked it up to my project, and still got black-and-white images out of the willOutputSampleBuffer method. I also tried setting different video options, but this caused a crash. There were a few options available for the video settings key, but my attempts to select a colour type failed.
Hi Brad,

I want to get realtime data from the GPUImageVideoCamera. When I use the GPUImageRawDataOutput class, the app works well. However, I noticed there is a glFinish() in GPUImageRawDataOutput's method `- (GLubyte *)rawBytesForImage;`, which is very time-consuming. I need to do some processing on each realtime frame, but the glFinish() takes about 32 ms, which makes the frame rate drop fast. If I replace the glFinish() with glFlush() in `- (GLubyte *)rawBytesForImage`, I can no longer get the realtime frame data, which means I cannot process each realtime frame. So I wonder if there is another solution to get realtime data from GPUImageVideoCamera besides the time-consuming GPUImageRawDataOutput. What's more, I found that GPUImageVideoCamera's willOutputSampleBuffer may be useful, from http://w3facility.org/question/capturing-gpuimagevideocamera-or-avcapturesession-frames-in-circular-or-ring-buffer-for-instant-playback/. Can you give me some ideas about this problem? Thank you.