Playing Video

The previous two tutorials rendered video with UIImageView and GLKView. This tutorial switches to a more powerful approach: convert the AVFrame to a CMSampleBufferRef and render it with AVSampleBufferDisplayLayer (this is only supported on iOS 8 / macOS 10.8 and later).

Here are the numbers I measured on an iPhone 6s Plus (iOS 10.10):

With a 20-second buffer of video packets and a 2-second buffer of decoded frames:

| Configuration | FPS | GPU | CPU | Memory |
| --- | --- | --- | --- | --- |
| USE_PIXEL_BUFFER_POLL 0 | 60 fps | 10% | 35% | 67 MB |
| USE_PIXEL_BUFFER_POLL 1 | 60 fps | mostly 0%, occasionally ~15% | 25% | 69 MB |
| Tutorial 5 | 19 fps | 10% | 35% | 70 MB |
| Tutorial 4, USEBITMAP 1 | 22 fps | 12% | 55% | 110 MB |
| Tutorial 4, USEBITMAP 0 | 22 fps | 14% | 50% | 26 MB |

USE_PIXEL_BUFFER_POLL 1 is the best result so far.

Note: never measure these numbers in the Simulator. The Simulator runs on an x86_64 Mac, and its environment differs wildly from a real device.

Since most of the logic builds on tutorial 4, here is just the core code:

Core Code

Before we start reading packets, set up the pixel format conversion and prepare the rendering layer:

// Rendering layer
if(!self.sampleBufferDisplayLayer){
    
    self.sampleBufferDisplayLayer = [[AVSampleBufferDisplayLayer alloc] init];
    self.sampleBufferDisplayLayer.frame = self.view.bounds;
    self.sampleBufferDisplayLayer.position = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
    self.sampleBufferDisplayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    self.sampleBufferDisplayLayer.opaque = YES;
    [self.view.layer addSublayer:self.sampleBufferDisplayLayer];
}
    
enum AVPixelFormat pix_fmt = AV_PIX_FMT_NV12;
const int picSize = avpicture_get_size(pix_fmt, self.vwidth, self.vheight);
    
self.out_buffer = malloc(picSize);
self.img_convert_ctx = sws_getContext(self.vwidth, self.vheight, self.format, self.vwidth, self.vheight, pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
    
self.pFrameYUV = av_frame_alloc();
avpicture_fill((AVPicture *)self.pFrameYUV, self.out_buffer, pix_fmt, self.vwidth, self.vheight);              
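
Note that avpicture_get_size and avpicture_fill are deprecated in newer FFmpeg releases. A minimal sketch of the equivalent setup using the av_image_* API, assuming FFmpeg 3.x or later (the sws_getContext call stays the same; align = 1 preserves the tightly packed layout that avpicture_fill produced):

#include <libavutil/imgutils.h>

enum AVPixelFormat pix_fmt = AV_PIX_FMT_NV12;
// Same buffer size calculation as avpicture_get_size, via the non-deprecated API.
const int picSize = av_image_get_buffer_size(pix_fmt, self.vwidth, self.vheight, 1);

self.out_buffer = malloc(picSize);
self.pFrameYUV = av_frame_alloc();
// Point pFrameYUV's data/linesize at out_buffer, as avpicture_fill used to do.
av_image_fill_arrays(self.pFrameYUV->data, self.pFrameYUV->linesize,
                     self.out_buffer, pix_fmt, self.vwidth, self.vheight, 1);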

Frame format conversion method

- (CMSampleBufferRef)sampleBufferFromAVFrame:(AVFrame*)video_frame w:(int)w h:(int)h
{
#if USE_PIXEL_BUFFER_POLL
    CVReturn theError;
    if (!self.pixelBufferPool){
        int linesize = video_frame->linesize[0];
        NSMutableDictionary* attributes = [NSMutableDictionary dictionary];
        [attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
        [attributes setObject:[NSNumber numberWithInt:w] forKey: (NSString*)kCVPixelBufferWidthKey];
        [attributes setObject:[NSNumber numberWithInt:h] forKey: (NSString*)kCVPixelBufferHeightKey];
        [attributes setObject:@(linesize) forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
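        // An empty IOSurface properties dictionary makes the pool's buffers IOSurface backed,
        // i.e. shareable between the CPU and the GPU without an extra copy.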
        [attributes setObject:[NSDictionary dictionary] forKey:(NSString*)kCVPixelBufferIOSurfacePropertiesKey];
        
        theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &_pixelBufferPool);
        if (theError != kCVReturnSuccess){
            NSLog(@"CVPixelBufferPoolCreate Failed");
        }
    }
#endif
    
    CVPixelBufferRef pixelBuffer = [MRConvertUtil pixelBufferFromAVFrame:video_frame w:w h:h opt:self.pixelBufferPool];
    
    return [MRConvertUtil cmSampleBufferRefFromCVPixelBufferRef:pixelBuffer];
}
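
The MRConvertUtil helpers themselves are not shown in this excerpt. A minimal sketch of what the CVPixelBufferRef to CMSampleBufferRef wrapping might look like (a common pattern, not necessarily the project's exact implementation; it assumes the sample carries no timing info and is marked for immediate display):

+ (CMSampleBufferRef)cmSampleBufferRefFromCVPixelBufferRef:(CVPixelBufferRef)pixelBuffer
{
    if (!pixelBuffer) {
        return NULL;
    }
    // Describe the pixel buffer so CoreMedia knows its format.
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);

    // No real timestamps: frame pacing is driven by our own render loop.
    CMSampleTimingInfo timing = {kCMTimeInvalid, kCMTimeInvalid, kCMTimeInvalid};

    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL,
                                       videoInfo, &timing, &sampleBuffer);
    if (videoInfo) {
        CFRelease(videoInfo);
    }
    if (!sampleBuffer) {
        return NULL;
    }
    // Tell AVSampleBufferDisplayLayer to display the frame as soon as it is enqueued.
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
    CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

    // Created with the Create rule, so the caller owns (and must release) the buffer.
    return sampleBuffer;
}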

Changes to the decoding thread logic

// Convert the frame data to NV12 or RGB24, depending on the configuration
int pictRet = sws_scale(self.img_convert_ctx, (const uint8_t* const*)video_frame->data, video_frame->linesize, 0, self.vheight, self.pFrameYUV->data, self.pFrameYUV->linesize);
    
if (pictRet <= 0) {
    av_frame_free(&video_frame);
    return ;
}
    
CMSampleBufferRef sampleBuffer = [self sampleBufferFromAVFrame:self.pFrameYUV w:self.vwidth h:self.vheight];
    
// Get the frame duration
const double frameDuration = av_frame_get_pkt_duration(video_frame) * self.videoTimeBase;
av_frame_free(&video_frame);
// Build the frame model
MRVideoFrame *frame = [[MRVideoFrame alloc]init];
frame.duration = frameDuration;
frame.sampleBuffer = sampleBuffer;
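
A side note: the av_frame_get_pkt_duration accessor is deprecated in newer FFmpeg releases; assuming a reasonably recent FFmpeg, the same value can be read straight from the frame:

// Read the packet duration directly from the AVFrame instead of the deprecated accessor
// (the field is pkt_duration in older releases; newer FFmpeg also exposes it as duration).
const double frameDuration = video_frame->pkt_duration * self.videoTimeBase;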

Rendering

- (void)displayVideoFrame:(MRVideoFrame *)frame
{
    [self.sampleBufferDisplayLayer enqueueSampleBuffer:frame.sampleBuffer];
}
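
One caveat worth noting: AVSampleBufferDisplayLayer can enter a failed state (for example after the app comes back from the background), after which enqueued frames are silently dropped. A hedged sketch of a slightly more defensive variant:

- (void)displayVideoFrame:(MRVideoFrame *)frame
{
    // Recover the layer if a previous enqueue failed (e.g. after backgrounding).
    if (self.sampleBufferDisplayLayer.status == AVQueuedSampleBufferRenderingStatusFailed) {
        [self.sampleBufferDisplayLayer flush];
    }
    if (self.sampleBufferDisplayLayer.isReadyForMoreMediaData) {
        [self.sampleBufferDisplayLayer enqueueSampleBuffer:frame.sampleBuffer];
    }
}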

For the complete logic, open the project and run it.

Analysis

USE_PIXEL_BUFFER_POLL 1 means a CVPixelBufferPoolRef is created, and subsequent CVPixelBuffers are produced from that pool. The tests show the pool is well worth creating: performance improved considerably, and GPU usage even sat at 0% at times, which honestly left me wondering what was going on. My guess is that the pooled buffers let the CPU and GPU share the same memory, avoiding data copies and saving a lot of time, which in turn boosts efficiency.
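
For reference, here is a minimal sketch of what the pool-backed conversion inside MRConvertUtil presumably looks like (hypothetical; the real implementation is in the project). With a pool, IOSurface-backed buffers are reused instead of allocated per frame, and the NV12 planes are copied row by row because the CVPixelBuffer stride can differ from the AVFrame linesize.

+ (CVPixelBufferRef)pixelBufferFromAVFrame:(AVFrame *)frame w:(int)w h:(int)h opt:(CVPixelBufferPoolRef)pool
{
    CVPixelBufferRef pixelBuffer = NULL;
    if (pool) {
        // Reuse an IOSurface-backed buffer from the pool (cheap, no per-frame allocation).
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer);
    } else {
        // Fall back to creating a fresh buffer for every frame (noticeably more expensive).
        NSDictionary *attrs = @{(__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
        CVPixelBufferCreate(kCFAllocatorDefault, w, h,
                            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                            (__bridge CFDictionaryRef)attrs, &pixelBuffer);
    }
    if (!pixelBuffer) {
        return NULL;
    }
    // Copy the two NV12 planes (Y, then interleaved CbCr) row by row, since the
    // CVPixelBuffer's bytes-per-row may not match the AVFrame's linesize.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (int plane = 0; plane < 2; plane++) {
        uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        const size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        const uint8_t *src = frame->data[plane];
        const int srcStride = frame->linesize[plane];
        const int rows = (plane == 0) ? h : h / 2;
        for (int r = 0; r < rows; r++) {
            memcpy(dst + r * dstStride, src + r * srcStride, MIN((size_t)srcStride, dstStride));
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Hand back an autoreleased buffer so the caller does not have to CFRelease it.
    return (CVPixelBufferRef)CFAutorelease(pixelBuffer);
}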

As of now we can hit a comfortable 60 fps without going through OpenGL at all. The iOS APIs are seriously capable!