iOS Metal Image Rendering

Starting from the Data Source

VideoToolbox decodes into CVPixelBufferRef, a format whose backing memory is shared between the CPU and GPU. A rendering path without it would spend heavily on copying large amounts of data back and forth between CPU and GPU; because a CVPixelBufferRef lives in that shared memory (an IOSurface), those copies are skipped and performance improves substantially.

With the release of the epoch-making iPhone 5s, Apple opened up its hardware codec APIs in iOS 8. The 5s carried the A7, Apple's first self-designed dual-core chip with a 64-bit architecture. Even so, the bitstream formats the hardware decoder supports remain quite limited: only H.264 and HEVC.

So to support more bitstream formats there is no choice but to implement a software-decoding path. Software-decoded output doesn't live in memory shared between the GPU and main memory, so for rendering consistency and to keep later effects processing simple, the formats can be unified to NV12 — kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange in Apple's naming.

This is easy to do with libyuv:

    // Copy the software-decoded I420 frame into the NV12 CVPixelBufferRef.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t bytesPerRowY = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t bytesPerRowUV = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    void* y = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    void* uv = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    
    // I420 planes are packed back to back: Y, then U (w*h/4), then V (w*h/4).
    const uint8_t* src_y = frame.m_pData[0];
    const uint8_t* src_u = frame.m_pData[0] + frame.m_width*frame.m_height;
    const uint8_t* src_v = frame.m_pData[0] + frame.m_width*frame.m_height*5/4;
    
    // libyuv interleaves the separate U and V planes into NV12's single UV plane.
    I420ToNV12(src_y, frame.m_width,
               src_u, frame.m_width/2,
               src_v, frame.m_width/2,
               (uint8_t *)y, (int)bytesPerRowY,
               (uint8_t *)uv, (int)bytesPerRowUV,
               frame.m_width, frame.m_height);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
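
The snippet above assumes the destination pixelBuffer already exists. A minimal sketch of allocating it — attaching (empty) IOSurface properties is what makes the buffer CPU/GPU-sharable so Metal can texture from it without a copy:

    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault,
                                       frame.m_width, frame.m_height,
                                       kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                       (__bridge CFDictionaryRef)attrs,
                                       &pixelBuffer);
    if (ret != kCVReturnSuccess)
    {
        // allocation failed; skip this frame
    }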

From this point on, the player's output format is always CVPixelBufferRef, and next comes the rendering flow.

The Render Thread

The rendering context is thread-bound, and the main thread usually has more important UI and I/O work to handle, so rendering shouldn't occupy it. Instead we create a dedicated render thread driven by a CADisplayLink — a timer defined in QuartzCore that is synchronized with the system refresh rate, firing 60 times per second by default.

Like this:

    self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(renderLoop)];
    [_displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
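
One caveat: a CADisplayLink only fires while the run loop it was added to is actually running, and a freshly created thread's run loop does not run by itself. A sketch of hosting the display link on a dedicated thread, where startRenderThread, renderThreadMain, and the renderThread property are hypothetical helpers:

    - (void)startRenderThread
    {
        self.renderThread = [[NSThread alloc] initWithTarget:self
                                                    selector:@selector(renderThreadMain)
                                                      object:nil];
        [self.renderThread start];
    }
    
    - (void)renderThreadMain
    {
        @autoreleasepool {
            self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(renderLoop)];
            [self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
            [[NSRunLoop currentRunLoop] run]; // keeps the thread alive to service the display link
        }
    }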

The renderLoop function is responsible for pulling from the CVPixelBufferRef queue. Getting no frame means the player is probably buffering, or A/V sync is throttling the frame rate, so just give up this render pass. Once a frame is obtained, it is dispatched to the renderer, which starts the pipeline, applies effects to the data, and finally displays it. A sketch follows.
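
Here frameQueue, dequeueFrame, and renderer are placeholder names for the player's own components, not MetalPetal API:

    - (void)renderLoop
    {
        CVPixelBufferRef pixelBuffer = [self.frameQueue dequeueFrame]; // hypothetical queue API
        if (pixelBuffer == NULL)
        {
            return; // buffering, or A/V sync is pacing the frame rate: skip this vsync
        }
        [self.renderer renderPixelBuffer:pixelBuffer]; // renderer starts the pipeline and presents
        CVPixelBufferRelease(pixelBuffer);
    }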

The Player's Image-Processing Boundary

Generally a player doesn't handle very complex effects; editing apps keep a separate toolchain dedicated to effects work. For a player, the following capabilities are usually enough: rotation, mirroring, translation, Gaussian edge blur, quality enhancement, subtitles, danmaku (bullet comments), screenshots, recording, and external rendering. Each is expanded on below.

The Rendering Engine

iOS supports OpenGL ES, but at WWDC 2018 Apple shipped the final iOS 12 and announced the deprecation of OpenGL ES, saying later iOS releases would gradually migrate the underlying frameworks toward Metal. Very Apple: push the in-house Metal engine hard, claiming 2x the 2D rendering efficiency of OpenGL ES and an astonishing roughly 10x for 3D. In my own testing those numbers hold up reasonably well, so this article focuses on Metal.

  • Operating systems: iOS and macOS
  • Hardware: A7 or later (arm64), i.e. iPhone 5s and newer; x86 is not supported
  • System versions: Metal requires iOS 8; MetalKit requires iOS 9
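
Checking for Metal support at runtime is trivial; a minimal sketch (the fallback path is up to the app):

    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    if (device == nil)
    {
        // Pre-A7 hardware or an unsupported environment: fall back to OpenGL ES
    }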

The Render Pipeline

Of the stages below, the vertex shader, geometry shader, and fragment shader are the programmable ones (Metal itself exposes only vertex and fragment functions):

  • Vertex data: vertex positions, texture coordinates, normals, vertex colors, and so on; the primitives these vertices form (points, lines, triangles, etc.) are passed as parameters to draw calls
  • Vertex shader: transforms incoming local coordinates through world, view, and clip space
  • Primitive assembly: assembles the incoming vertices into the specified primitives; clipping and back-face culling happen at this stage
  • Geometry shader: optionally expands each input primitive into additional geometry; after this stage coordinates are mapped to window space
  • Rasterization: converts primitives into discrete screen pixels, producing fragment information
  • Fragment shader: colors each fragment based on that information; lighting, shadows, and similar effects are computed here
  • Test-and-blend stage: scissor test, alpha test, stencil test, and depth test, in that order
  • Framebuffer: the final image is written to the framebuffer and handed to the render buffer to await display
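
In Metal, the two programmable stages are wired up when the pipeline state is built. A minimal sketch, reusing the device from above — vertexShader and fragmentShader are assumed names of functions in the app's .metal source:

    id<MTLLibrary> library = [device newDefaultLibrary];
    MTLRenderPipelineDescriptor *descriptor = [[MTLRenderPipelineDescriptor alloc] init];
    descriptor.vertexFunction = [library newFunctionWithName:@"vertexShader"];     // programmable vertex stage
    descriptor.fragmentFunction = [library newFunctionWithName:@"fragmentShader"]; // programmable fragment stage
    descriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
    NSError *error = nil;
    id<MTLRenderPipelineState> pipelineState = [device newRenderPipelineStateWithDescriptor:descriptor
                                                                                      error:&error];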

Where Metal Beats OpenGL ES on Performance

OpenGL ES

Initialization flow:

  • Configure the CAEAGLLayer properties
  • Configure the EAGLContext
  • Bind the framebuffer and renderbuffer
  • Compile the shaders
  • Create shader handles
  • Link and use the shader program
  • Create reusable textures

Rendering one frame:

  • Set the current EAGLContext
  • Create the VBO
  • Use the shader program
  • Bind the framebuffer
  • Bind the renderbuffer
  • Clear the buffer color
  • Set the viewport size
  • Bind the VBO, set shader parameters, bind textures, draw
  • Present to the screen

Metal

iOS 9 introduced MetalKit. The overall flow closely mirrors OpenGL ES's core flow; the differences:

  • Platform: OpenGL ES is cross-platform; Metal runs only on iOS and macOS
  • Programming model: OpenGL ES is procedural and built around a state machine; Metal is object-oriented and built around protocols
  • Shaders: OpenGL ES compiles shaders at runtime; Metal precompiles them to binary and bundles them into the executable
  • Concurrency: not supported in OpenGL ES; supported in Metal
  • Asynchrony: not supported in OpenGL ES; supported in Metal
  • CPU/GPU utilization: OpenGL ES keeps CPU usage high (and the GPU busy too); Metal keeps CPU usage low and GPU utilization high

Apart from running only on Apple platforms, Metal looks like it's all upside. Heh~

Why Metal Is So Good

In OpenGL ES, every resource hangs off the context. Metal is far more fine-grained; let's look at how it decomposes the rendering process:

  • The id<MTLDevice> object is defined as the entry point to the GPU, and resource objects are created directly from it. For id<MTLTexture> and id<MTLBuffer> objects the pointer is immutable while the memory it points to — the image data — is mutable, which greatly speeds up addressing

  • Objects such as the id<MTLRenderPipelineState> are created from state-descriptor objects; both the pointer and the object are immutable, so pipeline state can be preset and the flow planned in advance. OpenGL ES, by contrast, has to check before every draw call whether the current context changed, and generate and bind texture data each time — another place Metal lowers energy use

  • Metal manages the entire rendering process with command queues. The id<MTLCommandQueue> is created from the single id<MTLDevice>, and all draw work is submitted to the queue as id<MTLCommandBuffer> objects. The queue is maintained by the system, which can merge and batch commands. Metal provides four command encoder types: render, blit, compute, and parallel render. Commands must be encoded before submission to the queue — that part is the CPU's job — after which execution is handed entirely to the GPU. Once again Metal cuts CPU cost

  • MTLParallelRenderCommandEncoder is a multithreaded command encoder: commands submitted to it are encoded across multiple threads automatically, and it is thread-safe. Multithreaded encoding speeds things up; on multi-core CPUs this step spends more CPU to make the next stage faster

  • Metal is designed around three in-flight frame buffers, versus OpenGL ES's one, so up to three frames can be submitted to the render buffer ahead of time. The single-buffer model has two problems: the CPU blocks while the GPU is still consuming the previous frame, and the GPU can go idle when the CPU hasn't submitted the next frame yet. Triple buffering raises the ceiling on frames rendered per unit of time, i.e. the FPS cap
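
The standard way to cap in-flight frames at three is a semaphore; a sketch of the common pattern (not MetalPetal's exact implementation — commandQueue and drawable are assumed to be in scope):

    static const NSUInteger kMaxInflightFrames = 3;
    dispatch_semaphore_t inflightSemaphore = dispatch_semaphore_create(kMaxInflightFrames);

    // Per frame, on the render thread:
    dispatch_semaphore_wait(inflightSemaphore, DISPATCH_TIME_FOREVER); // blocks only when 3 frames are queued
    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
    // ... encode this frame's commands ...
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
        dispatch_semaphore_signal(inflightSemaphore); // GPU finished a frame; free a slot
    }];
    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];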

The Player's General Image-Processing Capabilities

The rest of this article walks through the player's image-processing features on top of MetalPetal (an excellent open-source Metal framework): rotation, mirroring, translation, Gaussian edge blur, quality enhancement, subtitles, danmaku, screenshots, recording, and external rendering.

I made the following changes to MetalPetal:

MTIContext.h

// rotation modes
typedef NS_ENUM(NSUInteger, RotateMode)
{
    RotateModeNone,         // upright
    RotateModeMirror,       // mirrored
    RotateMode90,           // 90 or -270
    RotateMode180,          // 180
    RotateMode270           // 270 or -90
};

@property (nonatomic) RotateMode rotateMode;
@property (nonatomic) int videoWidth;
@property (nonatomic) int videoHeight;
@property (nonatomic) CGFloat viewOffsetX; // render offset
@property (nonatomic) CGFloat viewOffsetY;
@property (nonatomic) BOOL blendEnable;    // alpha blending

The renderImage function in MTIContext (Rendering), supporting rotation, mirroring, and translation:

- (BOOL)renderImage:(MTIImage *)image toDrawableWithRequest:(MTIDrawableRenderingRequest *)request error:(NSError * _Nullable __autoreleasing *)inOutError {
    [self lockForRendering];
    @MTI_DEFER {
        [self unlockForRendering];
    };
    
    MTIImageRenderingContext *renderingContext = [[MTIImageRenderingContext alloc] initWithContext:self];
    
    NSError *error = nil;
    id<MTIImagePromiseResolution> resolution = [renderingContext resolutionForImage:image error:&error];
    @MTI_DEFER {
        [resolution markAsConsumedBy:self];
    };
    if (error) {
        if (inOutError) {
            *inOutError = error;
        }
        return NO;
    }
    
    MTLRenderPassDescriptor *renderPassDescriptor = [request.drawableProvider renderPassDescriptorForRequest:request];
    if (renderPassDescriptor == nil) {
        if (inOutError) {
            *inOutError = MTIErrorCreate(MTIErrorEmptyDrawable, nil);
        }
        return NO;
    }
    
    float heightScaling = 1.0;
    float widthScaling = 1.0;
    MTKView *renderView = (MTKView *)request.drawableProvider;
    CGSize viewSize = renderView.bounds.size;
    
    CGFloat width = viewSize.width;
    CGFloat height = viewSize.height;
    CGFloat videoWidth = self.videoWidth;
    CGFloat videoHeight = self.videoHeight;
    
    CGSize drawEnableSize;
    
    CGFloat viewWHProportion = width/height;
    CGFloat videoWHProportion = videoWidth/videoHeight;
    
    if (viewWHProportion >= videoWHProportion)
    {
        drawEnableSize.width = videoHeight * viewWHProportion;
        drawEnableSize.height = videoHeight;
    }
    else
    {
        drawEnableSize.width = videoWidth;
        drawEnableSize.height = videoWidth / viewWHProportion;
    }
    switch (request.resizingMode) {
        case MTIDrawableRenderingResizingModeScale: {
            widthScaling = 1.0;
            heightScaling = 1.0;
        }; break;
        case MTIDrawableRenderingResizingModeAspect:
        {
            widthScaling = videoWidth / drawEnableSize.width;
            heightScaling = videoHeight / drawEnableSize.height;
        }; break;
        case MTIDrawableRenderingResizingModeAspectFill:
        {
            widthScaling = drawEnableSize.height / videoHeight;
            heightScaling = drawEnableSize.width / videoWidth;
            if (request.shotEdgeVisible)
            {
                if (videoWHProportion > 1)
                {
                    widthScaling = widthScaling/heightScaling;
                    heightScaling = 1;
                }
                else
                {
                    heightScaling = heightScaling/widthScaling;
                    widthScaling = 1;
                }
            }
        }; break;
    }
    
    MTIVertices *vertices = nil;
    switch (self.rotateMode)
    {
        case RotateModeNone:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
        case RotateModeMirror:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
        case RotateMode90:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
        case RotateMode180:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
        case RotateMode270:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
        default:
        {
            MTIVertices *vertice = [[MTIVertices alloc] initWithVertices:(MTIVertex []){
                { .position = {-widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 0, 1 } },
                { .position = {widthScaling, -heightScaling, 0, 1} , .textureCoordinate = { 1, 1 } },
                { .position = {-widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 0, 0 } },
                { .position = {widthScaling, heightScaling, 0, 1} , .textureCoordinate = { 1, 0 } }
            } count:4 primitiveType:MTLPrimitiveTypeTriangleStrip];
            vertices = vertice;
        }
            break;
    }
    
    NSParameterAssert(image.alphaType != MTIAlphaTypeUnknown);
    
    //iOS drawables always require premultiplied alpha.
    MTIRenderPipelineKernel *kernel;
    if (image.alphaType == MTIAlphaTypeNonPremultiplied) {
        kernel = MTIContext.premultiplyAlphaKernel;
    } else {
        kernel = MTIContext.passthroughKernel;
    }
    
    MTIRenderPipelineKernelConfiguration *configuration = [[MTIRenderPipelineKernelConfiguration alloc] initWithColorAttachmentPixelFormat:renderPassDescriptor.colorAttachments[0].texture.pixelFormat];
    MTIRenderPipeline *renderPipeline = [self kernelStateForKernel:kernel configuration:configuration error:&error];
    if (error) {
        if (inOutError) {
            *inOutError = error;
        }
        return NO;
    }
    
    id<MTLSamplerState> samplerState = [renderingContext.context samplerStateWithDescriptor:image.samplerDescriptor error:&error];
    if (error) {
        if (inOutError) {
            *inOutError = error;
        }
        return NO;
    }
    
    __auto_type commandEncoder = [renderingContext.commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
    
    [commandEncoder setViewport:(MTLViewport){self.viewOffsetX, self.viewOffsetY, renderPassDescriptor.colorAttachments[0].texture.width, renderPassDescriptor.colorAttachments[0].texture.height, -1.0, 1.0 }];
    
    [commandEncoder setRenderPipelineState:renderPipeline.state];
    [commandEncoder setVertexBytes:vertices.bufferBytes length:vertices.bufferLength atIndex:0];
    
    [commandEncoder setFragmentTexture:resolution.texture atIndex:0];
    [commandEncoder setFragmentSamplerState:samplerState atIndex:0];
    
    [commandEncoder drawPrimitives:vertices.primitiveType vertexStart:0 vertexCount:vertices.vertexCount];
    [commandEncoder endEncoding];
    
    id<CAMetalDrawable> drawable = [request.drawableProvider drawableForRequest:request];
    [renderingContext.commandBuffer presentDrawable:drawable];
    
    [renderingContext.commandBuffer commit];
    [renderingContext.commandBuffer waitUntilScheduled];
    
    return YES;
}

The newKernelStateWithContext function of MTIRenderPipelineKernel, supporting alpha-channel rendering:

if (context.blendEnable)
{
    colorAttachmentDescriptor.blendingEnabled = YES; // enable blending
    colorAttachmentDescriptor.rgbBlendOperation = MTLBlendOperationAdd;
    colorAttachmentDescriptor.alphaBlendOperation = MTLBlendOperationAdd;
    colorAttachmentDescriptor.sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
    colorAttachmentDescriptor.sourceAlphaBlendFactor = MTLBlendFactorSourceAlpha;
    colorAttachmentDescriptor.destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
    colorAttachmentDescriptor.destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
}
else
{
    colorAttachmentDescriptor.blendingEnabled = NO;
}

Rotation

- (void)rotateTex:(int)angle
{
    if(angle == 90 || angle == -270)
    {
        _context.rotateMode = RotateMode90;
    }
    else if (angle == -90 || angle == 270)
    {
        _context.rotateMode = RotateMode270;
    }
    else if (angle == -180 || angle == 180)
    {
        _context.rotateMode = RotateMode180;
    }
    else
    {
        _context.rotateMode = RotateModeNone;
    }
}

Mirroring

- (void)setMirror:(BOOL)mirror
{
    if (mirror)
    {
        _context.rotateMode = RotateModeMirror;
    }
    else
    {
        _context.rotateMode = RotateModeNone;
    }
}

Translation

- (void)translateX:(float)x
{
    _context.viewOffsetX = x * [UIScreen mainScreen].scale;
}

- (void)translateY:(float)y
{
    _context.viewOffsetY = y * [UIScreen mainScreen].scale;
}

Gaussian Edge Blur

- (void)setEnableEdgeBlur:(BOOL)enableEdgeBlur
{
    _enableEdgeBlur = enableEdgeBlur;
    if (_enableEdgeBlur)
    {
        _resizingMode = MTIDrawableRenderingResizingModeAspectFill;
        if (!_boxBlurFilter)
        {
            self.boxBlurFilter = [[MTIMPSBoxBlurFilter alloc] init];
            _boxBlurFilter.size = 100;
        }
        if (!_layerFilter)
        {
            self.layerFilter = [[MTIMultilayerCompositingFilter alloc] init];
        }
    }
    else
    {
        // Re-assigning the display mode re-runs its setter, which restores
        // the original resizing mode once edge blur is disabled.
        self.m_DisplayMode = _m_DisplayMode;
    }
}

- (MTIImage *)blurImage:(MTIImage *)inputImage
{
    // `bounds` refers to the target view's bounds (the enclosing context is omitted in this snippet)
    CGFloat width = bounds.size.width;
    CGFloat height = bounds.size.height;
    if (_context.rotateMode == RotateMode90 || _context.rotateMode == RotateMode270)
    {
        width = bounds.size.height;
        height = bounds.size.width;
    }
    
    CGFloat videoWidth = inputImage.size.width;
    CGFloat videoHeight = inputImage.size.height;
    if (_bufferExporter.sar_num && _bufferExporter.sar_den)
    {
        // make sure the side with the larger SAR term is the one that gets scaled up
        if (_bufferExporter.sar_num >= _bufferExporter.sar_den)
        {
            videoWidth = inputImage.size.width *_bufferExporter.sar_num/_bufferExporter.sar_den;
            videoHeight = inputImage.size.height;
        }
        else
        {
            videoWidth = inputImage.size.width;
            videoHeight = inputImage.size.height *_bufferExporter.sar_den/_bufferExporter.sar_num;
        }
    }
    
    CGPoint layerPoint = CGPointMake(inputImage.size.width * 0.5, inputImage.size.height * 0.5);
    CGSize layerSize;
    
    CGFloat viewWHProportion = width/height;
    CGFloat videoWHProportion = videoWidth/videoHeight;
    
    if (viewWHProportion >= videoWHProportion)
    {
        layerSize.width = videoWidth * height/(width/videoWHProportion);
        layerSize.height = videoHeight * height/(width/videoWHProportion);
    }
    else
    {
        layerSize.width = videoWidth * width/(height*videoWHProportion);
        layerSize.height = videoHeight * width/(height*videoWHProportion);
    }
    
    // Blur the full frame for the background, then composite the sharp frame on top.
    _boxBlurFilter.inputImage = inputImage;
    MTIImage *backgroundImage = [_boxBlurFilter.outputImage imageWithCachePolicy:MTIImageCachePolicyPersistent];
    
    MTILayer *layer = [[MTILayer alloc] initWithContent:inputImage layoutUnit:MTILayerLayoutUnitPixel position:layerPoint size:layerSize rotation:0 opacity:1 blendMode:MTIBlendModeNormal];
    NSArray *layers = @[layer];
    _layerFilter.inputBackgroundImage = backgroundImage;
    _layerFilter.layers = layers;
    MTIImage *outputImage = [_layerFilter.outputImage imageWithCachePolicy:MTIImageCachePolicyPersistent];
    
    return outputImage;
}

Quality Enhancement

- (MTIImage *)enhancedImageQuality:(MTIImage *)inputImage
{
    // Chain saturation → brightness → contrast; each filter consumes the previous output.
    _saturationFilter.inputImage = inputImage;
    _brightnessFilter.inputImage = _saturationFilter.outputImage;
    _contrastFilter.inputImage = _brightnessFilter.outputImage;
    
    MTIImage *outputImage = _contrastFilter.outputImage;
    
    return outputImage;
}
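
The three filters are stock MetalPetal filters; a sketch of how they might be created, with illustrative (untuned) parameter values:

self.saturationFilter = [[MTISaturationFilter alloc] init];
self.saturationFilter.saturation = 1.1;  // >1 enriches color slightly
self.brightnessFilter = [[MTIBrightnessFilter alloc] init];
self.brightnessFilter.brightness = 0.05; // small positive lift
self.contrastFilter = [[MTIContrastFilter alloc] init];
self.contrastFilter.contrast = 1.1;      // >1 deepens contrast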

Subtitles and Danmaku

Subtitles and danmaku come from files and are decoded separately with FFmpeg plus libass, producing a texture that is either blended onto the current video frame or added as a separate layer.
VOD and live danmaku differ slightly: the VOD case works much like subtitles, while live requires some real-time responsiveness, but the rendering path is the same.

- (void)addBlendBuffer:(NSData *)buffer width:(NSUInteger)width height:(NSUInteger)height
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CIImage *outImage = [CIImage imageWithBitmapData:buffer
                                         bytesPerRow:width * 4
                                                size:CGSizeMake(width, height)
                                              format:kCIFormatRGBA8
                                          colorSpace:colorSpace];
    CGColorSpaceRelease(colorSpace); // balance the Create-rule reference
    self.blendBuffer = [[MTIImage alloc] initWithCIImage:outImage];
    if (!self.blendFilter) // lazy init; a dispatch_once token must be static, so a nil check is safer here
    {
        self.blendFilter = [[MTIMultilayerCompositingFilter alloc] init];
    }
}

- (MTIImage *)blendImage:(MTIImage *)inputImage
{
    if (!_blendBuffer)
    {
        return inputImage;
    }
    CGPoint layerPoint = CGPointMake(inputImage.size.width * 0.5, inputImage.size.height * 0.5);
    CGSize layerSize = CGSizeMake(_blendBuffer.size.width, _blendBuffer.size.height);
    MTIImage *backgroundImage = inputImage;
    MTILayer *layer = [[MTILayer alloc] initWithContent:_blendBuffer layoutUnit:MTILayerLayoutUnitPixel position:layerPoint size:layerSize rotation:0 opacity:1 blendMode:MTIBlendModeNormal];
    NSArray *layers = @[layer];
    _blendFilter.inputBackgroundImage = backgroundImage;
    _blendFilter.layers = layers;
    MTIImage *outputImage = [_blendFilter.outputImage imageWithCachePolicy:MTIImageCachePolicyPersistent];
    return outputImage;
}

Screenshots, Recording, and External Rendering

The flow here is simple: take the video frame that is about to be rendered, then generate a UIImage, hand it to the recording system, or output the CVPixelBufferRef to an external consumer. The related code is lengthy and isn't listed here; one sketch follows.
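
As an example, a sketch of the screenshot path using plain Core Image (not MetalPetal-specific) to turn the pending CVPixelBufferRef into a UIImage:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
UIImage *snapshot = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage follows the Create rule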
