VideoToolBox Decoding H.264

For decoding H.264 with VideoToolBox, this time we use ffmpeg to extract the video stream of a video file, i.e. the H.264-encoded elementary stream (no audio).

The command is as follows:

ffmpeg -i /Users/pengchao/Downloads/download.mp4 -codec copy  -f h264 output.h264

1. Extracting NALU units

In the demo, we first read the H.264 file into memory and use a timer to read one NALU at a time. The key step is locating NALU boundaries in the byte stream: as is well known, each NALU is preceded by a start code, either 0x00 0x00 0x00 0x01 or 0x00 0x00 0x01, which separates the NALUs. The diagram below illustrates how to move pointers to find a NALU and obtain its length, so that we end up with one complete NALU.

[Figure: locating a NALU by scanning for the next start code]
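The demo below assumes the 4-byte start code throughout, but some streams mix the 3-byte and 4-byte forms. A minimal scanner that handles both, as a sketch (this helper is not part of the demo's code):

static long findNextStartCode(const uint8_t *buf, long size, int *startCodeLen) {
    // Scan left to right; at each zero run, test the 4-byte form before the 3-byte one.
    for (long i = 0; i + 3 < size; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00) {
            if (buf[i + 2] == 0x00 && buf[i + 3] == 0x01) { *startCodeLen = 4; return i; }
            if (buf[i + 2] == 0x01)                       { *startCodeLen = 3; return i; }
        }
    }
    return -1; // no start code in the buffered data
}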

The demo's logic is shown in the code below:

- (void)tick {
    
    dispatch_sync(_decodeQueue, ^{
        // 1. Obtain packetBuffer and packetSize
        packetSize = 0;
        if (packetBuffer) {
            free(packetBuffer);
            packetBuffer = NULL;
        }
        if (_inputSize < _inputMaxSize && _inputStream.hasBytesAvailable) { // top up the buffer; usually this fills it so that _inputSize == _inputMaxSize
            _inputSize += [_inputStream read:_inputBuffer + _inputSize maxLength:_inputMaxSize - _inputSize];
        }
        if ((_inputSize > 4) && (memcmp(_inputBuffer, startCode, 4) == 0)) {
            
            uint8_t *pStart = _inputBuffer + 4;         // pStart: scan pointer, starts just past the start code
            uint8_t *pEnd = _inputBuffer + _inputSize;  // pEnd: end of the buffered data
            while (pStart != pEnd) {                    // a simple way to get this NALU's length: search for the next 0x00000001
                if (memcmp(pStart - 3, startCode, 4) == 0) {
                    packetSize = pStart - _inputBuffer - 3;
                    if (packetBuffer) {
                        free(packetBuffer);
                        packetBuffer = NULL;
                    }
                    packetBuffer = malloc(packetSize);
                    memcpy(packetBuffer, _inputBuffer, packetSize); // copy the packet into its own buffer
                    memmove(_inputBuffer, _inputBuffer + packetSize, _inputSize - packetSize); // shift the remaining data to the front
                    _inputSize -= packetSize;
                    break;
                }
                else {
                    ++pStart;
                }
            }
        }
        if (packetBuffer == NULL || packetSize == 0) {
            [self endDecode];
            return;
        }
        // With the NALU's start address and length in hand, parse it.
        // (The parsing switch, still inside this block, is shown in section 2.)
    });
}
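The article doesn't show how -tick is driven. A plausible setup using the instance variables above (startDecodeWithPath: and _timer are names invented for this sketch):

- (void)startDecodeWithPath:(NSString *)path {
    _inputMaxSize = 1024 * 1024;   // size of the working buffer; an arbitrary choice
    _inputBuffer  = malloc(_inputMaxSize);
    _inputSize    = 0;
    _inputStream  = [NSInputStream inputStreamWithFileAtPath:path];
    [_inputStream open];
    // Read one NALU per tick, roughly at the video's frame rate
    _timer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 25.0
                                              target:self
                                            selector:@selector(tick)
                                            userInfo:nil
                                             repeats:YES];
}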

2. Getting the SPS and PPS

In the previous article, the SPS and PPS were the first data we saved, so when reading the file stream we know the first and second NALUs are the SPS and PPS respectively; these are exactly the parameters needed to create the VideoToolBox session.
Before parsing NALUs, it is worth revisiting the structure of an H.264 bitstream: an H.264 stream is a sequence of NAL units, and SPS, PPS, IDR, and slice data are each just particular types of NAL unit.

As shown below:

[Figure: H.264 bitstream structure, a sequence of NAL units]

So after finding the start code, the first byte that follows is the NALU header, and from it we can tell what type of NALU this is.
The NALU header layout (each field can be read with a bit mask, as sketched after the figure):

  • bit 0: F (forbidden_zero_bit)
  • bits 1-2: NRI (nal_ref_idc)
  • bits 3-7: TYPE (nal_unit_type)

[Figure: NALU header bit layout]
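A quick sketch of reading the three fields, using the packetBuffer layout from section 1 (the header byte sits right after the 4-byte start code):

uint8_t header = packetBuffer[4];
uint8_t fBit   = (header & 0x80) >> 7; // forbidden_zero_bit, must be 0 in a valid stream
uint8_t nri    = (header & 0x60) >> 5; // nal_ref_idc: importance as a reference
uint8_t type   =  header & 0x1F;       // nal_unit_type: e.g. 5 = IDR, 7 = SPS, 8 = PPS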

The NALU type values are defined in the table below; the ones this demo cares about are 5 (IDR slice), 7 (SPS), and 8 (PPS):

[Figure: NALU type table]

The NALU parsing code (the rest of the dispatch block from -tick) is shown below:

        // 2. Replace the packet's leading 4 bytes (the start code) with the NALU length, big-endian.
        // Big-endian: the most significant byte lives at the lowest address.
        // Little-endian: the most significant byte lives at the highest address.
        // Converting between the two is simply a matter of reversing the byte order.
        uint32_t nalSize = (uint32_t)(packetSize - 4);
        uint8_t *pNalSize = (uint8_t *)(&nalSize);
        packetBuffer[0] = pNalSize[3];
        packetBuffer[1] = pNalSize[2];
        packetBuffer[2] = pNalSize[1];
        packetBuffer[3] = pNalSize[0];
        
        // 3. Determine the frame type (per the bitstream structure, the NALU header immediately follows the start code)
        int nalType = packetBuffer[4] & 0x1f;
        switch (nalType) {
            case 0x05:
                // IDR frame: make sure the session exists, then decode
                [self initVideoToolBox];
                [self decodePacket];
                break;
            case 0x07:
                // SPS (skip the 4 length bytes)
                _sps = [NSData dataWithBytes:packetBuffer + 4 length:packetSize - 4];
                break;
            case 0x08:
                // PPS
                _pps = [NSData dataWithBytes:packetBuffer + 4 length:packetSize - 4];
                break;
            default:
                // P/B slice
                [self decodePacket];
                break;
        }
    });   // end of the dispatch_sync block begun in -tick
}
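This length-prefix rewrite converts the Annex-B NALU into the AVCC layout that VideoToolBox expects; the 4 bytes here match the NALUnitHeaderLength passed when creating the format description in section 3. The manual byte shuffle also has a ready-made equivalent in CFByteOrder.h:

uint32_t nalSize = CFSwapInt32HostToBig((uint32_t)(packetSize - 4));
memcpy(packetBuffer, &nalSize, sizeof(nalSize)); // overwrite the start code with the big-endian length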

3. Creating the VideoToolBox session

Once we have the SPS and PPS, we can create the VideoToolBox session.
If we don't have the SPS and PPS, we need xxx to create the VideoToolBox session.

- (void)initVideoToolBox {
    
    if (_decodeSession) {
        return;
    }
    
    // 1. Build the CMVideoFormatDescription from the SPS/PPS parameter sets
    CMFormatDescriptionRef formatDescriptionOut = NULL;
    const uint8_t * const param[2] = {_sps.bytes, _pps.bytes};
    const size_t paramSize[2] = {_sps.length, _pps.length};
    OSStatus formatStatus =
    CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
                                                        2,      // parameter set count (SPS + PPS)
                                                        param,
                                                        paramSize,
                                                        4,      // NAL unit length-prefix size
                                                        &formatDescriptionOut);
    if (formatStatus != noErr) {
        NSLog(@"FormatDescriptionCreate fail");
        return;
    }
    _formatDescriptionOut = formatDescriptionOut;
    
    // 2. Create the VTDecompressionSessionRef
    // Specify the output pixel format
    const void *keys[] = {kCVPixelBufferPixelFormatTypeKey};
    uint32_t t = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
    CFNumberRef pixelFormat = CFNumberCreate(NULL, kCFNumberSInt32Type, &t);
    const void *values[] = {pixelFormat};
    CFDictionaryRef att = CFDictionaryCreate(NULL, keys, values, 1,
                                             &kCFTypeDictionaryKeyCallBacks,
                                             &kCFTypeDictionaryValueCallBacks);
    CFRelease(pixelFormat); // the dictionary holds its own reference now
    
    VTDecompressionOutputCallbackRecord callbackRecord;
    callbackRecord.decompressionOutputCallback = decodeCompressionOutputCallback;
    callbackRecord.decompressionOutputRefCon = (__bridge void * _Nullable)(self);
    
    OSStatus sessionStatus = VTDecompressionSessionCreate(NULL,
                                                          formatDescriptionOut,
                                                          NULL,   // decoder specification
                                                          att,    // destination image buffer attributes
                                                          &callbackRecord,
                                                          &_decodeSession);
    CFRelease(att);
    if (sessionStatus != noErr) {
        NSLog(@"SessionCreate fail");
        [self endDecode];
    }
}
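endDecode is called in several places but never shown; a minimal sketch of the teardown it presumably performs:

- (void)endDecode {
    if (_decodeSession) {
        VTDecompressionSessionInvalidate(_decodeSession);
        CFRelease(_decodeSession);
        _decodeSession = NULL;
    }
    if (_formatDescriptionOut) {
        CFRelease(_formatDescriptionOut);
        _formatDescriptionOut = NULL;
    }
    [_inputStream close];
}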

4. Decoding a NALU

Once we have a frame, we wrap its NSData into the CMSampleBuffer that VideoToolBox expects and feed it to the decoder.

The decoding source is as follows:

// Despite the name (kept from the original), this method decodes one NALU.
- (void)encoderWithData:(NSData *)data {
    if (!_decodeSession) {
        return;
    }
    // 1. Wrap the NALU bytes in a CMBlockBufferRef.
    // Pass kCFAllocatorNull as the block allocator: the memory belongs to the
    // NSData and must not be freed by the block buffer.
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus blockBufferStatus =
    CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                       (void *)data.bytes,
                                       data.length,
                                       kCFAllocatorNull,
                                       NULL,
                                       0,
                                       data.length,
                                       0,
                                       &blockBuffer);
    if (blockBufferStatus != noErr) {
        NSLog(@"BlockBufferCreate fail");
        return;
    }
    // 2. Create the CMSampleBufferRef
    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizeArray[] = {data.length};
    OSStatus sampleBufferStatus =
    CMSampleBufferCreateReady(kCFAllocatorDefault,
                              blockBuffer,
                              _formatDescriptionOut,
                              1,    // number of samples
                              0,    // length of sampleTimingArray
                              NULL, // per-sample timing attributes; not needed here
                              1,    // length of sampleSizeArray
                              sampleSizeArray,
                              &sampleBuffer);
    if (sampleBufferStatus != noErr) {
        NSLog(@"SampleBufferCreate fail");
        CFRelease(blockBuffer);
        return;
    }
    // 3. Submit the frame to the decoder
    VTDecodeFrameFlags flags = 0;
    VTDecodeInfoFlags flagOut = 0;
    OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_decodeSession,
                                                              sampleBuffer,
                                                              flags,
                                                              NULL,
                                                              &flagOut); // receives information about the decode operation
    if (decodeStatus != noErr) {
        NSLog(@"DecodeFrame fail %d", (int)decodeStatus);
    }
    CFRelease(sampleBuffer);
    CFRelease(blockBuffer);
}
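decodePacket, called from the switch in section 2, isn't shown either; presumably it just wraps the current NALU into NSData and forwards it:

- (void)decodePacket {
    NSData *data = [NSData dataWithBytes:packetBuffer length:packetSize];
    [self encoderWithData:data];
}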

5. Getting the decoded pixelBuffer image

The callback invoked after each frame is successfully decoded:


static void decodeCompressionOutputCallback(void * CM_NULLABLE decompressionOutputRefCon,
                                      void * CM_NULLABLE sourceFrameRefCon,
                                      OSStatus status,
                                      VTDecodeInfoFlags infoFlags,
                                      CM_NULLABLE CVImageBufferRef imageBuffer,
                                      CMTime presentationTimeStamp,
                                      CMTime presentationDuration) {
    
    VideoDecoder *self = (__bridge VideoDecoder *)(decompressionOutputRefCon);
    dispatch_queue_t callbackQueue = self->_decodeCallbackQueue;
    
    if (status == noErr && imageBuffer &&
        [self.delegate respondsToSelector:@selector(videoDecoderCallbackPixelBuffer:)]) {
        // Wrap the decoded pixel buffer in a UIImage before handing it to the delegate
        CIImage *ciimage = [CIImage imageWithCVPixelBuffer:imageBuffer];
        UIImage *image = [UIImage imageWithCIImage:ciimage];
        dispatch_async(callbackQueue, ^{
            [self.delegate videoDecoderCallbackPixelBuffer:image];
        });
    }
}
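Converting every frame to a UIImage via CIImage is convenient for a demo but wasteful for real playback. A cheaper sketch replaces the if-block above and hands the pixel buffer to the delegate directly (videoDecoderDidOutputPixelBuffer: is a hypothetical delegate method):

    if (status == noErr && imageBuffer) {
        CVPixelBufferRetain(imageBuffer); // keep the buffer alive beyond the callback
        dispatch_async(callbackQueue, ^{
            // render it, e.g. via AVSampleBufferDisplayLayer or a Metal/OpenGL texture
            [self.delegate videoDecoderDidOutputPixelBuffer:imageBuffer]; // hypothetical
            CVPixelBufferRelease(imageBuffer);
        });
    }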

6. Summary

Source code: https://github.com/hunter858/OpenGL_Study/AVFoundation/VideoToolBox-decoder
