Android GraphicBuffer allocation, transfer, use, and return across CameraService, CameraProvider, and the Camera HAL

When the Android camera system starts a preview or takes a picture, every preview frame and every still capture travels through the stack with a GraphicBuffer as its carrier. This article briefly analyzes how GraphicBuffers are allocated, passed, used, and returned across CameraService, CameraProvider, and the Camera HAL.

Let's start with the GraphicBuffer allocation flow.

1. GraphicBuffer allocation flow

Before starting a preview or capture, the camera app must hand at least one Surface to CameraService.
When a preview or capture starts, the application layer sends a processCaptureRequest down to CameraService. On receiving it, CameraService calls prepareHalRequests, which dequeues a GraphicBuffer from the Surface; that buffer then carries the frame data through allocation, transfer, and return. The prepareHalRequests flow is shown below:
[Figure 1: prepareHalRequests GraphicBuffer allocation flow]
The prepareHalRequests code looks like this:

//frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
status_t Camera3Device::RequestThread::prepareHalRequests() {

    for (size_t i = 0; i < mNextRequests.size(); i++) {
        auto& nextRequest = mNextRequests.editItemAt(i);
        sp<CaptureRequest> captureRequest              = nextRequest.captureRequest;
        camera3_capture_request_t* halRequest          = &nextRequest.halRequest;
        Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;
        ....
        //Insert captureRequest->mOutputStreams.size() empty camera3_stream_buffer_t
        //objects into nextRequest.outputBuffers
        outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
                captureRequest->mOutputStreams.size());
        //Point halRequest->output_buffers at outputBuffers;
        //at this point the entries are still empty
        halRequest->output_buffers = outputBuffers->array();
        //Acquire GraphicBuffers one by one and fill in halRequest->output_buffers
        for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
            //outputStream was created when the streams were configured
            sp<Camera3OutputStreamInterface> outputStream = captureRequest->mOutputStreams.editItemAt(j);
            ....
            //Dequeue a GraphicBuffer and its fence fd from the Surface,
            //then store them in halRequest->output_buffers
            res = outputStream->getBuffer(&outputBuffers->editItemAt(j),
                    captureRequest->mOutputSurfaces[j]);
            ....
            halRequest->num_output_buffers++;
        }
    }

    return OK;
}

The actual GraphicBuffer acquisition happens in Camera3Stream::getBuffer, which in turn calls Camera3OutputStream::getBufferLocked.

Let's look at Camera3OutputStream::getBufferLocked:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
status_t Camera3OutputStream::getBufferLocked(camera3_stream_buffer *buffer,
        const std::vector<size_t>&) {
    ....
    //ANativeWindowBuffer is a base class of GraphicBuffer
    ANativeWindowBuffer* anb;
    int fenceFd = -1;
    status_t res;
    //Dequeue a GraphicBuffer and its fence fd
    res = getBufferLockedCommon(&anb, &fenceFd);
    ....
    //Hand the address of the GraphicBuffer's handle member and the fence fd
    //over to the camera3_stream_buffer;
    //GraphicBuffer::handle is of type native_handle_t*
    handoutBufferLocked(*buffer, &(anb->handle), /*acquireFence*/fenceFd,
                        /*releaseFence*/-1, CAMERA3_BUFFER_STATUS_OK, /*output*/true);
    return OK;
}

Next, let's see how Camera3OutputStream::getBufferLockedCommon acquires the GraphicBuffer:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
status_t Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) {

    ....
    //gotBufferFromManager is false here
    if (!gotBufferFromManager) {
        ....
        //mConsumer is a strong pointer to the Surface the app passed into CameraService;
        //ANativeWindow is a base class of Surface
        sp<ANativeWindow> currentConsumer = mConsumer;
        mLock.unlock();
        ...
        //Dequeue a GraphicBuffer and fence fd via Surface::dequeueBuffer
        res = currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd);
        ...
        mLock.lock();
        ...
    }
    if (res == OK) {
        std::vector<sp<GraphicBuffer>> removedBuffers;
         //Also fetch the list of GraphicBuffers that need to be released: removedBuffers.
         //When CameraService asks CameraProvider for frame data, it passes the
         //newly dequeued GraphicBuffer and removedBuffers down together.
        res = mConsumer->getAndFlushRemovedBuffers(&removedBuffers);
        if (res == OK) {
            //Save removedBuffers into mFreedBuffers
            onBuffersRemovedLocked(removedBuffers);
           ....
        }
    }

    return res;
}

From the analysis above, CameraService obtains a GraphicBuffer and its fence fd through Surface::dequeueBuffer. The deeper allocation path behind Surface::dequeueBuffer is covered in the companion article on what kind of buffer a GraphicBuffer is and how it is allocated, so it is not repeated here. At the same time, Surface::getAndFlushRemovedBuffers returns removedBuffers, the list of GraphicBuffers the Surface wants released.

Before CameraService can ask CameraProvider for frame data, the GraphicBuffer and its fence must be stored in halRequest->output_buffers.
That happens in Camera3IOStreamBase::handoutBufferLocked:

//frameworks\av\services\camera\libcameraservice\device3\Camera3IOStreamBase.cpp
//handle is the address of the GraphicBuffer's handle, i.e. &(anb->handle)
//acquireFence is the fence fd that goes with the GraphicBuffer
void Camera3IOStreamBase::handoutBufferLocked(camera3_stream_buffer &buffer,
                                              buffer_handle_t *handle,
                                              int acquireFence,
                                              int releaseFence,
                                              camera3_buffer_status_t status,
                                              bool output) {
    ...
    buffer.stream = this;
    //Store the address of ANativeWindowBuffer::handle (type native_handle_t*)
    //into camera3_stream_buffer::buffer (type buffer_handle_t*,
    //which is effectively native_handle_t**)
    buffer.buffer = handle;
    //Store the GraphicBuffer's fence fd into acquire_fence
    buffer.acquire_fence = acquireFence;
    buffer.release_fence = releaseFence;
    buffer.status = status;
  ...
}

That completes the GraphicBuffer allocation flow. In short:

  1. Surface::dequeueBuffer returns a GraphicBuffer and its fence fd
  2. The GraphicBuffer and its fence fd are stored in halRequest->output_buffers
  3. Surface::getAndFlushRemovedBuffers returns removedBuffers, the list of GraphicBuffers the Surface wants released

After RequestThread::prepareHalRequests, CameraService has stored the dequeued GraphicBuffer and its fence fd in the output_buffers of halRequest (of type camera3_capture_request_t*), completing the halRequest preparation. Next, let's look at how halRequest is handed to CameraProvider.

2. Passing the GraphicBuffer from CameraService to CameraProvider

When CameraService requests frame data from CameraProvider, it sends the previously dequeued GraphicBuffers together with the removedBuffers:

//frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
status_t Camera3Device::HalInterface::processBatchCaptureRequests(
        std::vector<camera3_capture_request_t*>& requests,
        /*out*/uint32_t* numRequestProcessed) {
    ....
    //1. Preparing captureRequests
    hardware::hidl_vec<device::V3_2::CaptureRequest> captureRequests;
    size_t batchSize = requests.size();
    captureRequests.resize(batchSize);
    std::vector<native_handle_t*> handlesCreated;
    //Convert the camera3_capture_request_t requests into
    //device::V3_2::CaptureRequest captureRequests.
    //This also converts the camera3_stream_buffer_t output_buffers
    //into StreamBuffer objects.
    for (size_t i = 0; i < batchSize; i++) {
        //The conversion function; analyzed in detail below
        wrapAsHidlRequest(requests[i], /*out*/&captureRequests[i], /*out*/&handlesCreated);
    }
    //2. Preparing cachesToRemove
    //Convert the entries in mFreedBuffers into BufferCache objects
    std::vector<device::V3_2::BufferCache> cachesToRemove;
    {
        .....
        //mFreedBuffers was filled while dequeuing GraphicBuffers
        for (auto& pair : mFreedBuffers) {
            // The stream might have been removed since onBufferFreed
            if (mBufferIdMaps.find(pair.first) != mBufferIdMaps.end()) {
                cachesToRemove.push_back({pair.first, pair.second});
            }
        }
        //Clear mFreedBuffers
        mFreedBuffers.clear();
    }
    .....
    //IPC from CameraService to CameraProvider:
    //captureRequests and cachesToRemove are sent down together.
    //mHidlSession is the BpHwCameraDeviceSession proxy object
    auto err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
            [&status, &numRequestProcessed] (auto s, uint32_t n) {
                status = s;
                *numRequestProcessed = n;
            });
    ....
    return CameraProviderManager::mapToStatusT(status);
}

As the code shows, two steps precede the frame-data request to CameraProvider:

  1. Save each halRequest into the hidl_vec<device::V3_2::CaptureRequest> captureRequests queue
  2. Save the removedBuffers into the std::vector<device::V3_2::BufferCache> cachesToRemove queue

2.1 Saving halRequest into the captureRequests queue

This step is done by Camera3Device::HalInterface::wrapAsHidlRequest:

//frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
void Camera3Device::HalInterface::wrapAsHidlRequest(camera3_capture_request_t* request,
        /*out*/device::V3_2::CaptureRequest* captureRequest,
        /*out*/std::vector<native_handle_t*>* handlesCreated) {
    ....
    //frameNumber is set to request->frame_number
    captureRequest->frameNumber = request->frame_number;
    //fmqSettingsSize is set to 0
    captureRequest->fmqSettingsSize = 0;
    {
        .....
        //Convert each camera3_stream_buffer_t into a StreamBuffer
        captureRequest->outputBuffers.resize(request->num_output_buffers);
        for (size_t i = 0; i < request->num_output_buffers; i++) {
            //src is a camera3_stream_buffer_t*
            const camera3_stream_buffer_t *src = request->output_buffers[i];
            //dst is a StreamBuffer
            StreamBuffer &dst = captureRequest->outputBuffers[i];
            //Fetch the streamId
            int32_t streamId = Camera3Stream::cast(src->stream)->getId();
            //Equivalent to native_handle_t* buf = *(src->buffer);
            //this is the anw->handle obtained from Surface::dequeueBuffer
            buffer_handle_t buf = *(src->buffer);
            //Check whether this buffer was already assigned an id for streamId:
            //returns (false, bufferId) if so, (true, newBufferId) if not.
            //A Surface typically recycles only two or three buffers, so the
            //handle does not need to cross the boundary on every request;
            //after the first transfer the id alone is enough.
            auto pair = getBufferId(buf, streamId);
            bool isNewBuffer = pair.first;
            dst.streamId = streamId;
            //Set the bufferId
            dst.bufferId = pair.second;
            //dst.buffer is of type native_handle_t* and may be null;
            //when null, the HAL looks the buffer up by dst.bufferId.
            //See hardware\interfaces\camera\device\3.2\types.hal for details.
            dst.buffer = isNewBuffer ? buf : nullptr;
            dst.status = BufferStatus::OK;
            //Fence handling: if acquire_fence in the camera3_stream_buffer_t
            //is a valid fence, wrap its fd in a freshly created native handle
            native_handle_t *acquireFence = nullptr;
            if (src->acquire_fence != -1) {
                acquireFence = native_handle_create(1,0);
                acquireFence->data[0] = src->acquire_fence;
                handlesCreated->push_back(acquireFence);
            }
            dst.acquireFence = acquireFence;
            dst.releaseFence = nullptr;
            //Record the buffer as in-flight so it can be matched up again
            //when the capture result comes back
            pushInflightBufferLocked(captureRequest->frameNumber, streamId,
                    src->buffer, src->acquire_fence);
        }
    }
}

2.2 Saving removedBuffers into the cachesToRemove queue

The removedBuffers were fetched in Camera3OutputStream::getBufferLockedCommon alongside the GraphicBuffer dequeue:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
status_t Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) {
    ...
    //Dequeue a GraphicBuffer and fence fd via Surface::dequeueBuffer
    res = currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd);
    ...
    if (res == OK) {
        std::vector<sp<GraphicBuffer>> removedBuffers;
         //Also fetch the list of GraphicBuffers that need to be released: removedBuffers.
         //When CameraService asks CameraProvider for frame data, it passes the
         //newly dequeued GraphicBuffer and removedBuffers down together.
        res = mConsumer->getAndFlushRemovedBuffers(&removedBuffers);
        if (res == OK) {
            //Save removedBuffers into mFreedBuffers
            onBuffersRemovedLocked(removedBuffers);
           ....
        }
    }
    ....
}

Here is how removedBuffers are saved into mFreedBuffers:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
void Camera3OutputStream::onBuffersRemovedLocked(
        const std::vector<sp<GraphicBuffer>>& removedBuffers) {
    sp<Camera3StreamBufferFreedListener> callback = mBufferFreedListener.promote();
    if (callback != nullptr) {
        for (auto gb : removedBuffers) {
            callback->onBufferFreed(mId, gb->handle);
        }
    }
}

Next, Camera3Device::HalInterface::onBufferFreed:

//frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
void Camera3Device::HalInterface::onBufferFreed(
        int streamId, const native_handle_t* handle) {
    std::lock_guard<std::mutex> lock(mBufferIdMapLock);
    uint64_t bufferId = BUFFER_ID_NO_BUFFER;
    //Look up the handle in this stream's map and remove it if present
    auto mapIt = mBufferIdMaps.find(streamId);
    if (mapIt == mBufferIdMaps.end()) {
        return;
    }
    BufferIdMap& bIdMap = mapIt->second;
    auto it = bIdMap.find(handle);
    if (it == bIdMap.end()) {
        return;
    } else {
        bufferId =  it->second;
        bIdMap.erase(it);
    }
    //Save the (streamId, bufferId) pair into mFreedBuffers
    mFreedBuffers.push_back(std::make_pair(streamId, bufferId));
}

That is how removedBuffers end up in mFreedBuffers.

Next, here is how mFreedBuffers is converted into the std::vector<device::V3_2::BufferCache> cachesToRemove queue:

//frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp
status_t Camera3Device::HalInterface::processBatchCaptureRequests(
        std::vector<camera3_capture_request_t*>& requests,
        /*out*/uint32_t* numRequestProcessed) {
    ....
    //1. Preparing captureRequests
    ...
    //2. Preparing cachesToRemove
    //Convert the entries in mFreedBuffers into BufferCache objects
    std::vector<device::V3_2::BufferCache> cachesToRemove;
    {
        .....
        //mFreedBuffers was filled while dequeuing GraphicBuffers
        for (auto& pair : mFreedBuffers) {
            // The stream might have been removed since onBufferFreed
            if (mBufferIdMaps.find(pair.first) != mBufferIdMaps.end()) {
                cachesToRemove.push_back({pair.first, pair.second});
            }
        }
        //Clear mFreedBuffers
        mFreedBuffers.clear();
    }
    .....
    //IPC from CameraService to CameraProvider
    ....
}

At this point we have:

  1. Saved each halRequest into the hidl_vec<device::V3_2::CaptureRequest> captureRequests queue
  2. Saved the removedBuffers into the std::vector<device::V3_2::BufferCache> cachesToRemove queue

2.4 CameraService sends the GraphicBuffers to CameraProvider

CameraService passes the dequeued GraphicBuffers and the removedBuffers to CameraProvider through BpHwCameraDeviceSession::processCaptureRequest:

  //IPC from CameraService to CameraProvider:
  //captureRequests and cachesToRemove are sent down together.
  //mHidlSession is the BpHwCameraDeviceSession proxy object
  auto err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
          [&status, &numRequestProcessed] (auto s, uint32_t n) {
              status = s;
              *numRequestProcessed = n;
          });

That completes the transfer of GraphicBuffers from CameraService to CameraProvider. Next, let's look at how CameraProvider handles processCaptureRequest and extracts the GraphicBuffers.

2.5 CameraProvider handling processCaptureRequest and extracting the GraphicBuffers

The flow on the CameraProvider side looks like this:
[Figure 2: CameraProvider processCaptureRequest flow]
Let's go straight to the handler, CameraDeviceSession::processCaptureRequest:

//hardware\interfaces\camera\device\3.2\default\CameraDeviceSession.cpp
Return<void> CameraDeviceSession::processCaptureRequest(
        const hidl_vec<CaptureRequest>& requests,
        const hidl_vec<BufferCache>& cachesToRemove,
        ICameraDeviceSession::processCaptureRequest_cb _hidl_cb)  {
    //Update mCirculatingBuffers:
    //remove the buffers listed in cachesToRemove from mCirculatingBuffers
    updateBufferCaches(cachesToRemove);
    uint32_t numRequestProcessed = 0;
    Status s = Status::OK;
    for (size_t i = 0; i < requests.size(); i++, numRequestProcessed++) {
        //Pass each request further down
        s = processOneCaptureRequest(requests[i]);
    }
    ...
}

Next, processOneCaptureRequest:

Status CameraDeviceSession::processOneCaptureRequest(const CaptureRequest& request)  {
    Status status = initStatus();
    //Create an empty camera3_capture_request_t, halRequest
    camera3_capture_request_t halRequest;
    halRequest.frame_number = request.frameNumber;
    //Convert the HIDL CameraMetadata into camera_metadata_t
    converted = convertFromHidl(request.settings, &halRequest.settings);
    ....
    hidl_vec<buffer_handle_t*> allBufPtrs;
    hidl_vec<int> allFences;
    //hasInputBuf is false for this flow
    bool hasInputBuf = (request.inputBuffer.streamId != -1 &&
            request.inputBuffer.bufferId != 0);
    size_t numOutputBufs = request.outputBuffers.size();
    size_t numBufs = numOutputBufs + (hasInputBuf ? 1 : 0);
    //importRequest does two things:
    //1. collects every GraphicBuffer and fence fd in the V3_2::CaptureRequest
    //   into allBufPtrs and allFences
    //2. if a GraphicBuffer reaches CameraProvider for the first time,
    //   stores it in mCirculatingBuffers
    status = importRequest(request, allBufPtrs, allFences);
    hidl_vec<camera3_stream_buffer_t> outHalBufs;
    outHalBufs.resize(numOutputBufs);
    ....
    {
        ....
        halRequest.num_output_buffers = numOutputBufs;
        for (size_t i = 0; i < numOutputBufs; i++) {
            auto key = std::make_pair(request.outputBuffers[i].streamId, request.frameNumber);
            // Create an empty camera3_stream_buffer_t
            auto& bufCache = mInflightBuffers[key] = camera3_stream_buffer_t{};
            //Fill bufCache (a camera3_stream_buffer_t) with allBufPtrs[i],
            //request.outputBuffers[i].status, the streamId, and the fence fd
            convertFromHidl(
                    allBufPtrs[i], request.outputBuffers[i].status,
                    &mStreamMap[request.outputBuffers[i].streamId], allFences[i],
                    &bufCache);
            outHalBufs[i] = bufCache;
        }
        //Point halRequest.output_buffers at the converted buffers
        halRequest.output_buffers = outHalBufs.data();
       .....
    }
    ....
    //Enter HAL3;
    //halRequest is of type camera3_capture_request_t
    status_t ret = mDevice->ops->process_capture_request(mDevice, &halRequest);
}

As the code shows, CameraProvider converts the device::V3_2::CaptureRequest back into a camera3_capture_request_t halRequest and hands it down to the HAL.

Let's focus on two functions: updateBufferCaches and importRequest.

First, a word about the CameraDeviceSession member mCirculatingBuffers: it holds, per stream, the GraphicBuffers currently circulating through that stream.

updateBufferCaches updates mCirculatingBuffers by deleting every GraphicBuffer listed in cachesToRemove:

void CameraDeviceSession::updateBufferCaches(const hidl_vec<BufferCache>& cachesToRemove) {

    for (auto& cache : cachesToRemove) {
        //Find the GraphicBuffer container for cache.streamId
        auto cbsIt = mCirculatingBuffers.find(cache.streamId);
        if (cbsIt == mCirculatingBuffers.end()) {
            // The stream could have been removed
            continue;
        }
        CirculatingBuffers& cbs = cbsIt->second;
        //Check whether cache.bufferId is in the container
        auto it = cbs.find(cache.bufferId);
        if (it != cbs.end()) {
            //Free the GraphicBuffer
            sHandleImporter.freeBuffer(it->second);
            //Remove it from the container
            cbs.erase(it);
        } 
        ...
    }
}

importRequest reads every GraphicBuffer (more precisely, its handle) and bufferId out of the device::V3_2::CaptureRequest.
If the bufferId already maps to a GraphicBuffer in mCirculatingBuffers, the buffer was passed to CameraProvider earlier and is simply fetched from the cache.
If it does not, this is the buffer's first trip to CameraProvider, and it must be added to mCirculatingBuffers:

//hardware\interfaces\camera\device\3.2\default\CameraDeviceSession.cpp
Status CameraDeviceSession::importRequest(
        const CaptureRequest& request,
        hidl_vec<buffer_handle_t*>& allBufPtrs,
        hidl_vec<int>& allFences) {
    //hasInputBuf is false for this flow
    bool hasInputBuf = (request.inputBuffer.streamId != -1 &&
            request.inputBuffer.bufferId != 0);
    size_t numOutputBufs = request.outputBuffers.size();
    size_t numBufs = numOutputBufs + (hasInputBuf ? 1 : 0);
    // Validate all I/O buffers
    hidl_vec<buffer_handle_t> allBufs;
    hidl_vec<uint64_t> allBufIds;
    allBufs.resize(numBufs);
    allBufIds.resize(numBufs);
    allBufPtrs.resize(numBufs);
    allFences.resize(numBufs);
    std::vector<int32_t> streamIds(numBufs);
    for (size_t i = 0; i < numOutputBufs; i++) {
        // request.outputBuffers[i].buffer is of type native_handle_t*;
        //as analyzed earlier, allBufs[i] may be null
        allBufs[i] = request.outputBuffers[i].buffer.getNativeHandle();
        allBufIds[i] = request.outputBuffers[i].bufferId;
        allBufPtrs[i] = &allBufs[i];
        streamIds[i] = request.outputBuffers[i].streamId;
    }
    .....
    for (size_t i = 0; i < numBufs; i++) {
        buffer_handle_t buf = allBufs[i];
        uint64_t bufId = allBufIds[i];
        CirculatingBuffers& cbs = mCirculatingBuffers[streamIds[i]];
        //If mCirculatingBuffers has no entry for bufId, the buffer was never
        //transferred before and buf must not be null.
        //If an entry exists, the buffer_handle_t is taken straight from
        //mCirculatingBuffers.
        if (cbs.count(bufId) == 0) {
            ...
            //Register the newly seen buffer in mCirculatingBuffers
            // Register a newly seen buffer
            buffer_handle_t importedBuf = buf;
            // In IComposer, any buffer_handle_t is owned by the caller and we need to
            // make a clone for hwcomposer2.  We also need to translate empty handle
            // to nullptr.  This function does that, in-place.
            sHandleImporter.importBuffer(importedBuf);
            //A newly imported buffer must not be null
            if (importedBuf == nullptr) {
                ALOGE("%s: output buffer %zu is invalid!", __FUNCTION__, i);
                return Status::INTERNAL_ERROR;
            } else {
                //Add the new buffer to mCirculatingBuffers
                cbs[bufId] = importedBuf;
            }
        }
        allBufPtrs[i] = &cbs[bufId];
    }
    //At this point every entry in allBufPtrs is non-null;
    //now validate the output buffer acquire fences
    // All buffers are imported. Now validate output buffer acquire fences
    for (size_t i = 0; i < numOutputBufs; i++) {
        //Validate the fence fds (not covered here)
        if (!sHandleImporter.importFence(
                request.outputBuffers[i].acquireFence, allFences[i])) {
            ALOGE("%s: output buffer %zu acquire fence is invalid", __FUNCTION__, i);
            cleanupInflightFences(allFences, i);
            return Status::INTERNAL_ERROR;
        }
    }
    ....
    return Status::OK;
}

So when CameraProvider receives processCaptureRequest, it uses the streamIds and bufferIds in the CaptureRequest and cachesToRemove to update the per-stream GraphicBuffers in CameraDeviceSession's mCirculatingBuffers, and finally converts the V3_2::CaptureRequest into a camera3_capture_request_t halRequest.

That completes how CameraProvider handles processCaptureRequest and recovers the GraphicBuffers and fences sent by CameraService.

The whole flow looks complicated, but it boils down to this:
CameraService and CameraProvider exchange GraphicBuffers by passing the GraphicBuffer handle, whose type is native_handle_t*.

Next, let's follow the GraphicBuffer from CameraProvider into the HAL.

3. Passing the GraphicBuffer from CameraProvider to the Camera HAL

Since CameraProvider and the HAL live in the same process, no IPC is needed here:

Status CameraDeviceSession::processOneCaptureRequest(const CaptureRequest& request)  {
    ....
    //Enter HAL3;
    //mDevice is the camera3_device_t handle into the HAL,
    //mDevice->ops is its camera3_device_ops_t table,
    //halRequest is of type camera3_capture_request_t
    status_t ret = mDevice->ops->process_capture_request(mDevice, &halRequest);
}

Next, the GraphicBuffer's journey inside the HAL.

3.1 GraphicBuffer flow inside the HAL

The HAL-side flow touches far too much code to show in full; the rough flow is:
[Figure 3: GraphicBuffer flow inside the Camera HAL (CAMX/CHI)]

The intermediate steps are skipped here; we pick up the analysis at CameraUsecaseBase.
If you are not interested in the convoluted path a GraphicBuffer takes through CAMX,
jump straight to section 4.0, "GraphicBuffer usage in the Pipeline".

//vendor\qcom\proprietary\chi-cdk\vendor\chioverride\default\chxadvancedcamerausecase.cpp
CDKResult CameraUsecaseBase::ExecuteCaptureRequest(
    camera3_capture_request_t* pRequest)
{
    ....
    //Wrap the graphic buffer handles into submitRequest (whose buffers are
    //of type CHISTREAMBUFFER) and submit it to ExtensionModule
    result = ExtensionModule::GetInstance()->SubmitRequest(&submitRequest);
    ....
}

AdvancedCameraUsecase::ExecuteCaptureRequest wraps the graphic buffer handles passed down from CameraService into CHISTREAMBUFFER objects and forwards the capture request through ExtensionModule::SubmitRequest:

//vendor\qcom\proprietary\camx\src\core\chi\camxchi.cpp
static CDKResult ChiSubmitPipelineRequest(
    CHIHANDLE           hChiContext,
    CHIPIPELINEREQUEST* pRequest)
{   
    ...
    //Forward the capture request
    result = pChiContext->SubmitRequest(pCHISession, pRequest);
    ...
}
//vendor\qcom\proprietary\camx\src\core\chi\camxchicontext.cpp
CamxResult ChiContext::SubmitRequest(
    CHISession*         pSession,
    ChiPipelineRequest* pRequest)
{
   //Forward the capture request
   result = pSession->ProcessCaptureRequest(pRequest);
}

Next comes Session::ProcessCaptureRequest:

//vendor\qcom\proprietary\camx\src\core\camxsession.cpp
CamxResult Session::ProcessCaptureRequest(
    const ChiPipelineRequest* pPipelineRequests)
{
    CamxResult  result      = CamxResultEFailed;
    ......
    //If m_livePendingRequests has reached m_maxLivePendingRequests,
    //a frame must be returned before another request is accepted
    while (m_livePendingRequests >= m_maxLivePendingRequests)
    {
        ...
        resultWait = m_pWaitLivePendingRequests->TimedWait(m_pLivePendingRequestsLock->GetNativeHandle(), waitTime);
        ...
    }
    ...
    //Create an empty array of ChiCaptureRequest, requests
    ChiCaptureRequest requests[MaxPipelinesPerSession];
    for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
    {
        //Fetch the requestIndex-th ChiCaptureRequest from pPipelineRequests
        const ChiCaptureRequest* pCaptureRequest    = &(pPipelineRequests->pCaptureRequests[requestIndex]);
        ....
        //Copy the ChiCaptureRequest into requests
        CamX::Utils::Memcpy(&requests[requestIndex], pCaptureRequest, sizeof(ChiCaptureRequest));
        //This line matters: it waits for the acquire fence of the dequeued
        //graphic buffer to fire, i.e. for the buffer to become usable by the camera
        result = WaitOnAcquireFence(&requests[requestIndex]);

        //Save the request into the SessionCaptureRequest member m_captureRequest.
        //SessionCaptureRequest is defined as:
        //struct SessionCaptureRequest
        //{
        //    CaptureRequest    requests[MaxPipelinesPerSession]; 
        //    UINT32            numRequests;                      
        //};
        CaptureRequest* pRequest = &(m_captureRequest.requests[requestIndex]);
        pRequest->streamBuffers[m_batchedFrameIndex[pipelineIndex]].numOutputBuffers =
               requests[requestIndex].numOutputs;

        for (UINT i = 0; i < requests[requestIndex].numOutputs; i++)
        {
            //Copy requests into m_captureRequest
            Utils::Memcpy(&pRequest->streamBuffers[m_batchedFrameIndex[pipelineIndex]].outputBuffers[i],
                          &requests[requestIndex].pOutputBuffers[i],
                          sizeof(ChiStreamBuffer));
        }
        .....
        //Enqueue m_captureRequest into the m_pRequestQueue queue
        result = m_pRequestQueue->EnqueueWait(&m_captureRequest);
        .....
        if (CamxResultSuccess == result)
        {
            //Kick m_pThreadManager to process m_captureRequest asynchronously
            VOID* pData[] = {this, NULL};
            result        = m_pThreadManager->PostJob(m_hJobFamilyHandle,
                                                      NULL,
                                                      &pData[0],
                                                      FALSE,
                                                      FALSE);
        }
        ...
    }
    ...
}

When a thread in m_pThreadManager wakes up, it runs ThreadJobCallback, which in turn runs ThreadJobExecute:

//vendor\qcom\proprietary\camx\src\core\chi\camxchisession.cpp
CamxResult CHISession::ThreadJobExecute()
{
    CamxResult result = CamxResultSuccess;
    if (TRUE == static_cast<BOOL>(CamxAtomicLoad32(&m_aCheckResults)))
    {
        result = ProcessResults();
    }

    if (CamxResultSuccess == result)
    {
        result = ProcessRequest();
    }
    else
    {
        FlushRequests(FALSE);
    }
    return result;
}

As the code shows, the thread first runs ProcessResults, then ProcessRequest on success, or FlushRequests on failure.
Let's focus on ProcessRequest:

//vendor\qcom\proprietary\camx\src\core\camxsession.cpp
CamxResult Session::ProcessRequest()
{
    CamxResult              result          = CamxResultSuccess;
    SessionCaptureRequest*  pSessionRequest = NULL;
    //Dequeue one capture request, pSessionRequest, from m_pRequestQueue
    pSessionRequest = static_cast<SessionCaptureRequest*>(m_pRequestQueue->Dequeue());
    ......
    //CSLOpenRequest tells the camera kernel driver that a request is starting
    result = m_pipelineData[pRequest->pipelineIndex].pPipeline->OpenRequest(pRequest->requestId, pRequest->CSLSyncID);
    if (NULL != pSessionRequest)
    {
        .....
        for (UINT requestIndex = 0; requestIndex < pSessionRequest->numRequests; requestIndex++)
        {
            ......
            //Hand the request to the Pipeline's ProcessRequest
            result = m_pipelineData[pRequest->pipelineIndex].pPipeline->ProcessRequest(&pipelineProcessRequestData);
            .....
        }
        ......
    }
}

4.0 GraphicBuffer usage in the Pipeline

After that long journey the request finally reaches the Pipeline. During SetupRequestOutputPorts, the Pipeline imports the GraphicBuffer into the ImageBuffer of the Pipeline's sink port. Once its dependencies are satisfied, the CamX node that owns that sink port packages the ImageBuffer and its fence in ExecuteProcessRequest and submits them to the camera kernel driver; when the kernel finishes producing the frame, it fires the CSLFenceCallback registered for the sink port to tell CamX that the frame data is ready.

//vendor\qcom\proprietary\camx\src\core\camxpipeline.cpp
CamxResult Pipeline::ProcessRequest(
    PipelineProcessRequestData* pPipelineRequestData)
{
    .....
    PerBatchedFrameInfo* pPerBatchedFrameInfo = &pPipelineRequestData->perBatchedFrameInfo[0];
    .....
    if (CamxResultSuccess == result)
    {
        UINT32 nodesEnabled = 0;
        for (UINT i = 0; i < m_nodeCount ; i++)
        {
            BOOL isNodeEnabled = FALSE;
            //Walk every node in m_ppNodes and initialize its active
            //input/output ports, mainly each active port's ppImageBuffers
            //and their fences.
            //For a sink port with buffer, the GraphicBuffer is registered
            //into the port's ppImageBuffers; otherwise a free ImageBuffer is
            //requested from pImageBufferManager.
            //Then a fence is created per port and a CSL fence callback is
            //registered, waiting for the fence to fire.
            m_ppNodes[i]->SetupRequest(pPerBatchedFrameInfo,
                                       pDifferentActiveStreams,
                                       requestId,
                                       pCaptureRequest->CSLSyncID,
                                       &isNodeEnabled);
            ......
        }
        for (UINT i = 0; i < m_nodeCount ; i++)
        {
            if (TRUE == Utils::IsBitSet(nodesEnabled, i))
            {
                //Submit each enabled node to m_pDeferredRequestQueue so that
                //its ExecuteProcessRequest runs asynchronously
                result = m_pDeferredRequestQueue->AddDeferredNode(requestId,
                                                                  m_ppNodes[i],
                                                                  NULL);
            }
            ...
        }
        ...
        // Consider any nodes now ready
        //Dispatch the ready nodes in m_pDeferredRequestQueue,
        //triggering their ExecuteProcessRequest
        m_pDeferredRequestQueue->DispatchReadyNodes();
       ....
    }
    return result;
}

4.1 Initializing the active input/output ports of enabled Nodes in the Pipeline

SetupRequest initializes a Node's active input and output ports:

//vendor\qcom\proprietary\camx\src\core\camxnode.cpp
CamxResult Node::SetupRequest(
    PerBatchedFrameInfo* pPerBatchedFrameInfo,
    UINT*                pDifferentActiveStreams,
    UINT64               requestId,
    UINT64               syncId,
    BOOL*                pIsEnabled)
{
    ......
    if (TRUE == IsNodeEnabled())
    {
        //Initialize the Node's input and output ports
        result = SetupRequestOutputPorts(pPerBatchedFrameInfo);
        result = SetupRequestInputPorts(pPerBatchedFrameInfo);
        *pIsEnabled = TRUE;
    }
    ......
    return result;
}

Let's look at how SetupRequestOutputPorts and SetupRequestInputPorts initialize the active input/output ports:

//vendor\qcom\proprietary\camx\src\core\camxnode.cpp
CamxResult Node::SetupRequestOutputPorts(
    PerBatchedFrameInfo* pPerBatchedFrameInfo)
{   ...
    //m_perRequestInfo中保存该Node每次申请的每个active inputports/outputports的相关信息,如imagebuffer和fence
    PerRequestActivePorts* pRequestPorts  = &m_perRequestInfo[requestIdIndex].activePorts;
    //遍历所以enable的output ports,为其填充ppImageBuffers
    //和pFenceHandlerData或者pDelayedBufferFenceHandlerData
    for (UINT portIndex = 0; portIndex < m_outputPortsData.numPorts; portIndex++)
    {   //OutputPort是否enable
        if (TRUE == IsOutputPortEnabled(portIndex))
        {
            OutputPort* pOutputPort = &m_outputPortsData.pOutputPorts[portIndex];
            if (pOutputPort->bufferProperties.maxImageBuffers > 0)
            {
                .....
                //Get pOutputPort's pFenceHandlerData
                NodeFenceHandlerData*     pFenceHandlerData     =
                    &pOutputPort->pFenceHandlerData[(m_tRequestId % maxImageBuffers)];
                .....
                //The port is either isSinkBuffer or isNonSinkHALBufferOutput
                //Definition of the isNonSinkHALBufferOutput flag:
                //< Flag to indicate that the output port is not a sinkport but still outputs a
                ///HAL buffer. This will happen in cases if the output port is connected to an
                ///inplace node that outputs a HAL buffer.
                if ((TRUE == IsSinkPortWithBuffer(portIndex)) || (TRUE == IsNonSinkHALBufferOutput(portIndex)))
                {
                    ....
                    //Create a fence for this port
                    result = CSLCreatePrivateFence("NodeOutputPortFence", &hNewFence);
                    ......
                    //Wait asynchronously on the fence; when the fence fires,
                    //the Node::CSLFenceCallback callback is invoked
                    result = CSLFenceAsyncWait(hNewFence,
                                                Node::CSLFenceCallback,
                                                &pFenceHandlerData->nodeCSLFenceCallbackData);
                    ......
                    //Request numBatchedFrames ImageBuffers from pImageBufferManager;
                    //numBatchedFrames is usually 1 (it can be 2, 4, or 8 in HFR modes)
                    for (UINT i = 0; i < numBatchedFrames; i++)
                    {
                        // Is the output port enabled for the frame in the batch
                        if (TRUE == Utils::IsBitSet(pPerBatchedFrameInfo[i].activeStreamIdMask, outputPortStreamId))
                        {
                            FenceHandlerBufferInfo* pFenceHandlerBufferInfo =
                                &pFenceHandlerData->outputBufferInfo[pFenceHandlerData->numOutputBuffers];
                            //Get the port's ImageBuffer for the batchedFrameIndex-th frame
                            ImageBuffer* pImageBuffer  =  pOutputPort->ppImageBuffers[batchedFrameIndex];
                            //Asserts that pImageBuffer is still NULL at this point
                            CAMX_ASSERT(NULL == pImageBuffer);
                            //Get pImageBuffer from pImageBufferManager
                            if (NULL == pImageBuffer)
                            {
                               //Get a free ImageBuffer from pImageBufferManager,
                               //allocating a new one if none is available
                                pImageBuffer = pOutputPort->pImageBufferManager->GetImageBuffer();
                            }
                            if (NULL != pImageBuffer)
                            {
                               ......
                               //phNativeHandle is the graphics buffer handle
                               //that CameraService obtained via
                               //Surface::dequeueBuffer()
                                BufferHandle* phNativeHandle = pPerBatchedFrameInfo[i].phBuffers[outputPortStreamId];
                                if (NULL != phNativeHandle)
                                {
                                    const ImageFormat* pImageFormat = &pOutputPort->bufferProperties.imageFormat;
                                    //Flags describing how the memory may be accessed:
                                    //CSLMemFlagHw, CSLMemFlagCmdBuffer,
                                    //CSLMemFlagUMDAccess, CSLMemFlagKMDAccess,
                                    //CSLMemFlagSharedAccess
                                    UINT32 flags = CSLMemFlagHw;
                                    ......
                                    //Import the graphics buffer handle into this pOutputPort's pImageBuffer
                                    result = pImageBuffer->Import(pImageFormat,
                                                                  *phNativeHandle,
                                                                  0, // Offset
                                                                  ImageFormatUtils::GetTotalSize(pImageFormat),
                                                                  flags,
                                                                  &m_deviceIndices[0],
                                                                  m_deviceIndexCount);
                                   .......
                                   //Store pImageBuffer in pOutputPort->ppImageBuffers
                                    pOutputPort->ppImageBuffers[batchedFrameIndex] = pImageBuffer;
                                    //Store pImageBuffer in pFenceHandlerData's pFenceHandlerBufferInfo
                                    pFenceHandlerBufferInfo->pImageBuffer          = pImageBuffer;
                                    //Store the graphics buffer handle in pFenceHandlerBufferInfo->phNativeHandle
                                    pFenceHandlerBufferInfo->phNativeHandle        = phNativeHandle;
                                    .....
                                    // Sent to the derived node
                                    //Store pImageBuffer and pFenceHandlerData in m_perRequestInfo so the next
                                    //Node (the one whose input port connects to this output) can use them
                                    UINT numOutputBuffers = pRequestOutputPort->numOutputBuffers;

                                    pRequestOutputPort->pImageBuffer[numOutputBuffers] = pImageBuffer;
                                    //Store the fence in pRequestOutputPort->phFence
                                    pRequestOutputPort->phFence                        = &pFenceHandlerData->hFence;
                                    //Flag indicating whether the fence has signaled
                                    pRequestOutputPort->pIsFenceSignaled               =
                                        &pFenceHandlerData->isFenceSignaled;
                                    //numOutputBuffers++
                                    pRequestOutputPort->numOutputBuffers++;
                                    pFenceHandlerData->numOutputBuffers++;
                                }//if (NULL != phNativeHandle)
                            }// if (NULL != pImageBuffer)
                        }//if (TRUE == Utils::IsBitSet(pPerBatchedFrameInfo[i]....
                    }//for (UINT i = 0; i < numBatchedFrames; i++)
                }
                else// Output port that does not output a HAL buffer
                {
                    // For ports outputting non-HAL buffers only the first entry is ever needed
                    //e.g. isSinkNoBuffer or isLoopback ports
                    UINT sequenceId = pPerBatchedFrameInfo[0].sequenceId;
                    pRequestOutputPort->flags.isOutputHALBuffer = FALSE;
                    //For ports outputting non-HAL buffers, only one buffer is handed to the derived node
                    pRequestOutputPort->numOutputBuffers        = 1;
                    //Get a free ImageBuffer for pOutputPort from pImageBufferManager,
                    //allocating a new one if none is available
                    result = ProcessNonSinkPortNewRequest(m_tRequestId, sequenceId, pOutputPort);
                    if (CamxResultSuccess == result)
                    {
                        // For ports outputting non-HAL buffers only the first entry is ever needed
                        // Data to be sent to the derived node that implements request processing
                        //Store pImageBuffer and pFenceHandlerData in m_perRequestInfo so the next
                        //Node (the one whose input port connects to this output) can use them
                        pRequestOutputPort->pImageBuffer[0]          = pFenceHandlerData->outputBufferInfo[0].pImageBuffer;
                        pRequestOutputPort->phFence                  = &pFenceHandlerData->hFence;
                        pRequestOutputPort->pIsFenceSignaled         = &pFenceHandlerData->isFenceSignaled;
                        pRequestOutputPort->phDelayedBufferFence     = &pDelayedBufferFenceHandlerData->hFence;
                        pRequestOutputPort->pDelayedOutputBufferData = &pFenceHandlerData->delayedOutputBufferData;
                        m_perRequestInfo[requestIdIndex].numUnsignaledFences++;
                        if (TRUE == IsBypassableNode())
                        {
                            pRequestOutputPort->flags.isDelayedBuffer = TRUE;
                            // For bypassable node, a fence for buffer dependency is added.
                            // So, this needs to be accounted here
                            m_perRequestInfo[requestIdIndex].numUnsignaledFences++;
                        }//if (TRUE == IsBypassableNode())
                    }//if (CamxResultSuccess == result)
                }//if (TRUE == IsSinkPortWithBuffer(portIndex)||TRUE == IsNonSinkHALBufferOutput(portIndex))
                .....
            }//if (pOutputPort->bufferProperties.maxImageBuffers > 0)
        }
        pRequestPorts->numOutputPorts++;
    }//for (UINT portIndex = 0; portIndex < m_outputPortsData.numPorts; portIndex++)
    .....
    return result;
}

A quick summary:

  1. SetupRequestOutputPorts requests, for every enabled output port of the Node, numBatchedFrames free ImageBuffers from pImageBufferManager and creates a matching fence for each ImageBuffer.
  2. For an isSinkBuffer or isNonSinkHALBufferOutput output port, the GraphicBuffer allocated by CameraService is imported into that ImageBuffer.
  3. For any other port, the ImageBuffer obtained from pImageBufferManager is used directly.
  4. After the ImageBuffer is obtained and the fence is created, a CSLFenceCallback is registered to wait for the fence to fire.
  5. The ImageBuffer and fence are stored in m_perRequestInfo, so the next Node can read the ImageBuffer, the fence, and the fence state from there.
    Taking UsecasePreview as an example, SetupRequestOutputPorts works roughly as shown below:

Android GraphicBuffer在CameraService、CameraProvider、CameraHAL的申请、传递、使用、归还流程_第4张图片
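The modulo indexing used above (`pFenceHandlerData[m_tRequestId % maxImageBuffers]` for the per-port fence-handler slot, and `m_tRequestId % MaxRequestQueueDepth` for the per-node m_perRequestInfo slot) is a plain ring-buffer scheme. A minimal sketch, with the constants and helper names assumed for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Assumed capacities; the real values come from the port's bufferProperties
// and the session's request queue depth.
constexpr uint32_t kMaxImageBuffers      = 8;  // per-port fence-handler slots
constexpr uint32_t kMaxRequestQueueDepth = 8;  // per-node m_perRequestInfo slots

// Slot for pOutputPort->pFenceHandlerData, as in m_tRequestId % maxImageBuffers.
constexpr uint32_t FenceSlotIndex(uint64_t requestId)
{
    return static_cast<uint32_t>(requestId % kMaxImageBuffers);
}

// Slot for m_perRequestInfo, as in m_tRequestId % MaxRequestQueueDepth.
constexpr uint32_t RequestSlotIndex(uint64_t requestId)
{
    return static_cast<uint32_t>(requestId % kMaxRequestQueueDepth);
}
```

A request therefore reuses the slot of the request kMaxImageBuffers before it, which is only safe while no more than that many requests are in flight per port.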

Next, let's analyze SetupRequestInputPorts:

CamxResult Node::SetupRequestInputPorts(
    PerBatchedFrameInfo* pPerBatchedFrameInfo)
{
    CamxResult             result         = CamxResultSuccess;
    UINT                   requestIdIndex = m_tRequestId % MaxRequestQueueDepth;
    PerRequestActivePorts* pRequestPorts  = &m_perRequestInfo[requestIdIndex].activePorts;

    pRequestPorts->numInputPorts  = 0;

    for (UINT portIndex = 0; portIndex < m_inputPortsData.numPorts; portIndex++)
    {
        if (TRUE == IsInputPortEnabled(portIndex))
        {
            InputPort*               pInputPort            = &m_inputPortsData.pInputPorts[portIndex];
            PerRequestInputPortInfo* pPerRequestInputPort  = &pRequestPorts->pInputPorts[pRequestPorts->numInputPorts];
            //parentOutputPort is the output port connected to this input port
            OutputPortRequestedData  parentOutputPort      = { 0 };
            //isSourceBuffer:
            //Is it a source port with a HAL buffer?
            if (FALSE == IsSourceBufferInputPort(portIndex))
            {
                Node* pParentNode           = pInputPort->pParentNode;
                UINT  parentOutputPortIndex = pInputPort->parentOutputPortIndex;
                ...
                //Get the output port connected to this input port
                if (m_tRequestId > pInputPort->bufferDelta)
                {
                    result = pParentNode->GetOutputPortInfo(
                        m_tRequestId - pInputPort->bufferDelta,
                        static_cast<UINT32>(pPerBatchedFrameInfo[0].sequenceId - pInputPort->bufferDelta),
                        parentOutputPortIndex,
                        &parentOutputPort);
                }

                if (CamxResultSuccess == result)
                {
                    // If the parent node is sensor it will return a NULL imagebuffer
                    if (NULL != parentOutputPort.pImageBuffer)
                    {
                        pPerRequestInputPort->portId           = pInputPort->portId;
                        //isBypassable
                        if (TRUE == pParentNode->IsBypassableNode())
                        {
                            //Copy parentOutputPort's pImageBuffer and phFence into
                            //m_perRequestInfo[requestIdIndex].activePorts.pInputPorts
                            pPerRequestInputPort->pImageBuffer             = parentOutputPort.pImageBuffer;
                            pPerRequestInputPort->phFence                  = parentOutputPort.phFence;
                            //Indicates whether parentOutputPort's fence has signaled
                            pPerRequestInputPort->pIsFenceSignaled         = parentOutputPort.pIsFenceSignaled;
                            pPerRequestInputPort->pDelayedOutputBufferData = parentOutputPort.pDelayedOutputBufferData;
                            pPerRequestInputPort->flags.isPendingBuffer    = TRUE;
                        }
                        else//not a bypassable node
                        {
                            //Copy parentOutputPort's pImageBuffer and phFence into
                            //m_perRequestInfo[requestIdIndex].activePorts.pInputPorts
                            pPerRequestInputPort->pImageBuffer     = parentOutputPort.pImageBuffer;
                            pPerRequestInputPort->phFence          = parentOutputPort.phFence;
                            //Indicates whether parentOutputPort's fence has signaled
                            pPerRequestInputPort->pIsFenceSignaled = parentOutputPort.pIsFenceSignaled;
                        }//if (TRUE == pParentNode->IsBypassableNode())
                        //numInputPorts++
                        pRequestPorts->numInputPorts++;
                    }//if (NULL != parentOutputPort.pImageBuffer)
                }// if (CamxResultSuccess == result)
            }
            else// if (TRUE == IsSourceBufferInputPort(portIndex))
            {
                //check if pCaptureRequest->streamBuffers[0].numInputBuffers > 0
                //do something
                ...
            }// if (FALSE == IsSourceBufferInputPort(portIndex))
        }// if (TRUE == IsInputPortEnabled(portIndex))
    }//for (UINT portIndex = 0; portIndex < m_inputPortsData.numPorts; portIndex++)
    return result;
}

A quick summary:

  1. SetupRequestInputPorts first looks up each input port's parentOutputPort.
  2. It then copies the parentOutputPort's pImageBuffer, fence, and pIsFenceSignaled into m_perRequestInfo[requestIdIndex].activePorts.pInputPorts.

At this point, every Node in the pipeline has ppImageBuffers and fences in its m_perRequestInfo, and the Nodes have been inserted into m_pDeferredRequestQueue. When a Node's dependencies in m_pDeferredRequestQueue are satisfied, its ExecuteProcessRequest runs, packing the ppImageBuffers and fences of its active output ports and sending them to the camera kernel.
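The pointer wiring done by SetupRequestInputPorts can be modeled in a few lines. The struct and function names below are assumptions for illustration; the point is that the input port allocates nothing and merely aliases the buffer, fence, and fence-state flag owned by the parent output port:

```cpp
#include <cassert>
#include <cstdint>

struct ImageBuffer;  // opaque here

// What a parent output port owns for one request.
struct OutputPortRequested
{
    ImageBuffer* pImageBuffer    = nullptr;
    int32_t      hFence          = -1;
    uint32_t     isFenceSignaled = 0;
};

// What the child input port records: pointers only, no ownership.
struct PerRequestInputPort
{
    ImageBuffer*    pImageBuffer     = nullptr;
    const int32_t*  phFence          = nullptr;
    const uint32_t* pIsFenceSignaled = nullptr;
};

inline void WireInputToParent(PerRequestInputPort*       pIn,
                              const OutputPortRequested* pParent)
{
    // Only pointers are copied; the parent output port keeps ownership.
    pIn->pImageBuffer     = pParent->pImageBuffer;
    pIn->phFence          = &pParent->hFence;
    pIn->pIsFenceSignaled = &pParent->isFenceSignaled;
}
```

Because only pointers are stored, the moment the parent port's fence handler flips isFenceSignaled, the child input port observes the change without any extra copy.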

4.2 Dependencies

After SetupRequest has initialized the output and input ports of every Node in the pipeline for the requestId,
the pipeline then executes

for (UINT i = 0; i < m_nodeCount ; i++)
{
    if (TRUE == Utils::IsBitSet(nodesEnabled, i))
    {
        CAMX_LOG_DRQ("Queueing Node: %s on pipeline: %d for new requestId: %llu",
            m_ppNodes[i]->NameAndInstanceId(), m_pipelineIndex, requestId);
        result = m_pDeferredRequestQueue->AddDeferredNode(requestId,
                                                          m_ppNodes[i],
                                                          NULL);
    }
}

which wraps all the Nodes together with the requestId into new Dependency entries and inserts them into the DeferredRequestQueue's m_readyNodes queue. The log looks like this:

AddDependencyEntry() Adding dependencies for node: 0 nodeName: Sensor:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 1 nodeName: Stats:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 5 nodeName: AutoFocus:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 65536 nodeName: IFE:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 65538 nodeName: IPE:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 65540 nodeName: FDHw:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 8 nodeName: FDManager:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes
AddDependencyEntry() Adding dependencies for node: 9 nodeName: StatsParse:0 pipeline: 0 request: 1 seqId: 0, to : m_readyNodes

Then m_pDeferredRequestQueue->DispatchReadyNodes() walks every Dependency in m_readyNodes and submits each one to the m_hDeferredWorker thread, which runs the ProcessRequest method of pDependency->pNode. The code is as follows:

//camxdeferredrequestqueue.cpp
CamxResult DeferredRequestQueue::DeferredWorkerCore(
    Dependency* pDependency)
{
    CamxResult             result         = CamxResultSuccess;
    NodeProcessRequestData processRequest = { 0 };
    Node*                  pNode          = pDependency->pNode;
    processRequest.processSequenceId = pDependency->processSequenceId;
    if (NULL != pNode)
    {
        CAMX_LOG_DRQ("DRQ dispatching node=%d nodeName=%s:%d, request=%llu, pipeline=%d, seqId=%d",
                     pNode->Type(),
                     pNode->Name(),
                     pNode->InstanceID(),
                     pDependency->requestId,
                     pNode->GetPipelineId(),
                     pDependency->processSequenceId);
        //Attempt one pNode->ProcessRequest.
        //While running ProcessRequest, before issuing the CAM_CONFIG_DEV command the Node
        //fills processRequest.dependencyInfo[MaxDependencies] with all the buffers,
        //properties, etc. that it depends on.
        //If there are no dependencies, CAM_CONFIG_DEV is issued right away, packing all of
        //this Node's configuration for the requestId and sending it to the camera kernel.
        //processSequenceId starts at 0 and is set to 1 once all dependencies are satisfied
        result = pNode->ProcessRequest(&processRequest, pDependency->requestId);
        CAMX_LOG_DRQ("DRQ execute complete node=%d nodeName=%s:%d, request=%llu, pipeline=%d, seqId=%d",
                     pNode->Type(),
                     pNode->Name(),
                     pNode->InstanceID(),
                     pDependency->requestId,
                     pNode->GetPipelineId(),
                     pDependency->processSequenceId);
        //After ProcessRequest returns, check whether processRequest.numDependencyLists > 0.
        //If it is, the Node must be re-inserted into the
        //DeferredRequestQueue's m_deferredNodes queue
        if (CamxResultSuccess == result)
        {
            for (UINT index = 0; index < processRequest.numDependencyLists; index++)
            {
                DependencyUnit* pDependencyInfo = &processRequest.dependencyInfo[index];     
                ...
                //If numDependencyLists is greater than 0:
                //1. update pDependency with the newly obtained pDependencyInfo,
                //2. insert pDependency into m_deferredNodes,
                //3. record m_deferredNodes in m_pDependencyMap, keyed by
                //DependencyKey mapKey  = {request, pDependency->pipelineIds[i], pDependency->properties[i], NULL, NULL};
                result = AddDeferredNode(pDependency->requestId, pNode, pDependencyInfo);
            }
            ...
        }
    }
    ...
    // Consider any nodes ready immediately
    DispatchReadyNodes();
    return result;
}

The runtime log looks like this:

DispatchReadyNodes() post ProcessRequest job for node Sensor:0, request 1
...
DispatchReadyNodes() post ProcessRequest job for node IFE:0, request 1
DispatchReadyNodes() post ProcessRequest job for node IPE:0, request 1
...
AddDependencyEntry() Adding dependencies for node: 0 nodeName: Sensor:0 pipeline: 0 request: 1 seqId: 1, to : m_deferredNodes
AddDependencyEntry() Adding dependencies for node: 65536 nodeName: IFE:0 pipeline: 0 request: 1 seqId: 1, to : m_deferredNodes
AddDependencyEntry() Adding dependencies for node: 65538 nodeName: IPE:0 pipeline: 0 request: 1 seqId: 1, to : m_deferredNodes

As the log shows, on the first pNode->ProcessRequest call the dependencies of Sensor, IFE, IPE, etc. are not satisfied, so those Nodes are re-inserted into m_pDeferredRequestQueue's m_deferredNodes queue.
Taking Sensor as an example, its dependencies are as follows:

//AddDependencyEntry()
Node Dependency Name: Sensor:0 Pipeline: 0 request: 1 seqId: 1 -> property[0] = 30000000 PropertyIDAECFrameControl pipeline[0] = 0 request = 1
Node Dependency Name: Sensor:0 Pipeline: 0 request: 1 seqId: 1 -> property[1] = 30000002 PropertyIDAWBFrameControl pipeline[1] = 0 request = 1
Node Dependency Name: Sensor:0 Pipeline: 0 request: 1 seqId: 1 -> property[2] = 80110000 org.quic.camera2.sensor_register_control.sensor_register_control pipeline[2] = 0 request = 1

m_pDeferredRequestQueue is updated whenever a non-sink-port fence fires or a property/metadata value changes.
For example, when a non-sink-port fence fires, Pipeline::NonSinkPortFenceSignaled updates m_pDeferredRequestQueue.
The code is as follows:

VOID Pipeline::NonSinkPortFenceSignaled(
    CSLFence* phFence,
    UINT64    requestId)
{
    m_pDeferredRequestQueue->FenceSignaledCallback(phFence, requestId);
}

DeferredRequestQueue performs the update in UpdateOrRemoveDependency; the code is as follows:

VOID DeferredRequestQueue::UpdateOrRemoveDependency(
    DependencyKey* pMapKey,
    Dependency*    pDependencyToRemove)
{
    LightweightDoublyLinkedList* pList = NULL;

    m_pDependencyMap->Get(pMapKey, reinterpret_cast<VOID**>(&pList));
    if (NULL != pList)
    {
        LightweightDoublyLinkedListNode* pNode = pList->Head();
        while (NULL != pNode)
        {
            LightweightDoublyLinkedListNode* pNext       = LightweightDoublyLinkedList::NextNode(pNode);
            Dependency*                      pDependency = static_cast<Dependency*>(pNode->pData);
            ...
            if (NULL != pDependency)
            {
                if ((NULL == pDependencyToRemove) || (pDependencyToRemove == pDependency))
                {
                    if (PropertyIDInvalid != pMapKey->dataId)
                    {
                        pDependency->publishedCount++;
                        ...
                    }
                    else if (NULL != pMapKey->pFence)
                    {
                        pDependency->signaledCount++;
                        ...
                    }
                    else if (NULL != pMapKey->pChiFence)
                    {
                        pDependency->chiSignaledCount++;
                        ...
                    }
                    //When all of pDependency's dependencies are satisfied:
                    //remove pDeferred from the m_deferredNodes queue,
                    //insert it into the m_readyNodes queue,
                    //and remove pNode from m_pDependencyMap
                    if ((pDependency->propertyCount == pDependency->publishedCount) &&
                        (pDependency->fenceCount    == pDependency->signaledCount)  &&
                        (pDependency->chiFenceCount == pDependency->chiSignaledCount))
                    {
                        CAMX_LOG_DRQ("node: %s - all satisfied.request: %llu seqId: %d",
								pDependency->pNode->Name(),
								pDependency->requestId,
								pDependency->processSequenceId);
                        // Move the node to the ready queue
                        LightweightDoublyLinkedListNode* pDeferred = m_deferredNodes.FindByValue(pDependency);
                        m_deferredNodes.RemoveNode(pDeferred);
                        ...
                        m_readyNodes.InsertToTail(pDeferred);
                    }
                    pList->RemoveNode(pNode);
                }
            }
            pNode = pNext;
        }
        ...
    }
}
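The ready test in UpdateOrRemoveDependency is pure counter bookkeeping: each expected property publish and fence signal bumps a counter, and the node becomes ready only when every counter reaches its expected total. A minimal sketch (the field names follow the snippet above; the helper function is an assumed name):

```cpp
#include <cassert>
#include <cstdint>

// Counters mirroring the Dependency fields checked in UpdateOrRemoveDependency.
struct Dependency
{
    uint32_t propertyCount    = 0;  // expected property publishes
    uint32_t publishedCount   = 0;  // property publishes seen so far
    uint32_t fenceCount       = 0;  // expected CSL fence signals
    uint32_t signaledCount    = 0;  // fence signals seen so far
    uint32_t chiFenceCount    = 0;  // expected CHI fence signals
    uint32_t chiSignaledCount = 0;  // CHI fence signals seen so far
};

// True once every class of dependency is fully satisfied, i.e. the condition
// under which the node moves from m_deferredNodes to m_readyNodes.
inline bool AllSatisfied(const Dependency& d)
{
    return (d.propertyCount == d.publishedCount) &&
           (d.fenceCount    == d.signaledCount)  &&
           (d.chiFenceCount == d.chiSignaledCount);
}
```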

4.3 Packing the ImageBuffers and fences of a Node's active output ports and sending them to the camera kernel

Taking IFE as an example:

CamxResult IFENode::ExecuteProcessRequest(
    ExecuteProcessRequestData* pExecuteProcessRequestData)
{
    //Update dependencies.
    //While executing ExecuteProcessRequest, the Node must check whether its dependencies are
    //satisfied; dependencies include hasBufferDependency, hasPropertyDependency, hasFenceDependency
    if (0 == sequenceNumber)
    {
        // If the sequence number is zero then it means we are not called from the DRQ, in which case we need to set our
        // dependencies.
        SetDependencies(pNodeRequestData, hasExplicitDependencies);
        // If no dependency, it should do process directly. Set sequneceNumber to 1 to do process directly
        // Or if no stats node, the first request will not be called.
        if (FALSE == Node::HasAnyDependency(pNodeRequestData->dependencyInfo))
        {
            sequenceNumber = 1;
        }
    }
    //If all dependencies are met, sequenceNumber is set to 1
    if (1 == sequenceNumber)
    {
        for (UINT i = 0; i < pPerRequestPorts->numOutputPorts; i++)
        {
            result = MapPortIdToChannelId(pOutputPort->portId, &channelId);
            result = m_pIQPacket->AddIOConfig(pImageBuffer,
                                              channelId,
                                              CSLIODirection::CSLIODirectionOutput,
                                              pOutputPort->phFence,
                                              1,
                                              NULL,
                                              NULL);
        }
        ...
        GetHwContext()->Submit(m_hDevice, m_pIQPacket);
    }
    ...
}

5. The GraphicBuffer return flow

When the camera kernel has finished producing a frame (the kernel-side details are left for a future article), it signals the fence, notifying the camera HAL through CSLFenceCallback.
As mentioned above, CSLFenceCallback is registered with the kernel in Node::SetupRequestOutputPorts:

//vendor\qcom\proprietary\camx\src\core\camxnode.cpp
CamxResult Node::SetupRequestOutputPorts(
    PerBatchedFrameInfo* pPerBatchedFrameInfo)
{
	....
	//Create the fence
	result = CSLCreatePrivateFence("NodeOutputPortFence", &hNewFence);
	......
    //Wait asynchronously for the fence; CSLFenceAsyncWait returns immediately,
    //and CSLFenceCallback is invoked when the fence fires
	result = CSLFenceAsyncWait(hNewFence,
	                            Node::CSLFenceCallback,
	                            &pFenceHandlerData->nodeCSLFenceCallbackData);
                
}
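The CSLCreatePrivateFence / CSLFenceAsyncWait / signal sequence can be modeled with a toy, single-threaded fence table. FenceTable, AsyncWait, and Signal are assumed names for illustration only; the real CSL fences are kernel sync objects and the callback runs asynchronously:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <utility>

using FenceCallback = std::function<void(int32_t fence, bool success)>;

class FenceTable
{
public:
    // Register a callback and return immediately (cf. CSLFenceAsyncWait).
    void AsyncWait(int32_t fence, FenceCallback cb)
    {
        m_waiters[fence] = std::move(cb);
    }

    // Invoked by the producer side when the buffer is ready
    // (cf. the kernel signaling the CSL fence).
    void Signal(int32_t fence, bool success)
    {
        auto it = m_waiters.find(fence);
        if (it != m_waiters.end())
        {
            it->second(fence, success);
            m_waiters.erase(it);  // a fence fires at most once
        }
    }

private:
    std::map<int32_t, FenceCallback> m_waiters;
};
```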

The CSLFenceCallback code is as follows:

//vendor\qcom\proprietary\camx\src\core\camxnode.cpp
VOID Node::CSLFenceCallback(
    VOID*           pNodePrivateFenceData,
    CSLFence        hSyncFence,
    CSLFenceResult  fenceResult)
{
    CamxResult result = CamxResultSuccess;
    FenceCallbackData*    pFenceCallbackData    = static_cast<FenceCallbackData*>(pNodePrivateFenceData);
    Node*                 pNode                 = pFenceCallbackData->pNode;
    NodeFenceHandlerData* pNodeFenceHandlerData = NULL;
    pNodeFenceHandlerData = static_cast<NodeFenceHandlerData*>(pFenceCallbackData->pNodePrivateData);
    .....
    CAMX_LOG_INFO(CamxLogGroupCore,
                      "Node:%d [%s] InstanceID:%d Fence %d signaled with success in node fence handler FOR %llu",
                      pNode->Type(),
                      pNode->m_pNodeName,
                      pNode->InstanceID(),
                      fenceResult,
                      pNodeFenceHandlerData->requestId);
   ....
    VOID* pData[] = { pFenceCallbackData, NULL };
   //Post a job that is handled in the HAL layer (camxsession.cpp)
   result = pNode->GetThreadManager()->PostJob(pNode->GetJobFamilyHandle(), NULL, &pData[0], FALSE, FALSE);

}

Taking IPE as an example, the IPE Node prints the following log when CSLFenceCallback fires:

"Node:65538 [IPE] InstanceID:0 Fence 0 signaled with success in node fence handler FOR 2"

The corresponding handler is Node::ProcessFenceCallback (registered via RegisterJobFamily in Session::FinalizePipeline):

VOID Node::ProcessFenceCallback(
    NodeFenceHandlerData* pFenceHandlerData)
{
    ...
    // Output port to which the fence belongs to
    OutputPort* pOutputPort    = pFenceHandlerData->pOutputPort; 
    UINT64      requestId      = pFenceHandlerData->requestId;
    ...
    // Only do processing if we haven't already signalled the fence (for failure cases)
    //Flip isFenceSignaled from 0 to 1
    if (TRUE == CamxAtomicCompareExchangeU(&pFenceHandlerData->isFenceSignaled, 0, 1))
    {
        ...
        //Here the frame data can be post-processed or dumped
        WatermarkImage(pFenceHandlerData);
        DumpData(pFenceHandlerData);
        ...
        //If the port is isSinkBuffer, the camera has finished producing the GraphicBuffer;
        //it can be returned to CameraService, which hands it to SurfaceFlinger for composition and display
        if (TRUE == pOutputPort->flags.isSinkBuffer)
        {
            for (UINT i = 0; i < numBatchedFrames; i++)
            {
                CAMX_LOG_DRQ("Reporting sink fence callback for Fence (%d) node: %s:%d, pipeline: %d, seqId: %d, request: %llu",
                            static_cast<INT32>(pFenceHandlerData->hFence),
                            m_pNodeName,
                            InstanceID(),
                            GetPipelineId(),
                            pFenceHandlerData->outputBufferInfo[i].sequenceId,
                            pFenceHandlerData->requestId);

                //The frame is complete, so the graphics buffer can be unmapped
                // HAL buffer can now be unmapped since the HW is done generating the output
                pFenceHandlerData->outputBufferInfo[i].pImageBuffer->Release(FALSE);
                //Notify m_pPipeline
                m_pPipeline->SinkPortFenceSignaled(pOutputPort->sinkTargetStreamId,
                     pFenceHandlerData->outputBufferInfo[i].sequenceId,
                     pFenceHandlerData->requestId,
                     //the graphics buffer handle
                     pFenceHandlerData->outputBufferInfo[i].phNativeHandle,
                     pFenceHandlerData->fenceResult);
            }//for (UINT i = 0; i < numBatchedFrames; i++)
        }//if (TRUE == pOutputPort->flags.isSinkBuffer)
    }//if (TRUE == CamxAtomicCompareExchangeU(&pFenceHandlerData->isFenceSignaled, 0, 1))
}

Taking IPE as an example, ProcessFenceCallback prints the following log:

Reporting sink fence callback for Fence (16) node: IPE:65538 , pipeline: 0, seqId: 0, request: 1

//vendor\qcom\proprietary\camx\src\core\camxpipeline.cpp
VOID Pipeline::SinkPortFenceSignaled(
    UINT           sinkPortStreamId,
    UINT32         sequenceId,
    UINT64         requestId,
    BufferHandle*  phHALBuffer,
    CSLFenceResult fenceResult)
{

    ResultsData resultsData = {};
    ...
    resultsData.type                        = CbType::Buffer;
    //wrap the graphics buffer handle (BufferHandle)
    //into a ResultsData
    resultsData.cbPayload.buffer.sequenceId = sequenceId;
    resultsData.cbPayload.buffer.streamId   = sinkPortStreamId;
    resultsData.cbPayload.buffer.phBuffer   = phHALBuffer;
   ...
   m_pSession->NotifyResult(&resultsData);
}
VOID Session::NotifyResult(
    ResultsData* pResultsData)
{
	...
	case CbType::Buffer:
	    //Pass it on; the buffer type is CbPayloadBuffer
	    HandleBufferCb(&pResultsData->cbPayload.buffer, pResultsData->pipelineIndex,
	                   pResultsData->pPrivData);
	    break;
	...
}
VOID Session::HandleBufferCb(
    CbPayloadBuffer* pPayload,
    UINT             pipelineIndex,
    VOID*            pPrivData)
{
    ChiStreamBuffer outBuffer = { 0 };
    ....
    //Copy the graphics buffer into outBuffer:
    //CbPayloadBuffer is converted to ChiStreamBuffer
    outBuffer.phBuffer     = pPayload->phBuffer;
    outBuffer.bufferStatus = BufferStatusOK;
    //releaseFence is -1, meaning no wait is needed before returning the GraphicBuffer
    outBuffer.releaseFence = -1; // For the moment
   ....
   //Insert the frame data into m_resultHolderList
    InjectResult(ResultType::BufferOK, &outBuffer, pPayload->sequenceId, pPrivData);
}

How the frame data is inserted into m_resultHolderList:

CamxResult Session::InjectResult(
    ResultType  resultType,
    //outBuffer
    VOID*       pPayload,
    UINT32      sequenceId,
    VOID*       pPrivData)
{
    //Get the ResultHolder that stores this frame from m_resultHolderList
    ResultHolder* pHolder = GetResultHolderBySequenceId(sequenceId);
    .....
    else if (ResultType::BufferOK == resultType)
    {
        ChiStreamBuffer* pBuffer = static_cast<ChiStreamBuffer*>(pPayload);
        ChiStream*       pStream = pBuffer->pStream;
        ....
        if (MaxNumOutputBuffers != streamIndex)
        {
            ....
            if (pHolder->bufferHolder[streamIndex].pBuffer->pStream == pStream &&
                pHolder->bufferHolder[streamIndex].pBuffer->phBuffer == pBuffer->phBuffer)
            {
                //Copy outBuffer into pHolder->bufferHolder,
                //i.e. into m_resultHolderList
                Utils::Memcpy(pHolder->bufferHolder[streamIndex].pBuffer,
                              pBuffer,
                              sizeof(ChiStreamBuffer));
                pHolder->bufferHolder[streamIndex].valid = TRUE;
            }
        }
    }
   ...
   //Trigger asynchronous processing
   VOID* pData[] = { this, NULL };
   result        = m_pThreadManager->PostJob(m_hJobFamilyHandle, NULL, &pData[0], FALSE, FALSE);

}

After the frame is inserted into m_resultHolderList, PostJob triggers asynchronous processing; the handler is Session::ProcessResults.

//vendor\qcom\proprietary\camx\src\core\camxsession.cpp
CamxResult Session::ProcessResults()
{
    CamxResult    result              = CamxResultSuccess;
    UINT32        i                   = 0;
    UINT32        numResults          = 0;
    ResultHolder* pResultHolder       = NULL;
    SessionResultHolder* pSessionResultHolder   = NULL;
   ...
   //get a ResultHolder from m_resultHolderList
    LightweightDoublyLinkedListNode* pNode = m_resultHolderList.Head();

    while (NULL != pNode)
    {
        if (NULL != pNode->pData)
        {
            pSessionResultHolder = reinterpret_cast<SessionResultHolder*>(pNode->pData);
            for (i = 0; i < pSessionResultHolder->numResults; i++)
            {
                pResultHolder = &(pSessionResultHolder->resultHolders[i]);
                if (NULL != pResultHolder)
                {
                    metadataReady = ProcessResultMetadata(pResultHolder, &numResults);
                    //copy pResultHolder's values into m_pCaptureResult (type ChiCaptureResult);
                    //this wraps the graphic buffer handle into the pOutputBuffers of ChiCaptureResult
                    bufferReady = ProcessResultBuffers(pResultHolder, metadataReady, &numResults);
                }
            }
        }
    }

    if (numResults > 0)
    {
        // Finally dispatch all the results to the Framework
        //continue the callback chain with the wrapped m_pCaptureResult
        DispatchResults(&m_pCaptureResult[0], numResults);
    }
....
    return result;
}

The asynchronous handler Session::ProcessResults fetches the frame data from m_resultHolderList,
assigns it to m_pCaptureResult, and then continues the callback chain with the frame data.
First, look at how Session::ProcessResults copies the information in m_resultHolderList into m_pCaptureResult:

BOOL Session::ProcessResultBuffers(
    ResultHolder* pResultHolder,
    BOOL          metadataAvailable,
    UINT*         pNumResults)
{
    ChiCaptureResult* pResult       = &m_pCaptureResult[currentResult];
    ChiStreamBuffer*  pStreamBuffer = const_cast<ChiStreamBuffer*>(&pResult->pOutputBuffers[pResult->numOutputBuffers]);
    ....
    Utils::Memcpy(pStreamBuffer, pResultHolder->bufferHolder[bufIndex].pBuffer, sizeof(ChiStreamBuffer));
    .....
	return gotResult;
}

The return flow continues in the CHI override layer with CameraUsecaseBase::SessionProcessResult:

//vendor\qcom\proprietary\chi-cdk\vendor\chioverride\default\chxadvancedcamerausecase.cpp
VOID CameraUsecaseBase::SessionProcessResult(
    ChiCaptureResult*         pResult,
    const SessionPrivateData* pSessionPrivateData)
{

    UINT32              resultFrameNum          = pResult->frameworkFrameNum;
    UINT32              resultFrameIndex        = resultFrameNum % MaxOutstandingRequests;
    BOOL                isAppResultsAvailable   = FALSE;
    //cast the ChiCaptureResult pointer to camera3_capture_result_t
    camera3_capture_result_t* pInternalResult   = reinterpret_cast<camera3_capture_result_t*>(pResult);
    //get the resultFrameIndex-th element of the member array m_captureResult (type camera3_capture_result_t),
    //then update m_captureResult[resultFrameIndex] with the information in pResult
    camera3_capture_result_t* pUsecaseResult    = this->GetCaptureResult(resultFrameIndex);
    ...
    // Fill all the info in m_captureResult and call ProcessAndReturnFinishedResults to send the meta
    // callback in sequence
    //fill m_captureResult: copy the output buffer information from the ChiCaptureResult into m_captureResult
    m_pAppResultMutex->Lock();
    for (UINT i = 0; i < pResult->numOutputBuffers; i++)
    {
        camera3_stream_buffer_t* pResultBuffer =
            const_cast<camera3_stream_buffer_t*>(&pUsecaseResult->output_buffers[i + pUsecaseResult->num_output_buffers]);

        ChxUtils::Memcpy(pResultBuffer, &pResult->pOutputBuffers[i], sizeof(camera3_stream_buffer_t));
        isAppResultsAvailable = TRUE;
    }
    pUsecaseResult->num_output_buffers += pResult->numOutputBuffers;
    m_pAppResultMutex->Unlock();
   .....
    if (TRUE == isAppResultsAvailable)
    {   //continue the callback chain for m_captureResult via ProcessAndReturnFinishedResults
        ProcessAndReturnFinishedResults();
    }
}
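The indexing scheme above, resultFrameNum % MaxOutstandingRequests, maps each frame number to a fixed slot in a ring of per-frame result holders, and partial results for the same frame accumulate in that slot. A minimal sketch of that pattern (hypothetical `ResultRing`/`ResultSlot` types standing in for the CHX m_captureResult array; plain `int` stands in for camera3_stream_buffer_t):

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Sketch: a fixed ring of result slots indexed by frame number modulo
// the maximum number of outstanding requests, mirroring how
// SessionProcessResult updates m_captureResult[resultFrameIndex].
constexpr uint32_t MaxOutstandingRequests = 8;

struct ResultSlot {
    uint32_t frameNumber = 0;
    std::vector<int> outputBuffers;  // stand-in for camera3_stream_buffer_t entries
};

class ResultRing {
public:
    // Append this partial result's buffers to the slot for its frame,
    // like the Memcpy loop that grows num_output_buffers above.
    void Accumulate(uint32_t frameNumber, const std::vector<int>& buffers) {
        ResultSlot& slot = slots_[frameNumber % MaxOutstandingRequests];
        slot.frameNumber = frameNumber;
        slot.outputBuffers.insert(slot.outputBuffers.end(),
                                  buffers.begin(), buffers.end());
    }
    const ResultSlot& Slot(uint32_t frameNumber) const {
        return slots_[frameNumber % MaxOutstandingRequests];
    }
private:
    std::array<ResultSlot, MaxOutstandingRequests> slots_;
};
```

The trade-off of this design is that a slot is reused once the frame it held has been fully returned, so at most MaxOutstandingRequests frames can be in flight at a time.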

ProcessAndReturnFinishedResults then continues the callback chain for m_captureResult:

//vendor\qcom\proprietary\chi-cdk\vendor\chioverride\default\chxadvancedcamerausecase.cpp
VOID CameraUsecaseBase::ProcessAndReturnFinishedResults()
{
      .....
      camera3_capture_result_t result = { 0 };
      result.frame_number       = m_captureResult[frameIndex].frame_number;
      result.num_output_buffers = m_captureResult[frameIndex].num_output_buffers;
      result.output_buffers     = m_captureResult[frameIndex].output_buffers;
      ....
      ReturnFrameworkResult(&result, m_cameraId);
      ....
}

ReturnFrameworkResult continues the callback chain for the frame data:

VOID Usecase::ReturnFrameworkResult(
    const camera3_capture_result_t* pResult, UINT32 cameraID)
{
    camera3_capture_result_t* pOverrideResult               = const_cast<camera3_capture_result_t*>(pResult);
   ....
    ExtensionModule::GetInstance()->ReturnFrameworkResult(reinterpret_cast<const camera3_capture_result_t*>(pOverrideResult),
                                                          cameraID);
}

ExtensionModule::GetInstance()->ReturnFrameworkResult continues the callback chain for the frame data:

VOID ExtensionModule::ReturnFrameworkResult(
    const camera3_capture_result_t* pResult,
    UINT32 cameraID)
{
    //call back to CameraProvider via m_pHALOps
    m_pHALOps[cameraID]->process_capture_result(m_logicalCameraInfo[cameraID].m_pCamera3Device, pResult);

}
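m_pHALOps is a table of function pointers that the provider registers with the HAL; the HAL returns each result by calling back through it. The pattern can be sketched as follows (hypothetical struct and function names modeled loosely on the camera3 callback style, not the real headers):

```cpp
#include <cstdint>

// Sketch of the function-pointer callback pattern: the provider hands the
// HAL an ops table, and the HAL calls process_capture_result through it
// to return each frame, as m_pHALOps[cameraID]->process_capture_result does.
struct capture_result {
    uint32_t frame_number;
};

struct callback_ops {
    void (*process_capture_result)(const struct callback_ops* ops,
                                   const capture_result* result);
};

// Provider side: receives the result; here it just records the frame number.
static uint32_t g_lastReturnedFrame = 0;
static void ProviderProcessCaptureResult(const callback_ops* /*ops*/,
                                         const capture_result* result) {
    g_lastReturnedFrame = result->frame_number;
}

// HAL side: returns a finished frame through the ops table.
static void HalReturnResult(const callback_ops* ops, uint32_t frameNumber) {
    capture_result result = { frameNumber };
    ops->process_capture_result(ops, &result);
}
```

Because the call goes through a plain function-pointer table, the HAL has no compile-time dependency on the provider's implementation, only on the ops struct layout.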

The flow diagram is as follows:

Android GraphicBuffer allocation, passing, usage, and return flow in CameraService, CameraProvider, and CameraHAL - Figure 5
The return flow from CameraProvider back to CameraService is not analyzed further here, as it is only a series of pointer hand-offs.
Ultimately, CameraService returns the graphic buffer to the SurfaceFlinger process via Surface::queueBuffer.
The CameraService return code is as follows:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
status_t Camera3OutputStream::returnBufferCheckedLocked(
            const camera3_stream_buffer &buffer,
            nsecs_t timestamp,
            bool output,
            /*out*/
            sp<Fence> *releaseFenceOut) {

    status_t res;

    // Fence management - always honor release fence from HAL
    sp<Fence> releaseFence = new Fence(buffer.release_fence);
    int anwReleaseFence = releaseFence->dup();

    //get the ANativeWindowBuffer anwBuffer from the camera3_stream_buffer
    ANativeWindowBuffer *anwBuffer = container_of(buffer.buffer, ANativeWindowBuffer, handle);
    /**
     * Return buffer back to ANativeWindow
     */
    //return anwBuffer via currentConsumer;
    //currentConsumer is in fact a Surface object
    res = queueBufferToConsumer(currentConsumer, anwBuffer, anwReleaseFence);
    *releaseFenceOut = releaseFence;
    return res;
}
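queueBufferToConsumer closes the cycle that prepareHalRequests opened with Surface::dequeueBuffer. The producer-side dequeue/queue pairing can be sketched as follows (a hypothetical `BufferPool` class, not the real BufferQueue/Surface API; integer slot ids stand in for GraphicBuffer handles):

```cpp
#include <cstddef>
#include <deque>
#include <set>

// Sketch: a producer-side buffer pool. Dequeue hands out a free slot
// (like Surface::dequeueBuffer) and Queue returns it (like
// Surface::queueBuffer), bracketing each buffer's trip through the stack.
class BufferPool {
public:
    explicit BufferPool(int count) {
        for (int i = 0; i < count; i++) free_.push_back(i);
    }
    // Take ownership of a free slot; -1 if none is free.
    int Dequeue() {
        if (free_.empty()) return -1;
        int slot = free_.front();
        free_.pop_front();
        dequeued_.insert(slot);
        return slot;
    }
    // Return a previously dequeued slot to the pool.
    bool Queue(int slot) {
        if (dequeued_.erase(slot) == 0) return false;  // not currently dequeued
        free_.push_back(slot);
        return true;
    }
    std::size_t FreeCount() const { return free_.size(); }
private:
    std::deque<int> free_;
    std::set<int> dequeued_;
};
```

The sketch also shows why a leaked buffer stalls the pipeline: if the camera stack never queues a dequeued slot back, Dequeue eventually finds the free list empty and the producer cannot obtain buffers for new requests.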

6.0 Summary

  1. The GraphicBuffer is allocated via Surface::dequeueBuffer when CameraService executes prepareHalRequests
  2. The GraphicBuffer is passed from CameraService to CameraProvider via processCaptureRequest
  3. The GraphicBuffer is passed from CameraProvider to CamX via process_capture_request
  4. The GraphicBuffer goes through a series of hand-offs in CamX and CHI and finally reaches the pipeline
  5. When initializing a pipeline SinkportWithbuffer, the pipeline fills the GraphicBuffer into that port's ImageBuffer, creates a corresponding Fence, and registers a CSLFenceCallback on that Fence
  6. Once its dependencies are satisfied, the CamX Node that owns the SinkportWithbuffer executes ExecuteProcessRequest, which packages the port's ImageBuffer and Fence together and sends them to the camera kernel driver
  7. After the camera kernel driver captures the frame data, it signals the SinkportWithbuffer's Fence, and the CSLFenceCallback notifies CamX that the SinkportWithbuffer has received the frame data
  8. CamX passes the GraphicBuffer from CamX through CameraProvider back to CameraService via processCaptureResult,
    and CameraService returns the GraphicBuffer to the Surface via Surface::queueBuffer.
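The fence hand-shake in steps 5-7 can be sketched as follows (a hypothetical `SoftFence` class, not the real CSL fence API): a callback is registered up front, and firing it when the "kernel" signals the fence models CSLFenceCallback notifying CamX that the sink port's buffer now holds frame data.

```cpp
#include <functional>

// Sketch: a software fence with a registered completion callback.
class SoftFence {
public:
    // Like registering CSLFenceCallback on the sink port's fence.
    void RegisterCallback(std::function<void(int)> cb) {
        callback_ = std::move(cb);
    }
    // Like the kernel signaling the fence after filling the buffer.
    void Signal(int bufferId) {
        signaled_ = true;
        if (callback_) callback_(bufferId);  // notify the waiting side
    }
    bool Signaled() const { return signaled_; }
private:
    std::function<void(int)> callback_;
    bool signaled_ = false;
};
```

The point of the pattern is that the node never polls for completion: it submits buffer and fence together, and the callback delivers the "frame ready" event asynchronously.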

This completes the analysis of the allocation, passing, and return flow of graphic buffers in the Android Camera subsystem.
