Capturing Images from WebRTC

WebRTC Series: Capturing Images

  • 1. How It Works
  • 2. Implementation
  • 3. Caveats

1. How It Works

In WebRTC peer-to-peer audio/video, every video stream, local or remote, starts at a camera and ends on a screen (each platform renders through its own view widget). The implication is that every frame passes through a VideoRenderer, which draws it onto the view, so frames can be intercepted at the renderer.
VideoRenderer defines the following callback interface:

public static interface Callbacks {
    void renderFrame(org.webrtc.VideoRenderer.I420Frame i420Frame);
}

This callback fires every time a new frame arrives.
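As a minimal sketch of the hook, using the legacy org.webrtc Java API (remoteVideoTrack is a placeholder name), a callback can be attached to a track as follows; the full implementation in section 2 builds on the same pattern:

// Wrap a Callbacks implementation in a VideoRenderer and attach it to
// the video track (legacy org.webrtc API; remoteVideoTrack is a placeholder).
VideoRenderer.Callbacks frameHook = new VideoRenderer.Callbacks() {
    @Override
    public void renderFrame(VideoRenderer.I420Frame frame) {
        // Inspect or copy the frame here, then hand it back.
        VideoRenderer.renderFrameDone(frame);
    }
};
remoteVideoTrack.addRenderer(new VideoRenderer(frameHook));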

2. Implementation

// Copy an entire plane from src into dst, then rewind dst for reading.
// Note: this leaves src with position == limit (consumed); see section 3.
private static void copyPlane(ByteBuffer src, ByteBuffer dst) {
    src.position(0).limit(src.capacity());
    dst.put(src);
    dst.position(0).limit(dst.capacity());
}

// Convert a webrtc I420Frame into an android.graphics.YuvImage.
// Note: YuvImage itself only supports NV21 and YUY2, so the YV12 branch
// assembles the byte layout but the YuvImage constructor will reject it
// on stock Android; use NV21 when the goal is compressToJpeg().
public static android.graphics.YuvImage ConvertTo(org.webrtc.VideoRenderer.I420Frame src, int imageFormat) {
    // Total buffer size: Y plane plus the two half-height chroma planes.
    byte[] bytes = new byte[src.yuvStrides[0] * src.height +
            src.yuvStrides[1] * src.height / 2 +
            src.yuvStrides[2] * src.height / 2];
    int[] strides = new int[3];
    switch (imageFormat) {
        default:
            return null;
        case android.graphics.ImageFormat.YV12: {
            // YV12 plane order is Y, V, U (I420 is Y, U, V), hence the
            // 0, 2, 1 copy order below.
            ByteBuffer tmp = ByteBuffer.wrap(bytes, 0, src.yuvStrides[0] * src.height);
            copyPlane(src.yuvPlanes[0], tmp);
            tmp = ByteBuffer.wrap(bytes, src.yuvStrides[0] * src.height, src.yuvStrides[2] * src.height / 2);
            copyPlane(src.yuvPlanes[2], tmp);
            tmp = ByteBuffer.wrap(bytes, src.yuvStrides[0] * src.height + src.yuvStrides[2] * src.height / 2, src.yuvStrides[1] * src.height / 2);
            copyPlane(src.yuvPlanes[1], tmp);
            strides[0] = src.yuvStrides[0];
            strides[1] = src.yuvStrides[2];
            strides[2] = src.yuvStrides[1];
            return new YuvImage(bytes, imageFormat, src.width, src.height, strides);
        }

        case android.graphics.ImageFormat.NV21: {
            // NV21 is the full Y plane followed by interleaved V/U pairs.
            // This simple copy assumes tightly packed planes (stride == width).
            if (src.yuvStrides[0] != src.width)
                return null;
            if (src.yuvStrides[1] != src.width / 2)
                return null;
            if (src.yuvStrides[2] != src.width / 2)
                return null;

            // Copy the full-resolution Y plane first.
            ByteBuffer tmp = ByteBuffer.wrap(bytes, 0, src.width * src.height);
            copyPlane(src.yuvPlanes[0], tmp);

            byte[] tmparray = new byte[src.width / 2 * src.height / 2];
            tmp = ByteBuffer.wrap(tmparray, 0, src.width / 2 * src.height / 2);

            // V samples go to the even offsets of the chroma block...
            copyPlane(src.yuvPlanes[2], tmp);
            for (int row = 0; row < src.height / 2; row++) {
                for (int col = 0; col < src.width / 2; col++) {
                    bytes[src.width * src.height + row * src.width + col * 2] = tmparray[row * src.width / 2 + col];
                }
            }
            // ...and U samples to the odd offsets.
            copyPlane(src.yuvPlanes[1], tmp);
            for (int row = 0; row < src.height / 2; row++) {
                for (int col = 0; col < src.width / 2; col++) {
                    bytes[src.width * src.height + row * src.width + col * 2 + 1] = tmparray[row * src.width / 2 + col];
                }
            }
            return new YuvImage(bytes, imageFormat, src.width, src.height, null);
        }
        }
    }
}
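If a Bitmap is needed instead of a file, the NV21 YuvImage returned above can be compressed to JPEG in memory and decoded back; a minimal sketch using standard android.graphics APIs (yuvImage stands for the value returned by ConvertTo):

// Sketch: YuvImage -> in-memory JPEG -> Bitmap.
java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
yuvImage.compressToJpeg(new android.graphics.Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 100, out);
byte[] jpegBytes = out.toByteArray();
android.graphics.Bitmap bitmap = android.graphics.BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);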

// A pass-through renderer: it inspects/saves each frame, then forwards it
// to the real renderer set via setTarget().
private static class ProxyRenderer implements VideoRenderer.Callbacks {
    private VideoRenderer.Callbacks target;

    @Override
    synchronized public void renderFrame(VideoRenderer.I420Frame frame) {
        if (target == null) {
            Logging.d(TAG, "Dropping frame in proxy because target is null.");
            VideoRenderer.renderFrameDone(frame);
            return;
        }

        // Log the frame metadata
        Logging.d(TAG, "height = " + frame.height
                + " width = " + frame.width
                + " rotationDegree = " + frame.rotationDegree
                + " textureId = " + frame.textureId
                + " rotatedHeight = " + frame.rotatedHeight()
                + " rotatedWidth = " + frame.rotatedWidth());

        // Convert to NV21 and save as a JPEG. The same path is reused, so
        // the file always holds the most recent frame.
        android.graphics.YuvImage yuvImage = ConvertTo(frame, ImageFormat.NV21);
        if (yuvImage != null) {
            java.io.File newFile = new File("/storage/emulated/0/1/webrtc_1");
            FileOutputStream fileOutputStream = null;
            try {
                fileOutputStream = new FileOutputStream(newFile);
                yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 100, fileOutputStream);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                if (fileOutputStream != null) {
                    try {
                        fileOutputStream.close();
                    } catch (java.io.IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

        target.renderFrame(frame);
    }

    synchronized public void setTarget(VideoRenderer.Callbacks target) {
        this.target = target;
    }
}

The code above saves the incoming WebRTC frame to local storage as a JPEG. Because the same file path is overwritten on every callback, what ends up on disk is the most recent frame; add a one-shot flag if only the first frame is wanted.
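Wiring the proxy into the render path might look like the sketch below, assuming a SurfaceViewRenderer named remoteView (which implements VideoRenderer.Callbacks in the legacy API) and a remote VideoTrack named remoteVideoTrack, both placeholder names:

// Placeholder names: remoteView and remoteVideoTrack.
ProxyRenderer proxyRenderer = new ProxyRenderer();
proxyRenderer.setTarget(remoteView); // frames are forwarded here after capture
remoteVideoTrack.addRenderer(new VideoRenderer(proxyRenderer)); // intercept every frame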

3. Caveats

After a capture, the frame data is left in a modified state (copyPlane reads each source plane buffer through to its limit), so the downstream renderer may fail to parse the frame.
Workaround: create a backup copy of the frame data

VideoRenderer.I420Frame frame2 = new VideoRenderer.I420Frame(frame.width, frame.height, frame.rotationDegree, frame.yuvStrides, frame.yuvPlanes, null);
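Alternatively, since copyPlane is what consumes the plane buffers, a minimal fix is to rewind them after the conversion, before handing the frame to target.renderFrame(frame):

// Rewind each plane so the renderer can read the frame again
// (copyPlane leaves every source buffer with position == limit).
for (java.nio.ByteBuffer plane : frame.yuvPlanes) {
    plane.rewind(); // position back to 0
}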

References:
https://www.jianshu.com/p/5902d4953ed9
https://blog.csdn.net/weixin_38372482/article/details/80817274
https://www.jianshu.com/p/1513e51e043d
