Notes on using DeepStream Python with YOLOv5

deepstream_python_apps

1. Download NVIDIA's official deepstream_python_apps into /opt/nvidia/deepstream/deepstream/sources. Check out the release that matches your DeepStream version; I am on DeepStream 6.0, so I cloned the v1.1.0 tag.

2. Install the dependencies as described in HOWTO.md, mainly the GStreamer Python bindings (Gst Python) and the pyds module.

3. Run one of the official examples to verify the environment was set up correctly.
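A minimal import check (my own sketch, not part of the official apps) also confirms that the GStreamer bindings and pyds are usable:

#!/usr/bin/env python3
# quick environment check: GStreamer Python bindings plus the pyds module
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)   # initialize GStreamer before anything else
import pyds      # fails here if the pyds bindings are not installed

print("Gst:", Gst.version_string())
print("pyds imported OK")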

deepstream_python_yolov5

1. Download the yolov5-deepstream-python code for YOLOv5.

2. Build the custom bounding-box parser library that the config file references; set CUDA_VER to match your CUDA version:

CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo/

3. Convert the model to a TensorRT engine following tensorrtx/yolov5.

First, modify the parameters as needed, mainly the input width/height and the batch size:

yololayer.h    

    static constexpr int CLASS_NUM = 10;
    static constexpr int INPUT_H = 1088;  // yolov5's input height and width must be divisible by 32.
    static constexpr int INPUT_W = 1088;

yolov5.cpp

#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 20
#define MAX_IMAGE_INPUT_SIZE_THRESH 3000 * 3000 // ensure it exceeds the maximum size of the input images!
// clone tensorrtx and the matching ultralytics/yolov5 release (see "Different versions of yolov5" in the tensorrtx README)
// download https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt
cp {tensorrtx}/yolov5/gen_wts.py {ultralytics}/yolov5
cd {ultralytics}/yolov5
python gen_wts.py -w yolov5s.pt -o yolov5s.wts
// a file 'yolov5s.wts' will be generated.
cd {tensorrtx}/yolov5/
// update CLASS_NUM in yololayer.h if your model is trained on custom dataset
mkdir build
cd build
cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
cmake ..
make
sudo ./yolov5 -s [.wts] [.engine] [n/s/m/l/x/n6/s6/m6/l6/x6 or c/c6 gd gw]  // serialize model to plan file
sudo ./yolov5 -d [.engine] [image folder]  // deserialize and run inference, the images in [image folder] will be processed.
// For example yolov5s
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
sudo ./yolov5 -d yolov5s.engine ../samples
// For example Custom model with depth_multiple=0.17, width_multiple=0.25 in yolov5.yaml
sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
sudo ./yolov5 -d yolov5.engine ../samples
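Before wiring the engine into DeepStream, I like to confirm that it deserializes cleanly. A minimal sketch (my own, assuming the TensorRT Python bindings are installed and you run it from the build directory above):

# verify the serialized engine loads; the YoloLayer plugin must be
# registered first, hence the ctypes load of libmyplugins.so
import ctypes
import tensorrt as trt

ctypes.CDLL("./libmyplugins.so")
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

with open("yolov5s.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# expect an INPUT binding 'data' and an OUTPUT binding 'prob'
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))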

4. Check the paths in deepstream_yolov5_config.txt and main.py. Step 3 also produces the libmyplugins.so plugin. On my machine the import ctypes line raised an error, so I commented it out together with the LoadLibrary call (the plugin is preloaded with LD_PRELOAD at run time instead; see step 6):

#import ctypes

import pyds

#ctypes.cdll.LoadLibrary('/home/nvidia/lefugang/tensorrtx/yolov5/build/libmyplugins.so')
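If the in-process load works on your system, a guarded variant (my own sketch; the path is the one from my build) fails with a clear message instead of crashing at import time:

import ctypes

try:
    # path to the plugin built in step 3 (adjust for your machine)
    ctypes.cdll.LoadLibrary(
        '/home/nvidia/lefugang/tensorrtx/yolov5/build/libmyplugins.so')
except OSError as err:
    raise SystemExit(
        "could not load libmyplugins.so (%s); run with "
        "LD_PRELOAD=<path to libmyplugins.so> python main.py instead" % err)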

5. Modify the sink element in main.py so the results are printed to the terminal rather than rendered on a display. The main change is dropping the nvegltransform element, which is only needed in front of nveglglessink. A sketch of the two options follows, then the full modified main.py.
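Roughly, the choice looks like this (make_sink and the headless flag are my own names, not in the sample; fakesink discards buffers, while nveglglessink renders on screen and on Jetson must be fed through nvegltransform):

def make_sink(pipeline, headless=True):
    # headless: discard frames; results reach the terminal via the osd probe
    if headless:
        sink = Gst.ElementFactory.make("fakesink", "renderer")
        pipeline.add(sink)
        return sink  # link nvosd directly to this
    # on-screen rendering: nveglglessink, behind nvegltransform on Jetson
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    pipeline.add(sink)
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        pipeline.add(transform)
        transform.link(sink)
        return transform  # link nvosd to the transform instead
    return sink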

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
# import keyboard
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
#import ctypes

import pyds

#ctypes.cdll.LoadLibrary('/home/nvidia/lefugang/tensorrtx/yolov5/build/libmyplugins.so')

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3


def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        

        # Acquire a display meta object from the pool. The memory ownership
        # remains in the C code so downstream plugins can still access it;
        # otherwise the garbage collector would claim it when this probe exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # print per-object results (class id, label, confidence) to the terminal
            print(obj_meta.class_id, obj_meta.obj_label, obj_meta.confidence)
            # hide the border and fill the bbox with a semi-transparent color instead
            obj_meta.rect_params.border_width=0
            obj_meta.rect_params.has_bg_color=1
            obj_meta.rect_params.bg_color.set(0.0, 0.5, 0.3, 0.4)

            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        # py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        # print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	



def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s \n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
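    # (threads_init is deprecated and a no-op on recent PyGObject; kept from the sample)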
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally "render": fakesink just discards buffers, so nothing is shown
    # on screen and no nvegltransform is needed (it only pairs with nveglglessink)
    print("Creating FakeSink \n")
    sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create fakesink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
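    # batch-size matches the single file source; batched-push-timeout is in
    # microseconds, i.e. wait up to 4 s for a full batch before pushing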
    pgie.set_property('config-file-path', "config/deepstream_yolov5_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
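    # (nvstreammux exposes request pads named sink_0, sink_1, ... one per source)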
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    #GObject.timeout_add_seconds(5, pipeline_pause(pipeline))
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()

    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

6. Run, preloading the plugin library built in step 3 with LD_PRELOAD:

LD_PRELOAD=/home/nvidia/lfg/tensorrtx/yolov5/build/libmyplugins.so python main.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating transform 
 
Creating EGLSink 

Atleast one of the sources is live
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  rtsp://admin:[email protected]:554
Starting pipeline 

0:00:04.399294673  9779   0x5561021e30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend()  [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/yolov5-deepstream-python/best.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x1088x1088     
1   OUTPUT kFLOAT prob            6001x1x1        

0:00:04.399614801  9779   0x5561021e30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext()  [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/yolov5-deepstream-python/best.engine
0:00:04.429867023  9779   0x5561021e30 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:config/deepstream_yolov5_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: rtph264depay0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= 
Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 1 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 2 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 3 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 4 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 5 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 6 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 7 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 8 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 9 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 10 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 11 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 12 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 13 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 14 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 15 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 16 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 17 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 18 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 19 Number of Objects= 0 Vehicle_count= 0 Person_count= 0

7. Success.
