A test of the official YOLO11 ONNXRuntime deployment scripts, covering inference with both the detection model and the segmentation model.

I. Detection Model

1. Script path:

D:/ultralytics-main/examples/YOLOv8-ONNXRuntime/main.py

2. Usage example

Download the ONNX model and save it to the D:/ultralytics-main/models directory (create the directory if it does not exist).
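If you only have the PyTorch weights, the ONNX file can be produced with the standard Ultralytics export API. A minimal sketch, assuming the ultralytics package is installed (yolov8n.pt is downloaded automatically if it is not present locally):

from ultralytics import YOLO

# Load the PyTorch weights and export them to ONNX format.
# The exported yolov8n.onnx is written next to the weights file;
# copy or move it into D:/ultralytics-main/models afterwards.
model = YOLO("yolov8n.pt")
model.export(format="onnx")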

Open a terminal and activate your virtual environment.

Taking the yolov8n.onnx model as an example, run the following command:

python D:/ultralytics-main/examples/YOLOv8-ONNXRuntime/main.py	\
	--model D:/ultralytics-main/models/yolov8n.onnx	\
	--img D:/ultralytics-main/ultralytics/assets/bus.jpg

Script path: D:/ultralytics-main/examples/YOLOv8-ONNXRuntime/main.py
Model path: D:/ultralytics-main/models/yolov8n.onnx (defaults to yolov8n.onnx)
Image path: D:/ultralytics-main/ultralytics/assets/bus.jpg (defaults to str(ASSETS / "bus.jpg"))
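For reference, the argument parsing in the example script looks roughly like this (paraphrased; check your local copy of main.py, as the exact help strings may differ):

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, default="yolov8n.onnx", help="Path to the ONNX model")
parser.add_argument("--img", type=str, default=str(ASSETS / "bus.jpg"), help="Path to the input image")
parser.add_argument("--conf-thres", type=float, default=0.5, help="Confidence threshold")
parser.add_argument("--iou-thres", type=float, default=0.5, help="NMS IoU threshold")
args = parser.parse_args()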

On Ubuntu, the following error may appear:

qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/opt/conda/envs/yolo/lib/python3.9/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb.

This happens because the Ubuntu system has no display available to open a window in.
To work around it, open the script, scroll to the end, and change the code as shown below:

    # save image
    cv2.imwrite("demo.jpg", output_image)

    # Display the output image in a window
    # cv2.namedWindow("Output", cv2.WINDOW_NORMAL)
    # cv2.imshow("Output", output_image)

    # Wait for a key press to exit
    # cv2.waitKey(0)

This saves the inference result as demo.jpg in the directory the command is run from, where it can be viewed.

To run inference on every image in a folder, modify the __main__ block at the end of the script as shown below:

import os

if __name__ == "__main__":
    # Create an argument parser to handle command-line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=True, help="Path to ONNX model")
    parser.add_argument("--images", type=str, required=True, help="Path to the input image folder")
    parser.add_argument("--outputs", type=str, required=True, help="Path to the output image folder")
    parser.add_argument("--conf-thres", type=float, default=0.5, help="Confidence threshold")
    parser.add_argument("--iou-thres", type=float, default=0.5, help="NMS IoU threshold")
    args = parser.parse_args()

    # Check the requirements and select the appropriate backend (CPU or GPU)
    check_requirements("onnxruntime-gpu" if torch.cuda.is_available() else "onnxruntime")

    # Create an inference session using the ONNX model and specify execution providers
    session = ort.InferenceSession(args.model, providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

    # Create the output folder once, before processing any images
    if not os.path.exists(args.outputs):
        print(f"Output folder does not exist, creating '{args.outputs}'")
        os.makedirs(args.outputs)

    # Collect all images in the input folder
    image_files = [f for f in os.listdir(args.images) if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    total_images = len(image_files)
    for i, image_file in enumerate(image_files):
        image_path = os.path.join(args.images, image_file)
        if not os.path.exists(image_path):
            print(f"Error: file '{image_path}' does not exist.")
            continue

        # Show progress
        print(f"Processing image {i + 1}/{total_images}: {image_file}")

        # Create an instance of the YOLOv8 class with the specified arguments
        detection = YOLOv8(args.model, image_path, args.conf_thres, args.iou_thres)

        # Perform object detection and obtain the output image
        # (main() is called here with the shared session; if your copy of the example
        # creates the session inside main(), call detection.main() instead)
        output_image = detection.main(session)

        # Save the result under the same file name in the output folder
        output_path = os.path.join(args.outputs, image_file)
        cv2.imwrite(output_path, output_image)

Command to run:

python D:/ultralytics-main/examples/YOLOv8-ONNXRuntime/main.py	\
	 --model D:/ultralytics-main/models/yolov8n.onnx	\
	 --images D:/ultralytics-main/datasets/img	\
	 --outputs D:/ultralytics-main/datasets/detect

Model path: D:/ultralytics-main/models/yolov8n.onnx
Input folder path: D:/ultralytics-main/datasets/img
Output folder path: D:/ultralytics-main/datasets/detect

II. Segmentation Model

1. Script path

D:/ultralytics-main/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py

Everything else works much like the detection model and can be adapted in the same way; an example invocation is sketched below.
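As a starting point, the invocation is analogous to the detection case. The command below is a sketch: the model path assumes a yolov8n-seg.onnx has been exported or downloaded into the same models directory, and the argument names (--model, --source) should be confirmed against the argparse section of the segmentation script, since they may differ from the detection example.

python D:/ultralytics-main/examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py \
	--model D:/ultralytics-main/models/yolov8n-seg.onnx \
	--source D:/ultralytics-main/ultralytics/assets/bus.jpg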
