1. Core Challenges and Workflow of LiDAR 3D Point Cloud Processing in Autonomous Driving
Autonomous driving systems rely on high-precision 3D point clouds produced by LiDAR sensors for environment perception. The processing pipeline must address the following core tasks, starting with point-cloud I/O:
Reading:
import numpy as np
import open3d as o3d

pcd_tmp = o3d.io.read_point_cloud(scenes_path['bev'])
# voxel_size = 0.1
# # optional: voxel downsampling
# pcd_tmp = pcd_tmp.voxel_down_sample(voxel_size)
pcd = np.asarray(pcd_tmp.points)      # (N, 3) XYZ coordinates
colors = np.asarray(pcd_tmp.colors)   # (N, 3) RGB values in [0, 1]
pcdwithcolor = np.concatenate([pcd, colors], axis=1)  # (N, 6) XYZ + RGB
Saving:
points = pcdwithcolor[..., :3]   # XYZ block
colors = pcdwithcolor[..., 3:6]  # RGB block (columns 3-5; slicing 4: would drop the R channel)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)  # attach per-point colors
# Save the colored point cloud (both PLY and PCD support colors)
o3d.io.write_point_cloud(save_path, pcd)
print("Colored point cloud saved:", save_path)
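The column slicing above is easy to get wrong. A minimal numpy-only sketch of the (N, 6) layout (the array shapes here are illustrative, not from the original data):

```python
import numpy as np

# Simulated (N, 6) array: columns 0:3 hold XYZ, columns 3:6 hold RGB in [0, 1]
pcd_with_color = np.hstack([np.random.rand(100, 3) * 10,  # XYZ coordinates
                            np.random.rand(100, 3)])       # RGB colors

points = pcd_with_color[:, :3]   # XYZ block
colors = pcd_with_color[:, 3:6]  # RGB block; 4: would silently drop the R channel

# Round-tripping the two blocks reproduces the original array
restored = np.hstack([points, colors])
```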
2. Applying Open3D Library Functions at Key Processing Stages
Open3D is an open-source library designed for 3D data processing, covering the full pipeline of point-cloud I/O, filtering, segmentation, and registration. The code examples below illustrate its core applications in autonomous-driving scenarios:
import numpy as np
import open3d as o3d

# Build a point cloud (example uses randomly generated data)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3) * 10)  # simulated noisy cloud
# Statistical outlier removal: drop points whose mean neighbor distance exceeds
# the global mean by more than std_ratio standard deviations
cl, ind = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
filtered_pcd = pcd.select_by_index(ind)  # equivalent to cl
# Visual comparison
o3d.visualization.draw_geometries([pcd, filtered_pcd],
                                  window_name="Raw vs. filtered point cloud",
                                  width=800, height=600)
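The voxel downsampling commented out in the reading step reduces density before any of the steps below. A numpy-only sketch of the idea (one centroid per occupied voxel), independent of Open3D's own `voxel_down_sample`:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)  # flatten (shape differs across numpy versions)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)  # sum points falling into each voxel
    return centroids / counts[:, None]     # centroid = sum / count

pts = np.random.rand(10000, 3) * 10
down = voxel_downsample(pts, voxel_size=0.5)  # at most (10 / 0.5)^3 voxels remain
```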
# Simulate a scene containing a ground plane
points = np.random.rand(500, 3) * 5                     # scattered obstacle points
ground_points = np.random.rand(1000, 3) * [5, 5, 0.02]  # thin slab near z = 0 acting as ground
all_points = np.vstack([points, ground_points])
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(all_points)
# RANSAC plane segmentation
plane_model, inliers = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3,
                                         num_iterations=1000)
ground = pcd.select_by_index(inliers)
non_ground = pcd.select_by_index(inliers, invert=True)
# Visualization
ground.paint_uniform_color([0, 1, 0])      # green = ground
non_ground.paint_uniform_color([1, 0, 0])  # red = obstacles
o3d.visualization.draw_geometries([ground, non_ground],
                                  window_name="Ground segmentation result",
                                  width=800, height=600)
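`segment_plane` returns the plane as coefficients [a, b, c, d] of ax + by + cz + d = 0; an inlier is any point within `distance_threshold` of that plane. A numpy-only sketch of the underlying distance test, using the plane z = 0 as a hypothetical model:

```python
import numpy as np

# plane_model as returned by segment_plane: [a, b, c, d] with ax + by + cz + d = 0
a, b, c, d = 0.0, 0.0, 1.0, 0.0  # example: the plane z = 0
points = np.array([[1.0, 2.0, 0.02],   # 0.02 m above the plane
                   [0.5, 0.5, 1.50]])  # 1.50 m above the plane

# Point-to-plane distance: |ax + by + cz + d| / sqrt(a^2 + b^2 + c^2)
dist = np.abs(points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
inliers = np.where(dist < 0.05)[0]  # same thresholding segment_plane applies
```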
# Cluster the non-ground points
from sklearn.cluster import DBSCAN
import matplotlib.pyplot as plt

points = np.asarray(non_ground.points)
clustering = DBSCAN(eps=0.5, min_samples=10).fit(points)
labels = clustering.labels_
# Visualize the clustering result
max_label = labels.max()
colors = plt.get_cmap("tab20")(labels / (max_label if max_label > 0 else 1))
colors[labels < 0] = 0  # paint noise points black
non_ground.colors = o3d.utility.Vector3dVector(colors[:, :3])
o3d.visualization.draw_geometries([non_ground],
                                  window_name="DBSCAN clustering result",
                                  width=800, height=600)
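DBSCAN labels follow a fixed convention: -1 marks noise, and 0..max_label are cluster ids. A numpy-only sketch of the per-cluster statistics (counts, noise) one typically extracts next, using a hypothetical label array:

```python
import numpy as np

# labels as produced by DBSCAN: -1 marks noise, 0..max_label are cluster ids
labels = np.array([0, 0, 1, -1, 1, 1, 2, -1])

n_clusters = labels.max() + 1                                    # cluster count
cluster_ids, sizes = np.unique(labels[labels >= 0],              # ids and their
                               return_counts=True)               # point counts
noise_count = np.count_nonzero(labels == -1)                     # unassigned points
```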
# Generate two frames related by a known rigid transform (the same cloud shifted
# 1 m along x; two independent random clouds would share no correspondences)
source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(np.random.rand(500, 3) * 3)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(np.asarray(source.points) + [1, 0, 0])
# ICP registration; the correspondence distance must exceed the initial offset
# because no initial transform guess is supplied
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=1.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=2000))
print("Registration transform:\n", reg_p2p.transformation)
# Apply the transform and visualize
source.transform(reg_p2p.transformation)
o3d.visualization.draw_geometries([source, target],
                                  window_name="ICP registration result",
                                  width=800, height=600)
Effect: the two frames are aligned; the residual can be checked via reg_p2p.fitness and reg_p2p.inlier_rmse (for this synthetic case the RMSE is typically well below 0.01 m).
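The `reg_p2p.transformation` returned by ICP is a 4x4 homogeneous matrix, and `source.transform(T)` applies it to every point. A numpy-only sketch of that application, using a hypothetical pure translation mirroring the example above:

```python
import numpy as np

# A 4x4 homogeneous transform like reg_p2p.transformation: here a pure
# translation of +1 m along x (hypothetical, matching the synthetic frames)
T = np.eye(4)
T[0, 3] = 1.0

src = np.random.rand(500, 3) * 3
src_h = np.hstack([src, np.ones((500, 1))])  # homogeneous coordinates
aligned = (T @ src_h.T).T[:, :3]             # what source.transform(T) computes

# Residual against the ground-truth target (source shifted by [1, 0, 0])
rmse = np.sqrt(np.mean(np.sum((aligned - (src + [1, 0, 0])) ** 2, axis=1)))
```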
3. Technology Selection Recommendations