Synchronous Positioning Method of Inspection Robot Based on Monocular Vision
Author: LUO Ke (罗钶)
Affiliation:

Fujian Water Conservancy and Electric Power Polytechnic College, Yong'an 366000, Fujian, China



    Abstract:

    To improve the positioning accuracy of inspection robots, a monocular vision-based simultaneous localization method is proposed. A monocular camera is installed on the inspection robot to capture calibration-board images from multiple angles and obtain the camera's intrinsic parameter matrix. The inspection space coordinate system is transformed into the pixel coordinate system of the two-dimensional inspection image to obtain discrete pixel coordinate data. Image corner points are identified by comparing the pixel brightness of video-stream image frames, and the position and orientation of each point are computed as feature-point descriptors. The random sample consensus (RANSAC) algorithm is used to filter the descriptor matching results and remove mismatched points. The perspective-n-point (PnP) algorithm then solves the pose of the monocular camera, which is converted into the pose of the inspection robot to complete simultaneous localization. Experimental results show that, compared with the lidar odometry and mapping (LOAM) algorithm and the light detection and ranging (LiDAR) method, the proposed method achieves an average maximum path-tracking offset of 0.89 m. In simulated environment 1, accuracy improves by 2.9% over the LOAM method and 1.5% over the LiDAR method; recall improves by 1.3% over LOAM and 1.7% over LiDAR; and the F1 measure improves by 2.1% over LOAM and 1.6% over LiDAR. The proposed method thus demonstrates relatively ideal positioning accuracy.
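As a rough illustration of the calibration and coordinate-transformation steps described in the abstract, the sketch below projects 3-D points in the camera frame to pixel coordinates through a pinhole intrinsic matrix. This is not the paper's implementation: the intrinsic parameters `fx`, `fy`, `cx`, `cy` are invented for illustration; in practice they come from the calibration-board images. PnP solves the inverse of exactly this mapping from known 3-D-to-2-D correspondences.

```python
import numpy as np

# Hypothetical intrinsic parameters (assumed values, not from the paper);
# obtained in practice by calibrating against multi-angle board images.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project_to_pixels(points_cam):
    """Map 3-D points in the camera frame to 2-D pixel coordinates.

    points_cam: (N, 3) array of [X, Y, Z] in metres, Z > 0.
    Returns an (N, 2) array of [u, v] pixel coordinates.
    """
    pts = np.asarray(points_cam, dtype=float)
    # Perspective division onto the normalized image plane: x = X/Z, y = Y/Z.
    norm = pts[:, :2] / pts[:, 2:3]
    # Apply intrinsics: u = fx*x + cx, v = fy*y + cy.
    homo = np.hstack([norm, np.ones((len(pts), 1))])
    return (K @ homo.T).T[:, :2]

# A point 2 m straight ahead lands on the principal point (cx, cy).
print(project_to_pixels([[0.0, 0.0, 2.0]]))   # [[320. 240.]]
# A point offset 0.5 m to the right at 2 m depth: u = 800*0.25 + 320 = 520.
print(project_to_pixels([[0.5, 0.0, 2.0]]))   # [[520. 240.]]
```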
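The RANSAC filtering step can likewise be sketched in miniature. The toy below is illustrative of the RANSAC idea only, not the paper's feature-matching pipeline: it estimates a simple 2-D translation between matched pixel coordinates and keeps only the consistent (inlier) matches, discarding gross mismatches.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(src, dst, n_iters=200, thresh=2.0):
    """Minimal RANSAC sketch: fit a 2-D translation between matched
    pixel coordinates and flag inlier matches.

    src, dst: (N, 2) arrays of matched pixel coordinates.
    Returns (best translation, boolean inlier mask).
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_mask = None, np.zeros(len(src), bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))           # minimal sample: one match pair
        t = dst[i] - src[i]                  # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        mask = err < thresh                  # matches consistent with t
        if mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    return best_t, best_mask

# 8 correct matches displaced by (5, -3), plus 2 gross mismatches.
src = rng.uniform(0, 100, (10, 2))
dst = src + np.array([5.0, -3.0])
dst[8] += 40.0                               # simulated mismatches
dst[9] -= 55.0
t, inliers = ransac_translation(src, dst)
print(t, inliers.sum())                      # [ 5. -3.] 8
```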

Cite this article:

罗钶 (LUO Ke). Synchronous positioning method of inspection robot based on monocular vision[J]. Journal of Xichang University (Natural Science Edition), 2025, 39(4): 96-104.

History
  • Received: 2025-03-23
  • Revised: 2025-05-08
  • Accepted: 2025-05-09
  • Published online: 2026-01-13