Point cloud color texture mapping method via camera-laser line joint calibration

Authors: PU Huai'an, JI Yanjun, TANG Jinyuan, CHEN Longting, SONG Biyun

Affiliations:

1. College of Mechanical and Electrical Engineering, Central South University, Changsha 410083, China; 2. State Key Laboratory of Precision Manufacturing for Extreme Service Performance, Central South University, Changsha 410083, China

CLC numbers: TH74; TP39

Funding: National Key Research and Development Program of China (2024YFB4708703)


    Abstract:

    To address the reliance of high-precision camera-line-laser joint calibration on complex, high-accuracy targets in industrial 3D color reconstruction, this paper proposes a high-precision calibration scheme based on a multi-feature, weakly constrained calibration block. The overall method comprises a multimodal feature extraction and registration framework, together with a two-stage optimization-based solver for the calibration model. Circular-hole centers are introduced on the calibration block and jointly detected with corner points: corners are localized with sub-pixel accuracy using geometric constraints, while circular-hole centers are precisely estimated via a two-stage ellipse-fitting strategy. Subsequently, a pose-adaptive projection-based method for 3D feature-point reconstruction is presented. Following a dimensionality-reduction, detection, and dimension-lifting pipeline, the 3D localization problem is transformed into 2D feature detection and then back-projected to reconstruct the 3D point cloud, thereby improving robustness to noise and pose variations. Finally, unambiguous 2D-3D feature-point registration is achieved by incorporating geometric priors. For parameter estimation, a two-stage linear-decomposition/nonlinear-reconstruction optimization is adopted: the initial mapping matrix is linearly estimated from single-frame feature correspondences; after intrinsic and extrinsic parameters are separated via RQ decomposition, lens distortion parameters are incorporated for global nonlinear refinement, enhancing global optimality and generalization. Experimental results indicate that the proposed method achieves a normalized mean reprojection error of 0.84 pixels, corresponding to a physical distance error of 0.0194 mm; compared with the baseline method, these two error metrics are reduced by approximately 65% and 61%, respectively.
    The proposed method also yields consistent calibration results with small error fluctuations under three illumination conditions, indicating strong robustness. Ablation experiments further confirm that circle-center features are significantly more stable than corner features under perspective transformation. In a gear tooth-surface color reconstruction task, point-cloud color texture mapping based on the obtained mapping matrix faithfully reproduces microscopic impressions and scratches on the tooth surface, validating the method's engineering applicability.
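The "linear decomposition-nonlinear reconstruction" solver outlined in the abstract begins with a closed-form step that standard multi-view geometry makes concrete. As a hedged sketch (not the paper's actual implementation; all function and variable names are illustrative), the mapping-matrix initialization from 2D-3D correspondences and the intrinsic/extrinsic separation via RQ decomposition could look like this:

```python
# Illustrative sketch of the linear stage described in the abstract:
# (1) estimate the 3x4 mapping (projection) matrix by direct linear
#     transformation (DLT) from 2D-3D feature correspondences, and
# (2) separate intrinsic and extrinsic parameters with an RQ decomposition.
# The subsequent nonlinear refinement with lens distortion is omitted.
import numpy as np

def estimate_mapping_matrix(pts3d, pts2d):
    """DLT: solve the 3x4 matrix P (up to scale) from >= 6 correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # right null vector = flattened P

def rq3(M):
    """RQ decomposition of a 3x3 matrix via QR of the row-reversed transpose."""
    Q, U = np.linalg.qr(np.flipud(M).T)
    return np.fliplr(np.flipud(U.T)), np.flipud(Q.T)  # upper-tri R, orthogonal Q

def decompose_mapping_matrix(P):
    """Split P ~ K [R | t] into intrinsics K, rotation R, translation t."""
    K, R = rq3(P[:, :3])
    D = np.diag(np.sign(np.diag(K)))     # force positive focal lengths
    K, R = K @ D, D @ R                  # D is its own inverse
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t
```

In a full pipeline, the recovered K, R, and t would then seed the nonlinear refinement stage, in which the distortion coefficients are optimized jointly over the reprojection error, mirroring the two-stage scheme the abstract describes.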

Cite this article:

PU Huai'an, JI Yanjun, TANG Jinyuan, CHEN Longting, SONG Biyun. Point cloud color texture mapping method via camera-laser line joint calibration[J]. Chinese Journal of Scientific Instrument, 2026, 47(1): 287-299.


History
  • Online publication date: 2026-03-30