A fire cannon pre-aiming system based on UAV visual information

Affiliation:
1. School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China; 2. School of Mechatronic Engineering, China University of Mining and Technology, Xuzhou 221116, China

CLC number: TH721

Fund Project: Supported by the National Natural Science Foundation of China (52204256) and the Jiangsu Province Major Science and Technology Demonstration Project (BE2023688)

Abstract:

UAVs can efficiently perceive a fire environment and acquire fire-scene information. To improve the intelligence level of firefighting work, a fire cannon pre-aiming system based on UAV visual information is proposed. Pre-aiming is a control process consisting of three stages: fire scene perception, pose calculation, and angle adjustment. In the fire scene perception stage, considering the real-time requirements of firefighting, a perception model combining lightweight dehazing with object detection is proposed to address the small target sizes and smoke interference in UAV images of fire scenes. For image dehazing, considering that smoke in fire scenes is non-uniformly distributed and has diverse gray levels, the atmospheric scattering model is improved, and a neural network with an encoder-decoder structure is designed to estimate the transmission map and the smoke gray value, enhancing image quality. For object detection, YOLOv8s is used as the baseline. In the backbone network, the shallow convolution operations are replaced by the PSConv module with a concentrated receptive field to retain more small-target information; the deep convolution modules are replaced by GhostConv, and the SimA-former module substitutes the deepest C2f structure to lighten the model. In the feature fusion stage of the neck network, the coordinate attention (CA) mechanism and a small-target detection head are combined to construct a high-resolution multi-scale feature fusion module. Based on the perceived fire-scene information, the camera imaging model is used to compute the relative pose between the fire cannon and the fire source, from which the horizontal and pitch angles the cannon should adjust through are determined. Experiments were conducted in a custom-built fire scenario outside an industrial facility. The perception model achieved an mAP50 of 92.3%, a 6.2% improvement over YOLOv8s without dehazing preprocessing. The pre-aiming horizontal angle error was within ±4°, and the fire-source distance error remained below 6%. These results demonstrate the effectiveness and practicality of the proposed method.
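The dehazing step inverts a modified atmospheric scattering model in which the single global atmospheric light is replaced by a per-pixel smoke gray value. A minimal sketch of that inversion is given below, assuming the network has already produced the transmission map and smoke-gray map; the function name and the lower clamp on transmission are illustrative, not from the paper.

```python
import numpy as np

def restore_scene(hazy, transmission, smoke_gray, t_min=0.1):
    """Invert the modified atmospheric scattering model
        I(x) = J(x) * t(x) + A(x) * (1 - t(x)),
    where A(x) is a per-pixel smoke gray value rather than a single
    global atmospheric-light constant.

    hazy, transmission, smoke_gray: float arrays in [0, 1],
    with transmission and smoke_gray broadcastable to hazy's shape.
    """
    # Clamp transmission away from zero so the division stays stable
    t = np.clip(transmission, t_min, 1.0)
    scene = (hazy - smoke_gray * (1.0 - t)) / t
    return np.clip(scene, 0.0, 1.0)
```

A quick self-consistency check is to compose a hazy image from a known clear image with the forward model and verify that `restore_scene` recovers it (exactly, as long as the true transmission stays above `t_min`).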
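The pose-calculation and angle-adjustment stages can be sketched with a pinhole camera model: back-project the detected fire-source pixel into a ray, scale it by the estimated distance, and convert the resulting 3-D point into pan and pitch commands. The function name, parameters, and the assumption that the cannon frame coincides with the camera frame are illustrative; the real system would also apply a camera-to-cannon extrinsic transform.

```python
import math

def aim_angles(u, v, distance, fx, fy, cx, cy):
    """Back-project a detected fire-source pixel (u, v) through a
    pinhole camera with focal lengths (fx, fy) and principal point
    (cx, cy), scale the ray to the estimated distance, and return
    the horizontal (pan) and pitch angles, in degrees, that a turret
    aligned with the camera frame (x right, y down, z forward)
    would rotate through to point at it.
    """
    # Direction of the ray through the pixel, in the camera frame
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    norm = math.sqrt(x * x + y * y + z * z)
    # 3-D position of the fire source along that ray
    px, py, pz = (distance * c / norm for c in (x, y, z))
    pan = math.degrees(math.atan2(px, pz))                      # + aims right
    pitch = math.degrees(math.atan2(-py, math.hypot(px, pz)))   # + aims up
    return pan, pitch
```

With this convention, a target at the image center needs no adjustment, and a target one focal length to the right of the principal point sits at a 45° pan angle, independent of distance.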

Citation:

ZHANG Xiaoguang, CHEN Runze, ZHAO Juxian, LI Wei. A fire cannon pre-aiming system based on UAV visual information [J]. Chinese Journal of Scientific Instrument, 2026, 47(1): 325-339.

History
  • Online publication date: 2026-03-30