Abstract:
Objective: To address the low docking accuracy of autonomous underwater vehicles (AUVs) in complex underwater environments, a vision-based docking method using multi-feature fusion is proposed.
Method: The experiments used a self-developed rudderless, vector-propelled AUV driven by four thrusters. Underwater images were first enhanced with the dark channel prior (DCP) dehazing algorithm. An improved Canny edge detection algorithm was then combined with color-threshold segmentation to achieve multi-feature fusion, and the minimum enclosing circle method was used to locate the circle center. Finally, a coordinate transformation converted the image measurements into the relative position and orientation required for docking.
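A minimal sketch of this pipeline is given below, assuming an OpenCV-based implementation. The simplified DCP dehazing (without guided-filter refinement), the HSV color band, the Canny thresholds, the docking-ring diameter, and the camera intrinsics are all illustrative placeholders rather than values from the paper, and the range step uses a plain pinhole model in place of the paper's full coordinate transformation.

```python
import cv2
import numpy as np

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    """Simplified dark channel prior dehazing (no guided-filter refinement)."""
    I = img.astype(np.float32) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(I.min(axis=2), kernel)                 # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)

def locate_circle_center(img, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Fuse Canny edges with a color-threshold mask, then fit a minimum enclosing circle."""
    dehazed = dehaze_dcp(img)
    gray = cv2.cvtColor(dehazed, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                        # edge feature
    hsv = cv2.cvtColor(dehazed, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8),
                       np.array(hsv_hi, np.uint8))          # color feature
    fused = cv2.bitwise_and(edges, mask)                    # multi-feature fusion
    cnts, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    (u, v), r = cv2.minEnclosingCircle(max(cnts, key=cv2.contourArea))
    return (u, v), r

def relative_position(center, radius_px, fx=800.0, cx=320.0, cy=240.0, ring_d=0.5):
    """Pinhole-model range and lateral offsets from the fitted circle (illustrative)."""
    u, v = center
    z = fx * ring_d / (2.0 * radius_px)   # range from the known ring diameter
    x = (u - cx) * z / fx                 # lateral offset in camera frame
    y = (v - cy) * z / fx                 # vertical offset in camera frame
    return x, y, z
```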
Results: Unity 3D simulations and pool experiments revealed a distance-dependent trend: both the mean difference and the root-mean-square error decreased as the docking distance decreased, so closer distances yielded higher visual ranging accuracy and docking precision. When the docking distance was less than 2 m, the positioning error remained below 5 cm, and the overall docking success rate was 88%.
Conclusion: The proposed method meets the accuracy requirements of autonomous AUV docking and provides a robust solution for underwater equipment recovery.