Project description:
Autonomous underwater robots are significantly hindered by poor underwater visibility, which limits their ability to closely perceive the environment. Nevertheless, optical cameras are among the most cost-effective perception sensors that can contribute toward better navigation capabilities for autonomous underwater systems. This work focuses on developing vision-based control and navigation architectures that combine computer vision techniques and control theory. Aspects such as focus of attention, target tracking and interaction, as well as underwater environment anomaly detection are studied using marine robots, optical cameras, and laser systems.
Team members:
Nenyi Dadson, Graduate student in Mechanical Engineering, LSU, Email: ndadso1@lsu.edu
Mikhalib Green, Graduate student in Mechanical Engineering, LSU, Email: mgre156@lsu.edu
Publications:
[1] E. A. Olson, C. Barbalata, J. Zhang, K. A. Skinner, and M. Johnson-Roberson, “Synthetic data generation for deep learning of underwater disparity estimation,” in OCEANS 2018 MTS/IEEE Charleston, IEEE, 2018, pp. 1–6.
[2] W. Ard and C. Barbalata, “Sonar image composition for semantic segmentation using machine learning,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 248–254.