Abstract:
This paper presents a comparative simulation study of three collision avoidance strategies for autonomous unmanned aerial vehicle (UAV) missions in natural environments: artificial potential fields (APF), the sampling-based Rapidly-exploring Random Tree Star (RRT*), and deep reinforcement learning (DRL). Using the Robot Operating System (ROS) with the Gazebo and AirSim simulation platforms, UAVs were tasked with navigating cluttered terrain modeled with static obstacles (trees) and dynamic obstacles (birds). Evaluation metrics include mission success rate, path length, flight time, and collision rate. Results showed that a hybrid strategy combining APF with RRT* (APF-RRT*) achieved higher reliability and shorter paths than the standalone approaches, while DRL performed best in familiar settings. The study demonstrates that combining global and local planning techniques enhances UAV safety and mission efficiency.
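For readers unfamiliar with the local-planning component, a minimal 2D APF sketch is shown below. This is an illustrative assumption, not the paper's implementation: the gains `k_att` and `k_rep`, influence radius `d0`, and step size are arbitrary values chosen for the example.

```python
# Minimal 2D artificial potential field (APF) sketch -- illustrative only,
# not the paper's implementation. Gains, radii, and step size are assumed.
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    """Return the next 2D position after one step along the net force."""
    # Attractive force: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force from each obstacle within influence distance d0.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    # Normalize and take a fixed-length step along the net force direction.
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

def plan(start, goal, obstacles, tol=0.2, max_iters=2000):
    """Follow the potential gradient until the goal is within tolerance."""
    path, pos = [start], start
    for _ in range(max_iters):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            break
        pos = apf_step(pos, goal, obstacles)
        path.append(pos)
    return path
```

In a hybrid APF-RRT* arrangement of the kind the abstract describes, a routine like `plan` would typically handle local, reactive avoidance between the waypoints produced by a global RRT* planner; APF alone is prone to local minima in cluttered terrain, which is one motivation for combining the two.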