Fault-Tolerant Control of a Quadcopter Using Reinforcement Learning

Abstract
This study presents a novel reinforcement learning (RL)-based control framework aimed at enhancing the safety and robustness of a quadcopter, with a specific focus on resilience to an in-flight single-propeller failure. It addresses the critical need for a robust control strategy that maintains a desired altitude after such a failure, protecting the hardware and payload in physical applications. The proposed framework investigates two RL methodologies, dynamic programming (DP) and deep deterministic policy gradient (DDPG), to overcome the challenges posed by the quadcopter's rotor failure mechanism. DP, a model-based approach, is leveraged for its convergence guarantees despite high computational demands, whereas DDPG, a model-free technique, facilitates rapid computation but with constraints on solution duration. The research challenge arises from training RL algorithms on large state and action domains. With modifications to the existing DP and DDPG algorithms, the controllers were trained not only to handle large continuous state and action domains but also to achieve a desired state after an in-flight propeller failure. To verify the robustness of the proposed control framework, extensive simulations were conducted in a MATLAB environment across various initial conditions, underscoring its viability for mission-critical quadcopter applications. A comparative analysis of the two RL algorithms was performed, assessing their potential for application to faulty aerial systems.
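To make the fault scenario concrete, the sketch below simulates only the vertical axis of a quadcopter that loses one of four propellers mid-flight. It is not the paper's RL controller: a simple proportional-derivative thrust law stands in for the learned policy, and all parameters (mass, gains, thrust limits, failure time) are assumed for illustration. The controller allocates thrust assuming four working rotors, so after the failure only three-quarters of the commanded thrust is produced, and feedback must compensate.

```python
def simulate_altitude(t_fail=2.0, t_end=10.0, dt=0.01):
    """Vertical-axis quadcopter model with a single-propeller failure.

    Illustrative stand-in for the paper's setup; a PD law replaces the
    RL policy, and all numeric values are assumed.
    """
    m, g = 1.0, 9.81          # mass [kg], gravity [m/s^2] (assumed)
    kp, kd = 8.0, 4.0         # PD gains (assumed)
    z_ref = 5.0               # desired altitude [m]
    z, vz = 5.0, 0.0          # start hovering at the setpoint
    t = 0.0
    while t < t_end:
        rotors = 3 if t >= t_fail else 4   # one propeller lost at t_fail
        # Controller assumes all four rotors work; each gets a quarter
        # of the desired total thrust, so a failure cuts actual thrust.
        u_des = m * (g + kp * (z_ref - z) - kd * vz)
        u_actual = max(0.0, rotors * u_des / 4.0)
        vz += (u_actual / m - g) * dt      # vertical dynamics
        z += vz * dt
        t += dt
    return z

final_z = simulate_altitude()  # settles slightly below z_ref: the PD law
                               # leaves a steady-state offset of g/(3*kp)
```

With these assumed gains the offset is about 0.41 m, illustrating why a fixed-gain law alone is insufficient and a policy that adapts to the post-failure dynamics (as the DP and DDPG controllers in the paper aim to do) is attractive.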
Details
DOI
https://doi.org/10.4271/01-18-01-0006
Pages
15
Citation
Qureshi, M., Maqsood, A., and Fayyaz ud Din, A., "Fault-Tolerant Control of a Quadcopter Using Reinforcement Learning," SAE Int. J. Aerosp. 18(1):75-89, 2025, https://doi.org/10.4271/01-18-01-0006.
Additional Details
Publisher
SAE International
Published
Mar 03, 2025
Product Code
01-18-01-0006
Content Type
Journal Article
Language
English