Iranian Journal of Chemistry and Chemical Engineering, Volume 44, Issue 1, Pages 265-282


English title: Reinforcement Learning Based Adaptive PID Controller for a Continuous Stirred Tank Heater Process
English abstract: The application of traditional controllers to real-time analysis of nonlinear processes is restricted by the need to linearize the nonlinear system. Furthermore, tuning poses a significant challenge, especially for nonlinear systems, as traditional methods often require intricate manual computation to operate under various constraints. The Continuous Stirred Tank Heater (CSTH) process considered in this study has a wide range of operating points and is highly nonlinear. Hence, this research aims to pioneer a new approach by leveraging Reinforcement Learning (RL) to streamline the tuning of the traditional Proportional Integral Derivative (PID) controller, adapting it to real-time dynamic process demands. The study focuses mainly on temperature control of the CSTH process, which is known for its nonlinear and time-delay characteristics. By employing policy-based RL techniques, specifically the Twin Delayed Deep Deterministic Policy (TD3) and Soft Actor-Critic (SAC) agents with suitable reward functions, the investigation evaluates their adaptability to various set points and their resilience to disturbances. Through rigorous experimentation and analysis, it is observed that TD3 with a Gaussian reward function outperforms SAC. The study demonstrates that the TD3-based methodology simplifies PID tuning, reducing ISE, IAE, settling time, and overshoot by 47.6%, 26.5%, 3.8%, and 100%, respectively, for the servo response, and ISE and settling time by 37.7% and 4.7% for the regulatory response, compared with a traditionally tuned PID controller.
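The abstract describes an RL agent (TD3 or SAC) that adapts the gains of a PID temperature controller and is rewarded through a Gaussian function of the tracking error. The following Python sketch illustrates that general idea only; it is not the authors' implementation, and the first-order heater model, gain limits, set point, and reward width used here are hypothetical placeholders rather than values from the paper.

# Illustrative sketch only, NOT the paper's code: an RL action sets the PID gains
# (Kp, Ki, Kd) of a toy temperature loop, and the reward is Gaussian in the error.
import numpy as np

class CSTHTempEnv:
    """Toy temperature-control environment for RL-tuned PID (hypothetical model)."""

    def __init__(self, setpoint=60.0, dt=1.0, horizon=300, sigma=2.0):
        self.setpoint = setpoint      # desired temperature in deg C (assumed value)
        self.dt = dt                  # sample time, s
        self.horizon = horizon        # steps per episode
        self.sigma = sigma            # width of the Gaussian reward (assumed value)
        self.reset()

    def reset(self):
        self.T = 25.0                 # current temperature
        self.integral = 0.0           # PID integral term
        self.prev_error = 0.0
        self.k = 0
        return self._obs()

    def _obs(self):
        error = self.setpoint - self.T
        return np.array([error, self.integral, self.prev_error], dtype=np.float32)

    def step(self, action):
        # Action = PID gains chosen by the RL agent (TD3 or SAC in the paper).
        kp, ki, kd = np.clip(action, [0.0, 0.0, 0.0], [10.0, 1.0, 5.0])

        error = self.setpoint - self.T
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        u = np.clip(kp * error + ki * self.integral + kd * derivative, 0.0, 100.0)

        # Hypothetical first-order heater dynamics: dT/dt = (-(T - T_amb) + K*u) / tau
        tau, K, T_amb = 30.0, 0.8, 25.0
        self.T += self.dt * (-(self.T - T_amb) + K * u) / tau

        self.prev_error = error
        self.k += 1
        reward = np.exp(-(error ** 2) / (2 * self.sigma ** 2))  # Gaussian reward on error
        done = self.k >= self.horizon
        return self._obs(), float(reward), done, {}

if __name__ == "__main__":
    env = CSTHTempEnv()
    obs, done = env.reset(), False
    while not done:
        gains = np.array([2.0, 0.1, 0.5])   # fixed guess; a trained agent would adapt these
        obs, r, done, _ = env.step(gains)
    print(f"Final temperature: {env.T:.1f} C (setpoint {env.setpoint} C)")

In the study itself the gains would be produced by a trained TD3 or SAC policy rather than the fixed guess shown above, and performance would be scored with ISE, IAE, settling time, and overshoot as listed in the abstract.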
English keywords: Continuous Stirred Tank Heater, Adaptive PID, Reinforcement Learning, Soft Actor-Critic, Twin Delayed Deep Deterministic Policy

Authors: Gomathi Veerasamy |
Department of Instrumentation Engineering, Madras Institute of Technology Campus, Anna University, Chennai, Tamilnadu, INDIA

Suwetha Balaji |
Department of Instrumentation Engineering, Madras Institute of Technology Campus, Anna University, Chennai, Tamilnadu, INDIA

Thirutajaswin Kadirvelu |
Department of Instrumentation Engineering, Madras Institute of Technology Campus, Anna University, Chennai, Tamilnadu, INDIA

Valarmathi Ramasamy |
School of Electrical and Electronics Engineering, SASTRA Deemed to be University, Thanjavur, Tamilnadu, INDIA


URL: https://ijcce.ac.ir/article_715625_1d2aa3828fce766bb14f6c5e522a5b8d.pdf
Article file: no file has been stored for this article
Article language: en