Contact Name
Iswanto
Contact Email
-
Phone
+628995023004
Journal Mail Official
jrc@umy.ac.id
Editorial Address
Kantor LP3M Gedung D Kampus Terpadu UMY Jl. Brawijaya, Kasihan, Bantul, Yogyakarta 55183
Location
Kab. Bantul,
Daerah Istimewa Yogyakarta
INDONESIA
Journal of Robotics and Control (JRC)
ISSN : 2715-5056     EISSN : 2715-5072     DOI : https://doi.org/10.18196/jrc
Journal of Robotics and Control (JRC) is an international open-access journal published by Universitas Muhammadiyah Yogyakarta. The journal invites students, researchers, and engineers to contribute to the development of theoretical and practice-oriented work in Robotics and Control. Its scope includes (but is not limited to) the following: Manipulator Robot, Mobile Robot, Flying Robot, Autonomous Robot, Automation Control, Programmable Logic Controller (PLC), SCADA, DCS, Wonderware, Industrial Robot, Robot Controller, Classical Control, Modern Control, Feedback Control, PID Controller, Fuzzy Logic Controller, State Feedback Controller, Neural Network Control, Linear Control, Optimal Control, Nonlinear Control, Robust Control, Adaptive Control, Geometry Control, Visual Control, Tracking Control, Artificial Intelligence, Power Electronic Control System, Grid Control, DC-DC Converter Control, Embedded Intelligence, Network Control System, and Automatic Control.
Articles: 21 Documents
Search results for issue "Vol 5, No 6 (2024)": 21 Documents
Autonomous Robotic Systems with Artificial Intelligence Technology Using a Deep Q Network-Based Approach for Goal-Oriented 2D Arm Control (Bashabsheh, Murad)
Journal of Robotics and Control (JRC) Vol 5, No 6 (2024)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v5i6.23850

Abstract

Accurate control of robotic arms in two-dimensional environments presents significant challenges, particularly in dynamic, real-time applications. Traditional model-based approaches require substantial system modeling, rendering them computationally expensive. This paper presents an adaptive Artificial Intelligence (AI)-driven approach that uses Deep Q-Network (DQN) control for a two-link robotic arm, thereby supporting better scalability. The DQN algorithm, a model-free Reinforcement Learning (RL) technique, allows the robotic arm to learn optimal control strategies independently by interacting with the environment and adapting to dynamic conditions. The robot's task is to reach a specific target (a red point) within a limited number of episodes. Key components of the methodology include the problem statement, the DQN architecture, the representation of the state and action spaces, a reward function, and the training process. Experimental results indicate that the DQN agent effectively learns optimal actions, guiding the arm to the target with high accuracy and robustness. Performance improves steadily during initial training and then stabilizes, indicating an effective control policy. This study contributes to the understanding of reinforcement learning in robotic control tasks and, in particular, demonstrates the potential of DQN for solving complex, goal-oriented tasks with minimal prior modeling. Compared to conventional control approaches, the DQN-driven one offers greater flexibility, scalability, and efficiency. Although conducted in a simplified 2D environment, this research is novel in its emphasis on enabling the robotic arm to accomplish goal-oriented reaching tasks, and it lays a strong foundation for future applications in industrial automation and service robotics.
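The abstract mentions the core DQN ingredients for a two-link arm: a state/action representation, a reward function, and a Q-network. The sketch below illustrates what those components could look like for a planar reaching task; it is not the paper's implementation. The link lengths, state layout (joint angles plus target coordinates), 9-way discrete action set, reward shape, and network sizes are all assumptions for illustration, and a full DQN would additionally need a replay buffer, a target network, and gradient-based training in a deep learning framework.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (assumed, not from the paper)

def forward_kinematics(theta1, theta2):
    """End-effector position of a two-link planar arm."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def reward(ee, target, tol=0.05):
    """Dense negative-distance reward with a terminal bonus (illustrative)."""
    d = np.linalg.norm(ee - target)
    return 10.0 if d < tol else -d

# Tiny Q-network: one hidden layer, 9 discrete actions
# (each joint moves by -delta, 0, or +delta: 3 x 3 combinations).
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (16, 4))   # state = [theta1, theta2, tx, ty]
W2 = rng.normal(0.0, 0.1, (9, 16))

def q_values(state):
    """Forward pass: one ReLU hidden layer, one linear output per action."""
    h = np.maximum(0.0, W1 @ state)
    return W2 @ h

def epsilon_greedy(state, eps=0.1):
    """Explore with probability eps, otherwise pick the greedy action."""
    if rng.random() < eps:
        return int(rng.integers(9))
    return int(np.argmax(q_values(state)))

def td_target(r, next_state, done, gamma=0.99):
    """Bootstrapped Q-learning target: r + gamma * max_a' Q(s', a')."""
    return r if done else r + gamma * float(np.max(q_values(next_state)))
```

During training, the agent would repeatedly pick an action with `epsilon_greedy`, apply the joint increments, score the new pose with `reward`, and regress `q_values(state)[action]` toward `td_target`; the steady improvement followed by stabilization that the abstract reports corresponds to the TD error shrinking as the policy converges.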

Page 3 of 3 | Total Records: 21