Journal : TELKOMNIKA (Telecommunication Computing Electronics and Control)

Teleautonomous Control on Rescue Robot Prototype
Handy Wicaksono; Handry Khoswanto; Son Kuswadi
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 10, No 4: December 2012
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v10i4.849

Abstract

Robot applications in disaster areas can help responder teams save victims. To complete this task, a robot must have a flexible movement mechanism so it can pass through cluttered areas. A passive linkage can be used on the robot chassis to provide this flexibility. In physical experiments, the robot succeeded in moving across gravel and over a 5 cm obstacle. A rescue robot also has specialized control needs: it must be controllable remotely, and it must also be able to move autonomously. The teleautonomous control method combines these two approaches. The experiments show that in teleoperation mode, the operator must become accustomed to viewing the environment through the robot's camera, while in autonomous mode the robot succeeded in avoiding obstacles and searching for the target based on sensor readings and the controller program. In teleautonomous mode, the robot can change its control mode using Bluetooth communication for data transfer, making robot control more flexible.
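The mode switching described in the abstract can be sketched as a small state machine: the operator toggles between teleoperation and autonomous modes (over a Bluetooth link in the paper's setup), and each control step either passes the operator's command through or reacts to sensor readings. This is a minimal illustrative sketch; the class, command names, and the 10 cm sensor threshold are assumptions, not details from the paper.

```python
class TeleautonomousController:
    """Hypothetical sketch of teleautonomous mode switching (names are illustrative)."""

    TELEOP = "teleoperation"
    AUTO = "autonomous"

    def __init__(self):
        # Start under direct operator control.
        self.mode = self.TELEOP

    def handle_command(self, cmd):
        # Operator toggles the control mode, e.g. via a Bluetooth message.
        if cmd == "switch":
            self.mode = self.AUTO if self.mode == self.TELEOP else self.TELEOP
        return self.mode

    def step(self, operator_input, obstacle_distance_cm):
        # Teleoperation: forward the operator's drive command unchanged.
        if self.mode == self.TELEOP:
            return operator_input
        # Autonomous: simple reactive rule based on a distance sensor
        # (assumed 10 cm threshold for obstacle avoidance).
        return "avoid" if obstacle_distance_cm < 10 else "forward"


if __name__ == "__main__":
    c = TeleautonomousController()
    print(c.step("left", 50))       # teleoperation: echoes operator input
    c.handle_command("switch")
    print(c.step("left", 5))        # autonomous: reacts to the sensor instead
```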
Behaviors Coordination and Learning on Autonomous Navigation of Physical Robot
Handy Wicaksono; Handry Khoswanto; Son Kuswadi
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 9, No 3: December 2011
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v9i3.738

Abstract

Behavior coordination is one of the key points in behavior-based robotics; subsumption architecture and motor schema are two examples of coordination methods. To study their characteristics, experiments on a physical robot need to be done. The experimental results show that the first method gives a quick and robust but non-smooth response, while the latter gives a slower but smoother response and tends to reach the target faster. Learning behaviors improve the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. The Q-learning rate affects the robot's performance in the learning phase. The Q-learning algorithm is implemented within the subsumption architecture of a physical robot. As a result, the robot succeeds in performing the autonomous navigation task, although it has some limitations related to sensor placement and characteristics.
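The off-policy Q-learning update mentioned in the abstract can be sketched with a tabular value function: Q(s, a) is nudged toward the received reward plus the discounted best value of the next state, scaled by the learning rate alpha. This is a minimal generic sketch of the standard update rule, not the paper's implementation; the states, actions, and parameter values below are illustrative assumptions.

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Off-policy: the target uses the greedy value of the next state,
    regardless of which action the robot actually takes next."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]


if __name__ == "__main__":
    # Illustrative two-state table for a navigating robot
    # (states/actions are hypothetical, not from the paper).
    Q = {
        "clear": {"forward": 0.0, "turn": 0.0},
        "obstacle": {"forward": 0.0, "turn": 2.0},
    }
    # Moving forward in a clear state earns reward 1 and leads to "obstacle".
    new_value = q_update(Q, "clear", "forward", 1.0, "obstacle")
    print(new_value)  # 0.5 * (1.0 + 0.9 * 2.0 - 0.0) = 1.4
```

The learning rate alpha, which the abstract notes affects performance during the learning phase, controls how far each update moves Q(s, a) toward the new target.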