Found 3 Documents
Robot Manipulator Control with Inverse Kinematics PD-Pseudoinverse Jacobian and Forward Kinematics Denavit Hartenberg
Indra Agustian; Novalio Daratha; Ruvita Faurina; Agus Suandi; Sulistyaningsih Sulistyaningsih
Jurnal Elektronika dan Telekomunikasi Vol 21, No 1 (2021)
Publisher : LIPI Press

DOI: 10.14203/jet.v21.8-18

Abstract

This paper presents the development of a vision-based robotic arm manipulator control system that applies Proportional Derivative-Pseudoinverse Jacobian (PD-PIJ) inverse kinematics and Denavit-Hartenberg (DH) forward kinematics. A color-based object-sorting task is used to observe error propagation when the manipulator is implemented on a real system. Object images captured by a digital camera were processed using the HSV color model, and the centroid coordinates of each detected object were calculated. These coordinates serve as the end-effector position targets for picking each object and placing it in the correct position according to its color. Given an end-effector position target, the PD-PIJ inverse kinematics method determines the appropriate angle of each manipulator joint; these angles are then the input to DH forward kinematics. The process is repeated until the end effector reaches the target. The experiments on the model and on its implementation on the actual manipulator were analyzed using the probability density function (PDF) and the Weibull probability distribution. The results show that the manipulator navigation system performed well: in the real color-sorting task, the probability of success was 94.46% for a Euclidean distance error of less than 1.2 cm.
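As a rough illustration of the control loop described above, the sketch below iterates a PD correction on the task-space error through the Jacobian pseudoinverse and checks each step with Denavit-Hartenberg forward kinematics. The planar 2-link arm, link lengths, gains, and numerical Jacobian are illustrative assumptions, not the paper's actual manipulator or implementation.

```python
import numpy as np

# Illustrative DH parameters for a planar 2-link arm (assumed, not the paper's robot).
L1, L2 = 0.3, 0.25  # link lengths in meters

def fk_dh(q):
    """Forward kinematics by chaining Denavit-Hartenberg transforms."""
    def dh(theta, d, a, alpha):
        ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.,       sa,       ca,      d],
                         [0.,       0.,       0.,     1.]])
    T = dh(q[0], 0, L1, 0) @ dh(q[1], 0, L2, 0)
    return T[:2, 3]  # planar end-effector position (x, y)

def jacobian(q, eps=1e-6):
    """Numerical Jacobian of the forward kinematics (central differences)."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk_dh(q + dq) - fk_dh(q - dq)) / (2 * eps)
    return J

def pd_pij_ik(target, q, kp=0.8, kd=0.1, iters=200, tol=1e-4):
    """PD-controlled pseudoinverse-Jacobian iteration toward a target position."""
    prev_err = np.zeros(2)
    for _ in range(iters):
        err = target - fk_dh(q)
        if np.linalg.norm(err) < tol:
            break
        # PD term on the task-space error, mapped to joint space by J^+.
        q = q + np.linalg.pinv(jacobian(q)) @ (kp * err + kd * (err - prev_err))
        prev_err = err
    return q

q = pd_pij_ik(np.array([0.35, 0.20]), q=np.array([0.1, 0.1]))
print("joint angles (rad):", q, "-> reached:", fk_dh(q))
```

In the paper's setup the target positions would come from the HSV centroid detection; here a fixed target stands in for one detected object.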
Comparative study of ensemble deep learning models to determine the classification of turtle species
Ruvita Faurina; Andang Wijanarko; Aknia Faza Heryuanti; Sahrial Ihsani Ishak; Indra Agustian
Computer Science and Information Technologies Vol 4, No 1: March 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v4i1.p24-32

Abstract

Sea turtles are reptiles listed on the International Union for Conservation of Nature (IUCN) Red List of Threatened Species and in Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) as species threatened with extinction. Sea turtles are nearly extinct due to natural predators and to people who are frequently mistaken, or simply unaware, about which turtles must not be caught. The aim of this study was to develop a classification system to help identify sea turtle species. To this end, an ensemble of convolutional neural networks (CNNs) based on transfer learning is proposed for classifying the turtle species found in coastal communities. Five well-known CNN models were considered (VGG-16, ResNet-50, ResNet-152, Inception-V3, and DenseNet201), and the three most successful were selected for the ensemble method. The final result is obtained by combining the predictions of the CNN models at test time. The evaluation shows that the VGG16-DenseNet201 ensemble is the best ensemble model, with accuracy, precision, recall, and F1-score values of 0.74, 0.75, 0.74, and 0.76, respectively. This ensemble model also outperforms the individual original models.
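For illustration only, here is a minimal soft-voting sketch of a transfer-learning CNN ensemble using the winning VGG16 and DenseNet201 backbones via torchvision. The class count and the averaging rule are assumptions: the abstract does not state how predictions are combined, and the paper's ensemble draws on three models rather than the two shown here.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # illustrative; the abstract does not state the species count

# Transfer learning: reuse ImageNet weights, replace only the classifier head.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

dense = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
dense.classifier = nn.Linear(dense.classifier.in_features, NUM_CLASSES)

@torch.no_grad()
def ensemble_predict(x):
    """Soft voting: average the two models' class probabilities, then argmax."""
    vgg.eval()
    dense.eval()
    probs = (torch.softmax(vgg(x), dim=1) + torch.softmax(dense(x), dim=1)) / 2
    return probs.argmax(dim=1)

# Example: one batch of 224x224 RGB images (random tensors stand in for photos).
batch = torch.randn(2, 3, 224, 224)
print(ensemble_predict(batch))
```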
Image captioning to aid blind and visually impaired outdoor navigation
Ruvita Faurina; Anisa Jelita; Arie Vatresia; Indra Agustian
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 12, No 3: September 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v12.i3.pp1104-1117

Abstract

Artificial intelligence has dramatically improved the quality of services for human needs, including technology for the blind and visually impaired, particularly technology that helps them understand visual scenes to ease navigation in daily life. This study developed an image captioning model to aid the blind and visually impaired in outdoor navigation. The model follows the encoder-decoder approach, with convolutional neural network (CNN) feature extraction plus an attention layer as the encoder and a long short-term memory (LSTM) network as the decoder. ResNet101 and ResNet152 are used in the encoder to extract image features; the extracted features and the caption generated so far are passed to the attention layer and the LSTM network. The attention layer uses the Bahdanau attention mechanism. Model accuracy is measured with the bilingual evaluation understudy (BLEU) score, the metric for evaluation of translation with explicit ordering (METEOR), and the recall-oriented understudy for gisting evaluation-longest common subsequence (ROUGE-L). ResNet101 performed best, scoring 91.811% on BLEU-4 and 94.0337% on METEOR. The captioning results show that the model is quite successful at producing a simple caption appropriate to each image.
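The decoding step described above can be sketched as follows: additive (Bahdanau) attention scores each CNN feature-map region against the current LSTM hidden state, and the resulting context vector is concatenated with the word embedding and fed to an LSTM cell that predicts the next word. All dimensions, the region count, and the vocabulary size below are illustrative, not values from the paper.

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    """Additive (Bahdanau) attention over CNN feature-map regions."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.W_feat = nn.Linear(feat_dim, attn_dim)
        self.W_hid = nn.Linear(hidden_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, R, feat_dim) image regions; hidden: (B, hidden_dim)
        scores = self.v(torch.tanh(self.W_feat(feats) + self.W_hid(hidden).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)   # attention weights over regions
        context = (alpha * feats).sum(dim=1)   # weighted sum: context vector
        return context, alpha.squeeze(-1)

class DecoderStep(nn.Module):
    """One LSTM decoding step: attend to image regions, then predict the next word."""
    def __init__(self, vocab, embed_dim=256, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.attn = BahdanauAttention(feat_dim, hidden_dim, attn_dim=256)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab)

    def forward(self, word, feats, h, c):
        context, _ = self.attn(feats, h)
        h, c = self.lstm(torch.cat([self.embed(word), context], dim=1), (h, c))
        return self.out(h), h, c

# Example step: 36 regions of 2048-d ResNet features (sizes are assumptions).
B, vocab = 2, 1000
dec = DecoderStep(vocab)
feats = torch.randn(B, 36, 2048)
h = c = torch.zeros(B, 512)
logits, h, c = dec(torch.zeros(B, dtype=torch.long), feats, h, c)
print(logits.shape)  # (2, 1000): next-word scores for each image in the batch
```

At inference time this step would be repeated, feeding each predicted word back in until an end-of-sentence token is produced.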