Robinson Jimenez-Moreno
Militar Nueva Granada University

Published: 6 Documents

Tool delivery robot using convolutional neural network Javier Pinzon-Arenas; Robinson Jimenez-Moreno
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 6: December 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i6.pp6300-6308

Abstract

This article presents a human-robot interaction system in which algorithms control the movement of a manipulator so that it can search for a desired tool and deliver it, with a given orientation, into the user's hand. A convolutional neural network (CNN) detects and recognizes the user's hand; geometric analysis adjusts the delivery pose of the tool from any position of the robot and any orientation of the gripper; and a trajectory-planning algorithm drives the movement of the manipulator. The activations of a CNN developed in previous work made it possible to detect the position and orientation of the hand in the workspace and track it in real time, both in a simulated environment and in a real one.
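The paper does not detail its geometric analysis, but the core of a delivery pose can be sketched as pointing the gripper at the detected hand centroid. A minimal illustration (the function name and coordinate convention are assumptions, not the authors' code):

```python
import numpy as np

def gripper_orientation(gripper_pos, hand_pos):
    """Yaw and pitch (radians) of the vector from the gripper to the
    detected hand centroid, i.e. the direction the gripper should face
    to deliver the tool into the user's hand."""
    v = np.asarray(hand_pos, dtype=float) - np.asarray(gripper_pos, dtype=float)
    yaw = np.arctan2(v[1], v[0])                      # rotation about z
    pitch = np.arctan2(v[2], np.hypot(v[0], v[1]))    # elevation above xy-plane
    return yaw, pitch

# hand detected one unit ahead and one unit to the left, same height
yaw, pitch = gripper_orientation((0.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```

In a full system these angles would feed the trajectory planner as the target end-effector orientation.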
Robotic hex-nut sorting system with deep learning Cristian Almanza; Javier Martínez Baquero; Robinson Jiménez-Moreno
International Journal of Electrical and Computer Engineering (IJECE) Vol 11, No 4: August 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v11i4.pp3575-3583

Abstract

This article presents the design and implementation of an automation system based on a robotic arm for hex-nut classification, using pattern recognition and image processing. The robotic arm is driven by three servo motors and an electromagnetic end effector. The pattern recognition stage classifies three different types of hex-nut through deep learning algorithms based on convolutional neural network architectures. The proposed methodology comprises four phases: the first is the design, implementation, and control of the robotic arm; the second is image capture, classification, and treatment; the third grips the nut through the robot's inverse kinematics; and the final phase relocates the hex-nut to the respective container. The automation system successfully classifies all types of hex-nut, with the convolutional network, an efficient and recent pattern recognition method, reaching 100% accuracy over 150 iterations. This development yields a novel algorithm for robotic hex-nut sorting applications.
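The third phase relies on the robot's inverse kinematics to position the electromagnetic end effector over a nut. As an illustrative sketch only (the paper's arm has three servos; a planar two-link closed-form solution is shown here, with link lengths assumed):

```python
import numpy as np

def two_link_ik(x, y, l1=10.0, l2=10.0):
    """Closed-form inverse kinematics for a planar two-link arm.
    Returns joint angles (theta1, theta2) in radians that place the
    end effector at (x, y); raises if the target is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = np.arccos(c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

t1, t2 = two_link_ik(15.0, 5.0)
# forward kinematics to verify the solution reproduces the target
fx = 10.0 * np.cos(t1) + 10.0 * np.cos(t1 + t2)
fy = 10.0 * np.sin(t1) + 10.0 * np.sin(t1 + t2)
```

A real three-servo arm would add a base rotation joint and solve the remaining two joints in the vertical plane the same way.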
Visual control system for grip of glasses oriented to assistance robotics Robinson Jimenez-Moreno; Astrid Rubiano; Jose L. Ramirez
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 6: December 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i6.pp6330-6339

Abstract

Assistance robotics is presented as a means of improving the quality of life of people with disabilities; an application case in assisted feeding is presented. This paper develops a system based on artificial intelligence techniques for gripping a glass with a robotic arm so that it does not slip during manipulation as the liquid level varies. A Faster R-CNN detects the glass and the arm's gripper, and from the data obtained by the network, the mass of the beverage and a distance delta between the gripper and the liquid are estimated. These estimated values feed a fuzzy system whose output is the torque that the motor driving the gripper must exert. The Faster R-CNN reached 97.3% accuracy in detecting the elements of interest in the environment, and the fuzzy algorithm achieved a 76% success rate in grips of the glass.
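The paper's fuzzy rule base and membership functions are not reproduced here; the sketch below shows the general shape of such a controller, with illustrative (assumed) membership ranges and torque levels: heavier estimated mass and a smaller gripper-to-liquid delta push the output torque up.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grip_torque(mass, delta):
    """Mamdani-style fuzzy inference (illustrative values): rule strength
    is the min of the antecedent memberships; the crisp torque is the
    weighted average of each rule's output level."""
    light, heavy = tri(mass, -0.1, 0.0, 0.3), tri(mass, 0.1, 0.4, 0.7)
    near, far = tri(delta, -1.0, 0.0, 5.0), tri(delta, 2.0, 8.0, 14.0)
    rules = [(min(light, far), 0.2), (min(light, near), 0.4),
             (min(heavy, far), 0.6), (min(heavy, near), 0.9)]
    num = sum(w * t for w, t in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

With this rule base, a heavier glass at the same delta commands more torque, which is the behavior the paper's controller targets.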
Object gripping algorithm for robotic assistance by means of deep learning Robinson Jimenez-Moreno; Astrid Rubiano Fonseca; Jose Luis Ramirez
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 6: December 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i6.pp6292-6299

Abstract

This paper applies recent state-of-the-art deep learning techniques, still little addressed in robotic applications, presenting a new algorithm based on Faster R-CNN and CNN regression. Typical machine vision systems require multiple stages to locate an object and allow a robot to grasp it, which increases system noise and processing time. Region-based convolutional networks solve this problem: two convolutional architectures are used, one for the classification and location of three types of objects and one to determine the grip angle for a robotic gripper. In the established virtual environment, the grip algorithm runs at up to 5 frames per second with 100% object classification; the Faster R-CNN reaches 100% accuracy on the classifications of the test database and over 97% average precision in locating the generated boxes on each element, gripping the objects successfully.
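The ground-truth grip angle that a regression CNN like the paper's would learn can be derived geometrically from an object's pixel mask: the principal axis of the pixel cloud gives the object's orientation, and the gripper closes perpendicular to it. A minimal PCA sketch (not the authors' method, which regresses the angle directly from the image):

```python
import numpy as np

def grip_angle(points):
    """Orientation of an elongated object from its pixel coordinates:
    the principal axis via PCA, returned as an angle in degrees [0, 180)."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)                  # center the pixel cloud
    cov = pts.T @ pts / len(pts)             # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    major = vecs[:, np.argmax(vals)]         # axis of largest variance
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# pixels lying along a 45-degree line
angle = grip_angle([(i, i) for i in range(10)])
```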
Classification and Grip of Occluded Objects Robinson Jimenez-Moreno; Paula Useche-Murillo
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 9, No 1: March 2021
Publisher : IAES Indonesian Section

DOI: 10.52549/ijeei.v9i1.1846

Abstract

This paper presents a system for the detection, classification, and grip of occluded objects by means of machine vision, artificial intelligence, and an anthropomorphic robot, providing a solution for grasping elements that present occlusions. The deep learning algorithms used are based on convolutional neural networks (CNN), specifically Fast R-CNN (Fast region-based CNN) and DAG-CNN (directed acyclic graph CNN) for pattern recognition; the three-dimensional information of the environment was collected with a Kinect V1, and the simulations were tested with the VRML tool. A sequence of detection, classification, and grip was programmed to determine which elements present occlusions and which type of tool generates the occlusion. According to the user's requirements, the desired elements are delivered (occluded or not) and the unwanted elements are removed. The resulting program achieved 88.89% accuracy in gripping and delivering occluded objects, with the Fast R-CNN and DAG-CNN reaching 70.9% and 96.2% accuracy respectively: the first network detects elements without occlusions, and the second classifies the objects into five tools (scalpel, scissors, screwdriver, spanner, and pliers). Gripping occluded objects requires accurately detecting the element at the top of the pile so that it can be removed without affecting the rest of the environment. Additionally, the detection process requires that part of the occluded tool be visible in order to determine the existence of occlusions in the stack.
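Finding the element at the top of the pile is straightforward once each detection carries depth from the RGB-D sensor: the object nearest the camera is on top and should be removed first. A toy sketch of that ordering step (labels and depth values are made up for illustration):

```python
def unstack_order(detections):
    """Given detections as (label, mean_depth_mm) pairs from an RGB-D
    sensor looking down at the pile, return the removal order:
    nearest to the camera (top of the pile) first."""
    return [label for label, _ in sorted(detections, key=lambda d: d[1])]

order = unstack_order([("spanner", 820), ("scissor", 760), ("pliers", 905)])
# the scissor has the smallest depth, so it sits on top of the pile
```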
Deep learning speech recognition for residential assistant robot Robinson Jiménez-Moreno; Ricardo A. Castillo
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 12, No 2: June 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v12.i2.pp585-592

Abstract

This work presents the design and validation of a voice assistant to command robotic tasks in a residential environment, as support for people who require isolation or who have motor impairments. A database of 3600 audio clips covering 8 categories of words such as "paper", "glass", or "robot", which combine into commands such as "carry paper" or "bring medicine", was preprocessed to obtain a matrix of Mel frequencies and their derivatives as input to a convolutional neural network, which reached 96.9% accuracy in discriminating the categories. The command recognition tests involve recognizing groups of three words starting with "robot", for example "robot bring glass", and identify 8 different actions per voice command with an accuracy of 88.75%.
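The Mel-frequency features behind such a front end rest on a standard frequency warping. A small sketch of the mel conversion and of the equally spaced band centres used to build a Mel filterbank (the filter count and frequency range below are assumptions, not the paper's configuration):

```python
import math

def hz_to_mel(f):
    """Convert frequency in Hz to the mel scale (O'Shaughnessy formula),
    the perceptual warping behind Mel-frequency features."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_points(f_min, f_max, n_filters):
    """Band centre frequencies (Hz) equally spaced on the mel scale,
    including both edge points, as used to place triangular filters."""
    m_min, m_max = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (m_max - m_min) / (n_filters + 1)
    return [700.0 * (10.0 ** ((m_min + i * step) / 2595.0) - 1.0)
            for i in range(n_filters + 2)]

bands = mel_points(0.0, 8000.0, 10)
```

The triangular filters built on these points yield the Mel-frequency matrix whose values and derivatives feed the CNN.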