Robinson Jiménez-Moreno
Nueva Granada Military University

Published: 8 Documents

Articles

Offline signature verification using DAG-CNN
Javier O. Pinzón-Arenas; Robinson Jiménez-Moreno; César G. Pachón-Suescún
International Journal of Electrical and Computer Engineering (IJECE) Vol 9, No 4: August 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v9i4.pp3314-3322

Abstract

This paper presents the implementation of a DAG-CNN that classifies and verifies the authenticity of the offline signatures of 3 users, following the writer-independent approach. Two databases (training/validation and testing) were built manually: the signatures of the 3 users were collected by hand, forged signatures were made by people outside the database and altered versions were made by the users themselves, and signatures from another 115 people were used to create the non-member category. Once the network was trained, validation and testing were performed, obtaining overall accuracies of 99.4% and 99.3%, respectively. The features learned by the network are shown, verifying the ability of this network configuration to be used in applications for identification and verification of offline signatures.
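
As a rough illustration of the DAG topology mentioned in the abstract, the Python sketch below (PyTorch) merges two parallel convolutional branches before a shared classifier; the layer sizes, input resolution, and 4-class output (3 users plus one non-member/forgery class) are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal DAG-style CNN sketch: two parallel branches merged before the classifier.
# All sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class DagCnn(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Branch A: small kernels, fine stroke detail.
        self.branch_a = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Branch B: larger kernels, global signature shape.
        self.branch_b = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # The DAG merge point: both feature vectors feed one classifier.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.branch_a(x).flatten(1)
        b = self.branch_b(x).flatten(1)
        return self.classifier(torch.cat([a, b], dim=1))

logits = DagCnn()(torch.randn(1, 1, 64, 128))  # -> shape (1, 4)
```
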
Comparison between handwritten word and speech record in real-time using CNN architectures
Javier Orlando Pinzón-Arenas; Robinson Jiménez-Moreno
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 4: August 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i4.pp4313-4321

Abstract

This paper presents the development of a system that compares spoken and handwritten words by means of deep learning techniques. Ten words are acquired through an audio function and the same words, written by hand, are captured by a webcam, in order to verify whether the two inputs match and to indicate whether the required word was given. Two different CNN architectures were used, one per modality: for voice recognition, a CNN identifies complete words from features obtained with mel-frequency cepstral coefficients, while for handwriting a Faster R-CNN both locates and identifies the captured word. To implement the system, an easy-to-use graphical interface was developed that unites the two neural networks for operation. Real-time tests yielded an overall accuracy of 95.24%, showing the good performance of the implemented system, with a response time of less than 200 ms per comparison.
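
A minimal sketch of the comparison step described above, assuming two pretrained models: a speech CNN operating on an MFCC map of the recorded word and a Faster R-CNN-style detector returning the label of the handwritten word. Both models, their `predict` methods, and the label sets are hypothetical placeholders, not the authors' implementation.

```python
# Compare the word predicted from audio with the word detected in a webcam frame.
import numpy as np
import librosa

def words_match(audio: np.ndarray, sr: int, frame: np.ndarray,
                speech_net, handwriting_detector) -> bool:
    # MFCC features summarise the spoken word for the speech CNN.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    spoken_word = speech_net.predict(mfcc[np.newaxis, ...])      # hypothetical API
    # The detector both localises and classifies the handwritten word.
    written_word = handwriting_detector.predict(frame)           # hypothetical API
    return spoken_word == written_word
```
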
Assistant robot through deep learning
Robinson Jiménez-Moreno; Javier Orlando Pinzón-Arenas; César Giovany Pachón-Suescún
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 1: February 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i1.pp1053-1062

Abstract

This article presents a work oriented to assistive robotics, in which a scenario is established for a robot to reach a tool in the hand of a user after the user has verbally requested it by name. Three convolutional neural networks are trained: one for recognition of a group of tools (scalpel, screwdriver, and scissors), which obtained 98% accuracy in identifying the tools established for the application; one for speech recognition, trained with the names of the tools in Spanish, which reached 97.5% validation accuracy in recognizing the words; and one for recognition of the user's hand, classifying two gestures, open and closed hand, with 96.25% accuracy. With those networks, real-time tests were performed, delivering each tool with 100% accuracy, i.e. the robot correctly identified what the user requested, recognized each tool, and delivered the requested one when the user opened their hand, taking an average of 45 seconds to execute the application.
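
The decision loop below is a hedged sketch of how the three networks could be tied together; the wrapper functions (`listen_for_tool`, `locate_tool`, `hand_is_open`, `deliver`) and the Spanish labels are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the assistant's request-and-deliver cycle under assumed wrappers
# around the speech CNN, tool-recognition CNN, gesture CNN, and robot controller.
TOOLS = {"bisturí": "scalpel", "destornillador": "screwdriver", "tijeras": "scissors"}

def assist_once(listen_for_tool, locate_tool, hand_is_open, deliver) -> None:
    spoken = listen_for_tool()        # Spanish word returned by the speech CNN
    tool = TOOLS.get(spoken)
    if tool is None:
        return                        # unrecognised request, do nothing
    pose = locate_tool(tool)          # tool CNN + camera give a grasp pose
    while not hand_is_open():         # gesture CNN gates the hand-over
        pass
    deliver(tool, pose)               # robot places the tool in the open hand
```
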
Abnormal gait detection by means of LSTM
Cesar G. Pachon-Suescun; Javier O. Pinzon-Arenas; Robinson Jimenez-Moreno
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 2: April 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i2.pp1495-1506

Abstract

This article presents a system focused on detecting three types of abnormal gait patterns caused by neurological diseases: Parkinsonian gait, hemiplegic gait, and spastic diplegic gait. A Kinect sensor extracts the skeleton of a person while walking, and four types of feature bases are then calculated, generating different sequences from the 25 joint points that the skeleton provides. For each type of base, a recurrent neural network (RNN) is trained, specifically a long short-term memory (LSTM) network. In addition, a graphical user interface allows data acquisition, training, and testing of the trained networks. Of the four trained networks, the highest accuracy, 98.1%, is obtained with the base computed from the distance of each skeleton point to the Hip-Center point.
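
The sketch below illustrates the best-performing feature base (distance of each of the 25 Kinect joints to the Hip-Center) feeding an LSTM classifier in PyTorch; the joint index, hidden size, and the 4-class output are assumptions made for the example, not values from the paper.

```python
# Hip-Center distance sequences -> LSTM classifier (sizes are illustrative).
import torch
import torch.nn as nn

HIP_CENTER = 0  # assumed index of the Hip-Center joint in the skeleton array

def hip_distance_sequence(skeleton: torch.Tensor) -> torch.Tensor:
    # skeleton: (frames, 25, 3) xyz positions -> (frames, 25) joint distances
    return torch.linalg.norm(skeleton - skeleton[:, HIP_CENTER:HIP_CENTER + 1, :], dim=-1)

class GaitLstm(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=25, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(seq)    # classify from the final hidden state
        return self.fc(h[-1])

seq = hip_distance_sequence(torch.randn(120, 25, 3)).unsqueeze(0)  # one walk
logits = GaitLstm()(seq)                                           # -> (1, 4)
```
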
ResSeg: Residual encoder-decoder convolutional neural network for food segmentation
Javier O. Pinzón-Arenas; Robinson Jiménez-Moreno; César G. Pachón-Suescún
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 1: February 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i1.pp1017-1026

Abstract

This paper presents the implementation and evaluation of different convolutional neural network architectures for food segmentation. Recognition of 6 categories is proposed, covering the main food groups (protein, grains, fruit, vegetables) plus two additional groups, rice and drink or juice. To make the recognition more demanding, the networks are also tested with dishes at different stages of consumption, from serving to finishing, in order to verify their ability to detect when no food remains on the plate. Finally, the two best-performing networks are compared: a SegNet with VGG-16 architecture and a network proposed in this work, called Residual Segmentation Convolutional Neural Network (ResSeg), with which accuracies greater than 90% and intersection-over-union greater than 75% were obtained. This demonstrates not only the suitability of SegNet architectures for food segmentation, but also the benefit of residual layers in improving segmentation contours and handling dishes with complex or partially eaten food distributions, opening the field of application of this type of network to feeding assistants, automated restaurants, and dietary control based on the amount of food consumed.
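
As a toy illustration of the residual encoder-decoder idea, the PyTorch sketch below adds an additive skip connection between the encoder features and the decoder input; channel counts and the class count are illustrative assumptions, and the network is far smaller than ResSeg itself.

```python
# Tiny encoder-decoder segmentation network with a residual (additive) skip.
import torch
import torch.nn as nn

class TinyResSeg(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.bottleneck = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(32, num_classes, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        # Residual connection: bottleneck output is added back to the encoder
        # features, the mechanism credited with sharpening segmentation contours.
        return self.dec(self.bottleneck(e) + e)

masks = TinyResSeg()(torch.randn(1, 3, 128, 128))  # -> (1, 7, 128, 128)
```
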
Robotic navigation algorithm with machine vision
Cesar G. Pachon-Suescun; Carlos J. Enciso-Aragon; Robinson Jimenez-Moreno
International Journal of Electrical and Computer Engineering (IJECE) Vol 10, No 2: April 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v10i2.pp1308-1316

Abstract

In the field of robotics, it is essential to know the work area in which the agent will operate; for that reason, different methods of mapping and spatial localization have been developed for different applications. In this article, a machine vision algorithm is proposed that identifies objects of interest within a work area and determines their polar coordinates relative to the observer, applicable either with a fixed camera or on a mobile agent such as the one presented in this document. The developed algorithm was evaluated in two situations, determining the positions of six objects around the mobile agent. These results were compared with the real position of each object, reaching a high level of accuracy with an average error of 1.3271% in distance and 2.8998% in angle.
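
A minimal sketch of the polar-coordinate step, assuming a simplified model in which the bearing angle is proportional to the pixel offset from the image center; the image width, field of view, and the assumed distance estimate are illustrative values, not parameters from the paper.

```python
# Convert a detected object (pixel column + estimated distance) to polar
# coordinates (r, theta) relative to the observer. Camera constants are assumed.
import math

IMAGE_WIDTH_PX = 640        # assumed camera resolution
HORIZONTAL_FOV_DEG = 60.0   # assumed horizontal field of view

def to_polar(center_x_px: float, distance_m: float) -> tuple[float, float]:
    # Angle grows linearly with the pixel offset from the optical axis.
    offset = center_x_px - IMAGE_WIDTH_PX / 2
    theta_deg = offset * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX)
    return distance_m, theta_deg

print(to_polar(480.0, 1.2))  # -> (1.2 m, +15 degrees to the right)
```
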
Obstacle Evasion Algorithm Using Convolutional Neural Networks and Kinect-V1
Paula Catalina Useche-Murillo; Javier O. Pinzón-Arenas; Robinson Jimenez-Moreno
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 8, No 3: September 2020
Publisher : IAES Indonesian Section

DOI: 10.52549/ijeei.v8i3.2078

Abstract

The following paper presents the development of an algorithm for evading static obstacles during the process of gripping a desired object, using an anthropomorphic robot, artificial intelligence, and machine vision systems. The algorithm was developed to detect a variable number of obstacles (between 1 and 15) and the element to be gripped, using a robot with 3 degrees of freedom (DoF). A Kinect V1 captures the RGB-D information of the environment, and convolutional neural networks perform the detection and classification of each element. Capturing the three-dimensional information of the detected objects allows the distance between the obstacles and the robot to be compared, in order to make decisions about the gripper's movement, evading elements along the path and holding the desired object without colliding. Obstacles of less than 18 cm in height with respect to the ground were avoided with a 0% probability of collision under specific environmental conditions, with the robot initially moving in a straight line toward the desired object and adjusting its path according to the obstacles present. Functional tests evaluated the manipulator's ability to evade obstacles of different heights located between the robot and the desired object.
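
The snippet below sketches the evasion decision under simplifying assumptions: obstacles are reduced to (x, y, height) triples relative to the straight-line path, and the gripper is raised over any evadable obstacle. Only the 18 cm height bound comes from the abstract; the path width, clearance margin, and data layout are illustrative.

```python
# Decide the gripper clearance height along the straight path to the target.
from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float       # position along the path (m)
    y: float       # lateral offset from the path (m)
    height: float  # height above the ground (m)

def gripper_height_along_path(obstacles: list[Obstacle],
                              path_width: float = 0.10,
                              clearance: float = 0.05) -> float:
    # Only obstacles intersecting the path and below the 18 cm evadable limit
    # are lifted over; taller ones would require replanning instead.
    blocking = [o for o in obstacles if abs(o.y) < path_width and o.height <= 0.18]
    if not blocking:
        return 0.0                     # keep the nominal (lowest) approach height
    return max(o.height for o in blocking) + clearance

print(gripper_height_along_path([Obstacle(0.3, 0.02, 0.12)]))  # -> approx. 0.17
```
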
Flexible Gripper, Design and Control for Soft Robotics
Catalina Castillo-Rodriguez; Robinson Jimenez-Moreno
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 9, No 4: December 2021
Publisher : IAES Indonesian Section

DOI: 10.52549/ijeei.v9i4.3325

Abstract

This paper presents the 3D design of a flexible gripper for grasping polyform objects that require a certain degree of adaptation of the end effector for their manipulation. The 3D printing of the gripper and its construction are presented, and a fuzzy controller is implemented for its operation. The effector includes a flex resistor that provides information about the deflection of the gripper; this measurement, together with the desired grip force, feeds the fuzzy controller, which regulates the current of the servomotors that make up the structure of the gripper and are responsible for ensuring the grip. The result is an efficient system for grasping polyform objects, involving deflections of up to 5 mm with a current close to 112 mA.
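
A hand-rolled, Mamdani-style sketch of such a fuzzy rule base is shown below; the membership breakpoints, rules, and current values (apart from the 112 mA figure quoted above) are illustrative assumptions rather than the controller reported in the paper.

```python
# Toy fuzzy controller: deflection (mm) and desired force level -> servo current (mA).
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership value of x for the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grip_current_ma(deflection_mm: float, force_level: float) -> float:
    # Fuzzify the two inputs (deflection 0-5 mm, desired force 0-1).
    low_def, high_def = tri(deflection_mm, -1, 0, 3), tri(deflection_mm, 2, 5, 6)
    low_f, high_f = tri(force_level, -0.1, 0.0, 0.7), tri(force_level, 0.3, 1.0, 1.1)
    # Rule strengths (min as AND, max as OR) drive current singletons (weighted mean).
    rules = [(min(low_def, high_f), 112.0),   # little bend, firm grip wanted -> high current
             (min(high_def, high_f), 80.0),   # already bent, firm grip -> moderate current
             (max(low_f, high_def), 40.0)]    # light grip or large bend -> low current
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(grip_current_ma(1.0, 0.9), 1))  # -> 112.0
```
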