Jimenez-Moreno, Robinson
Unknown Affiliation

Published: 7 Documents

Articles

Virtual environment for assistant mobile robot
Herrera, Jorge Jaramillo; Jimenez-Moreno, Robinson; Martinez Baquero, Javier Eduardo
International Journal of Electrical and Computer Engineering (IJECE) Vol 13, No 6: December 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i6.pp6174-6184

Abstract

This paper presents the development of a virtual environment for a mobile robotic system able to recognize basic voice commands, oriented to recognizing a valid command to bring or take an object to or from a specific destination in residential spaces. Recognition of the objects with which the robot assists the user is performed by a machine vision system based on captures of the scene where the robot is located. For each captured image, a region-based convolutional network with transfer learning is used to identify the objects of interest. For human-robot interaction through voice, a convolutional neural network (CNN) with six convolution layers is used to recognize the commands to carry and bring specific objects inside the residential virtual environment. The use of convolutional networks allowed adequate recognition of words and objects, which, by means of the associated robot kinematics, gives rise to the execution of carry/bring commands. The resulting navigation algorithm operates successfully, with object manipulation success exceeding 90%, allowing the robot to move in the virtual environment even when objects obstruct the navigation path.
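The validation step for a carry/bring command can be illustrated with a small sketch. The grammar, vocabulary, and function below are hypothetical, invented for illustration; the paper's actual system classifies whole spoken words with a CNN before any such check.

```python
# Hypothetical command vocabulary; the real system's word set is not
# listed in the abstract, so these names are invented for illustration.
ACTIONS = {"bring", "take"}
OBJECTS = {"bottle", "cup", "phone"}
PLACES = {"kitchen", "bedroom", "living_room"}

def parse_command(words):
    """Return (action, object, place) for a valid carry/bring command, else None."""
    tokens = [w.lower() for w in words]
    action = next((w for w in tokens if w in ACTIONS), None)
    obj = next((w for w in tokens if w in OBJECTS), None)
    place = next((w for w in tokens if w in PLACES), None)
    return (action, obj, place) if action and obj and place else None

print(parse_command(["bring", "the", "cup", "from", "the", "kitchen"]))
# → ('bring', 'cup', 'kitchen')
```

A sequence missing any of the three slots is rejected as an invalid command, which is the behavior the abstract's notion of a "valid command" implies.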
Robust identification of users by convolutional neural network in MATLAB and Raspberry Pi
Murillo, Paula Useche; Jiménez-Moreno, Robinson; Baquero, Javier Eduardo Martinez
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 4: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i4.pp3876-3884

Abstract

This article presents the development of an algorithm embedded in a Raspberry Pi 3B board that identifies users by means of a convolutional neural network (CNN) for five predefined users, with the option of remotely loading a new network for a new user. For comparison, the same application was programmed in MATLAB to evaluate the results and identify the advantages of each approach. Networks were trained for five different users, using the Caffe library on the Raspberry Pi and the MATLAB neural network toolbox on the computer. Training with Caffe on the embedded system proved much slower and less efficient than training in MATLAB: with the same number of samples, the same architecture, and the same database, the Caffe networks obtained less than 55% accuracy while the MATLAB networks exceeded 90%. Finally, the accuracy obtained through the confusion matrix is over 88% in each case of user identification.
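Accuracy derived from a confusion matrix, as reported above, can be sketched in a few lines of plain Python. The user names and prediction counts below are invented for illustration and do not come from the paper:

```python
def confusion_matrix(true_labels, pred_labels, classes):
    """Build a confusion matrix as nested dicts: matrix[true][pred] = count."""
    matrix = {t: {p: 0 for p in classes} for t in classes}
    for t, p in zip(true_labels, pred_labels):
        matrix[t][p] += 1
    return matrix

def accuracy(matrix):
    """Overall accuracy: diagonal (correct) counts over all predictions."""
    correct = sum(matrix[c][c] for c in matrix)
    total = sum(sum(row.values()) for row in matrix.values())
    return correct / total

# Invented identification results for three hypothetical users.
classes = ["user1", "user2", "user3"]
true = ["user1"] * 10 + ["user2"] * 10 + ["user3"] * 10
pred = ["user1"] * 9 + ["user2"] + ["user2"] * 10 + ["user3"] * 9 + ["user1"]
m = confusion_matrix(true, pred, classes)
print(round(accuracy(m), 3))  # → 0.933 (28 of 30 correct)
```

The off-diagonal entries show which users get confused with which, which is why a confusion matrix is more informative than the accuracy figure alone.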
Comparison of convolutional neural network models for user’s facial recognition
Pinzón-Arenas, Javier Orlando; Jimenez-Moreno, Robinson; Martinez Baquero, Javier Eduardo
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 1: February 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i1.pp192-198

Abstract

This paper compares well-known convolutional neural network (CNN) models for facial recognition, using a database created from two registered users and an additional category of unknown persons. Eight different base convolutional architectures were compared by transfer learning, together with two additional proposed models, a shallow CNN and a shallow directed acyclic graph CNN (DAG-CNN), which are architectures of little depth (six convolution layers). In the tests with the database, the best results were obtained by the GoogLeNet and ResNet-101 models, which classified 100% of the images correctly without confusing unknown persons with the two users. However, in an additional real-time test in which one of the users changed his appearance, the models that showed the greatest robustness were Inception and ResNet-101, maintaining constant recognition. This demonstrates that deeper networks learn more detailed features of the users' faces, whereas shallower ones learn more generalized features.
Smart chatbot for surveys by convolutional networks speech recognition
Jimenez-Moreno, Robinson; Baquero, Javier Eduardo Martínez; Umaña, Luis Alfredo Rodriguez
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 3: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i3.pp3410-3417

Abstract

This paper details the development of a voice chatbot interface designed to evaluate user opinions on a color-coded Likert scale. The core of the interface is a convolutional neural network architecture trained with MEL spectrogram inputs for the seven possible words of each answer. These spectrograms capture the audio features necessary for effective voice recognition and for establishing the interactions between the chatbot and the user, allowing the convolutional network to learn to distinguish accurately between different types of user responses. During the training phase, the convolutional neural network achieved an accuracy of 91.4%, indicating robust performance in processing and interpreting voice commands. The interface was tested in a controlled environment with a group of ten users and a survey of five questions, where it achieved a detection accuracy of 100%. The results demonstrate the system's capacity for natural voice interaction with the user, employing a free text-to-speech (TTS) algorithm for the chatbot voice.
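MEL spectrograms rest on the mel frequency scale, which spaces bands narrowly at low frequencies and widely at high ones, roughly matching human pitch perception. As a brief sketch using the standard HTK-style formula (not code from the paper), the Hz-to-mel mapping and a set of mel-spaced band edges can be computed as:

```python
import math

def hz_to_mel(f_hz):
    """Standard (HTK-style) mel-scale formula."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mel_band_edges(f_min, f_max, n_bands):
    """Band edges equally spaced on the mel scale, returned in Hz."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands + 2)]

# Edges for 10 mel bands between 0 Hz and 8 kHz: narrow at low
# frequencies, progressively wider toward the top of the range.
edges = mel_band_edges(0.0, 8000.0, 10)
```

A full mel spectrogram then applies a filterbank built from such edges to short-time Fourier transform frames; the number of bands and frequency range here are illustrative choices, not the paper's settings.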
Learning assistance module based on a small language model
Jinete, Marco Antonio; Jiménez-Moreno, Robinson; Espitia-Cubillos, Anny Astrid
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i5.pp4202-4210

Abstract

This paper presents the development of a low-cost learning assistant embedded in an NVIDIA Jetson Xavier board that uses speech and gesture recognition together with a language model running offline. Using the Phi-3 Mini (3.8B) language model and the Whisper base model for automatic speech recognition, a learning assistant with a compact and efficient design is obtained that gives a general set of answers on a topic. The system achieved average processing times of 0.108 seconds per character, a speech transcription efficiency of 94.75%, average scores of 9.5/10 for accuracy and 8.5/10 for consistency of the responses generated by the learning assistant, and full recognition of the hand-raising gesture when held for at least 2 seconds, even without fully extending the fingers. The prototype is based on a graphical interface capable of responding to voice commands and generating dynamic interactions in response to the user's detected gestures, representing a significant advance towards comprehensive and accessible human-machine interface solutions.
Robotic product-based manipulation in simulated environment
Guacheta-Alba, Juan Camilo; Espitia-Cubillos, Anny Astrid; Jimenez-Moreno, Robinson
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i6.pp5894-5903

Abstract

Before deploying algorithms in industrial settings, it is essential to validate them in virtual environments to anticipate real-world performance, identify potential limitations, and guide necessary optimizations. This study presents the development and integration of artificial intelligence algorithms for detecting labels and container formats of cleaning products using computer vision, enabling robotic manipulation via a UR5 arm. Label identification is performed using the speeded-up robust features (SURF) algorithm, ensuring robustness to scale and orientation changes. For container recognition, multiple methods were explored: edge detection using Sobel and Canny filters, Hopfield networks trained on filtered images, 2D cross-correlation, and finally, a you only look once (YOLO) deep learning model. Among these, the custom-trained YOLO detector provided the highest accuracy. For robotic control, smooth joint trajectories were computed using polynomial interpolation, allowing the UR5 robot to execute pick-and-place operations. The entire process was validated in the CoppeliaSim simulation environment, where the robot successfully identified, classified, and manipulated products, demonstrating the feasibility of the proposed pipeline for future applications in semi-structured industrial contexts.
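The smooth joint trajectories mentioned above can be illustrated with a minimal cubic-polynomial sketch. The joint values, duration, and zero start/end velocity boundary conditions below are illustrative assumptions, not values taken from the paper:

```python
def cubic_coeffs(q0, qf, tf):
    """Coefficients of q(t) = a0 + a1*t + a2*t**2 + a3*t**3 with
    q(0) = q0, q(tf) = qf and zero velocity at both endpoints."""
    delta = qf - q0
    return q0, 0.0, 3.0 * delta / tf**2, -2.0 * delta / tf**3

def sample(coeffs, t):
    """Evaluate the cubic polynomial at time t."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

# Illustrative move: one joint from 0 rad to ~pi/2 rad in 2 s.
c = cubic_coeffs(0.0, 1.5708, 2.0)
print(round(sample(c, 1.0), 4))  # → 0.7854 (exactly halfway at t = tf/2)
```

Sampling such a polynomial per joint yields a trajectory that starts and stops at rest, which is why polynomial interpolation is a common choice for pick-and-place motions like the UR5's.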
Image segmentation using fuzzy clustering for industrial applications
Jiménez-Moreno, Robinson; Vargas Duanca, Laura María; Espitia-Cubillos, Anny Astrid
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i6.pp4636-4642

Abstract

This paper presents a fuzzy logic clustering algorithm oriented to image segmentation, along with a procedure designed to evaluate its performance by varying two parameters: the number of clusters (c) and the fuzziness parameter (m). The evaluation leads to the conclusion that a small number of clusters is sufficient to recognize the main elements of the image, but a more detailed reconstruction requires a higher number of clusters. The fuzziness parameter influences the smoothness of the boundaries between clusters: low values generate a segmentation with more abrupt transitions and sharper contours, while high values smooth the segmentation; increasing it excessively may cause elements to merge and details to be lost. In general, the balance between these two parameters is key to obtaining an effective segmentation. Three validation scenarios were used. The first two allowed the most appropriate parameters for segmentation to be established, limiting the clusters to a maximum of 4 and keeping the fuzziness level at 2.0; the third validated the algorithm with real, noisy images of industrial cleaning products, establishing the computational cost and processing times for images of 350×350 and 2000×3000 pixels resolution. In conclusion, applications of the algorithm are foreseen in automatic quality control and in inventory control of finished products and raw materials, thanks to its high efficiency and low response time even with noisy and large images.
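The roles of c and m can be seen in the standard fuzzy c-means update equations, sketched here in plain Python on 1-D toy data with a deterministic initialization; this is an illustrative implementation of the generic algorithm, not the paper's code, and the data values are invented:

```python
def fuzzy_c_means(data, c, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D data; returns (centers, memberships)."""
    lo, hi = min(data), max(data)
    # Deterministic init: c centers spread evenly over the data range.
    centers = [lo + (i + 0.5) * (hi - lo) / c for i in range(c)]
    u = [[0.0] * len(data) for _ in range(c)]
    for _ in range(iters):
        # Membership update: closer centers get higher fuzzy membership;
        # the exponent 2/(m-1) is where the fuzziness parameter acts.
        for k, x in enumerate(data):
            d = [abs(x - v) or 1e-12 for v in centers]
            for i in range(c):
                u[i][k] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
        # Center update: membership^m weighted mean of the data.
        for i in range(c):
            w = [u[i][k] ** m for k in range(len(data))]
            centers[i] = sum(wk * x for wk, x in zip(w, data)) / sum(w)
    return centers, u

# Two well-separated groups of hypothetical pixel intensities.
data = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
centers, u = fuzzy_c_means(data, c=2, m=2.0)
print(sorted(round(v, 2) for v in centers))  # centers near the two groups
```

For image segmentation the same updates run over pixel values (or color vectors); larger m flattens the memberships, which is the smoothing-and-merging effect the abstract describes.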