Location: Kota Yogyakarta, Daerah Istimewa Yogyakarta, Indonesia
International Journal of Reconfigurable and Embedded Systems (IJRES)
ISSN: 2089-4864     e-ISSN: 2722-2608     DOI: -
Core Subject : Economy
The centre of gravity of the computer industry is now moving from personal computing to embedded computing, with the advent of VLSI system-level integration and reconfigurable cores in system-on-chip (SoC) designs. Reconfigurable and embedded systems are increasingly becoming key technological components of all kinds of complex technical systems, ranging from audio-video equipment, telephones, vehicles, toys, aircraft, medical diagnostics, pacemakers, climate control systems, manufacturing systems, intelligent power systems, and security systems to weapons. The aim of IJRES is to provide a vehicle for academics, industrial professionals, educators, and policy makers working in the field to contribute and disseminate innovative and important new work on reconfigurable and embedded systems. The scope of IJRES addresses the state of the art of all aspects of reconfigurable and embedded computing systems, with emphasis on algorithms, circuits, systems, models, compilers, architectures, tools, design methodologies, test, and applications.
Arjuna Subject : -
Articles in issue "Vol 15, No 1: March 2026": 23 Documents
An edge AIoT system for non-invasive biological indicators estimation and continuous health monitoring using PPG and ECG signals K. Nguyen, Hung; V. Pham, Manh
International Journal of Reconfigurable and Embedded Systems (IJRES) Vol 15, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijres.v15.i1.pp97-108

Abstract

This paper presents the design and implementation of an artificial intelligence of things (AIoT)-based system that integrates deep learning and edge computing for real-time non-invasive health monitoring, focusing on the estimation of mean arterial pressure (MAP) alongside vital parameters such as heart rate (HR), blood oxygen saturation (SpO₂), and body temperature. Photoplethysmography (PPG) and electrocardiography (ECG) signals are acquired using low-power MAX30102 and AD8232 sensors, preprocessed with lightweight digital filters, and processed through a 1D convolutional neural network (CNN) deployed on a SEEED Studio XIAO ESP32S3 microcontroller. The model, trained on the cuff-less blood pressure estimation dataset, achieved a mean absolute error (MAE) of 2.51 mmHg on the embedded microcontroller and 2.93 mmHg when validated against a standard blood pressure monitor. Experimental results demonstrate high accuracy, with an MAE below 5 mmHg, thereby meeting the AAMI and British Hypertension Society (BHS) Grade A standards for blood pressure measurement. The system achieves real-time inference with an average latency of 16 ms and efficient memory utilization, ensuring suitability for wearable and embedded devices. Physiological data are transmitted via Wi-Fi to a Firebase cloud platform and visualized through a cross-platform mobile application. The proposed system demonstrates strong potential for remote healthcare applications, particularly in continuous monitoring and early health risk detection.
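The two quantities this abstract reports can be written down directly. The sketch below is a minimal pure-Python illustration, not the paper's code: `mean_arterial_pressure` uses the standard clinical approximation MAP ≈ DBP + (SBP - DBP)/3 rather than anything from the paper (whose CNN predicts MAP directly from PPG/ECG windows), and `mean_absolute_error` is the MAE metric being reported; the cuff readings are hypothetical example values.

```python
def mean_arterial_pressure(sbp, dbp):
    """Classic clinical estimate: MAP ~ DBP + (SBP - DBP) / 3 (mmHg)."""
    return dbp + (sbp - dbp) / 3.0

def mean_absolute_error(predicted, reference):
    """MAE in mmHg, the metric the paper reports (2.51 mmHg on-device)."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical cuff readings (SBP, DBP) and model predictions.
ref = [mean_arterial_pressure(120, 80), mean_arterial_pressure(110, 70)]
pred = [94.0, 82.0]
print(round(ref[0], 1))                           # 93.3
print(round(mean_absolute_error(pred, ref), 2))
```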
Optimizing call center agent efficiency through deep learning-based classifications using SMFCCAE Periyasamy, Ramachandran; Govindaraji, Manikandan; Nasurulla, I.; Srinivasan, V.; Rama Devi, K.

DOI: 10.11591/ijres.v15.i1.pp31-41

Abstract

Call centers are vital to business operations worldwide, acting as the primary interface between companies and their customers. They handle customer inquiries, manage complaints, and facilitate telephonic sales, making them essential to customer service. However, ensuring quality in the call center industry remains challenging, primarily due to the heavy reliance on customer service representatives (CSRs) who manage high volumes of calls. Traditional methods of evaluating CSR performance often rely on manual assessments of small call samples, which can be time-consuming and limited in scope. With the advancement of deep learning techniques (DLTs), there is an opportunity to assess CSR performance more accurately. This study introduces the selecting minimal features for call center agents efficiency (SMFCCE) approach, which optimizes feature selection from CSR data to enhance classification accuracy and speed. The proposed method achieves approximately 85% accuracy, offering valuable insights and recommendations for improving overall call center operations.
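The abstract does not specify SMFCCE's selection criterion, so as a hedged stand-in, the sketch below illustrates one generic way to select a minimal feature set: a simple variance threshold in pure Python. The function name, feature names, and values are all hypothetical, invented for illustration only.

```python
def select_minimal_features(rows, names, threshold=0.01):
    """Keep only features whose values actually vary across agents;
    near-constant columns carry little signal for classifying efficiency.
    A generic variance-threshold sketch, not the paper's SMFCCE criterion."""
    n = len(rows)
    kept = []
    for j, name in enumerate(names):
        col = [row[j] for row in rows]
        mean = sum(col) / n
        variance = sum((v - mean) ** 2 for v in col) / n
        if variance > threshold:
            kept.append(name)
    return kept

# Hypothetical per-agent features: [avg_call_secs, calls_per_hour, constant_flag].
rows = [[320.0, 7.1, 1.0], [290.0, 8.4, 1.0], [410.0, 5.2, 1.0]]
print(select_minimal_features(rows, ["avg_call_secs", "calls_per_hour", "constant_flag"]))
```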
Heart disease prediction using hybrid deep learning and medical imaging with wavelet-based feature extraction Palanisamy, Chairmadurai; Pachamuthu, Kavitha; Kumar Ramamoorthy, Arun

DOI: 10.11591/ijres.v15.i1.pp183-193

Abstract

Heart disease prediction is based on patient medical information, which can include medical images as well as the results of an electrocardiogram (ECG) conducted to determine the risk of developing heart disease. Hybrid deep learning (DL) algorithms are developed using past data to identify trends related to cardiovascular diseases (CVDs). This paper proposes a new heart disease prediction method that combines high-quality image processing with hybrid DL to improve prediction effectiveness and overcome the shortcomings of current approaches. First, medical images such as ECG images are pre-processed with a Butterworth adaptive 2D wavelet filter, which ensures maximal noise reduction while preserving spatial and frequency information. A Gabor wavelet-based feature extraction technique is then applied to extract meaningful patterns, covering both spatial and frequency domain information, which is essential for detecting heart-related anomalies. The resulting features are classified using a combination of convolutional neural networks (CNN) and long short-term memory (LSTM) networks to make reliable and precise predictions of heart disease. Performance indicators including accuracy (92.4%), precision (91.2%), recall (93.5%), and F1-score (91.0%) show that the model achieves significant levels of reliability and generalization compared to traditional approaches.
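To make the Gabor wavelet step concrete, the sketch below builds a 1-D Gabor kernel (Gaussian envelope times cosine carrier) and a naive convolution in pure Python. This is an illustrative assumption, not the paper's pipeline: the paper applies 2-D Gabor wavelets to ECG images, and the kernel parameters here are arbitrary.

```python
import math

def gabor_kernel_1d(length=9, sigma=2.0, freq=0.25):
    """Real part of a 1-D Gabor filter: Gaussian envelope x cosine carrier.
    Illustrative parameters; the paper uses 2-D Gabor wavelets on images."""
    half = length // 2
    return [math.exp(-(x * x) / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * x)
            for x in range(-half, half + 1)]

def convolve_same(signal, kernel):
    """Naive 'same'-length convolution: one filter response per input sample."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

kernel = gabor_kernel_1d()
# The kernel is symmetric: both the envelope and the cosine are even functions.
print(all(abs(kernel[i] - kernel[-1 - i]) < 1e-9 for i in range(len(kernel))))
```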
Home grocery listing hardware system and mobile application with speech recognition feature Faris Eizlan Suhaimi, Mohamad; Zakwan Jidin, Aiman; Mohd Nasir, Haslinah; Haidar Md Hamzah, Mohd; Syafiq Mispan, Mohd

DOI: 10.11591/ijres.v15.i1.pp109-118

Abstract

A home grocery list is a crucial aspect of household management that ensures sufficient kitchen supplies. The classic pen-and-paper grocery list is inefficient, since it is time-consuming and prone to human error. Therefore, in this study, we propose a microcontroller-based home grocery listing system using a barcode scanner and speech recognition. The proposed system consists of hardware and a mobile application. The main hardware components are the ESP32-S3 microcontroller, MH-ET barcode scanner v3.0, 20×4 LCD, and 2.4 GHz wireless keyboard. The mobile application is developed using MIT App Inventor. Through the hardware, the system receives user input from barcode scanning or manual data entry using the keyboard, and the captured data is stored in memory. Subsequently, the data is transmitted to the mobile application of the home grocery listing system via Wi-Fi. The mobile application additionally accepts user input via speech recognition and manual data entry using the keyboard. Hence, users have the flexibility to input the grocery list using four methods within the system. The developed home grocery listing system gives users a new, satisfying experience and a convenient way to make a home grocery list.
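The listing logic behind the four input paths can be sketched in a few lines. This is a hypothetical pure-Python model: the `GroceryList` class, barcode table, and item names are invented for illustration, and the ESP32-S3 hardware, speech engine, and Wi-Fi transport are left out.

```python
class GroceryList:
    """Toy model: items arrive from any input path (barcode scan, hardware
    keyboard, speech, app keyboard) and merge into one quantity-counted list."""

    def __init__(self):
        self.items = {}                                        # name -> quantity
        self.barcodes = {"8991002100100": "instant noodles"}   # hypothetical lookup table

    def add_by_barcode(self, code):
        """Barcode path: resolve the code to a product name, then count it."""
        name = self.barcodes.get(code, f"unknown ({code})")
        self.add_by_name(name)

    def add_by_name(self, name):
        """Keyboard/speech path: normalize free-form input before counting."""
        key = name.strip().lower()
        self.items[key] = self.items.get(key, 0) + 1

groceries = GroceryList()
groceries.add_by_barcode("8991002100100")
groceries.add_by_name("Eggs")
groceries.add_by_name("eggs")        # normalization merges duplicate entries
print(groceries.items)
```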
Synaptic shield: fusion of ResNext-50 and long short-term memory for enhanced deepfake detection Mishra, Amit; Chinchmalatpure, Prajwal; Sambare, Govinda B.; Singh, Viomesh Kumar; Pawar, Atul Gulabrao; Mirajkar, Rahul Prakash; Takalkar, Priyanka K.; Vayadande, Kuldeep

DOI: 10.11591/ijres.v15.i1.pp224-235

Abstract

Recent developments in deepfakes have created much anxiety about the authenticity of digital content, calling for detection mechanisms that work accordingly. This paper presents Synaptic Shield, an innovative deep learning (DL) framework customized to detect deepfake alterations with high precision. It employs both convolutional neural networks (CNNs) and temporal feature extraction modules to assess spatial and motion indicators from video data. High-level preprocessing pipelines, in combination with a confidence scoring mechanism, help make Synaptic Shield adaptive toward manipulation techniques such as FaceSwap and DeepFake. The model surpasses other deepfake detection models with a high accuracy of 98.3%. These results are based on exhaustive experimentation on standard datasets such as FaceForensics++, the DeepFake Detection Challenge (DFDC), and Celeb-DF. Synaptic Shield delivers outstanding results, maintaining a confidence score in line with its precision and reliability. Its capacity to accommodate various manipulation techniques and levels of video quality indicates robustness, offering an effective method for ensuring integrity in digital media. The work is an important step forward in addressing the problems created by deepfake technologies.
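The confidence scoring idea can be illustrated with a toy aggregation step: per-frame fake probabilities are averaged into a video-level verdict plus a confidence value. This is a simplified sketch, not Synaptic Shield's actual mechanism; the scores and threshold are hypothetical.

```python
def video_verdict(frame_scores, threshold=0.5):
    """Aggregate per-frame fake probabilities into one video-level decision
    with a confidence score. The real framework fuses CNN spatial features
    with temporal modules before scoring; this only models the last step."""
    mean_score = sum(frame_scores) / len(frame_scores)
    label = "fake" if mean_score >= threshold else "real"
    # Confidence: distance of the mean from the decision boundary, rescaled to [0, 1].
    confidence = abs(mean_score - threshold) / max(threshold, 1 - threshold)
    return label, round(confidence, 3)

print(video_verdict([0.9, 0.8, 0.95, 0.7]))   # frames agree -> confident "fake"
print(video_verdict([0.45, 0.55, 0.5]))       # borderline frames -> low confidence
```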
HDGC-hybrid task offloading framework using deep reinforcement learning and genetic algorithms for 6G edge cloud Radhakrishnan, Kaniezhil; Horng, Mong-Fong; Shankar Subramanian, Siva; Lo, Chun-Chih

DOI: 10.11591/ijres.v15.i1.pp236-247

Abstract

The rapid evolution of 6G networks has brought new challenges in task offloading (TO), particularly within edge computing environments that rely heavily on the internet of things (IoT). Traditional TO methods based on rule-based heuristics or shallow learning techniques fail to adapt efficiently to dynamic, unpredictable network conditions, resource heterogeneity, and varying task demands. To address these challenges, this paper proposes a novel hybrid TO framework that integrates deep reinforcement learning (DRL) with genetic algorithms (GA). The combination of DRL and heuristic algorithms enhances adaptability, convergence speed, and decision-making efficiency, making it well suited for real-time TO in complex and unpredictable environments. The proposed hybrid optimization technique offers promising solutions by leveraging the strengths of the individual approaches to balance competing objectives such as energy consumption, task completion time, and resource utilization. The method explores optimization strategies to enhance TO efficiency in decentralized environments, focusing mainly on optimizing energy use while ensuring that performance metrics such as latency, throughput, and task deadlines are met.
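The GA half of the hybrid framework can be sketched as a standard genetic algorithm over binary offloading vectors (0 = execute locally, 1 = offload to edge), minimizing a per-task cost. This is a generic illustration, not the paper's HDGC implementation: the cost values, population size, and mutation rate are assumptions, and the DRL component (which would adapt these costs online) is omitted.

```python
import random

def fitness(assignment, local_cost, edge_cost):
    """Total cost of one offloading decision vector: per task, pay either
    the local execution cost (bit 0) or the edge offloading cost (bit 1)."""
    return sum(edge_cost[i] if bit else local_cost[i]
               for i, bit in enumerate(assignment))

def genetic_offload(local_cost, edge_cost, pop=20, gens=40, seed=0):
    """Minimal GA: elitist selection, single-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(local_cost)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, local_cost, edge_cost))
        survivors = population[: pop // 2]          # elitism: keep the cheaper half
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                  # occasional bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: fitness(a, local_cost, edge_cost))

local = [4.0, 1.0, 6.0, 2.0]    # hypothetical per-task costs (e.g. latency + energy)
edge = [2.0, 3.0, 1.0, 5.0]
best = genetic_offload(local, edge)
print(best, fitness(best, local, edge))
```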
FPGA implementation of high-performance Huffman encoder for image processing applications Ahmad Mahammad, Masood; Raju Uppala, Appala; Mazhar Hussain, Shaik; Marouthu, Anusha

DOI: 10.11591/ijres.v15.i1.pp68-77

Abstract

An optimized Huffman encoder is essential in all applications where the best performance must be achieved, such as audio coding, data encryption, data compression, and image processing. This article presents a space-optimized encoding scheme to maximize performance and minimize latency in dual Huffman encoding. The proposed approach employs dynamic tree selection using dual Huffman encoding; a dual Huffman code with dynamic tree selection can run in parallel to support high-throughput applications. The resulting design optimally creates the dual Huffman encoding, with a codeword table based on a dynamic tree generation and selection algorithm, leading to a faster encoding process with lower latency. The architecture was developed using Xilinx Vivado 2023.2 and tested on three different field programmable gate array (FPGA) platforms (Zynq 7045, Zynq 7100, and Kria KV260 AI Vision board). A performance comparison between devices demonstrates that the Kria KV260 had the lowest latency (100 ns), as opposed to the Zynq 7045 and Zynq 7100, which had latencies of 200 ns and 150 ns, respectively. These results elucidate the scalability of the architecture and its suitability for real-time image compression. When implemented on the Kria KV260, the dynamic tree selection-based dual Huffman encoder is capable of fast, parallel image compression, making it a good candidate for advanced FPGA-based image processing systems with internet of things (IoT) applications.
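For readers unfamiliar with Huffman coding itself, the sketch below builds a single Huffman codeword table in pure Python with `heapq`: rarer symbols get longer codewords, frequent symbols shorter ones. The paper's actual contribution (dual trees with dynamic selection, implemented in FPGA hardware) is not modeled here.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman codeword table for the symbols in `data` by repeatedly
    merging the two least-frequent subtrees. Software sketch only; the paper
    implements (dual) Huffman encoding in FPGA hardware."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}   # left branch prefix
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[s] for s in "aaaabbc")
print(codes, len(encoded))    # the most frequent symbol 'a' gets a 1-bit code
```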
Advanced MRI-based deep learning for brain tumors: a five-year review of oncology–radiology–AI synergy Ramesh, Shrisha Maddur; Gururaj, Chitrapadi

DOI: 10.11591/ijres.v15.i1.pp214-223

Abstract

Rapid advancements in computer vision and machine learning have significantly revolutionized medical imaging; one such application is brain tumor detection and classification. Deep learning has emerged as a powerful tool, offering exceptional capabilities in handling complex medical datasets. However, current systems still face challenges in achieving optimal accuracy, robustness, and clinical interpretability. This study presents a comprehensive survey of brain tumor segmentation, classification, and detection techniques using deep learning, metaheuristic, and hybrid approaches. Detailed quantitative evaluations of conventional and emerging methods are conducted by examining key performance metrics, dataset characteristics, strengths, and limitations. The review highlights recent breakthroughs by analyzing state-of-the-art techniques from the past five years, identifies research gaps, and outlines potential directions for future advancements. These findings provide insights into novel architectures, optimization strategies, and clinical applications, ultimately guiding researchers toward more robust, interpretable, and clinically impactful artificial intelligence (AI)-driven solutions for brain tumor analysis.
Energy-efficient reconfigurable architectures for Edge AI in healthcare IoT: trends, challenges, and future directions Sutikno, Tole; Zakwan Jidin, Aiman; Handayani, Lina

DOI: 10.11591/ijres.v15.i1.pp1-20

Abstract

The integration of Edge artificial intelligence (AI) with internet of things (IoT) technologies is transforming healthcare applications, including wearable monitoring, telemedicine, and implantable medical devices, by enabling low-latency and intelligent data processing close to patients. However, stringent requirements on energy efficiency, reliability, real-time responsiveness, and data privacy continue to hinder scalable and long-term deployment in resource-constrained healthcare environments. Energy-efficient reconfigurable architectures—such as field-programmable gate arrays (FPGAs), coarse-grained reconfigurable arrays (CGRAs), and emerging memory-centric and heterogeneous platforms—have emerged as promising solutions to address these challenges by balancing flexibility, adaptability, and power efficiency. This review systematically examines recent advances in reconfigurable Edge AI architectures for healthcare IoT, highlighting key trends in hardware–software co-design, AI-assisted design automation, memory-centric optimization, and domain-specific overlays. It further identifies critical challenges, including energy–performance trade-offs, runtime reconfiguration overheads, security and privacy vulnerabilities, limited standardization, and reliability concerns in dynamic clinical settings. Finally, future research directions are outlined, emphasizing self-optimizing and context-aware architectures, secure and trustworthy reconfiguration mechanisms, unified frameworks for heterogeneous healthcare workloads, and sustainable, carbon-aware edge computing. Collectively, this review positions energy-efficient reconfigurable architectures as a foundational enabler for next-generation Edge AI in IoT-enabled healthcare systems.
FPGA implementation of a coprocessor architecture for random data generation and encryption Kumar, Manoj

DOI: 10.11591/ijres.v15.i1.pp21-30

Abstract

Coprocessors are designed to perform specific tasks to enhance system performance and speed. Information security is a main focus in internet of things (IoT), cryptography, and cybersecurity applications. In this work, a coprocessor architecture is designed to generate 4 bits of random data and perform encryption. The coprocessor architecture uses true random number generator (TRNG) and pseudo-random number generator (PRNG) architectures to generate random data. A modified linear feedback shift register (LFSR)-based PRNG and modified transition effect ring oscillator (TERO)- and ring oscillator-based TRNG architectures are designed and implemented for performing encryption. A serial-in-parallel-out (SIPO) shift register circuit is used to generate the 4-bit random data. A 15-bit instruction word is assigned to the coprocessor architecture to perform its task. The coprocessor architecture is designed using VHSIC Hardware Description Language (VHDL) and implemented on an Artix-7 field programmable gate array (FPGA). All simulation and synthesis results of the proposed coprocessor architecture are obtained with the Xilinx Vivado 2015.2 tool. The coprocessor architecture's efficiency (throughput (Mbps)/LUTs) is 2.31, and it operates at a 100 MHz clock.
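A plain (unmodified) 4-bit Fibonacci LFSR, the PRNG core such an architecture builds on, can be sketched as follows. The tap positions are the standard maximal-length choice for 4 bits; the paper's LFSR modifications and its TERO/ring-oscillator TRNG paths are not modeled.

```python
def lfsr4(seed=0b1001, taps=(3, 2), steps=15):
    """4-bit Fibonacci LFSR sketch. XOR of bits 3 and 2 feeds back into the
    LSB as the register shifts left, giving the maximal period of
    2**4 - 1 = 15 states (the all-zero state is excluded)."""
    state = seed & 0xF
    out = []
    for _ in range(steps):
        out.append(state)
        fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1   # feedback bit
        state = ((state << 1) | fb) & 0xF                    # shift left, inject fb
    return out

states = lfsr4()
print(len(set(states)))   # 15 distinct non-zero states before the cycle repeats
```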
