International Journal of Informatics and Communication Technology (IJ-ICT)
International Journal of Informatics and Communication Technology (IJ-ICT) is a common platform for publishing quality research papers and other intellectual outputs. The journal is published by the Institute of Advanced Engineering and Science (IAES), whose aim is to promote the dissemination of scientific knowledge and technology in the information and communication technology areas before an international audience of the scientific community, to encourage the progress and innovation of technology for human life, and to be a leading platform for the proliferation of ideas and thought for all scientists, regardless of their locations or nationalities. The journal covers all areas of informatics and communication technology (ICT), focusing on integrated hardware and software solutions for the storage, retrieval, sharing, manipulation, management, analysis, visualization, and interpretation of information, and their applications in human services programs and practices, publishing refereed original research articles and technical notes. It is designed to serve researchers, developers, managers, strategic planners, graduate students, and others interested in state-of-the-art research activities in ICT.
Articles
462 Documents
High accuracy sensor nodes for a peat swamp forest fire detection using ESP32 camera
Shipun Anuar Hamzah;
Mohd Noh Dalimin;
Mohamad Md Som;
Mohd Shamian Zainal;
Khairun Nidzam Ramli;
Wahyu Mulyo Utomo;
Nor Azizi Yusoff
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 11, No 3: December 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v11i3.pp229-239
Smoke sensors in high-precision, low-cost forest fire detection kits need to be developed urgently to help the authorities monitor forest fires, especially in remote areas, more efficiently and systematically. Implementing an automatic reclosing operation allows the fire detector kit to successfully distinguish between real and non-real smoke. This has usefully reduced kit errors when detecting fires and, in turn, prevented users from receiving incorrect messages. However, a smoke sensor with automatic reclosing alone cannot fully optimize the accuracy of identifying actual smoke, because the sensor node's operating conditions, such as the source of the smoke received, are difficult to predict and sometimes unexpected. Thus, to further improve accuracy when detecting the presence of smoke, the system is equipped with two digital cameras that can capture and send pictures of fire smoke to the users. The system gives users three options for having the camera capture and send pictures: on request, on smoke trigger, and on movement for security purposes. In all cases, users can request the system to send pictures at any time. The camera-equipped system confirms the accuracy of smoke detection through images sent to the user's Telegram channel and shown on the graphical user interface (GUI) display. Comparing the system before and after the camera was added shows that the camera-equipped system lets users monitor fire smoke more effectively and accurately.
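The automatic reclosing idea above, re-sampling the smoke sensor after a delay before declaring a real fire, can be sketched as follows. The threshold, delay, and number of confirmations are illustrative assumptions, not values from the paper, and the sensor read is a placeholder for an ESP32 ADC read.

```python
import time

SMOKE_THRESHOLD = 400   # assumed ADC level for "smoke present"
RECHECK_DELAY_S = 5     # assumed reclosing delay before re-sampling
CONFIRMATIONS = 3       # consecutive positive reads required

def confirmed_smoke(read_sensor, delay_s=RECHECK_DELAY_S) -> bool:
    """Debounce transient readings: report a fire only if the sensor
    stays above the threshold across several delayed re-checks."""
    for _ in range(CONFIRMATIONS):
        if read_sensor() < SMOKE_THRESHOLD:
            return False          # reading dropped: treat as non-real smoke
        time.sleep(delay_s)
    return True
```

A brief gust of cooking smoke that clears before the re-check is rejected, while sustained fire smoke passes all confirmations.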
A learning-based approach to breast cancer screening using mammography images
Khalid Shaikh;
Sabitha Krishnan;
Rohit Thanki
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 1: April 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i1.pp1-11
A major current challenge facing radiologists in healthcare is the automatic detection and classification of masses in breast mammogram images. In the last few years, many researchers have proposed solutions to this problem, but these solutions depend on annotated breast image data and fail when applied to unlabeled, non-annotated breast images. This paper therefore addresses the problem with a neural network that can work on any kind of unlabeled data. In this solution, the algorithm automatically extracts tumors from images using a segmentation approach, after which the tumor's features are extracted for further processing. The approach uses a double-thresholding-based segmentation technique to locate the tumor region precisely, which was not possible with existing techniques in the literature. The experimental results also show that the proposed algorithm provides better accuracy than existing algorithms in the literature.
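The double-thresholding segmentation mentioned above can be sketched in hysteresis style: pixels above a high threshold seed the region, and pixels between the two thresholds join only if connected to an accepted pixel. This is a generic sketch of the technique, not the paper's actual pipeline, and the thresholds are illustrative.

```python
def double_threshold_segment(img, low, high):
    """Hysteresis-style double thresholding on a grayscale image
    (list of rows). Strong pixels (>= high) seed the region; weak
    pixels (low <= v < high) join only if 4-connected to the region."""
    rows, cols = len(img), len(img[0])
    mask = {(r, c) for r in range(rows) for c in range(cols)
            if img[r][c] >= high}
    stack = list(mask)
    while stack:                              # grow region from the seeds
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in mask
                    and low <= img[nr][nc] < high):
                mask.add((nr, nc))
                stack.append((nr, nc))
    return mask                               # set of (row, col) tumor pixels
```

Isolated weak pixels far from any strong pixel are discarded, which is what keeps the located region tight around the actual mass.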
Automated machine learning for analysis and prediction of vehicle crashes
Abhishek Saxena;
Stefan A. Robila
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 1: April 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i1.pp46-53
This work presents the design of a graphical interface and the implementation of a machine learning model that predicts vehicle traffic injuries and fatalities for a specified date range and US zip (postal) code, based on New York City's (NYC) vehicle crash data set. Previous studies focused on accident causes but offered little insight into how such data may be used to forecast future incidents, and they historically concentrated on certain road segment types, such as highways and other streets, within a specific geographic region; this study instead offers a citywide review of collisions. Using current database and networking technology, a user-friendly interface was created to display vehicle crash series. A support vector machine model was then built to evaluate the likelihood of an accident and the consequent injuries and deaths at the zip code level for all of NYC, in order to better mitigate such events. The findings show that the visualization and prediction approach is efficient and accurate. Beyond transportation experts and government policymakers, the machine learning approach delivers useful insights to the insurance business, since it quantifies collision risk at specific places.
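The support vector machine idea above can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method; the features and data here are made up for illustration and are not the paper's actual crash features or pipeline.

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=500, seed=0):
    """Minimal linear SVM via Pegasos. xs: feature vectors (first
    component can act as a bias feature), ys: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)             # decaying step size
            margin = ys[i] * sum(wj * xj for wj, xj in zip(w, xs[i]))
            w = [(1 - eta * lam) * wj for wj in w]   # shrink (regularize)
            if margin < 1:                    # hinge-loss violation
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

In the paper's setting, the label would encode injury/fatality severity and the features would summarize crashes per zip code and date range.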
Satellite dish antenna control for distributed mobile telemedicine nodes
Bonaventure Onyeka Ekengwu;
Paulinus Chinaenye Eze;
Christopher Nnaemeka Asiegbu;
Samuel Chukwuemeka Olisa;
Chimezie Felix Udechukwu
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 11, No 3: December 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v11i3.pp206-217
The positioning control of a dish antenna mounted on distributed mobile telemedicine nodes (DMTNs) within Nigeria, communicating via NigComSat-1R, is presented. The goal was to improve the transient and steady-state performance of the satellite dish antenna and reduce the effect of delay during satellite communication. To this end, the equations describing the dynamics of the antenna positioning system were obtained and transformed into state-space equations. A full-state feedback controller was developed with a forward path gain and an observer, and introduced into the closed loop of the dish antenna positioning control system. The system was subjected to a unit step forcing function in the MATLAB/Simulink simulation environment under three different cases to obtain the time-domain parameters characterizing the transient and steady-state response. The simulation results revealed that the full-state feedback controller provided improved position tracking of the unit step input, with a rise time of 0.42 s, a settling time of 1.22 s, and an overshoot of 4.91%. With the addition of the observer, the rise time was 0.39 s, the settling time 1.31 s, and the overshoot 10.7%. A time-domain performance comparison with existing systems revealed the proposed system's superiority.
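The full-state feedback structure described above, u = N·r − K·x with a forward (precompensator) gain N for zero steady-state error, can be sketched on a generic second-order antenna model. The plant parameters and gains below are invented for illustration; the paper's identified NigComSat-1R antenna model and tuned gains are not reproduced here.

```python
def simulate_step(k1=25.0, k2=8.0, dt=0.001, t_end=3.0):
    """Unit-step response of a full-state feedback positioning loop.
    Illustrative plant: x1' = x2 (position), x2' = -2*x2 + u (velocity
    with viscous friction). Control: u = N*r - k1*x1 - k2*x2, where
    N = k1 gives unity DC gain (zero steady-state position error)."""
    x1 = x2 = 0.0
    r = 1.0                      # unit step reference
    n_gain = k1                  # forward path (precompensator) gain
    for _ in range(int(t_end / dt)):   # forward-Euler integration
        u = n_gain * r - k1 * x1 - k2 * x2
        x1 += dt * x2
        x2 += dt * (-2.0 * x2 + u)
    return x1                    # final antenna position
```

With these gains the closed-loop characteristic polynomial is s² + 10s + 25, a critically damped double pole at −5, so the position settles to the reference with no steady-state error.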
Natural language understanding challenges for sentiment analysis tasks and deep learning solutions
Radha Guha;
Tole Sutikno
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 11, No 3: December 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v11i3.pp247-256
When it comes to purchasing a product or attending an event, most people first want to know what others think about it. To construct a recommendation system, a user's liking of a product can be measured numerically, such as a five-star rating or a binary like/dislike rating. Without a numerical rating, the product review text itself can still be used to make recommendations. Natural language understanding (NLU) is a branch of computer science that aims to make machines capable of comprehending human language. Sentiment analysis (SA), or opinion mining (OM), is an algorithmic method for automatically determining the polarity of comments and reviews (negative, neutral, or positive) based on their content, and it relies on text categorization to work. In the age of big data there are countless applications of sentiment analysis, yet SA remains a challenge. Because of its enormous importance, sentiment analysis is a hot topic in the commercial world as well as in academic circles. For sentiment analysis and text categorization tasks, classical machine learning and newer deep learning algorithms represent the current state of the art.
A broadband MIMO antenna's channel capacity for WLAN and WiMAX applications
Raefat-Jalila El Bakouchi;
Abdelilah Ghammaz
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 11, No 3: December 2022
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v11i3.pp240-246
This paper describes the findings of research into the multiple-input multiple-output (MIMO) channel capacity of a broadband dual-element printed inverted-F antenna (PIFA) array. The dual-element array is made up of two PIFAs designed to fit on a compact wireless communication device that runs at 5 GHz; the device's frequency range is between 3.5 and 4.5 GHz, and the PIFAs are mounted on the device itself. To investigate the channel capacity, the ray tracing method is employed in two different kinds of environment. Both the simulated and measured mutual couplings of the broadband MIMO antenna are used to carry out the channel capacity analysis.
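For a 2×2 MIMO link like the dual-element array above, the capacity of one channel realization with equal power allocation is C = log₂ det(I + (SNR/nT)·H·Hᴴ). A direct sketch for the 2×2 case follows; the channel matrix values used in testing are illustrative, not the paper's ray-traced or measured channels.

```python
import math

def mimo_capacity_2x2(H, snr_linear):
    """Capacity (bits/s/Hz) of one 2x2 MIMO channel realization:
    C = log2 det(I + (SNR/nT) * H * H^H), with nT = 2 transmit
    antennas and H a 2x2 matrix of complex channel gains."""
    nt = 2
    # G = H * H^H (2x2 Hermitian)
    g = [[sum(H[i][k] * H[j][k].conjugate() for k in range(2))
          for j in range(2)] for i in range(2)]
    s = snr_linear / nt
    # det of M = I + s*G for the 2x2 case
    det = (1 + s * g[0][0]) * (1 + s * g[1][1]) - (s * g[0][1]) * (s * g[1][0])
    return math.log2(abs(det))   # det is real-positive; abs drops round-off
```

Mutual coupling between the two PIFAs correlates the columns of H, which shrinks the determinant and hence the capacity, which is why the coupling measurements matter for this analysis.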
Development of a PC-based sign language translator
Kamoli Akinwale Amusa;
Ayorinde Joseph Olanipekun;
Tolulope Christiana Erinosho;
Abiodun Akeem Salaam;
Sodiq Segun Razaq
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 1: April 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i1.pp23-31
While a hearing-impaired individual depends on sign language and gestures, a non-hearing-impaired person uses verbal language. Thus, there is a need for a means of arbitration when a non-hearing-impaired individual who does not understand sign language wants to communicate with a hearing-impaired person. This paper concerns the development of a PC-based sign language translator to facilitate effective communication between hearing-impaired and non-hearing-impaired persons. A database of hand gestures in American Sign Language (ASL) is created using Python scripts. TensorFlow (TF) is used to create a pipeline configuration model for machine learning that matches annotated images of gestures in the database with real-time gestures. The implementation is done in a Python software environment and runs on a PC equipped with a web camera that captures real-time gestures for comparison and interpretation. The developed translator translates ASL gestures to written text with corresponding audio renderings in an average of about one second. In addition, the translator can match real-time gestures with the equivalent gesture images stored in the database even at 44% similarity.
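The matching step above, accepting a database gesture even at 44% similarity, can be sketched as a nearest-neighbour search over gesture feature vectors with a similarity floor. The cosine metric and the feature vectors are illustrative assumptions; the paper's TF detection model and its actual scoring are not shown.

```python
import math

SIMILARITY_FLOOR = 0.44   # the paper reports matches accepted at 44% similarity

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(live_vec, database):
    """Return the label of the stored gesture most similar to the live
    feature vector, or None if nothing reaches the similarity floor."""
    label, score = None, -1.0
    for name, vec in database.items():
        s = cosine_similarity(live_vec, vec)
        if s > score:
            label, score = name, s
    return label if score >= SIMILARITY_FLOOR else None
```

Returning None below the floor is what would let the translator stay silent instead of emitting a wrong word for an unrecognized hand shape.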
Cloud and internet-of-things secure integration along with security concerns
Arif Ullah;
Imane Laassar;
Canan Batur Şahin;
Ozlem Batur Dinle;
Hanane Aznaoui
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 1: April 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i1.pp62-71
Cloud computing is a technology in which both software and hardware applications are delivered over the network via the internet. Cloud computing provides these services under a pay-as-you-go model. The internet of things (IoT) is a rapidly growing technology in the field of telecommunications; IoT devices aim to connect all the things around us to the internet, providing smarter cities, intelligent homes, and generally more comfortable lives. The combination of cloud computing and IoT enables rapid development of both technologies. In this paper, we present information about IoT and cloud computing, focusing on the security issues of both. We then present the contribution of cloud computing to IoT technology, showing how cloud computing improves the functioning of the IoT. Finally, we present the security challenges of both IoT and cloud computing.
Smart parking for smart cities: a novel approach to reducing frivolous parking zone determination
Atiqur Rahman;
Ali Md Liton
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 1: April 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i1.pp72-78
Internet of things (IoT) infrastructures are rapidly expanding, which will lead to an unanticipated rise in demand for smart cities. The concept of a "smart city" has recently gained traction in urban planning circles. An IoT-based smart parking system is the focus of this article: it allows a motorist to locate a car park and an available parking space in an indoor metropolis from the comfort of their own vehicle. Additional effort is made to reduce the time spent determining parking zones. Reduced fuel use cuts down on pollution and avoids needless travel through congested parking lots, which can help reduce unlawful parking and alleviate traffic congestion in the city we all live in. The innovations include individual vehicle identification via radio frequency identification (RFID) tags, unoccupied slot detection using ultrasonic sensors, and cost calculation based largely on parking duration. The system is distinctive in that its hardware and software run as separate parts of the system.
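The duration-based cost calculation mentioned above can be sketched from the RFID entry/exit timestamps. The tariff and the bill-per-started-hour policy are illustrative assumptions; the paper does not specify its pricing rule.

```python
from datetime import datetime, timedelta

RATE_PER_HOUR = 1.50   # illustrative tariff, not from the paper

def parking_fee(entry: datetime, exit: datetime, rate=RATE_PER_HOUR):
    """Fee from RFID-stamped entry/exit times, billed per started hour
    (a stay of any length is charged at least one hour)."""
    seconds = int((exit - entry).total_seconds())
    hours = max(1, -(-seconds // 3600))   # ceiling division to whole hours
    return hours * rate
```

Each RFID read at the barrier would supply one of the two timestamps; the fee is computed when the tag is read again at the exit.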
CNN inference acceleration on limited-resources FPGA platforms: epilepsy detection case study
Afef Saidi;
Slim Ben Othman;
Meriam Dhouibi;
Slim Ben Saoud
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 12, No 3: December 2023
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijict.v12i3.pp251-260
The use of a convolutional neural network (CNN) to analyze and classify electroencephalogram (EEG) signals has recently attracted the interest of researchers in identifying epileptic seizures. This success has come with an enormous increase in the computational complexity and memory requirements of CNNs. To boost the performance of CNN inference, several hardware accelerators have been proposed. The high performance and flexibility of the field programmable gate array (FPGA) make it an efficient accelerator for CNNs. Nevertheless, deploying CNN models on resource-limited platforms poses significant challenges. To ease CNN implementation on such platforms, the research community has made several tools and frameworks available, along with different optimization techniques. In this paper, we propose an FPGA implementation of an automatic seizure detection approach using two CNN models, VGG-16 and ResNet-50. To reduce model size and computation cost, we exploit two optimization approaches: pruning and quantization. Furthermore, we present the results and discuss the advantages and limitations of two implementation alternatives for the inference acceleration of quantized CNNs on the Zynq-7000: an advanced RISC machine (ARM) software implementation based on the Arm NN software development kit (SDK), and a software/hardware implementation based on the deep learning processor unit (DPU) accelerator and the DNNDK toolkit.
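The two optimizations named above, pruning and quantization, can be sketched on a flat weight list. This is a generic illustration of unstructured magnitude pruning and symmetric int8 quantization, not the exact schemes the authors applied to VGG-16 and ResNet-50.

```python
def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    fraction of weights (sparsity=0.5 removes half of them)."""
    k = int(len(weights) * sparsity)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize_int8(weights):
    """Symmetric linear quantization: map floats onto int8 levels so the
    largest magnitude lands on 127; returns (int levels, scale)."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0   # avoid scale 0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale
```

Pruned zeros compress well and skip multiply-accumulates, while int8 levels shrink memory fourfold versus float32; both matter on a small Zynq-7000 fabric.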