IAES International Journal of Artificial Intelligence (IJ-AI)
IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all areas of artificial intelligence and their applications, including the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (such as genetic algorithms and ant colony optimization); reasoning and evolution; intelligence applications; computer vision and speech understanding; multimedia and cognitive informatics; data mining and machine learning tools; heuristic and AI planning strategies and tools; computational theories of learning; technology and computing (such as particle swarm optimization); intelligent system architectures; knowledge representation; bioinformatics; natural language processing; multiagent systems; etc.
Articles
Search results for issue "Vol 13, No 4: December 2024": 123 documents
CryptoGAN: a new frontier in generative adversarial network-driven image encryption
Bhat, Ranjith;
Nanjundegowda, Raghu
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4813-4821
There is a growing need for image encryption schemes that secure the privacy of users and patients, given the huge volumes of social media and medical image data. This study introduces crypto generative adversarial networks (CryptoGAN), a novel deep learning architecture for generating cipher images that can produce both encrypted and decrypted images. The CryptoGAN system consists of an initial encryption network, a generative network that verifies the output against the desired domain, and a subsequent decryption phase. Generative adversarial networks (GAN) are utilised as the learning network to generate cipher images: the neural network is trained on images encrypted with a conventional scheme such as the advanced encryption standard (AES) and learns from the resulting losses. This enhances security when dealing with large photo datasets. Assessment of the encrypted images' performance metrics, including entropy, histogram, correlation plot, and vulnerability to attacks, demonstrates that the proposed generative network achieves a higher level of security.
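The entropy and adjacent-pixel correlation metrics named in the abstract are standard measures of cipher-image quality: a well-encrypted 8-bit image should score close to 8 bits/pixel entropy and near-zero correlation between neighbouring pixels. A minimal pure-Python sketch of these two metrics (an illustrative computation, not the authors' code):

```python
import math

def shannon_entropy(pixels):
    """Shannon entropy in bits per pixel for a list of 8-bit grey values."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def adjacent_correlation(pixels):
    """Pearson correlation between horizontally adjacent pixel pairs."""
    x, y = pixels[:-1], pixels[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A perfectly uniform 256-value histogram yields the maximum entropy of 8 bits/pixel, while a smooth natural-image gradient shows correlation near 1, which is what encryption is meant to destroy.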
Optimizing the position of photovoltaic solar tracker panels with artificial intelligence using MATLAB Simulink
Linelson, Ricardo;
Rinanda Saputri, Fahmy
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4003-4018
This research applies an artificial intelligence (AI) system to control the position of photovoltaic (PV) panels with a solar tracker, maximizing the use of solar energy. The implementation of AI algorithms to achieve optimal panel orientation, considering factors such as sunlight intensity and sun position, is also discussed. The simulation results in matrix laboratory (MATLAB) Simulink can be observed on the scope, which displays the position control graph of the solar panel from sunrise to sunset. By employing proportional integral derivative (PID) control, the error remains minimal, ensuring that the panel continues to follow the sun until it sets at the maximum point at 4:00 PM. After that, the panel can be reset to its initial position at 6:00 AM for the following day. In a full-day simulation, the solar panel follows the sun's movement from sunrise to sunset: sunrise occurs in the first hour at position 1.0 (6:00 AM), the minimum point at the bottom left corner of the curve, and sunset occurs in the afternoon at position 5.25 (4:00 PM), the maximum point at the top right corner.
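A discrete PID loop of the kind the abstract describes can be sketched as follows. The gains, the one-dimensional integrator plant, and the function names are illustrative assumptions, not the paper's Simulink model:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One discrete PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def track_sun(sun_positions, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
    """Drive a panel angle toward a sequence of sun positions."""
    panel, state, history = 0.0, (0.0, 0.0), []
    for sun in sun_positions:
        u, state = pid_step(sun - panel, state, kp, ki, kd, dt)
        panel += u * dt  # toy integrator model of the panel actuator
        history.append(panel)
    return history
```

With these gains the panel angle settles onto a constant reference (e.g. the 4:00 PM maximum at position 5.25) with negligible steady-state error, mirroring the behaviour described for the Simulink scope.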
Optimized triangular observer based adaptive supertwisting sliding mode control for wind turbine system
El Bouassi, Sanae;
El Afou, Youssef;
Chalh, Zakaria;
Mellouli, El Mehdi;
Haidi, Touria
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4229-4240
This paper presents a modified adaptive supertwisting sliding mode controller (AST-SMC) that dynamically adjusts its control gains without prior knowledge of the uncertainty bounds, thereby removing chattering and prioritizing reliability while retaining the original benefits of sliding mode control (SMC). First, the wind turbine system is modelled and controlled with three different controllers, the AST-SMC, the supertwisting sliding mode controller (ST-SMC), and the first-order sliding mode controller (FOSMC), and their performance is compared. Although the control law makes use of the full system state, only the rotor speed is measurable; a triangular asymptotic observer is therefore used to estimate the unknown rotor acceleration, minimizing the observation error over time. Particle swarm optimization tunes the AST-SMC control law to find the most effective controller. Finite-time stability of the AST-SMC is proven via the Lyapunov stability theorem. Simulation findings show it to be more effective than traditional SMC in wind turbine system control, excelling in settling time, tracking accuracy, energy consumption, and control input smoothness.
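The supertwisting law underlying both ST-SMC and AST-SMC is u = -k1·|s|^(1/2)·sign(s) + v with v̇ = -k2·sign(s), which drives the sliding variable s to zero in finite time despite a bounded matched disturbance. A minimal Euler simulation on a first-order sliding variable, with gains and disturbance chosen purely for illustration (the paper's wind turbine model, observer, and adaptive gain tuning are not reproduced):

```python
import math

def supertwisting_sim(k1=1.5, k2=1.1, dt=0.001, steps=20000):
    """Super-twisting SMC on s' = u + d; returns the final sliding variable."""
    s, v = 1.0, 0.0
    for k in range(steps):
        sign_s = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sign_s + v  # continuous sqrt term
        v += -k2 * sign_s * dt                    # discontinuity hidden in v'
        d = 0.3 * math.sin(0.01 * k * dt)         # bounded matched disturbance
        s += (u + d) * dt
    return s
```

Because the discontinuous sign term acts only on the derivative of v, the control signal applied to the plant is continuous, which is the mechanism behind the chattering removal claimed above.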
Regularized Xception for facial expression recognition with extra training data and step decay learning rate
Azrien, Elang Arkanaufa;
Hartati, Sri;
Frisky, Aufaclav Zatu Kusuma
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4703-4710
Despite extensive research on facial expression recognition, achieving the highest level of accuracy remains challenging. The objective of this study is to enhance the accuracy of current models by adjusting the model structure, the training data, and the training procedure. The incorporation of regularization into the Xception architecture, the augmentation of the training data, and the use of a step decay learning rate together address and surpass current constraints. Assessment on the facial expression recognition 2013 (FER2013) dataset demonstrates a substantial improvement in accuracy, reaching a remarkable 94.34%. This study opens avenues for enhancing facial expression recognition systems, specifically targeting the need for increased accuracy in this domain.
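A step decay schedule drops the learning rate by a fixed factor every few epochs, giving large early steps and fine late ones. A generic sketch of the schedule (the drop factor and interval here are placeholders, not the values used in the study):

```python
import math

def step_decay(initial_lr, drop_factor, epochs_per_drop, epoch):
    """Learning rate after `epoch` epochs: multiply the initial rate
    by drop_factor once every epochs_per_drop epochs."""
    return initial_lr * drop_factor ** math.floor(epoch / epochs_per_drop)
```

For example, starting at 0.1 with a 0.5 drop every 10 epochs, the rate is 0.1 for epochs 0-9, 0.05 for epochs 10-19, and 0.025 for epochs 20-29.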
Automated diagnosis of brain tumor classification and segmentation of magnetic resonance imaging images
B. Muddaraju, Chandrakala;
Shrinivasa, Shrinivasa;
Narasimhamurthy, Shobha;
Sontakke, Vaishali
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4833-4842
Brain tumors are among the most prevalent and dangerous disorders of the central nervous system. Early diagnosis is crucial for patients to receive the best treatment, yet the identification procedure can be time-consuming and prone to mistakes, so radiologists require an automated approach to detect brain tumors in images correctly. This work considers fully automated classification and segmentation of brain tumors in magnetic resonance imaging (MRI), covering meningioma, glioma, pituitary tumor, and no tumor. A convolutional neural network (CNN) and a mask region-based convolutional neural network (Mask R-CNN) are proposed for the classification and segmentation problems, respectively. Trained on 3,200 images, the system achieved 96% accuracy in classifying tumors and 94% accuracy in segmenting them.
Federated inception-multi-head attention models for cyber-attacks detection
AL-Halboosi, Imad Tareq;
Mohamed Elbagoury, Bassant;
Amin El-Regaily, Salsabil;
M. El-Horbaty, El-Sayed
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4778-4794
With the proliferation of internet of things (IoT) devices, ensuring the security of these interconnected systems has become a critical concern. Cyberattacks targeting IoT devices pose significant threats to individuals and organizations because the vast amounts of data generated across many connected devices cannot be handled by traditional centralized methods. Federated learning (FL) is a promising way to mitigate the privacy concerns associated with centralized approaches while addressing cybersecurity. This paper applies FL and deep learning (DL) approaches to cybersecurity in IoT applications, federating the models acquired and shared across the various participants. We use InceptionTime and multi-head attention convolutional neural network (CNN) algorithms based on FL to detect cyber-attacks and avoid data privacy leaks under two distribution modes, independent and identically distributed (IID) and non-IID, while the FedAvg and FedMA algorithms aggregate the local model updates. A global model is produced after several communication rounds between the IoT devices and the model parameter server. Cyber threats are simulated using the Edge-IIoT dataset. Experiment results show that the federated inception model's best global accuracy was 93.91%, and 93.49% using multi-head attention.
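FedAvg, one of the two aggregation algorithms named above, builds the global model as a dataset-size-weighted average of the clients' local weights. A toy sketch over flat weight vectors (real implementations average per-layer tensors, and FedMA additionally matches neurons across clients):

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of client model weights,
    with each client weighted by its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

The parameter server runs this after each communication round; clients with more local data pull the global model proportionally harder toward their update.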
Anchor selection based deep learning two stage fabric defect localization
Pooja, Hattarki;
Soma, Shrideva
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4711-4721
Localizing and classifying fabric defects is a crucial step in the quality control process used in textile production. Recently, fabric defect classification and detection have made use of deep learning approaches based on anchor selection, but ineffective anchor selection gives these solutions high computational overhead and localization error. As a solution to this problem, this work proposes a two-stage improvised anchor selection deep learning technique. In the first stage, quaternion Fourier transform frequency-domain analysis along with superpixel segmentation is applied to the fabric image to select probable defect regions. In the second stage, deep learning based regression along with superpixel segment comparison is applied to the probable defect regions to localize and categorize the defect. Owing to the effective selection of probable defect regions and categorization of regions, the proposed technique increases defect localization accuracy at comparatively low computational overhead. Tested on the TILDA fabric defect detection dataset, the proposed solution provides 1.2% higher fabric defect localization accuracy at 3% lower computation overhead compared to the most recent existing works.
Network intrusion detection in big datasets using Spark environment and incremental learning
Elmoutaoukkil, Abdelwahed;
Hamlich, Mohamed;
Khatib, Amine;
Chriss, Marouane
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4414-4421
Internet of things (IoT) systems have experienced significant growth in data traffic, raising security and real-time processing issues. Intrusion detection systems (IDS) are now an indispensable self-protection tool against various attacks. However, the functional diversity of attacks poses serious challenges for IoT systems: machine learning (ML) detection methods built on static models, such as those generated by the linear discriminant analysis (LDA) algorithm, are limited. Incremental learning instead adjusts the model parameters in real time as new data arrives. This paper proposes a new IDS method based on the LDA algorithm with an incremental model. The model is trained and tested on the UNSW-NB15 IoT intrusion dataset using the streaming linear discriminant analysis (SLDA) ML algorithm. Our approach increases model accuracy after each training batch, resulting in continuous model improvement. The comparison reveals that our dynamic model becomes more accurate after each batch and can detect new types of attacks.
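Streaming LDA-style incremental learning maintains per-class statistics that are updated in O(d) per sample, so the detector can absorb new traffic batches without retraining from scratch. A heavily simplified sketch with running class means and a nearest-mean decision (real SLDA also carries a shared covariance matrix; all names here are illustrative, not the paper's code):

```python
def update_class_mean(mean, count, x):
    """Incremental running-mean update for one class: O(d) per sample."""
    count += 1
    mean = [m + (xi - m) / count for m, xi in zip(mean, x)]
    return mean, count

def classify(x, means):
    """Nearest-class-mean decision (identity-covariance simplification)."""
    def dist2(m):
        return sum((xi - mi) ** 2 for xi, mi in zip(x, m))
    return min(means, key=lambda label: dist2(means[label]))
```

Each incoming batch only shifts the stored means, which is why accuracy can keep improving after every batch, as the abstract reports, without revisiting old data.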
Implementation of global navigation satellite system software-defined radio baseband processing algorithms in system on chip
Devi Kh, Chetna;
Panduranga Rao, Malode Vishwanatha
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp3869-3878
The global navigation satellite system (GNSS) is an international navigation system that determines users' locations globally using a constellation of satellites. Conventional hardware-based receivers often face challenges related to cost-effectiveness and lack of reconfigurability. To address these issues, GNSS software receivers have emerged, executing baseband processing methods on host computers. However, host PC-based GNSS software receivers encounter obstacles during real-time signal acquisition, such as computational complexity and data loss. This research paper introduces a real-time system on chip (SoC)-based GNSS software receiver to mitigate these concerns. The receiver utilizes the USRP N210 radio frequency (RF) front end to acquire GNSS signals in real-time. Baseband processing algorithms are executed using the Zynq 7000 SoC board, with modifications applied to the acquisition module. The effectiveness of the SoC-based GNSS receiver is evaluated under both stationary and dynamic conditions. Experimental outcomes indicate that the receiver provides precise user localization and facilitates prototype development. This methodology not only overcomes the limitations of conventional hardware-based receivers but also leverages the benefits of SoC architecture to process GNSS signals in a flexible and efficient manner.
3D visualization diagnostics for lung cancer detection
M. Mahmoud, Rana;
Elgendy, Mostafa;
Taha, Mohamed
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 4: December 2024
Publisher: Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i4.pp4630-4641
Lung cancer is a leading cause of cancer deaths worldwide, with an estimated 2 million new cases and 1.76 million deaths yearly. Early detection can improve survival, and computed tomography (CT) is a precise imaging technique for diagnosing lung cancer. However, analyzing hundreds of 2D CT slices is challenging and can cause false alarms; 3D visualization of lung nodules can aid clinicians in detection and diagnosis. The MobileNet model integrates multi-view and multi-scale nodule features using depthwise separable convolutional layers, which split standard convolutions into depthwise and pointwise convolutions to reduce computational cost. Compared to other state-of-the-art deep neural networks, this factorization enables MobileNet to achieve a much lower computational cost while maintaining a decent degree of accuracy. Finally, the 3D pulmonary nodule models were created using a ray-casting volume rendering approach. Tested on an LIDC dataset of 986 nodules, MobileNet provides exceptional segmentation performance with an accuracy of 93.3%, detecting and segmenting lung nodules somewhat better than older technologies. As a result, the proposed system offers automated 3D visualization of lung cancer tumors.
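The depthwise separable factorization mentioned above reduces the multiplication count of a convolutional layer from k²·M·N·F² (standard convolution with a k×k kernel, M input channels, N output channels, F×F feature map) to k²·M·F² + M·N·F², a ratio of 1/N + 1/k². A quick check of that arithmetic, with layer dimensions chosen arbitrarily for illustration:

```python
def standard_conv_mults(k, m, n, f):
    """Multiplications for a standard conv layer:
    k*k kernel, m input channels, n output channels, f*f feature map."""
    return k * k * m * n * f * f

def separable_conv_mults(k, m, n, f):
    """Depthwise (one k*k filter per input channel) plus
    pointwise (1x1 convolution mapping m channels to n)."""
    return k * k * m * f * f + m * n * f * f
```

For a typical 3x3 layer with many output channels the separable form costs roughly 1/9 of the standard one, which is the source of MobileNet's efficiency.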