Articles

Found 4 Documents

Predicting and detecting fires on multispectral images using machine learning methods
Aitimov, Murat; Kaldarova, Mira; Kassymova, Akmaral; Makulov, Kaiyrbek; Muratkhan, Raikhan; Nurakynov, Serik; Sydyk, Nurmakhambet; Bapiyev, Ideyat
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp1842-1850

Abstract

In today's world, fire forecasting and early detection play a critical role in preventing disasters and minimizing damage to the environment and human settlements. The main goal of the study is the development and testing of machine learning algorithms for the automated detection of the initial stages of fires based on the analysis of multispectral images. Within this study, three popular machine learning methods, extreme gradient boosting (XGBoost), logistic regression, and a vanilla convolutional neural network (vanilla CNN), are considered for processing and interpreting multispectral images to predict and detect fires. XGBoost, a gradient-boosted decision tree algorithm, provides high processing speed and accuracy; logistic regression stands out for its simplicity and interpretability; and the vanilla CNN uses the power of deep learning to analyze spatial and spectral data. The results of the study show that integrating these methods into monitoring systems can significantly improve the efficiency of early fire detection and help predict potential fires.
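
As a rough, hypothetical illustration of the pixel-level comparison described in this abstract (not the authors' code or data), the Python sketch below trains XGBoost and logistic regression classifiers on synthetic per-pixel spectral-band features; the band count, labels, and hyperparameters are placeholder assumptions.

```python
# Hypothetical sketch: XGBoost vs. logistic regression on per-pixel
# multispectral band values for fire / no-fire classification.
# All data below is synthetic; real work would use labeled satellite bands.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((5000, 6))                    # 6 spectral bands per pixel (assumed)
y = (X[:, 4] - X[:, 3] > 0.2).astype(int)    # toy "thermal anomaly" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "xgboost": XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss"),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1:", f1_score(y_te, model.predict(X_te)))
```

A vanilla CNN, by contrast, would operate on image patches rather than individual pixel vectors, consuming (bands, height, width) tensors instead of the flat feature table used here.
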
Generating images using generative adversarial networks based on text descriptions
Turarova, Marzhan; Bekbayeva, Roza; Abdykerimova, Lazzat; Aitimov, Murat; Bayegizova, Aigulim; Smailova, Ulmeken; Kassenova, Leila; Glazyrina, Natalya
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp2014-2023

Abstract

Modern developments in the fields of natural language processing (NLP) and computer vision (CV) emphasize the increasing importance of generating images from text descriptions. The presented article analyzes and compares two key methods in this area: the generative adversarial network with conditional latent semantic analysis (GAN-CLS) and the ultra-long transformer network (XLNet). The main components of GAN-CLS, including the generator, discriminator, and text encoder, are discussed in the context of their functional tasks: generating images from text inputs, assessing the realism of generated images, and converting text descriptions into a latent space, respectively. A detailed comparative analysis of the performance of GAN-CLS and XLNet, the latter of which is widely used in NLP, is carried out. The purpose of the study is to determine the effectiveness of each method in different scenarios and to provide recommendations for selecting the best method for generating images from text descriptions, taking into account specific tasks and resources. Ultimately, the paper aims to be a valuable research resource by providing scientific guidance for NLP and CV experts.
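
The PyTorch sketch below illustrates the three GAN-CLS components named in the abstract (text encoder, generator, discriminator). It is not the authors' implementation; the layer sizes, the GRU text encoder, and the 64x64 output resolution are illustrative assumptions.

```python
# Illustrative GAN-CLS-style components (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Maps token-id sequences to a fixed-size text embedding."""
    def __init__(self, vocab_size=10000, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h[-1]                                   # (batch, embed_dim)

class Generator(nn.Module):
    """Generates a 64x64 RGB image from noise concatenated with the text embedding."""
    def __init__(self, z_dim=100, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Tanh(),
        )

    def forward(self, z, txt):
        return self.net(torch.cat([z, txt], dim=1)).view(-1, 3, 64, 64)

class Discriminator(nn.Module):
    """Scores image realism conditioned on the text embedding (matching-aware)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64 * 64 * 3 + embed_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )

    def forward(self, img, txt):
        return self.net(torch.cat([img.flatten(1), txt], dim=1))

# Usage: images = Generator()(noise, TextEncoder()(tokens)); the discriminator is
# trained to score real, matching image-text pairs above fake or mismatched pairs.
```
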
Classification of pathologies on digital chest radiographs using machine learning methods
Aitimov, Murat; Shekerbek, Ainur; Pestunov, Igor; Bakanov, Galitdin; Ostayeva, Aiymkhan; Ziyatbekova, Gulzat; Mediyeva, Saule; Omarova, Gulmira
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp1899-1905

Abstract

This article is devoted to the research and development of methods for classifying pathologies on digital chest radiographs using two different machine learning approaches: the eXtreme gradient boosting (XGBoost) algorithm and the deep residual convolutional neural network ResNet50. The goal of the study is to develop effective and accurate methods for automatically classifying the various pathologies detected on chest X-rays. The study collected an extensive dataset of digital chest radiographs covering a variety of clinical cases and different classes of pathology. Machine learning models based on the XGBoost algorithm and the ResNet50 convolutional neural network were developed and trained on pre-processed images. The performance and accuracy of both models were assessed on test data using quality metrics, and a comparative analysis of the results was carried out. The expected outcomes are high accuracy and reliability of the methods for classifying pathologies on chest radiographs, as well as an understanding of their effectiveness in the context of clinical practice. These results may have significant implications for improving the diagnosis and care of patients with chest diseases, as well as for promoting the development of automated decision support systems in radiology.
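
As a hedged sketch of the ResNet50 branch of such a study (not the authors' pipeline), the code below fine-tunes a torchvision ResNet50 for multi-class pathology classification; the number of classes, optimizer settings, and dummy batch are assumptions, and the pretrained ImageNet weights require torchvision 0.13 or newer.

```python
# Assumed fine-tuning setup for chest X-ray pathology classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed number of pathology classes

# Load an ImageNet-pretrained ResNet50 and replace its classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed radiographs."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with a dummy batch of 8 RGB images resized to 224x224.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,)))
print("batch loss:", loss)
```

The XGBoost branch of such a comparison would instead consume tabular features extracted from the radiographs rather than raw image tensors.
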
Data generation using generative adversarial networks to increase data volume
Aitimova, Ulzada; Aitimov, Murat; Mukhametzhanova, Bigul; Issakulova, Zhanat; Kassymova, Akmaral; Ismailova, Aisulu; Kadirkulov, Kuanysh; Zhumabayeva, Assel
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp2369-2376

Abstract

The article presents an in-depth analysis of two leading approaches in the field of generative modeling: generative adversarial networks (GANs) and the pixel-to-pixel (Pix2Pix) image-to-image translation model. Given the growing interest in automation and improved image processing, the authors focus on the key operating principles of each model, analyzing their unique characteristics and features. The article also explores in detail the various applications of these approaches, highlighting their impact on modern research in computer vision and artificial intelligence. The purpose of the study is to give readers a scientific understanding of the effectiveness and potential of each model and to highlight the opportunities and limitations of their application. The authors strive not only to cover the technical aspects of the models but also to provide a broad overview of their impact on various industries, including medicine and the arts, and on solving real-world problems in image processing. In addition, the authors identify prospects for the use of these technologies in fields such as medicine, design, art, entertainment, and unmanned aerial vehicle systems. The ability of GANs and Pix2Pix to adapt to a variety of tasks and produce high-quality results opens up broad prospects for industry and research.
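
To make the Pix2Pix objective mentioned above concrete, here is a minimal PyTorch sketch of its paired adversarial-plus-L1 training step; the tiny stand-in networks and hyperparameters are illustrative assumptions, whereas the original model uses a U-Net generator and a PatchGAN discriminator.

```python
# Illustrative Pix2Pix-style training step: adversarial loss + weighted L1.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())        # stand-in generator
D = nn.Sequential(nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1))         # stand-in PatchGAN

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
LAMBDA_L1 = 100.0  # L1 weight used in the original Pix2Pix paper

def train_step(src, tgt):
    """One paired-image step: src is the input domain, tgt the target domain."""
    fake = G(src)

    # Discriminator: real (src, tgt) pairs vs. generated (src, fake) pairs.
    d_real = D(torch.cat([src, tgt], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the target in L1.
    d_fake = D(torch.cat([src, fake], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + LAMBDA_L1 * l1(fake, tgt)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)))
```

For the data-augmentation use case in the title, generated outputs of such a model would be appended to the training set of a downstream classifier to increase data volume.
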