
Found 2 Documents

Using deep learning to diagnose retinal diseases through medical image analysis
Azhibekova, Zhanar; Bekbayeva, Roza; Yussupova, Gulbakhar; Kaibassova, Dinara; Ostretsova, Idiya; Muratbekova, Svetlana; Kakabayev, Anuarbek; Sultanova, Zhanylsyn
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 6: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i6.pp6455-6465

Abstract

The article applies deep learning, via simple U-Net, attention U-Net, residual U-Net, and residual attention U-Net models, to the diagnosis of retinal diseases from medical images. The work analyzes each model's ability to detect retinal pathologies, accounting for their distinguishing features, such as attention mechanisms and residual connections. The experimental results confirm the high accuracy and reliability of the proposed models, underscoring their potential as tools for automated diagnosis of retinal diseases and opening new prospects for improving diagnostic procedures and the efficiency of medical practice. The proposed method can substantially ease the identification of retinal diseases, which is critical for early diagnosis and timely treatment. The results support the use of these models in clinical practice, highlighting their ability to analyze medical images accurately and to improve the quality of eye care.
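The attention and residual mechanisms the abstract refers to can be illustrated in a minimal NumPy sketch. This is not the authors' implementation: the array sizes, weight names, and the use of plain matrix products in place of convolutions are all illustrative assumptions, shown only to make the two building blocks concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_block(x, w):
    """Residual connection: the input is added back to the transformed
    features, so the block learns a correction rather than a full mapping."""
    return relu(x @ w) + x

def attention_gate(x, g, wx, wg, psi):
    """Additive attention gate (as in attention U-Net): a gating signal g
    produces per-position coefficients in (0, 1) that rescale the skip
    features x, suppressing irrelevant regions."""
    a = sigmoid(relu(x @ wx + g @ wg) @ psi)  # shape (positions, 1)
    return x * a                              # broadcast over channels

# Toy feature maps: 16 spatial positions, 8 channels (hypothetical sizes).
x = rng.standard_normal((16, 8))
g = rng.standard_normal((16, 8))
w   = rng.standard_normal((8, 8)) * 0.1
wx  = rng.standard_normal((8, 8)) * 0.1
wg  = rng.standard_normal((8, 8)) * 0.1
psi = rng.standard_normal((8, 1)) * 0.1

gated = attention_gate(residual_block(x, w), g, wx, wg, psi)
print(gated.shape)  # (16, 8)
```

A residual attention U-Net, in this reading, simply composes both ideas: residual blocks along the encoder/decoder paths and attention gates on the skip connections.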
Generating images using generative adversarial networks based on text descriptions
Turarova, Marzhan; Bekbayeva, Roza; Abdykerimova, Lazzat; Aitimov, Murat; Bayegizova, Aigulim; Smailova, Ulmeken; Kassenova, Leila; Glazyrina, Natalya
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp2014-2023

Abstract

Modern developments in natural language processing (NLP) and computer vision (CV) underscore the growing importance of generating images from text descriptions. The article analyzes and compares two key methods in this area: the generative adversarial network with conditional latent semantic analysis (GAN-CLS) and the XLNet transformer network. The main components of GAN-CLS, the generator, discriminator, and text encoder, are discussed in terms of their functions: generating images from text inputs, assessing the realism of generated images, and mapping text descriptions into a latent space, respectively. A detailed comparative analysis of the performance of GAN-CLS and XLNet, the latter widely used in natural language processing, is carried out. The purpose of the study is to determine the effectiveness of each method in different scenarios and to provide recommendations for selecting the best method for generating images from text descriptions, taking specific tasks and resources into account. The paper thus aims to serve as a practical reference for NLP and CV researchers.
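The three GAN-CLS components named in the abstract can be sketched as a minimal NumPy skeleton. This is a toy illustration under stated assumptions, not the authors' model: the vocabulary size, embedding/noise/image dimensions, and the linear maps standing in for the real networks are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

EMB, Z, IMG = 6, 4, 10  # hypothetical sizes: text embedding, noise, flat image

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def text_encoder(tokens, w_emb):
    """Mean-pool token embeddings into one caption vector (latent text code)."""
    return w_emb[tokens].mean(axis=0)

def generator(z, t, w_g):
    """Map noise plus caption embedding to a flat 'image' vector."""
    return np.tanh(np.concatenate([z, t]) @ w_g)

def discriminator(img, t, w_d):
    """Score how real AND text-matching an (image, caption) pair looks."""
    return sigmoid(np.concatenate([img, t]) @ w_d)

w_emb = rng.standard_normal((50, EMB)) * 0.1       # vocabulary of 50 tokens
w_g   = rng.standard_normal((Z + EMB, IMG)) * 0.1
w_d   = rng.standard_normal((IMG + EMB, 1)) * 0.1

caption       = text_encoder(np.array([3, 17, 42]), w_emb)
wrong_caption = text_encoder(np.array([5, 9]), w_emb)
real_img      = np.tanh(rng.standard_normal(IMG))  # stand-in for a real image
fake_img      = generator(rng.standard_normal(Z), caption, w_g)

# The matching-aware discriminator sees three kinds of pairs during training:
score_real_match = discriminator(real_img, caption, w_d)        # real, right text
score_fake_match = discriminator(fake_img, caption, w_d)        # fake, right text
score_real_miss  = discriminator(real_img, wrong_caption, w_d)  # real, wrong text
```

The third pair type (real image, mismatched caption, treated as a negative) is what distinguishes this conditional setup from a plain GAN: the discriminator must judge text-image correspondence, not just realism.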