Beladgham, Mohammed
Unknown Affiliation

Published : 2 Documents
Articles

Found 2 Documents

Transfer learning with Resnet-50 for detecting COVID-19 in chest X-ray images
Hamlili, Fatima-Zohra; Beladgham, Mohammed; Khelifi, Mustapha; Bouida, Ahmed
Indonesian Journal of Electrical Engineering and Computer Science Vol 25, No 3: March 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v25.i3.pp1458-1468

Abstract

The novel coronavirus, also known as COVID-19, initially appeared in Wuhan, China, in December 2019 and has since spread around the world. The purpose of this paper is to use deep convolutional neural networks (DCNN) to improve the detection of COVID-19 from X-ray images. In this study, we create a DCNN based on a residual network (Resnet-50) that can distinguish COVID-19 from two other classes (pneumonia and normal) in chest X-ray images. The DCNN was evaluated using two classification schemes: binary (BC-1: COVID-19 vs. normal, BC-2: COVID-19 vs. pneumonia) and multi-class (pneumonia vs. normal vs. COVID-19). In all experiments, four-fold cross-validation was used to train and test the model. The architecture's average accuracy is 99.9% for BC-1, 99.8% for BC-2, and 97.3% for the multi-class case. The experimental findings demonstrated that the suggested system detects COVID-19 with an average precision of 95% and an average sensitivity of 95.1% for multi-class classification. According to our findings, the proposed DCNN may help health professionals confirm their initial evaluation of COVID-19 patients.
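
A minimal sketch of the transfer-learning setup the abstract outlines: an ImageNet-pretrained ResNet-50 backbone with a replaced classification head, trained and evaluated with four-fold cross-validation. Keras/TensorFlow, the 224x224 input size, and the optimizer/epoch settings are assumptions for illustration, not the authors' reported configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 3          # pneumonia vs. normal vs. COVID-19 (multi-class case)
IMG_SIZE = (224, 224)    # assumed input resolution (ResNet-50 default)

def build_model(num_classes: int) -> tf.keras.Model:
    # ImageNet-pretrained ResNet-50 backbone; only the new head is trained.
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(images: np.ndarray, labels: np.ndarray, folds: int = 4) -> float:
    # Four-fold cross-validation, as described in the abstract.
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    accuracies = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_model(NUM_CLASSES)
        model.fit(images[train_idx], labels[train_idx],
                  epochs=10, batch_size=32, verbose=0)   # illustrative settings
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        accuracies.append(acc)
    return float(np.mean(accuracies))

For the binary cases (BC-1 and BC-2), the same skeleton applies with NUM_CLASSES = 2 and the corresponding pair of classes.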
A multi-scale convolutional neural network and discrete wavelet transform based retinal image compression
Chikhaoui, Dalila; Beladgham, Mohammed; Benaissa, Mohamed; Taleb-Ahmed, Abdelmalik
Indonesian Journal of Electrical Engineering and Computer Science Vol 38, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v38.i1.pp243-253

Abstract

The different applications of medical images have contributed significantly to the growing amount of image data. As a result, compression techniques have become essential to allow real-time transmission and storage within limited network bandwidth and storage space. Deep learning, particularly convolutional neural networks (CNNs), has driven rapid advances in many computer vision tasks and has progressively drawn attention for use in image compression. Therefore, we present a method for compressing retinal images based on deep CNNs and the discrete wavelet transform (DWT). To further enhance CNN capabilities, multi-scale convolutions are introduced into the network architecture. In the proposed method, multi-scale CNNs extract useful features to provide a compact representation at the encoding stage and guarantee a better reconstruction quality of the image at the decoding stage. A wide range of experiments has been conducted to validate the proposed technique's performance, in terms of compression efficiency and reconstructed image quality, against popular image compression standards and existing deep learning-based methods. At a compression ratio (CR) of 80, the proposed method achieved an average peak signal-to-noise ratio (PSNR) of 38.98 dB and 96.8% similarity in terms of multi-scale structural similarity (MS-SSIM), demonstrating its effectiveness.
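
A small sketch of the two ingredients named in the abstract: a one-level 2-D discrete wavelet transform (via PyWavelets) feeding a multi-scale convolution block (parallel 3x3 / 5x5 / 7x7 kernels) inside a simple encoder-decoder, with PSNR and MS-SSIM as the quality metrics. The wavelet choice, layer widths, and overall wiring are assumptions for illustration and not the authors' exact architecture.

import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers

def dwt_subbands(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # One-level 2-D DWT; stack the LL, LH, HL, HH sub-bands as channels.
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    return np.stack([ll, lh, hl, hh], axis=-1)

def multi_scale_block(x, filters: int = 32):
    # Parallel convolutions with different receptive fields, concatenated.
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    b7 = layers.Conv2D(filters, 7, padding="same", activation="relu")(x)
    return layers.Concatenate()([b3, b5, b7])

def build_compressor(input_shape=(128, 128, 4), bottleneck_channels: int = 8):
    # Input: DWT sub-bands (e.g. a 256x256 image gives 128x128x4 after one level).
    # Encoder reduces multi-scale features to a compact latent code; the decoder
    # mirrors it to reconstruct the sub-bands.
    inp = tf.keras.Input(shape=input_shape)
    x = multi_scale_block(inp)
    code = layers.Conv2D(bottleneck_channels, 3, strides=2, padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(code)
    x = multi_scale_block(x)
    out = layers.Conv2D(input_shape[-1], 3, padding="same")(x)
    return tf.keras.Model(inp, out)

def quality_metrics(original: tf.Tensor, reconstructed: tf.Tensor, max_val: float = 1.0):
    # Metrics reported in the abstract: PSNR (dB) and MS-SSIM, computed on the
    # final reconstructed images.
    return (tf.image.psnr(original, reconstructed, max_val),
            tf.image.ssim_multiscale(original, reconstructed, max_val))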