Khalaf, Ashraf A. M.
Unknown Affiliation

Published: 2 Documents
Braille code classifications tool based on computer vision for visual impaired
Sadak, Hany M.; Khalaf, Ashraf A. M.; Hussein, Aziza I.; Salama, Gerges Mansour
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 6: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i6.pp6992-7000

Abstract

Blind and visually impaired people (VIP) face many challenges in writing, as they usually rely on traditional tools such as the slate and stylus or on expensive typewriters such as the Perkins Brailler, which often raises accessibility and affordability issues. This article introduces a novel portable, cost-effective device that helps VIP learn to write by utilizing a deep-learning model to detect a Braille cell. Using deep learning instead of electrical circuits can reduce costs and enables a mobile app to act as a virtual teacher for blind users. The app could suggest sentences for the user to write and then check their work, providing an independent learning platform; this feature is difficult to implement with electronic circuits. The portable device generates Braille character cells using light-emitting diode (LED) arrays instead of Braille holes. A smartphone camera captures the image, which is then processed by a deep-learning model that detects the Braille cell and converts it to English text. This article also provides a new dataset of custom Braille character cells. Moreover, applying transfer learning to the mobile network version 2 (MobileNetV2) model offers a basis for developing a comprehensive mobile application. The model reached an accuracy of 97%.
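The abstract does not give implementation details, but the transfer-learning setup it describes matches the standard Keras pattern: freeze an ImageNet-pretrained MobileNetV2 feature extractor and train a small classification head on the Braille-cell images. The sketch below is a minimal illustration under that assumption; the input size, class count (NUM_CLASSES), and the commented-out datasets are placeholders, not taken from the paper.

```python
import tensorflow as tf

NUM_CLASSES = 26  # placeholder: e.g., one class per English letter

# ImageNet-pretrained MobileNetV2 without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze pretrained features (transfer learning)

# Small trainable head on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

Freezing the backbone keeps training cheap on a small custom dataset; a common follow-up step is to unfreeze the top backbone layers and fine-tune at a lower learning rate.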
A systematic evaluation of pre-trained encoder architectures for multimodal brain tumor segmentation using U-Net-based architectures
Abbas, Marwa; Khalaf, Ashraf A. M.; Mogahed, Hussein; Hussein, Aziza I.; Gaber, Lamya; Mabrook, M. Mourad
Indonesian Journal of Electrical Engineering and Computer Science Vol 40, No 2: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v40.i2.pp850-859

Abstract

Accurate brain tumor segmentation from medical imaging is critical for early diagnosis and effective treatment planning. Deep learning methods, particularly U-Net-based architectures, have demonstrated strong performance in this domain. However, prior studies have primarily focused on a limited set of encoder backbones, overlooking the potential advantages of alternative pretrained models. This study presents a systematic evaluation of twelve pretrained convolutional neural networks (ResNet34, ResNet50, ResNet101, VGG16, VGG19, DenseNet121, InceptionResNetV2, InceptionV3, MobileNetV2, EfficientNetB1, SE-ResNet34, and SE-ResNet18) used as encoder backbones in the U-Net framework for identifying and extracting tumor-affected brain areas on the BraTS 2019 multimodal MRI dataset. Model performance was assessed through cross-validation, incorporating fault detection to enhance reliability. The MobileNetV2-based U-Net configuration outperformed all other architectures, achieving 99% cross-validation accuracy and 99.3% test accuracy. It also achieved a Jaccard coefficient of 83.45% and Dice coefficients of 90.3% (whole tumor), 86.07% (tumor core), and 81.93% (enhancing tumor), with a low test loss of 0.0282. These results demonstrate that MobileNetV2 is a highly effective encoder backbone for U-Net when segmenting tumor-affected brain regions in multimodal medical imaging data.
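The authors' exact pipeline is not reproduced here, but swapping pretrained encoders into a U-Net, as the study does, can be sketched with the open-source segmentation_models Keras package, together with the Dice and Jaccard metrics the abstract reports. The backbone name, single-class sigmoid output, and 3-channel input below are illustrative assumptions rather than the paper's configuration (BraTS provides four MRI modalities and multiple tumor sub-regions).

```python
import numpy as np
import segmentation_models as sm

# U-Net with an ImageNet-pretrained MobileNetV2 encoder, one of the twelve
# backbones compared in the paper. A 3-channel input is assumed here for
# simplicity; the 4 BraTS modalities would need an input adapter.
model = sm.Unet(
    "mobilenetv2",
    encoder_weights="imagenet",
    classes=1,
    activation="sigmoid",
)
model.compile(
    optimizer="adam",
    loss=sm.losses.dice_loss,
    metrics=[sm.metrics.iou_score],
)

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    inter = np.logical_and(y_true, y_pred).sum()
    return (2.0 * inter + eps) / (y_true.sum() + y_pred.sum() + eps)

def jaccard_coefficient(y_true, y_pred, eps=1e-7):
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| on binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (inter + eps) / (union + eps)
```

A replication of the comparison would loop over the twelve backbone names, retrain the decoder for each, and cross-validate, which is the evaluation protocol the abstract describes.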