Ibrahim Mahmood Ibrahim
Unknown Affiliation

Published: 2 Documents
Articles


Optimization by Nature: A Review of Genetic Algorithm Techniques
Waysi, Diyar; Ahmed, Berivan Tahir; Ibrahim Mahmood Ibrahim
The Indonesian Journal of Computer Science Vol. 14 No. 1 (2025): The Indonesian Journal of Computer Science (IJCS)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v14i1.4596

Abstract

The Genetic Algorithm (GA) is an optimization and search technique based on the selection principle of genetics, suited to challenging problems. Beyond optimization, it is applied in research and development as well as machine learning. The purpose of this literature review is to determine the current state of research on the use and applications of genetic algorithms (GAs) for optimization across a range of sectors. Natural selection and biological evolution serve as the foundation for GAs, which evolve candidate solutions through crossover, mutation, and selection. The review accentuates the diversity and universality of GAs in solving numerous complex problems such as pathfinding, image analytics, and data recommendation systems. It examines the effectiveness of GAs in solving optimization problems compared to other methods and focuses on their efficiency in searching large and chaotic solution spaces. The results indicate that GAs can be considered a strong, results-oriented tool for further improving machine learning and artificial intelligence systems.
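The selection–crossover–mutation loop the abstract describes can be sketched as a minimal GA. The OneMax fitness function (count of 1-bits), the tournament selection scheme, and all parameter values below are illustrative assumptions for the sketch, not details taken from the reviewed paper.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=42):
    """Maximize `fitness` over fixed-length bitstrings using the three GA
    operators named in the review: selection, crossover, and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament(k=3):
        # Selection: the fittest of k randomly sampled individuals survives.
        return max(rng.sample(pop, k), key=fitness)

    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, length)   # single-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation: flip each bit independently with small probability.
            child = [b ^ (rng.random() < mutation_rate) for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax toy problem: fitness is the number of 1-bits; the optimum is all ones.
best = genetic_algorithm(fitness=sum)
```

Even this small sketch exhibits the property the review emphasizes: the population explores a large (2^20) solution space without enumerating it, converging toward the optimum through selective pressure alone.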
Architectural Evolution of Transformer Models in NLP: A Comparative Survey of Recent Developments
Waysi Naaman, Diyar; Berivan Tahir Ahmed; Ibrahim Mahmood Ibrahim
The Indonesian Journal of Computer Science Vol. 14 No. 5 (2025): The Indonesian Journal of Computer Science
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v14i5.4984

Abstract

This literature review examines the impact and advancements of XLM-RoBERTa in the field of multilingual natural language processing. As language technologies increasingly transcend linguistic boundaries, XLM-RoBERTa has emerged as a pivotal cross-lingual model that extends the capabilities of its predecessors. Through comprehensive pre-training on multilingual corpora spanning 100 languages, this model demonstrates remarkable zero-shot cross-lingual transfer capabilities while maintaining competitive performance on monolingual benchmarks. This review synthesizes research findings on XLM-RoBERTa's architecture, pre-training methodology, and performance across diverse NLP tasks including named entity recognition, question answering, and text classification. By examining comparative analyses with other multilingual models, we identify key strengths, limitations, and potential directions for future research. The findings underscore XLM-RoBERTa's significance in advancing language-agnostic representations and bridging the performance gap between high-resource and low-resource languages, with substantial implications for global accessibility of language technologies.