Deshmukh, Meghna
Unknown Affiliation

Published: 1 Document

Articles

Found 1 Document

Design of an efficient Transformer-XL model for enhanced pseudo code to Python code conversion
Kuche, Snehal H.; Gaikwad, Amit K.; Deshmukh, Meghna
International Journal of Informatics and Communication Technology (IJ-ICT), Vol. 13, No. 2: August 2024
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp223-230

Abstract

The landscape of programming has long been challenged by the task of transforming pseudo code into executable Python code, a process traditionally marred by its labor-intensive nature and the necessity for a deep understanding of both logical frameworks and programming languages. Existing methodologies often grapple with limitations in handling variable-length sequences and maintaining context over extended textual data. Addressing these challenges, this study introduces an innovative approach utilizing the Transformer-XL model, a significant advancement in the domain of deep learning. The Transformer-XL architecture, an evolution of the standard Transformer, adeptly processes variable-length sequences and captures extensive contextual dependencies, thereby surpassing its predecessors in handling natural language processing (NLP) and code synthesis tasks. The proposed model employs a comprehensive process involving data preprocessing, model input encoding, a self-attention mechanism, contextual encoding, language modeling, and a meticulous decoding process, followed by post-processing. The implications of this work are far-reaching, offering a substantial leap in the automation of code conversion. As the field of NLP and deep learning continues to evolve, the Transformer-XL based model is poised to become an indispensable tool in the realm of programming, setting a new benchmark for automated code synthesis.
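
The article does not publish its implementation, so as a rough illustration of the segment-level recurrence that lets Transformer-XL capture extensive contextual dependencies, the PyTorch sketch below shows hidden states cached from one segment extending the attention context of the next. All module names, hyperparameters, and the toy token stream are illustrative assumptions, not the authors' code, and the paper's relative positional encodings are omitted for brevity.

import torch
import torch.nn as nn


class TransformerXLBlock(nn.Module):
    """One decoder block with Transformer-XL-style segment-level recurrence:
    attention keys/values cover [cached memory; current segment], so context
    extends past the segment boundary without recomputation."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, memory):
        # Prepend the previous segment's hidden states; no gradient flows
        # into the cached memory, matching the Transformer-XL training setup.
        context = torch.cat([memory.detach(), x], dim=1)
        seg_len, mem_len = x.size(1), memory.size(1)
        # Causal mask: query position i may attend to all of memory plus
        # current-segment positions <= i (True entries are disallowed).
        mask = torch.triu(
            torch.ones(seg_len, seg_len + mem_len, dtype=torch.bool,
                       device=x.device),
            diagonal=mem_len + 1,
        )
        out, _ = self.attn(x, context, context, attn_mask=mask)
        x = self.norm1(x + out)
        return self.norm2(x + self.ff(x))


# Toy usage: stream a long (dummy) pseudo code token sequence in fixed-size
# segments, carrying the hidden-state memory across segment boundaries.
d_model, seg_len, vocab = 256, 64, 1000
embed = nn.Embedding(vocab, d_model)
block = TransformerXLBlock(d_model)
tokens = torch.randint(0, vocab, (1, 4 * seg_len))
memory = torch.zeros(1, 0, d_model)  # empty memory for the first segment
for start in range(0, tokens.size(1), seg_len):
    hidden = block(embed(tokens[:, start:start + seg_len]), memory)
    memory = torch.cat([memory, hidden], dim=1)[:, -seg_len:]

Caching and detaching the previous segment's hidden states is what distinguishes this from a vanilla Transformer block: the effective context grows with each processed segment while per-segment compute stays fixed, which is the property the abstract credits for handling variable-length sequences and long-range dependencies.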