Prathama, Ubaidillah Ariq
Unknown Affiliation

Published: 1 Document
Articles

Found 1 Document

Exploring The Effectiveness of In-Context Methods in Human-Aligned Large Language Models Across Languages
Prathama, Ubaidillah Ariq; Ayu Purwarianti; Samuel Cahyawijaya
JUTI: Jurnal Ilmiah Teknologi Informasi, Vol. 23, No. 2, July 2025
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v23i2.a1323

Abstract

Most past studies of in-context methods such as in-context learning (ICL), cross-lingual ICL (X-ICL), and in-context alignment (ICA) were conducted on older, unaligned large language models (LLMs). Modern LLMs are different: they ship with chat-style prompt templates, are extensively aligned to human preferences, and cover many more languages. We re-examined these in-context techniques using two recent, human-aligned multilingual LLMs. Our study covered 20 languages from seven language families, spanning high-, mid-, and low-resource levels, and we tested how well the methods generalize on two tasks: topic classification (SIB-200) and machine reading comprehension (Belebele). We found that using prompt templates significantly improves the performance of both ICL and X-ICL. Furthermore, ICA proves particularly effective for mid- and low-resource languages, boosting F1 scores by up to 6.1%. For X-ICL, choosing a source language that is linguistically similar to the target language, rather than defaulting to English, can yield substantial gains, with improvements reaching up to 21.98%. Semantically similar ICL examples remain highly effective for human-aligned LLMs, providing up to a 31.42% advantage over static examples; however, this gain shrinks when a machine translation model is used to translate queries from the target language. These results collectively suggest that while modern human-aligned LLMs benefit from in-context information, the size of these gains depends heavily on careful prompt design, the language's resource level, the language pairing, and the overall complexity of the task.
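To make the retrieval-based setup concrete, the sketch below pairs semantically similar example selection with a chat-style prompt template, the two ingredients the abstract credits with the largest ICL gains. It is a minimal illustration only: the encoder model, the three-example pool, the topic labels, and the message format are assumed stand-ins, not the configuration used in the paper.

```python
# Minimal sketch: semantically similar in-context example selection
# wrapped in a chat-style prompt template. The encoder, example pool,
# and labels are hypothetical placeholders, not the paper's setup.
from sentence_transformers import SentenceTransformer, util

# Hypothetical labeled pool of (text, topic) pairs, SIB-200 style.
pool = [
    ("The central bank raised interest rates again.", "economy"),
    ("The striker scored twice in the final.", "sports"),
    ("A new vaccine entered phase-three trials.", "health"),
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retriever
pool_emb = encoder.encode([text for text, _ in pool], convert_to_tensor=True)

def build_icl_messages(query: str, k: int = 2) -> list[dict]:
    """Retrieve the k pool examples most similar to the query and
    format them as chat turns, ending with the query itself."""
    q_emb = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, pool_emb, top_k=k)[0]
    messages = [{"role": "system",
                 "content": "Classify the topic of each sentence."}]
    for hit in hits:  # most similar demonstrations first
        text, label = pool[hit["corpus_id"]]
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

print(build_icl_messages("Oil prices fell after the announcement."))
```

Swapping the demonstration turns for examples written in a linguistically similar source language would turn the same scaffold into an X-ICL prompt of the kind the abstract compares against English defaults.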