JUTI: Jurnal Ilmiah Teknologi Informasi
Vol.23, No.2, July 2025

Exploring the Effectiveness of In-Context Methods in Human-Aligned Large Language Models Across Languages

Ubaidillah Ariq Prathama (Institut Teknologi Bandung)
Ayu Purwarianti (Institut Teknologi Bandung)
Samuel Cahyawijaya (Cohere, United Kingdom)



Article Info

Publish Date
08 Jul 2025

Abstract

Most past studies of in-context methods such as in-context learning (ICL), cross-lingual ICL (X-ICL), and in-context alignment (ICA) were conducted on older, unaligned large language models (LLMs). Modern human-aligned LLMs differ, however: they come with chat-style prompt templates, are extensively human-aligned, and cover many more languages. We re-examined these in-context techniques on two recent, human-aligned multilingual LLMs. Our study covered 20 languages from seven language families, spanning high-, mid-, and low-resource levels. We tested how well these methods generalized on two tasks: topic classification (SIB-200) and machine reading comprehension (Belebele). We found that using prompt templates significantly improves the performance of both ICL and X-ICL. Furthermore, ICA proves particularly effective for mid- and low-resource languages, boosting their F1 scores by up to 6.1%. For X-ICL, choosing a source language that is linguistically similar to the target language, rather than defaulting to English, can yield substantial gains, with improvements reaching up to 21.98%. Semantically similar ICL examples remain highly relevant for human-aligned LLMs, providing up to a 31.42% advantage over static examples; however, this gain shrinks when a machine translation model is used to translate the query from the target language. These results collectively suggest that while modern human-aligned LLMs clearly benefit from in-context information, the extent of these gains depends strongly on careful prompt design, the language's resource level, the language pairing, and the overall complexity of the task.

Copyright © 2025