Khuralay, Moldamurat
Unknown Affiliation

Published : 1 Document
Articles

Found 1 Document

Large language models for pattern recognition in text data
Kosayakova, Aknur; Ildar, Kurmashev; Spada, Luigi La; Zeeshan, Nida; Bakyt, Makhabbat; Khuralay, Moldamurat; Abdirashev, Omirzak
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i6.pp5311-5332

Abstract

Large language models (LLMs) are widely deployed in settings where both reliability and efficiency matter. We present a calibrated, seed‑robust empirical comparison of a fine‑tuned encoder model (bidirectional encoder representations from transformers (BERT)‑base) and a decoder‑only in‑context model (generative pre-trained transformer (GPT)‑2 small) on the Stanford question answering dataset v2.0 (SQuAD v2.0) and two general language understanding evaluation (GLUE) tasks: multi-genre natural language inference (MNLI) and the Stanford sentiment treebank 2 (SST‑2). Beyond accuracy, we assess reliability (expected calibration error with reliability diagrams and confidence–coverage analysis) and efficiency (latency, memory, throughput) under matched conditions and three fixed seeds. BERT‑base yields higher accuracy and lower calibration error, while GPT‑2 narrows the gap under few‑shot prompting but remains more sensitive to prompt design and context length. Efficiency benchmarks show that decoder‑only prompting incurs near‑linear latency and memory growth with the number of k‑shot exemplars, whereas fine‑tuned encoders maintain a stable per‑example cost. These findings offer practical guidance on when to prefer fine‑tuning over prompting and demonstrate that reliability must be evaluated alongside accuracy for risk‑aware deployment.
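
The expected calibration error mentioned in the abstract is the standard binned gap between model confidence and accuracy. The Python sketch below is not taken from the paper; it is a minimal illustration, under the assumption that per-example confidences and correctness indicators are already available, of how a binned ECE is typically computed.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Binned ECE: weighted average of |accuracy - confidence| over
    # equal-width confidence bins, weighted by the fraction of examples per bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            in_bin |= (confidences == 0.0)  # include exact zeros in the first bin
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += (in_bin.sum() / n) * gap
    return ece

# Toy usage with hypothetical model outputs: confidence of the top prediction
# and whether that prediction was correct (1) or not (0).
conf = [0.95, 0.80, 0.60, 0.99, 0.70]
hit = [1, 1, 0, 1, 0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")

A reliability diagram plots the same per-bin accuracies against bin confidences, so the ECE corresponds to the bin-weighted deviation from the diagonal.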