Breakthroughs in Information Technology
Vol 1 No 2 (2025)

Enhancing Zero-Shot Reasoning in Language Models Via Hybrid Instruction Marginalization

Shirmohammad Tavangari (University of British Columbia)
Aref Yelği (Istanbul Topkapi University)



Article Info

Publish Date
22 Dec 2025

Abstract

Objective: The study aims to enhance the reasoning abilities of Large Language Models (LLMs), which often remain shallow, inconsistent, and error-prone in complex multi-step tasks. It introduces the Hybrid Instruction Tuning Framework (HITF) to improve zero-shot reasoning through a task-aware hybrid selector that integrates both human-annotated and automatically generated examples.

Research Design & Methods: HITF strengthens reasoning performance using three main techniques: synthesizing transitional results, context-aware prompt merging, and recurrent optimization, all executed without model recalibration. The framework is empirically evaluated on rigorous cognitive benchmarks, including SuperGLUE, MMLU, GSM8K, and FermiQA. Component isolation tests examine the independent contributions of the example selector, output synthesizer, and instruction combiner. Statistical variability assessments further validate result reliability.

Findings: Results show that HITF consistently outperforms state-of-the-art methods across multiple metrics, demonstrating higher measurement accuracy, stronger argumentative quality, and deeper analytical processing. All core modules make significant, measurable contributions, supported by stable statistical outcomes.

Implications & Recommendations: The findings suggest that combining context-driven instruction selection with statistical consolidation techniques can substantially improve deductive reasoning in LLMs, particularly in data-scarce and example-free settings. Future research should explore HITF's integration with larger models and its application to real-world reasoning-intensive domains.

Contribution & Value Added: This study offers an innovative framework that enhances zero-shot reasoning without retraining. By merging hybrid instruction selection with iterative optimization strategies, HITF narrows the reasoning gap between LLMs and humans and provides a scalable, reliable approach to advancing high-level reasoning in modern language models.
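The abstract does not specify how HITF's "statistical consolidation" is implemented, but one common technique of this kind is marginalizing over multiple sampled reasoning chains by majority vote over their final answers (as in self-consistency decoding). The sketch below is purely illustrative of that general idea, not the authors' actual algorithm; the function name and the sample answers are hypothetical.

```python
from collections import Counter

def marginalize_answers(sampled_answers):
    """Consolidate final answers from several independently sampled
    reasoning chains by majority vote, a simple way of marginalizing
    over reasoning paths. (Illustrative only; not HITF's method.)"""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from five sampled chains:
samples = ["42", "42", "41", "42", "40"]
print(marginalize_answers(samples))  # prints "42"
```

In practice, each sample would come from decoding the same prompt with nonzero temperature and extracting the chain's final answer; the vote then suppresses errors made by any single reasoning path.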

Copyright © 2025






Journal Info

Abbrev

bit

Subject

Computer Science & IT; Control & Systems Engineering; Electrical & Electronics Engineering; Engineering; Industrial & Manufacturing Engineering

Description

BIT is an open-access journal, meaning that all content is freely available at no cost to the user or the institution. The scope of the journal includes empirical and theoretical articles relating to all aspects of information science, engineering, and technology. It focuses on the biggest ...