
Found 2 Documents

A Hybrid NAKA-FA-PSO Algorithm with Nakagami Distribution for Multi-Objective Portfolio Optimization Aref Yelği; Shirmohammad Tavangari
Start-up and Financial Technology Vol. 1 No. 2 (2025)
Publisher : Start-up and Financial Technology

DOI: 10.70764/gdpu-sft.2025.1(2)-07

Abstract

Objective: This study aims to optimize portfolio allocation under cardinality constraints by maximizing expected return and minimizing risk, while addressing the NP-complete nature of the problem. Research Design & Methods: A hybrid multi-objective optimization approach is proposed by combining Particle Swarm Optimization and Firefly Algorithm (PSO-FA) with Nakagami distribution to preserve solution diversity and achieve optimal results. The algorithms were applied to the OR-library dataset and executed 30 times for analysis and evaluation. Findings: The experimental results demonstrate that the proposed algorithm outperforms existing methods in terms of accuracy, diversity, and stability. On the P5 test sample, the reported metrics were 2.76E-07 IGD, 7.43E-08 GD, and 2.94E-03 HV, with consistent improvements also observed in other test samples. Implications & Recommendations: The findings suggest that the PSO-FA with Nakagami distribution can serve as an effective alternative for solving cardinality-constrained portfolio optimization problems, particularly in tackling NP-complete challenges in finance. Future research may extend its application to larger datasets and dynamic market conditions. Contribution & Value Added: This study contributes by introducing a novel hybrid optimization framework (PSO-FA and Nakagami distribution) that enhances solution quality in portfolio optimization. The value added lies in its ability to balance return, risk, and solution diversity, offering new insights beyond existing approaches in the literature.
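The abstract's core idea — a PSO-FA hybrid whose random perturbation is drawn from a Nakagami distribution to preserve diversity — can be sketched in a minimal, illustrative form. This is not the authors' implementation: the function names, coefficients (`c1`, `c2`, `beta`), and the sign-flipped noise term are all assumptions for illustration, and a Nakagami draw is obtained as the square root of a Gamma variate, a standard sampling identity.

```python
import math
import random

def nakagami_sample(m: float, omega: float) -> float:
    """Nakagami(m, omega) draw: the square root of a
    Gamma(shape=m, scale=omega/m) variate (a standard sampling identity)."""
    return math.sqrt(random.gammavariate(m, omega / m))

def hybrid_update(x, pbest, gbest, brighter,
                  c1=1.5, c2=1.5, beta=1.0, m=1.0, omega=0.01):
    """One illustrative position update per dimension: PSO-style pulls toward
    the personal best (pbest) and global best (gbest), a firefly-style pull
    toward a brighter neighbour, plus a Nakagami-distributed diversity step
    in place of the usual uniform or Gaussian perturbation."""
    out = []
    for xi, pi, gi, bi in zip(x, pbest, gbest, brighter):
        step = (c1 * random.random() * (pi - xi)    # cognitive component
                + c2 * random.random() * (gi - xi)  # social component
                + beta * (bi - xi))                 # firefly attraction
        noise = random.choice((-1.0, 1.0)) * nakagami_sample(m, omega)
        out.append(xi + step + noise)
    return out
```

In a full cardinality-constrained solver this update would be followed by a repair step that projects each position back onto the feasible set (weights summing to one, at most K nonzero assets); that step is omitted here.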
Enhancing Zero-Shot Reasoning in Language Models Via Hybrid Instruction Marginalization Shirmohammad Tavangari; Aref Yelği
Breakthroughs Information Technology Vol 1 No 2 (2025)
Publisher : Generate Digital Publishing

DOI: 10.70764/gdpu-bit.2025.1(2)-01

Abstract

Objective: The study aims to enhance the reasoning abilities of Large Language Models (LLMs), which often remain shallow, inconsistent, and error-prone in complex multi-step tasks. It introduces the Hybrid Instruction Tuning Framework (HITF) to improve zero-shot reasoning through a task-aware hybrid selector that integrates both human-annotated and automatically generated examples. Research Design & Methods: HITF strengthens reasoning performance using three main techniques: synthesizing transitional results, context-aware prompt merging, and recurrent optimization, all executed without model recalibration. The framework is empirically evaluated using rigorous cognitive benchmarks, including SuperGLUE, MMLU, GSM8K, and FermiQA. Component isolation tests examine the independent contribution of the example selector, output synthesizer, and instruction combiner. Statistical variability assessments further validate result reliability. Findings: Results show that HITF consistently outperforms state-of-the-art methods across multiple metrics, demonstrating higher measurement accuracy, stronger argumentative quality, and deeper analytical processing. All core modules exhibit significant and measurable contributions, supported by stable statistical outcomes. Implications & Recommendations: Findings suggest that combining context-driven instruction selection with statistical consolidation techniques can substantially improve deductive reasoning in LLMs, particularly in data-scarce and example-free settings. Future research should explore HITF’s integration with larger models and its application in real-world reasoning-intensive domains. Contribution & Value Added: This study offers an innovative framework that enhances zero-shot reasoning without retraining. 
By merging hybrid instruction selection and iterative optimization strategies, HITF narrows the reasoning gap between LLMs and humans and provides a scalable, reliable approach for advancing high-level reasoning in modern language models.
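The "instruction marginalization" idea in the title can be illustrated generically: instead of trusting a single prompt phrasing, score each candidate answer under several instruction templates and keep the answer with the best average score. This sketch is not HITF itself — the function, the `score` callback, and the templates are hypothetical stand-ins for whatever model-scoring interface an actual system would use.

```python
from typing import Callable, Sequence

def marginalize_over_instructions(
    question: str,
    candidates: Sequence[str],
    templates: Sequence[str],
    score: Callable[[str, str], float],
) -> str:
    """Return the candidate answer whose score, averaged over all
    instruction templates, is highest -- i.e. marginalize the model's
    preference over prompt phrasings rather than relying on any single one."""
    best, best_avg = None, float("-inf")
    for ans in candidates:
        avg = sum(score(t.format(q=question), ans)
                  for t in templates) / len(templates)
        if avg > best_avg:
            best, best_avg = ans, avg
    return best
```

In practice `score` would be a language-model log-probability of the answer given the formatted prompt; averaging over templates damps the sensitivity of zero-shot answers to any one instruction's wording.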