This research develops and evaluates an adaptive parameter-based fixed point iterative algorithm within generalized metric vector spaces to improve stability and convergence speed in optimization problems. The study extends fixed point theory beyond classical metric spaces by incorporating a more flexible structure that accommodates non-Euclidean systems, commonly found in machine learning, data analysis, and dynamic systems optimization. The proposed algorithm modifies the conventional fixed point iteration by introducing an adaptive parameter that is dynamically adjusted from the previous iterates, governed by a control constant. A numerical case study demonstrates the algorithm's effectiveness, comparing it with the classical Picard iteration guaranteed by the Banach fixed point theorem. Results show that the adaptive method requires fewer iterations to achieve convergence while maintaining higher stability, significantly outperforming the standard approach. The findings suggest that incorporating adaptive parameters in fixed point iterations enhances computational efficiency, particularly in non-convex optimization and the training of deep learning models. Future research will explore the algorithm's robustness in high-dimensional spaces, its integration with hybrid optimization techniques, and applications in uncertain and noisy environments.
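Since the abstract's exact update rule is elided here, the general idea can be illustrated with a minimal sketch: the classical iteration x_{n+1} = T(x_n) versus a relaxed update x_{n+1} = x_n + α_n (T(x_n) − x_n), where the step α_n is re-estimated each iteration from the two most recent iterates (a secant-type estimate of the local contraction factor). The adaptive rule, the function names, and all constants below are illustrative assumptions, not the paper's scheme.

```python
import math

def banach_iteration(T, x0, tol=1e-10, max_iter=1000):
    """Classical Picard/Banach iteration: x_{n+1} = T(x_n)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

def adaptive_iteration(T, x0, tol=1e-10, max_iter=1000):
    """Relaxed iteration x_{n+1} = x_n + a_n * (T(x_n) - x_n),
    with a_n chosen adaptively from the previous two iterates.
    (Illustrative rule only; the paper's update is not reproduced here.)"""
    x_prev = x0
    x = T(x0)            # one plain step to seed the secant estimate
    fx_prev = x          # equals T(x_prev)
    for n in range(2, max_iter + 1):
        fx = T(x)
        if abs(fx - x) < tol:
            return x, n
        if x == x_prev:  # degenerate secant; stop adapting
            return x, n
        # secant estimate of the local contraction factor L ~ T'(x*)
        L = (fx - fx_prev) / (x - x_prev)
        alpha = 1.0 / (1.0 - L) if abs(1.0 - L) > 1e-14 else 1.0
        x_prev, fx_prev = x, fx
        x = x + alpha * (fx - x)
    return x, max_iter

# T(x) = cos(x) is a contraction near its fixed point x* ~ 0.739
T = math.cos
x_banach, n_banach = banach_iteration(T, 0.5)
x_adapt, n_adapt = adaptive_iteration(T, 0.5)
```

On this toy contraction the adaptive variant reaches the tolerance in far fewer iterations than the plain Picard scheme, mirroring the comparison the abstract reports.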