Balancing perceptual quality with computational efficiency remains challenging in speech enhancement systems. This research presents an adaptive filtering framework that integrates psychoacoustic modeling with multi-stage noise reduction. The architecture combines spectral subtraction and Wiener filtering, modulated by Bark-scale perceptual weighting derived from critical band theory. Unlike conventional approaches, the system exploits frequency-dependent auditory sensitivity to concentrate processing on perceptually salient regions while reducing the representation of masked components. Experimental validation across diverse acoustic conditions yielded an average SNR improvement of 4.2 dB over baseline techniques, with a simultaneous 31.7% file size reduction through psychoacoustically guided quantization. PESQ assessment produced a mean opinion score of 4.23, confirming excellent quality preservation. Convergence analysis revealed 23% faster adaptation, attributed to perceptually weighted cost functions. Robustness testing across white noise, babble, and environmental sounds demonstrated consistent performance with minimal variance, indicating strong generalization capability. These findings show that incorporating human auditory principles simultaneously improves perceptual quality, computational efficiency, and system adaptability, which is critical for bandwidth-constrained applications in mobile communications, streaming platforms, and assistive devices.
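To make the core idea concrete, the following is a minimal sketch of Bark-scale-weighted spectral subtraction. It is illustrative only, not the paper's implementation: the Hz-to-Bark conversion uses Traunmüller's standard approximation, while `perceptual_weights` and the exponential decay across Bark bands are hypothetical choices standing in for the paper's perceptual weighting, and the function names are invented for this example.

```python
import numpy as np

def hz_to_bark(f_hz):
    # Traunmüller's approximation of the Bark critical-band scale
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def perceptual_weights(freqs_hz, max_boost=2.0):
    # Hypothetical frequency-dependent over-subtraction factor:
    # heavier weighting at low Bark bands, decaying toward high bands.
    z = hz_to_bark(freqs_hz)
    return 1.0 + (max_boost - 1.0) * np.exp(-z / 8.0)

def bark_weighted_subtraction(noisy_mag, noise_mag, freqs_hz, floor=0.05):
    # Power-domain spectral subtraction with Bark-weighted over-subtraction;
    # a spectral floor prevents negative power (musical-noise suppression).
    alpha = perceptual_weights(freqs_hz)
    clean_pow = noisy_mag**2 - alpha * noise_mag**2
    clean_pow = np.maximum(clean_pow, (floor * noisy_mag) ** 2)
    return np.sqrt(clean_pow)
```

In a full pipeline this per-bin subtraction would sit before a Wiener post-filter, with `noise_mag` estimated from non-speech frames; the Bark weighting concentrates the subtraction where auditory sensitivity is highest.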
Copyright © 2026