Image Quality Assessment (IQA) is crucial in fields such as digital imaging and telemedicine, where both fine-grained detail and overall scene composition shape human perception. Existing methods often prioritize either local or global features, yielding incomplete quality assessments. We introduce LoDaPro (Local Detail and Global Projection), a hybrid deep learning framework that integrates EfficientNet for precise local detail extraction with a Vision Transformer (ViT) for comprehensive global context modelling. The resulting balanced feature representation enables a more thorough, human-centred evaluation of image quality. Evaluated on the KonIQ-10k and TID2013 benchmark datasets, LoDaPro achieved a validation SRCC of 0.91 and PLCC of 0.92, exceeding the predictive accuracy of prominent IQA methods. These results demonstrate LoDaPro's capacity to learn the intricate relationship between image content and perceived quality, providing robust and generalizable performance across varied image quality contexts.
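The core idea, fusing a pooled local-detail feature vector with a pooled global-context feature vector before a quality-regression head, can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the feature dimensions, the simple concatenation fusion, and the linear head are all assumptions; in practice the local vector would come from an EfficientNet backbone and the global vector from a ViT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled feature sizes (not from the paper): EfficientNet-style
# local features and ViT-style global features, each reduced to a fixed-length
# vector by global pooling.
LOCAL_DIM, GLOBAL_DIM = 1280, 768


def fuse_and_score(local_feat, global_feat, w, b):
    """Concatenate local and global feature vectors, then apply a
    linear regression head to produce a scalar quality score."""
    fused = np.concatenate([local_feat, global_feat])
    return float(fused @ w + b)


# Stand-in features and randomly initialised head weights, purely for shape
# illustration; a trained model would learn w and b from MOS labels.
local_feat = rng.standard_normal(LOCAL_DIM)
global_feat = rng.standard_normal(GLOBAL_DIM)
w = rng.standard_normal(LOCAL_DIM + GLOBAL_DIM) / np.sqrt(LOCAL_DIM + GLOBAL_DIM)
b = 0.0

score = fuse_and_score(local_feat, global_feat, w, b)
```

Concatenation is only one possible fusion strategy; attention-based or weighted fusion would follow the same interface, replacing the `np.concatenate` step.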
Copyright © 2025