Video compression is widely used to reduce the bandwidth and storage required to transmit and store videos. Most existing neural video compression approaches adopt the predictive residue-coding framework, which is suboptimal for removing redundancy across frames. Additionally, minimizing only the pixel-wise differences between the raw and decompressed frames does little to improve perceptual quality, and blocking artifacts degrade visual quality, especially near edges and textured areas. To address these problems, this research proposes a scaler-enhanced deformable attention graph neural network (SEDA-GNN). SEDA-GNN reduces inter-frame redundancy through a deformable attention mechanism that efficiently captures motion and structural changes. Graph neural networks (GNNs) model complex temporal dynamics and capture dependencies between frames, enabling highly efficient video encoding, while a constrained directional enhancement filter (CDEF) reduces blocking artifacts through directional, constrained filtering that preserves sharp edges, improving the visual quality of the compressed video. SEDA-GNN achieved a Bjontegaard delta bit rate (BD-BR) reduction of 2.372% on the Joint Collaborative Team on Video Coding (JCT-VC) database and 3.230% on the Ultra Video Group (UVG) dataset, demonstrating significant gains over invertible neural networks (INNs).
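The reported gains are expressed in BD-BR, which compares two rate-distortion curves. As a point of reference only (this sketch is not from the paper), the standard Bjontegaard computation fits a cubic polynomial of log-bitrate as a function of PSNR for each codec and averages the difference of the fits over the overlapping PSNR range; the function name and sample values below are illustrative assumptions:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit rate: average bitrate difference (%) between
    two rate-distortion curves over their overlapping PSNR range.
    A negative value means the test codec saves bitrate at equal quality."""
    lr_a = np.log(np.asarray(rate_anchor, dtype=float))
    lr_t = np.log(np.asarray(rate_test, dtype=float))
    # Fit third-order polynomials log(rate) = f(PSNR) for each codec.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Overlapping PSNR interval of the two curves.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate each fit over the overlap and average the difference.
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg_diff = ((np.polyval(it, hi) - np.polyval(it, lo))
                - (np.polyval(ia, hi) - np.polyval(ia, lo))) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

For example, a test codec that needs 10% less bitrate than the anchor at every quality point yields a BD-BR of -10%.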
Copyright © 2026