Posted on 2025-01-29, 14:28, authored by Liu Chunxiao, Zelong Wang, Phil Birch, Xun Wang
Images captured in low-light environments often suffer from significant degradation. However, most existing Retinex-based methods require an additional decomposition network and overlook the degradation amplified by the illumination adjustment process, consuming substantial computational resources while achieving only average performance. To address these issues, this paper proposes a more efficient Retinex-based approach, named RetinexMac, that can be trained without an additional decomposition network or regularization functions.
RetinexMac first employs an illumination coefficient estimation network to estimate a transform map that brightens the global illumination and enhances the local contrast of the input images; a multiscale degradation estimation network then suppresses the degradation amplified by the illumination adjustment. To estimate this degradation accurately, a mixed convolution and attention module integrates global and local spatial information.
This module is also shown to significantly improve the performance of previous Retinex-based methods. Extensive experiments on several representative datasets show that RetinexMac achieves state-of-the-art (SOTA) performance, a visually pleasing appearance in terms of illumination and detail, and high computational efficiency.
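To make the described two-stage pipeline concrete, below is a minimal PyTorch-style sketch: an illumination coefficient estimation stage that lights up the input in a Retinex fashion, followed by a multiscale degradation estimator built around a block that mixes convolution (local) and attention (global) features. All module names, layer widths, and specific operator choices here are illustrative assumptions, not the authors' actual RetinexMac implementation.

```python
# A minimal sketch of a two-stage Retinex-style enhancement pipeline.
# All module names, channel counts, and operator choices are assumptions
# for illustration only; the paper's RetinexMac architecture may differ.
import torch
import torch.nn as nn


class MixedConvAttention(nn.Module):
    """Mixes local spatial cues (depthwise conv) with global ones (self-attention)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, HW, C)
        glob, _ = self.attn(tokens, tokens, tokens)          # global spatial mixing
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1)) + x


class IlluminationEstimator(nn.Module):
    """Predicts a per-pixel illumination transform map from the low-light input."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class DegradationEstimator(nn.Module):
    """Multiscale network estimating the noise amplified by illumination adjustment."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.mix = MixedConvAttention(channels)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        coarse = self.mix(self.down(feat))        # conv/attention mixing at low res
        feat = feat + self.up(coarse)             # multiscale fusion
        return self.tail(feat)                    # estimated degradation map


class RetinexStylePipeline(nn.Module):
    """Stage 1: brighten via an estimated illumination map (reflectance = input / illumination).
    Stage 2: subtract the degradation amplified by that adjustment."""
    def __init__(self):
        super().__init__()
        self.illum = IlluminationEstimator()
        self.degrade = DegradationEstimator()

    def forward(self, low):
        illum_map = self.illum(low).clamp(min=1e-3)   # avoid division by zero
        lit = low / illum_map                         # light up the input image
        return (lit - self.degrade(lit)).clamp(0, 1)  # suppress amplified degradation


if __name__ == "__main__":
    model = RetinexStylePipeline()
    out = model(torch.rand(1, 3, 128, 128))           # dummy low-light image
    print(out.shape)                                  # torch.Size([1, 3, 128, 128])
```

The sketch follows the Retinex view in which the brightened image is the input divided by an estimated illumination map; the second stage then exists because this division also amplifies noise, which the abstract identifies as the degradation to be suppressed.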
History
Publication status
Published
File Version
Accepted version
Journal
IEEE Transactions on Circuits and Systems for Video Technology (Print)