Opportunity
Capturing high-quality images is often hindered by poor environmental lighting, leading to under-exposed (too dark) or over-exposed (too bright) regions. These exposure problems cause irreversible loss of texture and structural detail, diminishing visual quality. While High Dynamic Range (HDR) imaging can solve this by combining multiple exposures, many real-world scenarios provide only a single Standard Dynamic Range (SDR) image. Reconstructing an HDR image from a single SDR image is highly challenging because the missing information in saturated (over- or under-exposed) areas must be hallucinated. Existing deep learning methods apply the same convolution kernels to the entire image, often creating unnatural artifacts: the detail recovery intended for saturated regions also degrades well-exposed regions. A more targeted, content-aware approach is needed.
Technology
This patent presents an Exposure-induced Network (EIN) for reconstructing an HDR image from a single SDR image. The core innovation is a three-branch architecture with progressive, confidence-guided detail recovery.
The system receives a single SDR input image and generates two gated images (Io for over-exposed, Iu for under-exposed areas) using a Gaussian function to isolate problem regions. These feed a three-branch network:
- Two parallel Exposure Gated Detail Recovering Branches (EGDRBs) process Io and Iu. Each uses an Exposure-guided Confidence Map Learning Module (ECMLM) to learn multi-scale confidence maps that guide the network to focus progressively on the most severely over- or under-exposed regions at deeper layers, recovering texture and structure only where needed.
- A Dynamic Range Expansion Branch (DREB) processes the original SDR image using a U-Net with Spatial-Channel Attention Modules (SCAMs) to expand the global dynamic range.
Finally, a Feature Fusion Module (FFM) adaptively merges features from the DREB (global expansion) and the two EGDRBs (local detail recovery) to reconstruct the final HDR image. The network is trained with a combined loss function comprising content, perceptual, and color terms.
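The exact form of the Gaussian gating is not given in this summary, but the idea of weighting pixels by how close they are to the intensity extremes can be sketched as follows. This is a minimal numpy illustration, assuming intensities normalized to [0, 1]; the function names, the choice of sigma, and the multiplicative gating are hypothetical, not the patented formulation.

```python
import numpy as np

def gaussian_gate(image, mu, sigma=0.2):
    """Weight map peaking where pixel intensity is near mu (assumed form)."""
    return np.exp(-((image - mu) ** 2) / (2.0 * sigma ** 2))

def exposure_gated_inputs(sdr, sigma=0.2):
    """Produce gated images emphasizing exposure extremes of an SDR image.

    sdr: float array with values in [0, 1].
    Returns (Io, Iu): the image gated toward over- and under-exposed
    pixels, respectively, so each recovery branch sees mainly the
    regions it is responsible for.
    """
    w_over = gaussian_gate(sdr, mu=1.0, sigma=sigma)   # large near white
    w_under = gaussian_gate(sdr, mu=0.0, sigma=sigma)  # large near black
    return w_over * sdr, w_under * sdr
```

A well-exposed mid-gray pixel receives a near-zero weight from both gates, so neither recovery branch spends capacity on it; only pixels near the saturation extremes pass through strongly.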
Advantages
- Targeted Detail Recovery: The EGDRB with learned confidence maps recovers missing details only in under/over-exposed regions, avoiding unnatural artifacts in normal areas.
- Progressive Refinement: Multi-scale confidence maps allow the network to first restore moderately exposed areas and then focus on extremely saturated regions as it deepens.
- Superior Visual Quality: Quantitative evaluation shows the invention outperforms existing methods (ExpandNet, HDRCNN, SingleHDR) across HDR-VDP-2, PSNR, SSIM, and FSIM metrics.
- Handles Both Exposure Extremes: Unlike prior work focusing only on over-exposure, the two dedicated EGDRBs address degradation in both over-exposed and under-exposed regions.
- Consistent Performance: Works reliably across different SDR images with varying exposure levels and camera response functions.
Applications
- Smartphone Photography: Enhancing single photos taken in challenging lighting (backlight, night scenes) by recovering lost details and expanding dynamic range.
- Security & Surveillance: Improving visibility of details in under-exposed or over-exposed frames from security footage.
- Digital Restoration: Restoring old or damaged photographs with exposure-related degradation.
- Computational Photography: As a core module for HDR pipelines in cameras and image-editing software (e.g., Photoshop).
- Automotive Imaging: Enhancing images from vehicle cameras for better perception in varying lighting conditions (tunnels, night driving).
