
Apple research reveals how AI could transform extreme low-light iPhone photography

Apple researchers have demonstrated how integrating an AI diffusion model directly into the camera pipeline could dramatically improve extreme low-light photos by recovering detail from raw sensor data.

December 21, 2025 / 14:33 IST

Apple researchers have outlined a new approach to low-light photography that could significantly improve how iPhones capture images in near-darkness. The study explores how artificial intelligence can be embedded directly into the camera’s image signal processor, allowing the system to recover detail from raw sensor data that would otherwise be lost in extreme lighting conditions.

Low-light photography has long been one of the hardest problems in mobile imaging. When a camera sensor does not receive enough light, images often end up filled with digital noise, muddy colours and smeared textures. To compensate, smartphone makers rely heavily on computational photography techniques that brighten scenes and suppress noise. While effective in many cases, these methods are frequently criticised for producing overly smooth results, where fine textures disappear and complex details turn into flat, oil-painting-like surfaces.

What the research focuses on

The new research tackles this limitation by rethinking where AI should be applied in the imaging pipeline. Instead of using artificial intelligence only after the image has already been processed, Apple’s researchers propose integrating an AI model directly into the camera’s core processing workflow. The model they developed, known as DarkDiff, is designed to enhance extremely dark raw images before critical detail is lost.

At the heart of the approach is a diffusion-based image model that has been repurposed to work alongside traditional camera processing. Diffusion models are typically associated with image generation, where they learn to create realistic visuals by gradually refining noise into structure. In this case, the researchers adapted a pre-trained diffusion model to understand what visual detail is likely to exist in dark regions of a photograph, based on the broader context of the scene.
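As a rough illustration of the diffusion principle described above (a generic sketch, not Apple's actual DarkDiff model), the forward process blends an image toward Gaussian noise, and generation reverses it: a model that can predict the noise that was mixed in can invert the step and recover the underlying structure.

```python
import numpy as np

# Generic illustration of the diffusion idea (not Apple's DarkDiff):
# the forward process mixes an image with Gaussian noise; a model that
# can predict that noise can invert the step and recover structure.

rng = np.random.default_rng(0)
image = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy 4x4 "photo"
noise = rng.standard_normal(image.shape)

t, num_steps = 60, 100
alpha = 1.0 - t / num_steps                        # signal fraction left at step t
noisy = alpha * image + (1.0 - alpha) * noise      # forward (noising) process

# Reverse step with an oracle noise predictor: subtract the predicted
# noise, then rescale. A trained network only approximates `noise`, and
# the real process takes many small steps instead of one exact jump.
recovered = (noisy - (1.0 - alpha) * noise) / alpha
print(np.allclose(recovered, image))               # True
```

In a real diffusion model the noise estimate is learned, which is what lets the system fill plausible structure into regions where the sensor recorded almost nothing.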

Rather than replacing the camera’s existing image signal processor, DarkDiff works with it. The ISP still performs essential early-stage operations such as white balance correction and demosaicing, which converts raw sensor data into a usable colour image. Once this linear RGB image is produced, DarkDiff steps in to denoise the image and generate the final output with improved clarity, colour accuracy and texture retention.
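The division of labour described above can be sketched as follows. Every stage here is a crude stand-in invented for illustration: real demosaicing interpolates a Bayer mosaic (omitted here), and the diffusion denoiser is replaced by a simple box blur purely to show where an AI denoiser would sit in the pipeline.

```python
import numpy as np

def white_balance(rgb, gains=(1.8, 1.0, 1.4)):
    """Per-channel gain correction; a simplified stand-in for the ISP stage."""
    return rgb * np.asarray(gains)

def box_blur(img, k=3):
    """Stand-in denoiser. DarkDiff would apply a diffusion model here instead."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def process(linear_rgb):
    """ISP stages first (white balance; demosaicing omitted), AI denoising last."""
    balanced = white_balance(linear_rgb)
    return box_blur(balanced)

# A flat grey scene buried in simulated sensor noise:
rng = np.random.default_rng(1)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32, 3))
out = process(noisy)
```

The key ordering point is that the denoiser receives a linear RGB image that the ISP has already produced, rather than operating on the finished, tone-mapped photo.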

Key challenges addressed

One of the key challenges with AI-based enhancement is hallucination, where the system invents details that were never present. To address this, the researchers introduced a mechanism that focuses attention on local image patches. This helps preserve real structures in the scene and reduces the risk of the AI changing the content entirely. The model also uses a technique known as classifier-free guidance, which controls how strongly it follows the original input versus its learned visual patterns. Lower guidance results in smoother images, while higher guidance brings out sharper textures but increases the risk of artefacts.
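Classifier-free guidance is commonly implemented as a simple extrapolation between two predictions from the same model, one conditioned on the input and one not; the guidance weight below plays the role the article describes. This is the standard textbook formulation with made-up numbers, and the exact form used in DarkDiff may differ.

```python
import numpy as np

def cfg(eps_uncond, eps_cond, guidance):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the conditional one. guidance=1 follows the condition
    exactly; larger values amplify it (sharper textures, more artefact risk);
    smaller values fall back toward the model's generic learned patterns."""
    return eps_uncond + guidance * (eps_cond - eps_uncond)

u = np.array([0.2, 0.4])   # unconditional prediction (illustrative numbers)
c = np.array([0.5, 0.1])   # prediction conditioned on the dark input frame

print(cfg(u, c, 0.0))      # equals u: the condition is ignored
print(cfg(u, c, 1.0))      # equals c (up to rounding): follows it exactly
print(cfg(u, c, 2.0))      # overshoots past c: the condition is amplified
```

The guidance weight is what gives the tuning knob described above: too low and the output drifts toward smooth, generic imagery; too high and amplified conditioning starts to introduce artefacts.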

To evaluate the system, the researchers tested DarkDiff on real-world photos captured in extremely low-light environments. These images were taken with very short exposure times, sometimes just a fraction of a second, and compared against reference images shot on a tripod with exposures hundreds of times longer. Across multiple benchmarks, DarkDiff produced images that were closer to the reference shots than existing raw enhancement methods, particularly in terms of perceived detail and colour fidelity.

Despite the impressive results, the researchers are clear about the limitations. The processing required for DarkDiff is far more computationally demanding than traditional image pipelines. Running it locally on a smartphone would be slow and could have a significant impact on battery life. As a result, any real-world implementation would likely require cloud-based processing or substantial advances in on-device AI hardware. The study also notes challenges with accurately rendering text in low-light scenes, especially for non-English scripts.


Ayush Mukherjee
first published: Dec 21, 2025 02:32 pm
