

Google’s Deblur AI: Sharpify your Images

Last Updated on July 15, 2023 by Editorial Team

Author(s): Muhammad Arham

Originally published on Towards AI.

Say goodbye to blurry images. Google’s new technique unlocks the true potential of your phone’s camera.

Image by Author

Introduction

In our ever-evolving digital age, where capturing and sharing moments through photography has become an integral part of our lives, the frustration of ending up with blurry images can be disheartening. Whether it’s a cherished family photo, a breathtaking landscape, or a snapshot of a special occasion, blurry images can diminish the visual impact and rob us of the clarity we desire.

But fear not. Google’s new methodology provides a way to capture clear images straight from your phone. Most phones nowadays come with multiple cameras. Using a single simultaneous capture from two different cameras, Google applies learnable post-processing to refocus blurry images. By capturing the same scene with a Wide Angle (W) and an Ultra-Wide Angle (UW) camera at the same time, the method combines both views to obtain sharper results.

Architecture

Image from Paper

The DFNet model receives the wide-angle and ultra-wide-angle shots of the same scene as input, along with their defocus maps. The input and target defocus maps represent the blurriness of the original and output images, where each pixel value is proportional to the blurriness of the corresponding image pixel.
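As a rough illustration of what a defocus map encodes (this is a generic thin-lens sketch, not the paper's preprocessing, and every parameter value below is an invented assumption), per-pixel blur is commonly modeled as a circle-of-confusion diameter that is zero at the focus plane and grows with distance from it:

```python
import numpy as np

# Generic thin-lens circle-of-confusion (CoC) model, a common way to turn a
# depth map into a defocus map. Illustration only, not the paper's pipeline;
# f (focal length), N (f-number), and s (focus distance) are invented values,
# all in meters.
def coc_diameter(depth, f=0.05, N=2.0, s=2.0):
    """CoC diameter per scene depth: 0 at the focus plane, growing away from it."""
    aperture = f / N
    return aperture * (f / (s - f)) * np.abs(depth - s) / depth

depth = np.array([1.0, 2.0, 4.0])  # near, in focus, far (meters)
blur = coc_diameter(depth)         # blur[1] is 0: that pixel is in focus
```

A map of such values, one per pixel, is what the network receives to describe how blurry the input is and how blurry the output should be.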

As the wide-angle and ultra-wide-angle images are extremely different, with varying depths of field, symmetry, focus, and colors, combining them is not a trivial task. Google therefore introduces a learning-based methodology to stitch these images together.

The model takes the wide-angle image as the base image, while the ultra-wide image serves as a reference for high-frequency details. The model blends both images, guided by the provided defocus maps, so that the output is a deblurred image.

At test time, one can easily change the target defocus map to deblur different parts of the image as required.

Image from Paper

As shown, to generate a fully sharp image, we can set the target defocus map to all zeros, which directs the model to deblur every part of the image. Alternatively, specific portions of the image can be deblurred according to the defocus map provided at test time.
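As a toy illustration of that control (this is not DFNet, just a hand-rolled blend with invented values), the target defocus map can be read as a per-pixel request for sharpness, with zeros marking regions that should be fully deblurred:

```python
import numpy as np

# Toy illustration (not the DFNet model): treat the target defocus map as a
# per-pixel mask and pull sharp detail from the ultra-wide (UW) reference
# wherever the map requests zero blur, keeping the wide (W) pixel elsewhere.
def blend_by_target_map(w_img, uw_detail, target_map):
    """Where target_map == 0, take the sharp reference; elsewhere keep W."""
    weight = (target_map == 0).astype(float)  # 1 where full sharpness is wanted
    return weight * uw_detail + (1 - weight) * w_img

h, w = 4, 4
w_img = np.full((h, w), 0.5)       # stand-in for the blurry wide capture
uw_detail = np.full((h, w), 0.9)   # stand-in for the sharp UW reference
target = np.zeros((h, w))          # all-zero map -> deblur everywhere...
target[:, :2] = 1.0                # ...except the left half, kept blurry

out = blend_by_target_map(w_img, uw_detail, target)
```

The real model learns this blending rather than applying a hard mask, but the interface idea is the same: the target map tells it, pixel by pixel, where to sharpen.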

Results

Achieving a PSNR of 29.78 and an SSIM of 0.898, the post-processing method outperforms previous methods in both qualitative and quantitative evaluation.
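For readers unfamiliar with the metric, PSNR is a standard log-scale measure of reconstruction error; a minimal sketch of its computation follows (SSIM is a separate, structure-aware metric not shown here):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)

reference = np.zeros((8, 8))
estimate = np.full((8, 8), 0.1)   # uniform per-pixel error of 0.1
print(psnr(reference, estimate))  # ≈ 20 dB
```

On this scale, the reported 29.78 dB corresponds to a noticeably lower average pixel error than earlier methods.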

Image from Paper
Image from Paper

The figures compare the results of previous state-of-the-art methods with Google’s DFNet, which attains better sharpness and detail than its predecessors.

The model has potential uses in the domains of image refocus, depth of field (DoF) control and rerendering, and deblurring.

Limitations

Need for Multiple Cameras

The model relies on Wide and Ultra-wide cameras, with the ultra-wide capture providing a reference for high-frequency details. The two images need different depths of field, focused on different parts of the scene; captures from identical cameras cannot replicate these results. The method therefore depends heavily on dual-camera phones, and image restoration is not possible from a single image input.

Dataset Generation

Widely available datasets of scenes captured with both wide and ultra-wide angle cameras are scarce. It is also impossible to generate such datasets synthetically, since adding Gaussian blur to images cannot replicate the noise of real-world scenarios. To reduce the domain gap, the authors captured 100 image stacks for this method.

Dependency on Pre-Existing Methods

Preprocessing is required to generate the defocus maps, along with depth and occlusion masks. It relies on pre-existing Optical Flow and Stereo Depth algorithms that are known to generate severe artifacts, which can degrade the output images.

Conclusion

Blurriness Begone. Put an end to fuzzy images with Google’s recent advancement in image restoration. If incorporated into the AI behind phone cameras, it could give us a picture-perfect world every day, right through our phones.

Consider reading the paper for a detailed understanding.

Paper: https://defocus-control.github.io/static/dc2_paper.pdf

Follow me if you liked this article and want to learn more about machine learning and recent advancements in the research community.


Published via Towards AI
