Spatially-varying autofocus to produce an optical all-in-focus image. Left: A conventional photo with a regular lens, where only objects at a single focal plane appear sharp. Right: An all-in-focus photo captured through spatially-varying autofocusing. To achieve this, we combine (i) a programmable lens with spatially-varying control over focus, and (ii) a spatially-varying autofocus algorithm to drive the focus of this lens. Note that this is an optically-captured image of a real scene with no post-capture processing. Credit: Carnegie Mellon College of Engineering
Imagine snapping a photo where every detail, near and far, is perfectly sharp—from the flower petal right in front of you to the distant trees on the horizon. For over a century, camera designers have dreamed of achieving that level of clarity.
In a breakthrough that could transform photography, microscopy, and even smartphone cameras, researchers at Carnegie Mellon University have developed a new kind of lens that can bring an entire scene into sharp focus at once—no matter how far away or close different parts of the scene are.
The team, consisting of Yingsi Qin, an electrical and computer engineering Ph.D. student; Aswin Sankaranarayanan, professor of electrical and computer engineering; and Matthew O’Toole, associate professor of computer science and robotics, recently presented their findings at the 2025 International Conference on Computer Vision, where they received a Best Paper Honorable Mention.
Traditional camera lenses can only bring one flat layer of a scene into perfect focus at a time. Anything in front of or behind that layer turns soft and blurry. Narrowing the aperture extends the sharp region, but it also dims the image and introduces a different kind of blur caused by diffraction.
“We’re asking the question, ‘What if a lens didn’t have to focus on just one plane at all?’” says Qin. “What if it could bend its focus to match the shape of the world in front of it?”
The researchers developed a “computational lens,” a hybrid of optics and algorithms, that can adjust its focus differently for every part of a scene. The system builds on a design known as a Lohmann lens, which pairs two cubic optical elements that slide laterally against each other to change focal power. By combining this setup with a phase-only spatial light modulator, a device that controls how light bends at each pixel, the researchers were able to make different parts of the image focus at different depths simultaneously.
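The press release does not spell out the optics, but the textbook Alvarez–Lohmann relation conveys the idea; the symbols below (α for the cubic strength, d for the lateral half-shift) are illustrative rather than taken from the paper. Each element carries a cubic phase profile, and sliding the pair apart leaves behind a purely quadratic, lens-like phase:

\phi_{\pm}(x, y) = \pm\,\alpha\left(\frac{x^{3}}{3} + x y^{2}\right), \qquad \phi_{+}(x - d,\, y) + \phi_{-}(x + d,\, y) = -2\alpha d\,(x^{2} + y^{2}) - \frac{2}{3}\alpha d^{3}.

The quadratic term acts as a thin lens whose optical power scales with the shift d. Letting that shift, or equivalently the phase pattern the spatial light modulator adds, vary across the image is what gives each region of the scene its own focus setting.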
The system uses two autofocus methods. The first is contrast-detection autofocus (CDAF), which divides the image into regions called superpixels. Each region independently finds the focus setting that maximizes its sharpness. The second is phase-detection autofocus (PDAF), which uses a dual-pixel sensor to detect not just whether something is in focus, but which direction to adjust. This makes it faster and better suited for moving scenes—the team achieved 21 frames per second with their modified sensor.
“Together, they let the camera decide which parts of the image should be sharp—essentially giving each pixel its own tiny, adjustable lens,” explains O’Toole.
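As a concrete, simplified illustration of the contrast-detection step, the following Python sketch sweeps a set of focus settings, scores each superpixel with a Laplacian-based contrast metric, and keeps the sharpest setting per region. The capture callback and the uniform superpixel grid are assumptions made for illustration, not details of the team's implementation.

import numpy as np

def sharpness(patch):
    # Variance of a 5-point Laplacian response: a common contrast metric.
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

def cdaf_focus_map(capture, focus_settings, grid=(8, 8)):
    # `capture(f)` is a hypothetical callback: drive the whole lens to focus
    # setting f and return one grayscale frame as a 2-D array.
    gy, gx = grid
    best = np.zeros(grid)            # best focus setting found per superpixel
    score = np.full(grid, -np.inf)   # sharpest contrast seen so far
    for f in focus_settings:
        frame = np.asarray(capture(f), dtype=np.float64)
        h, w = frame.shape
        for i in range(gy):
            for j in range(gx):
                patch = frame[i * h // gy:(i + 1) * h // gy,
                              j * w // gx:(j + 1) * w // gx]
                s = sharpness(patch)
                if s > score[i, j]:
                    score[i, j] = s
                    best[i, j] = f
    return best  # per-region focus map to drive the programmable lens

Phase detection removes the need for this sweep: the dual-pixel sensor tells each region which way to adjust, which is why it is fast enough for the 21-frames-per-second operation described above.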
Beyond its obvious appeal to photographers, the technology could have sweeping applications. Microscopes could capture every layer of a biological sample in focus at once. Autonomous vehicles might see their surroundings with unprecedented clarity. Even augmented and virtual reality systems could benefit, using similar optics to create more lifelike depth perception.
“Our system represents a novel category of optical design,” says Sankaranarayanan, “one that could fundamentally change how cameras see the world.”
More information: Yingsi Qin et al., Spatially-Varying Autofocus, International Conference on Computer Vision (2025)
Citation: A computational camera lens that can focus on everything all at once (2025, November 5) retrieved 5 November 2025 from https://techxplore.com/news/2025-11-camera-lens-focus.html