Digital cameras typically require lenses to focus incoming light on an image sensor. While technology has continually improved, allowing for more compact camera systems, they are nonetheless limited by physics. A lens can only be so small, and the distance between the lens and the sensor only so short. This is where ‘lensless’ cameras come in. Unburdened by the physical limitations of optical design, lensless cameras can be much smaller. Professor Masahiro Yamaguchi of the Tokyo Institute of Technology, a co-author of a research paper about a new approach to lensless camera design, said, ‘Without the limitations of a lens, the lensless camera could be ultra-miniature, which could allow new applications that are beyond our imagination.’
The idea of a lensless camera itself isn’t new. We’ve seen it before, including a single-pixel lensless camera in 2013 and, more recently, a much smaller lensless camera in 2017. A lensless camera comprises an image sensor and a thin mask in front of it that optically encodes information from the scene; producing a detailed image then requires mathematical reconstruction. While a traditional camera uses the glass inside its lens to achieve focus and immediately produce a sharp image, a lensless camera instead encodes the incoming light and must computationally reconstruct a blurry, seemingly out-of-focus capture into something useful.
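A toy model makes the encoding step concrete. A common simplification (not the paper's exact system) treats the mask as convolving the scene with a point spread function, so the raw capture is a scrambled mixture of the whole scene; the scene, mask, and sizes below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a single bright square on a dark background.
scene = np.zeros((32, 32))
scene[8:12, 8:12] = 1.0

# Random binary pattern standing in for the real coded mask's response.
psf = rng.integers(0, 2, (32, 32)).astype(float)

# Under the convolutional approximation, the sensor records the scene
# convolved with the mask's point spread function: every scene point is
# spread across the sensor, so the raw capture looks nothing like the scene.
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
```

The `measurement` array is what the sensor actually records; turning it back into `scene` is the reconstruction problem the article describes.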
A group of researchers at Tokyo Tech, including Professor Yamaguchi, has created a new reconstruction technique that promises improved image quality and significantly faster processing, two issues that have held back some other lensless cameras.
Earlier lensless cameras, like the one developed by Bell Labs in 2013 and Caltech’s camera in 2017, controlled the light hitting the image sensor and relied upon sophisticated measurements of how light interacts with the specific physical mask and sensor in order to reconstruct an image. Without a way to focus light, a lensless camera captures a blurry image, which an algorithm must reconstruct into a sharper one. By understanding how light interacts with the thin mask in front of the image sensor, an algorithm can decode the light information and reconstruct a focused scene. However, the decoding process is extremely challenging and resource-intensive. Beyond requiring time, good image quality demands a near-perfect physical model: if the algorithm is based on an inaccurate approximation of how light interacts with the mask and sensor, the whole camera system falters.
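A minimal sketch of this model-based decoding, using Tikhonov-regularized deconvolution in the frequency domain (a standard textbook approach, not the specific algorithm of any camera mentioned here; the scene and mask are toy stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.zeros((32, 32))
scene[10:14, 16:20] = 1.0
psf = rng.random((32, 32))          # the assumed-known mask response

# Forward model: sensor measurement = scene convolved with the PSF.
H = np.fft.fft2(psf)
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Model-based decoding inverts the forward model. The small constant eps
# (Tikhonov regularization) keeps the division stable where |H| is tiny.
eps = 1e-3
estimate = np.real(np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))
```

The inversion works here because the code reuses the exact `psf` from the forward model; swap in even a slightly different PSF and the estimate degrades sharply, which is the model-mismatch weakness described above.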
Instead of using a model-based decoding approach, the Tokyo Tech team developed a reconstruction method that relies upon deep learning. Existing deep learning methods based on convolutional neural networks (CNNs) aren’t efficient enough to solve the problem. As outlined by Phys.org, the issue is that a “CNN processes the image based on the relationships of neighboring ‘local’ pixels, whereas lensless optics transform local information in the scene into overlapping ‘global’ information on all the pixels of the image sensor, through a property called ‘multiplexing.’”
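The local-versus-global mismatch can be shown numerically. In the same toy convolutional model used above (illustrative, not the paper's system), a single scene point lands on hundreds of sensor pixels, while one convolutional layer only relates each output pixel to a handful of neighbors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
psf = rng.integers(0, 2, (n, n)).astype(float)  # toy mask pattern

# A single point of light in the scene...
point = np.zeros((n, n))
point[5, 7] = 1.0

# ...is multiplexed across roughly half of all sensor pixels ('global').
sensor = np.real(np.fft.ifft2(np.fft.fft2(point) * np.fft.fft2(psf)))
spread = np.count_nonzero(np.abs(sensor) > 1e-9)

# A single 3x3 convolution, by contrast, sees only 9 neighbors per output
# pixel; a plain CNN needs many stacked layers before its receptive field
# covers the sensor, which is why it handles multiplexed data inefficiently.
local_receptive_field = 3 * 3
```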
The new research relies upon a novel machine learning algorithm. It’s based upon a technique called Vision Transformer (ViT), and it promises improved global reasoning. As Phys.org writes, “The novelty of the algorithm lies in the structure of the multistage transformer blocks with overlapped ‘patchify’ modules. This allows it to efficiently learn image features in a hierarchical representation. Consequently, the proposed method can well address the multiplexing property and avoid the limitations of conventional CNN-based deep learning, allowing better image reconstruction.”
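The ‘overlapped patchify’ idea can be illustrated in a few lines: patches are extracted with a stride smaller than the patch size, so neighboring patches share pixels and no scene information falls on a hard patch boundary. The patch size, stride, and function name below are illustrative choices, not the paper's values:

```python
import numpy as np

def overlapped_patchify(img, patch=8, stride=4):
    """Extract overlapping patches; stride < patch makes neighbors share pixels."""
    h, w = img.shape
    patches = [img[i:i + patch, j:j + patch]
               for i in range(0, h - patch + 1, stride)
               for j in range(0, w - patch + 1, stride)]
    # In a ViT-style model, each patch would then be flattened and linearly
    # embedded into a token before the transformer blocks process it.
    return np.stack(patches)

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
tokens = overlapped_patchify(img)   # 7 positions per axis -> 49 patches
```

With non-overlapping patches (stride equal to patch size) the same image would yield only 16 patches; the overlap is what lets successive stages build a hierarchical representation without losing cross-boundary detail.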
The proposed method, which pairs a fully connected neural network with a transformer, promises improved results: reconstruction errors are reduced and computing times are shorter. The team believes the method can enable real-time capture of high-quality images, something that has eluded previous lensless cameras.
The full research paper, ‘Image reconstruction with transformer for mask-based lensless imaging,’ is available to paid users at Optica. The paper’s authors are Xiuxi Pan, Xiao Chen, Saori Takeyama and Masahiro Yamaguchi. You can read the abstract below. The transformer referenced in the abstract is the ViT:
A mask-based lensless camera optically encodes the scene with a thin mask and reconstructs the image afterward. The improvement of image reconstruction is one of the most important subjects in lensless imaging. Conventional model-based reconstruction approaches, which leverage knowledge of the physical system, are susceptible to imperfect system modeling. Reconstruction with a pure data-driven deep neural network (DNN) avoids this limitation, thereby having potential to provide a better reconstruction quality. However, existing pure DNN reconstruction approaches for lensless imaging do not provide a better result than model-based approaches. We reveal that the multiplexing property in lensless optics makes global features essential in understanding the optically encoded pattern. Additionally, all existing DNN reconstruction approaches apply fully convolutional networks (FCNs) which are not efficient in global feature reasoning. With this analysis, for the first time to the best of our knowledge, a fully connected neural network with a transformer for image reconstruction is proposed. The proposed architecture is better in global feature reasoning, and hence enhances the reconstruction. The superiority of the proposed architecture is verified by comparing with the model-based and FCN-based approaches in an optical experiment.
This article comes from DP Review and can be read on the original site.