Posted by Neal Wadhwa, Software Engineer and Yinda Zhang, Research Scientist, Google Research

Portrait Mode on Pixel phones is a camera feature that allows anyone to take professional-looking shallow depth-of-field images. Launched on the Pixel 2 and then improved on the Pixel 3 by using machine learning to estimate depth from the camera's dual-pixel auto-focus system, Portrait Mode draws the viewer's attention to the subject by blurring out the background. A critical component of this process is knowing how far objects are from the camera, i.e., the depth, so that we know what to keep sharp and what to blur. With the Pixel 4, we have made two more big improvements to this feature, leveraging both the Pixel 4's dual cameras and its dual-pixel auto-focus system to improve depth estimation, allowing users to take great-looking Portrait Mode shots at near and far distances. We have also improved our bokeh, making it more closely match that of a professional SLR camera.

Pixel 4's Portrait Mode allows for portrait shots at both near and far distances and has SLR-like background blur. (Photo credits: Alain Saal-Dalma and Mike Milne)

A Short Recap
The Pixel 2 and 3 used the camera's dual-pixel auto-focus system to estimate depth. Dual-pixels work by splitting every pixel in half, such that each half-pixel sees a different half of the main lens' aperture. By reading out each of these half-pixel images separately, you get two slightly different views of the scene. While these views come from a single camera with one lens, it is as if they originate from a virtual pair of cameras placed on either side of the main lens' aperture. Alternating between these views, the subject stays in the same place while the background appears to move vertically.

The dual-pixel views of the bulb have much more parallax than the views of the man because the bulb is much closer to the camera.

This motion is called parallax and its magnitude depends on depth. One can estimate parallax, and thus depth, by finding corresponding pixels between the views. Because parallax decreases with object distance, it is easier to estimate depth for near objects like the bulb. Parallax also depends on the length of the stereo baseline, that is, the distance between the cameras (or the virtual cameras in the case of dual-pixels). The dual-pixels' viewpoints have a baseline of less than 1 mm because they are contained inside a single camera's lens, which is why it's hard to estimate the depth of far scenes with them and why the two views of the man look almost identical.

Dual Cameras are Complementary to Dual-Pixels
The Pixel 4's wide and telephoto cameras are 13 mm apart, much greater than the dual-pixel baseline, and so the larger parallax makes it easier to estimate the depth of far objects. In the images below, the parallax between the dual-pixel views is barely visible, while it is obvious between the dual-camera views.

The dual-pixel views have only a subtle vertical parallax in the background, while the dual-camera views have much greater horizontal parallax. While this makes it easier to estimate depth in the background, some pixels to the man's right are visible in only the primary camera's view, making it difficult to estimate depth there.

Even with dual cameras, the information gathered by the dual-pixels is still useful. The larger the baseline, the more pixels are visible in one view without a corresponding pixel in the other. For example, the background pixels immediately to the man's right in the primary camera's image have no corresponding pixel in the secondary camera's image. Thus, it is not possible to measure parallax to estimate depth for these pixels when using only dual cameras. However, these pixels can still be seen by the dual-pixel views, enabling a better estimate of depth in these regions.

Another reason to use both inputs is the aperture problem, described in our previous blog post, which makes it hard to estimate the depth of vertical lines when the stereo baseline is also vertical (or of horizontal lines when the baseline is horizontal). On the Pixel 4, the dual-pixel and dual-camera baselines are perpendicular, allowing us to estimate depth for lines of any orientation. Having this complementary information allows us to estimate the depth of far objects and reduce depth errors for all scenes.

We showed last year how machine learning can be used to estimate depth from dual-pixels. With Portrait Mode on the Pixel 4, we extended this approach to estimate depth from both dual-pixels and dual cameras, using TensorFlow to train a convolutional neural network.
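To make the relationship between baseline, depth, and parallax concrete, here is a minimal Python sketch using the textbook pinhole-stereo relation, disparity (in pixels) = focal length (in pixels) × baseline / depth. The focal length value is an illustrative assumption for a phone camera, not a Pixel 4 specification:

```python
# Toy illustration of the stereo relation: disparity = f * b / Z.
# focal_px ~ 3000 is an assumed phone-camera focal length in pixels.

def disparity_px(baseline_m: float, depth_m: float,
                 focal_px: float = 3000.0) -> float:
    """Parallax in pixels between two views of a point depth_m meters away."""
    return focal_px * baseline_m / depth_m

for baseline_m, label in [(0.001, "dual-pixel (~1 mm)"),
                          (0.013, "dual-camera (13 mm)")]:
    for depth_m in [0.5, 2.0, 10.0]:
        print(f"{label:20s} depth {depth_m:4.1f} m -> "
              f"{disparity_px(baseline_m, depth_m):6.2f} px")
```

At 10 m the dual-pixel views shift by a fraction of a pixel while the dual-camera views shift by several pixels, which is why the wider baseline helps so much for far objects.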
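"Finding corresponding pixels between the views" can be pictured with a toy block matcher: slide a small patch from one view along the other and keep the offset with the lowest sum of squared differences. This classical sketch is only a stand-in for illustration; the matching in Portrait Mode is learned, not hand-coded:

```python
import numpy as np

def disparity_at(left_row: np.ndarray, right_row: np.ndarray,
                 x: int, patch: int = 5, max_disp: int = 32) -> int:
    """Toy 1-D block matching: return the pixel offset (disparity) at
    column x by comparing a patch from the left scanline against
    shifted patches from the right scanline."""
    half = patch // 2
    ref = left_row[x - half : x + half + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        if x - d - half < 0:  # candidate patch would fall off the image
            break
        cand = right_row[x - d - half : x - d + half + 1].astype(np.float32)
        cost = float(np.sum((ref - cand) ** 2))  # sum of squared differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Larger disparities correspond to nearer objects, so a disparity map produced this way is, up to calibration, an inverse-depth map.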
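Finally, here is a rough sketch of the kind of two-input convolutional network one could train with TensorFlow on dual-pixel and dual-camera pairs. All shapes, layer sizes, and the loss are assumptions made for illustration; this is not the actual Portrait Mode network:

```python
import tensorflow as tf

# Assumed inputs: each pair stacked as a 2-channel image at a common size.
dp_in = tf.keras.Input(shape=(256, 256, 2), name="dual_pixel")
cam_in = tf.keras.Input(shape=(256, 256, 2), name="dual_camera")

def encode(x, prefix):
    # Shared pattern: three stride-2 convolutions, 256 -> 32 resolution.
    for i, filters in enumerate([32, 64, 128]):
        x = tf.keras.layers.Conv2D(filters, 3, strides=2, padding="same",
                                   activation="relu",
                                   name=f"{prefix}_conv{i}")(x)
    return x

# Encode each input separately, then fuse the complementary features.
fused = tf.keras.layers.Concatenate()([encode(dp_in, "dp"),
                                       encode(cam_in, "cam")])
x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(fused)
for filters in [64, 32, 16]:  # decode back to full resolution
    x = tf.keras.layers.UpSampling2D()(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same",
                               activation="relu")(x)
depth = tf.keras.layers.Conv2D(1, 3, padding="same", name="depth")(x)

model = tf.keras.Model([dp_in, cam_in], depth)
model.compile(optimizer="adam", loss="mae")  # loss choice is an assumption
```

Encoding each signal in its own branch before fusing mirrors the complementarity described above: either branch can carry the depth estimate in regions where the other is uninformative.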