Anyone involved in spherical photography knows that the bane of shooting and post is the dreaded p-word: parallax, the fact that objects appear to move more when closer to the camera and less when farther away. The same is true for our own eyes, and it is intrinsic to any multi-camera system. Although parallax makes dynamic stitching workflows a dart throw and compositing sessions tedious, the information it encodes is extremely valuable: it tells us about scene depth. Whether you use synchronous multi-camera stereo capture or a single moving camera sensing the scene over time, both approaches yield a wealth of information about the z-depth of the world around you.
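The parallax-to-depth relationship above can be made concrete with the classic rectified-stereo formula, z = f·b/d: the larger the disparity (parallax), the closer the object. This is a minimal sketch under pinhole assumptions; the parameter names and values are illustrative, not from any particular rig.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole-stereo relation z = f * b / d.

    Nearby objects shift more between the two views (large disparity),
    so they come out with a small depth; distant objects barely shift.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A 10 px shift seen with a 1000 px focal length and a 10 cm baseline:
print(depth_from_disparity(10.0, 1000.0, 0.10))  # 10.0 metres
```

The same relation is what a structure-from-motion solver exploits, except the "baseline" there comes from camera motion between frames rather than a second physical camera.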
“So what about spherical depth reconstruction?” Ultra-wide and fisheye lenses introduce a special problem: lens distortion warps the apparent motion of objects, so the same physical displacement maps to a different pixel displacement depending on where it lands in the frame. Unless we remove the lens deformation first, the reconstruction algorithm will misjudge the depth of the object.
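To see why distortion confuses a depth solver, compare an ideal equidistant fisheye (image radius r = f·θ) with the pinhole model (r = f·tan θ) that stereo math assumes. This is a sketch under that idealized fisheye assumption; the function name and focal length are illustrative.

```python
import math

def fisheye_to_rectilinear_radius(r_fisheye: float, f: float) -> float:
    """Map an equidistant-fisheye image radius back to the radius a
    rectilinear (pinhole) lens would record for the same incoming ray."""
    theta = r_fisheye / f        # equidistant model: radius grows linearly with angle
    return f * math.tan(theta)   # pinhole model: radius grows with tan(angle)

# Near the optical centre the two models nearly agree...
print(fisheye_to_rectilinear_radius(100.0, 1000.0))  # ~100.3 px
# ...but toward the edge the fisheye heavily compresses the image,
# so a fixed pixel shift corresponds to very different ray angles:
print(fisheye_to_rectilinear_radius(900.0, 1000.0))  # ~1260.2 px
```

The growing gap between input and output radius is exactly the nonuniform stretching that must be undone (or modeled) before pixel disparities can be read as depth.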
Method 1: A rather intuitive method that I stumbled on out of necessity is using six discrete cameras for the equirectangular capture. This leaves you with six image paths that carry far less distortion (roughly 120° fov each) and do not require special polar reconstruction software.
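The six-camera idea maps naturally onto cube faces: each camera points along one axis (+X, -X, +Y, -Y, +Z, -Z), and any viewing ray is handled by the camera whose axis it is closest to. A 120° fov per camera gives generous overlap beyond the 90° each face strictly needs. This routing step, sketched below with illustrative face labels, is how an equirectangular pixel gets assigned to one of the six source images.

```python
def dominant_face(x: float, y: float, z: float) -> str:
    """Pick the cube-face camera whose optical axis is closest to the
    viewing ray (x, y, z), by comparing absolute components."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+X" if x > 0 else "-X"
    if ay >= az:
        return "+Y" if y > 0 else "-Y"
    return "+Z" if z > 0 else "-Z"

# A ray mostly along the positive X axis lands on the +X camera:
print(dominant_face(0.9, 0.1, 0.2))   # "+X"
# Straight down (negative Y) lands on the -Y camera:
print(dominant_face(0.0, -1.0, 0.0))  # "-Y"
```

Because each face image is a plain rectilinear projection, standard stereo and structure-from-motion tools work on it directly, which is the whole appeal of this method.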
“So how can we use it for art?”
Think about the sheer amount of data Google has from spherical photographs of the world's roads. Or imaging satellites circling the Earth hundreds of times a week, capturing image data at rapid, regular intervals.
We are dealing with outward-looking capture, i.e. convex focal cones. This means the area we are trying to describe can be very large, with long distances and complex spatial distributions.
Spherical Structure from Motion.
Multi-view 360 Stereo.