Mobile Computational Imaging Systems for Appearance Modeling Based Surface Shape Recovery

Surface appearance conveys the visual impression of a surface. In visual art, artists use the appearance of their artworks to express their mental state and philosophy, and researchers in the cultural heritage community have long applied different analysis approaches to interpret artworks. In computer graphics and computational imaging, surface appearance modeling and shape recovery have been central challenges: by combining optimized hardware (cameras and illuminants) with reconstruction algorithms, these methods estimate surface geometry and material properties from measured surface appearance. Unfortunately, the cultural heritage community rarely exploits these powerful tools because of the sophisticated systems they require. In this thesis, we introduce novel computational imaging frameworks built on commodity hardware such as mobile phones and tablets, digital single-lens reflex (DSLR) cameras, and liquid-crystal displays (LCDs).

Appearance-modeling-based shape recovery techniques such as Photometric Stereo (PS) and Phase Measuring Deflectometry (PMD) are widely used in academic research and demanding industrial applications, for example human digitization for visual effects and industrial inspection, where they deliver high-quality geometry and material properties. However, conventional appearance-modeling-based methods require complicated hardware systems and constrained environments, which limits their use in other applications.

The original PS image formation model assumes the light source is infinitely far away. To obtain high-quality shape recovery, conventional PS techniques therefore typically require a large light dome or calibration equipment (e.g., a mirror ball) and an open operating space in which the lighting can be carefully controlled. To relax these constraints and improve robustness, we propose a novel near-light PS framework that leverages photogrammetry and a portable dual-camera system to improve lighting calibration. An uncalibrated photometric stereo setup is augmented with a synchronized secondary witness camera co-located with a point light source. By recovering the witness camera's position for each exposure with photogrammetry, we estimate the precise 3D location of the light source relative to the photometric stereo camera. We show a significant improvement in both light source position estimation and normal map recovery compared to previous uncalibrated photometric stereo techniques. In addition, the proposed configuration improves surface shape recovery by jointly incorporating the corrected photometric stereo surface normals and a sparse 3D point cloud from photogrammetry.
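
To make the distant-light assumption and the near-light correction concrete, the following is a minimal NumPy sketch, not the thesis implementation: classical Lambertian photometric stereo solved per pixel in least squares, followed by the per-pixel light directions and inverse-square falloff that a finite point light (such as a phone flash or a light co-located with a witness camera) introduces. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Classical distant-light photometric stereo under the Lambertian model.

    images:     (K, H, W) grayscale intensities, one image per light
    light_dirs: (K, 3) unit light directions (the infinitely-far-light assumption)
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1).astype(np.float64)   # (K, H*W)
    L = np.asarray(light_dirs, dtype=np.float64)   # (K, 3)

    # Lambertian model: I_k = albedo * (L_k . n). Ignoring shadows and
    # specularities, solve L @ b = I for b = albedo * n in least squares.
    b, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, H*W)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-8)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

def near_light_terms(light_pos, surface_points):
    """Per-pixel light directions and inverse-square falloff for a point light
    at a finite 3D position -- the near-light case the framework addresses.

    light_pos:      (3,) estimated light position in camera coordinates
    surface_points: (H, W, 3) rough surface point estimates (e.g. a fitted
                    plane or a densified sparse photogrammetry point cloud)
    """
    light_pos = np.asarray(light_pos, dtype=np.float64)
    to_light = light_pos[None, None, :] - surface_points
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    dirs = to_light / np.maximum(dist, 1e-8)       # replaces one shared L_k
    falloff = 1.0 / np.maximum(dist[..., 0] ** 2, 1e-8)
    return dirs, falloff
```

In the near-light setting each pixel has its own light direction and attenuation, so the shared light matrix of the distant-light solve is replaced by per-pixel quantities, typically iterated together with the evolving surface estimate.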
Although the proposed PS framework improves the robustness and quality of the recovered surface shape, we further improve portability and accessibility by introducing mobile shape-from-shifting (SfS): a simple, low-cost, and streamlined photometric stereo framework for scanning planar surfaces with a consumer mobile device coupled to an inexpensive add-on component. Our free-form mobile SfS framework relaxes the rigorous hardware and other complex requirements inherent to conventional 3D scanning tools. A sequence of photos is captured with the on-board camera and flash of the mobile device and used to reconstruct high-quality normal maps with near-light photometric stereo algorithms; the results are of comparable quality to conventional photometric stereo. We demonstrate 3D surface reconstructions with SfS on different materials and at different scales. Moreover, mobile SfS can be used "in the wild," so that objects can be scanned in their natural environment without being transported to a laboratory setting.

PS techniques handle most common surface materials well and provide high-quality surface geometry, but they fail on highly reflective surfaces that violate the Lambertian reflectance assumption underlying PS. To cover specular materials, we introduce a system that exploits the screen and front-facing camera of a mobile device to perform three-dimensional deflectometry-based surface modeling. In contrast to current mobile deflectometry systems, our method can capture surfaces with large normal variation over a wide field of view (FoV). We achieve this by applying automated multiview panoramic stitching algorithms to produce a large-FoV normal map from a hand-guided capture process, without the need for external tracking systems such as robot arms or fiducials.

Lastly, we propose an inverse-rendering-based approach to reflective surface shape reconstruction. Conventional multiview deflectometry techniques require multiview normal map stitching and normal field integration to obtain 3D surface geometry. Instead of trying to contain the noise introduced by these two procedures, we adopt a differentiable renderer to optimize the surface geometry directly from the appearance measurements, which produces high-quality 3D surface information.
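
As a companion illustration for the deflectometry pipeline, here is a minimal NumPy sketch, again illustrative rather than the thesis implementation, of standard N-step phase-shifting decoding, which recovers the wrapped phase of the displayed sinusoid at every camera pixel, and of the mirror-reflection rule that converts a decoded screen correspondence into a surface normal. The function names, and the assumption that surface points, camera position, and screen geometry are already known, are hypothetical simplifications.

```python
import numpy as np

def decode_phase(images):
    """Standard N-step phase-shifting decode used in PMD / deflectometry.

    images: (N, H, W) captures of a sinusoidal screen pattern whose phase is
            shifted by 2*pi*k/N in frame k:  I_k = A + B*cos(phi + 2*pi*k/N).
    Returns the wrapped phase phi per pixel; together with the known pattern
    period, phi localizes the screen point each camera pixel observes.
    """
    N = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(N) / N
    stack = images.astype(np.float64)
    num = -np.tensordot(np.sin(deltas), stack, axes=1)  # -sum_k I_k sin(delta_k)
    den = np.tensordot(np.cos(deltas), stack, axes=1)   #  sum_k I_k cos(delta_k)
    return np.arctan2(num, den)

def normal_from_reflection(surface_point, camera_pos, screen_point):
    """Mirror-reflection constraint: for a specular surface, the normal bisects
    the directions from the surface point to the camera and to the screen point
    it reflects, so it is the normalized sum of the two unit vectors."""
    p = np.asarray(surface_point, dtype=np.float64)
    to_cam = np.asarray(camera_pos, dtype=np.float64) - p
    to_screen = np.asarray(screen_point, dtype=np.float64) - p
    to_cam /= np.linalg.norm(to_cam)
    to_screen /= np.linalg.norm(to_screen)
    n = to_cam + to_screen
    return n / np.linalg.norm(n)
```

In the inverse-rendering variant described above, a forward model of this reflection is instead made differentiable, so the surface is optimized directly against the captured pattern images rather than by stitching and integrating per-view normal maps.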
