A Framework for Segmentation Using Physical Models of Image Formation
CARNEGIE-MELLON UNIV PITTSBURGH PA ROBOTICS INST
Most approaches to computer image segmentation group sets of pixels according to visible features of an image such as edges, color, brightness, and curvature. Such approaches exploit specialized object properties to obtain satisfactory groupings, which can force those techniques to be domain specific. Furthermore, they do not provide a physical explanation for the image, nor do they group regions that have a single physical structure yet differing visible features. This paper presents a new approach to segmentation using explicit hypotheses about the physics that creates images. We propose an initial segmentation that identifies image regions exhibiting constant color, but possibly varying intensity. For each region, hypotheses are proposed that specifically model the illumination, reflectance, and shape of the 3-D patch which caused that region. An image region may have many hypotheses simultaneously, and each hypothesis represents a distinct, plausible explanation for the color and intensity variation of that patch. Hypotheses for adjacent patches can be compared for similarity and merged when appropriate, resulting in more global hypotheses for grouping elementary regions. This approach to segmentation has the potential to provide a list of possible explanations for a given image, to group together regions with coherent physical properties, and to provide a framework for applying specific operators such as shape-from-shading, color constancy, and roughness evaluation as part of the overall process of low-level vision. However, many profound unsolved problems are raised in determining the most plausible explanations for a given image region. In this paper, we present the approach, work through an example by hand, and discuss the implications of this approach for physics-based vision.
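The hypothesis-and-merge pipeline described above can be sketched schematically. The following is a minimal illustration, not the paper's actual method: the hypothesis vocabulary (illumination, reflectance, and shape labels), the similarity test (exact label agreement), and the greedy union of adjacent regions are all simplifying assumptions made here to show the control flow of merging elementary regions under compatible physical explanations.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Hypothesis:
    """One plausible physical explanation for an image region.
    The label sets are hypothetical stand-ins for the paper's models."""
    illumination: str  # e.g. "uniform" or "directional"
    reflectance: str   # e.g. "matte" or "glossy"
    shape: str         # e.g. "planar" or "curved"

def compatible(a: Hypothesis, b: Hypothesis) -> bool:
    # Simplistic similarity test: hypotheses merge only when they posit
    # identical physical processes (the paper's comparison is richer).
    return a == b

def merge_regions(hypotheses_by_region, adjacency):
    """Greedily union adjacent regions that share at least one
    compatible hypothesis, yielding groups of elementary regions
    covered by a more global hypothesis."""
    # Union-find over region identifiers.
    parent = {r: r for r in hypotheses_by_region}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    for r1, r2 in adjacency:
        if any(compatible(h1, h2)
               for h1, h2 in product(hypotheses_by_region[r1],
                                     hypotheses_by_region[r2])):
            parent[find(r1)] = find(r2)

    groups = {}
    for r in hypotheses_by_region:
        groups.setdefault(find(r), set()).add(r)
    return list(groups.values())
```

For example, two adjacent regions that both admit a "uniform illumination, matte, curved surface" hypothesis would be grouped, while a neighboring region explainable only as glossy and planar would remain separate.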