Biggers, Keith Edward (2011-05). Inference-based Geometric Modeling for the Generation of Complex Cluttered Virtual Environments. Doctoral Dissertation.

abstract

  • As the use of simulation increases across many different application domains, the need for high-fidelity three-dimensional virtual representations of real-world environments has never been greater. This need has driven the research and development of both faster and easier methodologies for creating such representations. In this research, we present two different inference-based geometric modeling techniques that support the automatic construction of complex cluttered environments. The first method we present is a surface reconstruction-based approach that is capable of reconstructing solid models from a point cloud capture of a cluttered environment. Our algorithm is capable of identifying objects of interest amongst a cluttered scene, and reconstructing complete representations of these objects even in the presence of occluded surfaces. This approach incorporates a predictive modeling framework that uses a set of user-provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local-to-global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several synthetic and real-world datasets containing heavy clutter and occlusion. The second method we present is a generative modeling-based approach that can construct a wide variety of diverse models based on user-provided templates. This technique leverages an inference-based construction algorithm for developing solid models from these template objects. This algorithm samples and extracts surface patches from the input models, and develops a Petri net structure that is used by our algorithm for properly fitting these patches in a consistent fashion. Our approach uses this generated structure, along with a defined parameterization (either user-defined through a simple sketch-based interface or algorithmically defined through various methods), to automatically construct objects of varying sizes and configurations. These variations can include arbitrary articulation, and repetition and interchanging of parts sampled from the input models. Finally, we affirm our motivation by showing an application of these two approaches. We demonstrate how the constructed environments can be easily used within a physically-based simulation, capable of supporting many different application domains.
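
The first method identifies objects in a cluttered scan by fitting surface patches taken from user-provided prior models. As a reading aid only, the following minimal sketch (plain Python with NumPy; every name in it, such as select_patches and fit_threshold, is invented for this example) shows one simple way prior patches could be scored against a partial point cloud with a nearest-neighbour residual. It is a stand-in illustration under strong simplifying assumptions, not the dissertation's reconstruction algorithm, which additionally handles alignment, occlusion, and local-to-global growth.

    # Hypothetical sketch only -- not the dissertation's algorithm.  It assumes
    # prior patches are already expressed in scene coordinates and uses a plain
    # nearest-neighbour residual as the "fitting rule"; all names are invented.
    import numpy as np

    def patch_residual(patch: np.ndarray, cloud: np.ndarray) -> float:
        """Mean distance from each patch point to its nearest observed point."""
        # (P, 1, 3) - (1, N, 3) -> (P, N) pairwise distances; fine at toy sizes.
        d = np.linalg.norm(patch[:, None, :] - cloud[None, :, :], axis=2)
        return float(d.min(axis=1).mean())

    def select_patches(cloud: np.ndarray,
                       prior_patches: list[np.ndarray],
                       fit_threshold: float = 0.05) -> list[int]:
        """Indices of prior patches that plausibly explain part of the scan."""
        return [i for i, patch in enumerate(prior_patches)
                if patch_residual(patch, cloud) < fit_threshold]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)

        # Synthetic "scan": a noisy, partially occluded unit square at z = 0.
        xy = rng.uniform(0.0, 1.0, size=(400, 2))
        xy = xy[xy[:, 0] < 0.6]                       # occlude part of the surface
        cloud = np.column_stack([xy, np.zeros(len(xy))])
        cloud += rng.normal(scale=0.01, size=cloud.shape)

        # Two candidate prior patches: one on the scanned plane, one offset in z.
        grid = np.stack(np.meshgrid(np.linspace(0.0, 0.5, 5),
                                    np.linspace(0.0, 0.5, 5)), axis=-1).reshape(-1, 2)
        on_plane = np.column_stack([grid, np.zeros(len(grid))])
        off_plane = on_plane + np.array([0.0, 0.0, 0.5])

        print(select_patches(cloud, [on_plane, off_plane]))   # expected: [0]

A single distance threshold is the only "rule" in this toy; the rules for fitting high-quality patches described in the abstract are considerably richer.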
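
The second method builds a Petri net over sampled patches so that patches are only attached in consistent configurations. The short sketch below (again plain Python; the places and transitions such as corner_site and attach_leg are invented) illustrates the general role such a net can play: a transition that attaches a patch is only enabled when tokens for its required attachment sites are present. It is not the structure generated by the dissertation's algorithm, only a toy of the gating idea.

    # Hypothetical sketch only -- a toy Petri net, not the structure generated by
    # the dissertation's algorithm.  Places and transitions are invented: a
    # "table top" patch exposes four corner sites, and a leg patch may only be
    # attached while a free corner site (token) remains.
    from dataclasses import dataclass, field

    @dataclass
    class Transition:
        name: str                # e.g. "attach_leg"
        inputs: dict[str, int]   # tokens consumed from each input place
        outputs: dict[str, int]  # tokens produced in each output place

    @dataclass
    class PetriNet:
        marking: dict[str, int]  # tokens currently in each place
        transitions: list[Transition] = field(default_factory=list)

        def enabled(self, t: Transition) -> bool:
            return all(self.marking.get(p, 0) >= n for p, n in t.inputs.items())

        def fire(self, t: Transition) -> None:
            if not self.enabled(t):
                raise ValueError(f"{t.name} is not enabled")
            for p, n in t.inputs.items():
                self.marking[p] -= n
            for p, n in t.outputs.items():
                self.marking[p] = self.marking.get(p, 0) + n

    net = PetriNet(
        marking={"template_top": 1, "template_leg": 4},
        transitions=[
            Transition("attach_top", {"template_top": 1}, {"corner_site": 4}),
            Transition("attach_leg", {"template_leg": 1, "corner_site": 1}, {"leg_placed": 1}),
        ],
    )

    if __name__ == "__main__":
        order, progress = [], True
        while progress:                 # fire any enabled transition until none remain
            progress = False
            for t in net.transitions:
                if net.enabled(t):
                    net.fire(t)
                    order.append(t.name)
                    progress = True
                    break
        print(order)        # ['attach_top', 'attach_leg', 'attach_leg', 'attach_leg', 'attach_leg']
        print(net.marking)  # all legs placed; no free corner sites left

In this toy the driver simply fires any enabled transition greedily; an actual assembly procedure would instead choose among the enabled transitions according to the desired size, articulation, and configuration described by the parameterization.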

publication date

  • May 2011