The catchment feature model for multimodal language analysis
Conference Paper
Overview
Abstract
The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how can video and audio processing be bridged with the realities of human multimodal communication, and how may information from the different modes be fused? We discuss the need for such a model, motivate the CFM from psycholinguistic research, and present the model. In contrast to 'whole gesture' recognition, the CFM applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for CFM-based research, cite three concrete examples of Catchment Features (CFs), and propose new directions for multimodal research based on the model.