DETECTION OF SIGN-LANGUAGE CONTENT IN VIDEO THROUGH POLAR MOTION PROFILES (Conference Paper)

abstract

  • Locating sign language (SL) videos on video-sharing sites (e.g., YouTube) is challenging because search engines generally do not use the visual content of videos for indexing; indexing is based solely on textual content (e.g., title, description, metadata). As a result, untagged SL videos do not appear in search results. In this paper, we present and evaluate a classification approach that detects SL videos from their visual content. The approach uses an ensemble of Haar-based face detectors to define regions of interest (ROI) and a background model to segment movements within the ROI. The two-dimensional (2D) distribution of foreground pixels in the ROI is then reduced to two one-dimensional (1D) polar motion profiles by a polar-coordinate transformation and classified with an SVM. When evaluated on a dataset of user-contributed YouTube videos, the approach achieves 81% precision and 94% recall. © 2014 IEEE.
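
  As an illustration only (not the authors' implementation), the sketch below shows one way the 2D foreground distribution in a face-centered ROI could be reduced to two 1D polar motion profiles for an SVM: foreground pixel coordinates are mapped to a normalized radius and an angle about the ROI center and then histogrammed. The function name, bin counts, choice of polar origin, and normalization are assumptions, since the abstract does not specify these details.

    import numpy as np

    def polar_motion_profiles(fg_mask, n_radial_bins=32, n_angular_bins=36):
        # fg_mask: 2D binary array of foreground (moving) pixels inside the ROI
        h, w = fg_mask.shape
        ys, xs = np.nonzero(fg_mask)                  # foreground pixel coordinates
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0         # polar origin: ROI center (assumed)
        r = np.hypot(xs - cx, ys - cy)
        r = r / np.hypot(cx, cy)                      # normalize radius to [0, 1]
        theta = np.arctan2(ys - cy, xs - cx)          # angle in [-pi, pi]
        radial, _ = np.histogram(r, bins=n_radial_bins, range=(0.0, 1.0))
        angular, _ = np.histogram(theta, bins=n_angular_bins, range=(-np.pi, np.pi))
        # L1-normalize each profile so ROIs of different sizes are comparable
        radial = radial / max(radial.sum(), 1)
        angular = angular / max(angular.sum(), 1)
        return np.concatenate([radial, angular])      # feature vector for a classifier

  Per-frame features like these could then be aggregated over a video and classified with a standard SVM (e.g., scikit-learn's sklearn.svm.SVC); the aggregation scheme is likewise an assumption, as the abstract does not describe the exact feature encoding.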

name of conference

  • 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

published proceedings

  • 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

author list (cited authors)

  • Karappa, V., Monteiro, C., Shipman, F. M., & Gutierrez-Osuna, R.

citation count

  • 7

complete list of authors

  • Karappa, Virendra; Monteiro, Caio D. D.; Shipman, Frank M.; Gutierrez-Osuna, Ricardo

publication date

  • May 2014