Knowledge-based video compression for search and rescue robots and multiple sensor networks
Conference Paper
Abstract
Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturates the available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post-processing needs, lossy compression schemes such as MPEG and H.26x are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) that use different levels of knowledge (the local sensed environment, human factors associated with a local task, and the relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce overall transmission bandwidth by 24.07% to 33.42% and, when combined with the task filter, by 59.08% to 67.83%. On its own, the task filter reduces transmission bandwidth by 32.95% to 33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
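As a rough illustration of the filter pipeline described in the abstract, the sketch below chains three application-layer filters so that a frame is transmitted only if every active filter accepts it. This is a minimal sketch, not the paper's implementation: the class names, the Frame fields, and the threshold values are all assumptions made for illustration.

    # Hypothetical sketch of a three-filter, application-layer video pipeline.
    # All names and thresholds are illustrative assumptions, not the paper's API.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        pixels: bytes          # raw (lossless) image data
        novelty: float         # fraction of pixels changed vs. the last transmitted frame
        task_relevance: float  # estimated relevance to the operator's current task (0..1)
        task_priority: int     # relative global priority of the originating task

    class RedundancyFilter:
        """Uses local sensed-environment knowledge: drop frames that barely differ
        from the last frame sent."""
        def __init__(self, threshold: float = 0.05):
            self.threshold = threshold
        def accept(self, frame: Frame) -> bool:
            return frame.novelty >= self.threshold

    class TaskFilter:
        """Uses human-factors knowledge about the local task: drop frames the
        current task does not need."""
        def __init__(self, min_relevance: float = 0.5):
            self.min_relevance = min_relevance
        def accept(self, frame: Frame) -> bool:
            return frame.task_relevance >= self.min_relevance

    class PriorityFilter:
        """Uses relative global priority: pass frames only when the originating
        task's priority meets the cutoff."""
        def __init__(self, min_priority: int = 1):
            self.min_priority = min_priority
        def accept(self, frame: Frame) -> bool:
            return frame.task_priority >= self.min_priority

    def transmit(frames, filters):
        """A frame is forwarded to the remote operator only if every filter accepts it."""
        return [f for f in frames if all(flt.accept(f) for flt in filters)]

Because each filter only decides whether to send a frame, never how to re-encode it, every transmitted frame remains lossless, which is the property the abstract identifies as ruling out MPEG and H.26x.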