Real Time Observation Analysis for Healthcare Applications via Automatic Hardware Adaptation
Multimodal Computing and Interaction
By Alexander Hauptmann
The continuing growth of unstructured digital media content drives the need for more effective and efficient methods for indexing, searching, categorizing, and organizing video, audio, and sensor data. Doing so quickly with finite computational resources, however, is challenging. This project investigates novel machine learning algorithms that enable real-time analysis of large video and sensor data streams under limited computational resources. We focus on healthcare as our application domain, where real-time video analysis can prevent user errors in operating medical devices or immediately alert caregivers to dangerous situations. We propose to develop machine learning theories and algorithms that automatically adapt to hardware limitations, with the aim of learning a prediction function that makes accurate predictions and can run efficiently on a specified computer system to deliver time-critical results.
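As an illustrative sketch only (not the project's stated formulation), hardware-adaptive learning of this kind can be framed as constrained empirical risk minimization, where the symbols \ell, c, and B are assumed placeholders for the prediction loss, the per-example inference cost of the model on the target hardware, and the hardware's time or resource budget, respectively:

\min_{f \in \mathcal{F}} \; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i), y_i\big) \quad \text{subject to} \quad \mathbb{E}_{x}\!\left[\, c(f, x) \,\right] \le B

Under this reading, the learner trades prediction accuracy against the cost of running f on the specified system, so that the selected model is the most accurate one that still meets the real-time budget B.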