Summarization from Multiple User Generated Videos in Geo-Space
Supervisor(s) and Committee member(s): Roger Zimmermann (supervisor), Mohan Kankanhalli (advisor), Michael Brown (advisor), Wei Tsang Ooi (rapporteur).
In recent years, we have witnessed an overwhelming number of user-generated videos captured on a daily basis. A key reason is the rapid development of camera technology: videos are now easily recorded on many portable devices, especially mobile smartphones. These devices also allow videos to be tagged with various additional sensor properties. In this thesis, we are interested in geo-referenced videos, whose meta-data is closely tied to geographic information. Such videos hold great appeal for prospective travelers and visitors who are unfamiliar with a region, area or city. For example, before someone visits a place, a geo-referenced video search engine can quickly retrieve a list of videos captured there, so that visitors can conveniently and quickly obtain an overall visual impression. However, users face an ever-increasing viewing burden as these video repositories grow and more videos become relevant to any given search query. To manage these retrieved videos and provide viewers with an efficient way to browse them, we introduce a novel solution that automatically generates a summarization from multiple user-generated videos and presents their salient content to viewers in an enjoyable manner.
In this thesis, we investigate how to formulate, display and improve a multi-video summarization, with the following contributions. The first three works propose solutions that detect video salience across multiple videos according to their geographic properties and cast the summarization task as a graph-analysis problem, solved with dynamic programming to jointly optimize informativeness, quality and coherency. The fourth work proposes an interactive and dynamic video exploration system in which people can issue personalized summary queries through direct map-based manipulations. Lastly, we investigate whether external crowdsourced databases can improve summary quality by recommending a list of photography spots from which appealing photos of a landmark can potentially be captured.
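The graph-analysis formulation with a dynamic-programming solution can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual objective: the segment durations, salience scores, the `coherence` transition function, and the duration budget are all hypothetical placeholders standing in for the informativeness, quality and coherency terms mentioned above.

```python
def summarize(segments, coherence, budget):
    """Pick an ordered subset of candidate video segments maximizing total
    salience plus pairwise transition coherence, within a duration budget.

    segments: list of (duration, salience) tuples, assumed time-ordered.
    coherence: function (i, j) -> float scoring the transition i -> j.
    budget: maximum total duration of the summary.
    Hypothetical scoring; a stand-in for the thesis's real objective.
    """
    n = len(segments)
    # best[(i, t)] = best score of a summary ending at segment i, duration t
    best, parent = {}, {}
    for i, (d, s) in enumerate(segments):
        if d <= budget:
            best[(i, d)] = s
            parent[(i, d)] = None
    # Relax edges i -> j in topological (time) order.
    for i in range(n):
        for j in range(i + 1, n):
            dj, sj = segments[j]
            for (k, t), score in list(best.items()):
                if k == i and t + dj <= budget:
                    cand = score + sj + coherence(i, j)
                    if cand > best.get((j, t + dj), float("-inf")):
                        best[(j, t + dj)] = cand
                        parent[(j, t + dj)] = (i, t)
    # Backtrack from the best-scoring state to recover the summary path.
    end = max(best, key=best.get)
    path, state = [], end
    while state is not None:
        path.append(state[0])
        state = parent[state]
    return list(reversed(path)), best[end]
```

With three candidate segments and a small budget, the dynamic program trades a short low-salience opener against a longer, more salient pair of coherent segments, mirroring the informativeness-versus-coherency trade-off described above.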
Media Management Research Lab
The Media Management Research Lab focuses on geo-referenced video management (GeoVid), streaming media architectures, spatio-temporal information management, and mobile location-based services. The GeoVid project explores the concept of sensor-rich video tagging: recorded videos are tagged with a continuous stream of extended geographic properties that describe the camera scenes. This meta-data is then used to store, index and search large collections of community-generated videos. By taking video-related meta-information into account, more relevant search results can be returned, and advanced searches, such as directional and surround queries, can be executed.
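A directional query of the kind mentioned above can be approximated by testing whether a query point falls inside a camera's pie-slice viewable scene, derived from the tagged location, heading, and field of view. The sketch below uses a flat-earth approximation and hypothetical parameter names; it is an illustration of the idea, not GeoVid's exact scene model.

```python
import math

def in_fov(cam_lat, cam_lng, heading_deg, fov_deg, max_dist_m,
           pt_lat, pt_lng):
    """Return True if the point lies inside the camera's viewable scene,
    modeled as a pie slice: within max_dist_m of the camera and within
    fov_deg/2 of the compass heading. Flat-earth approximation; parameter
    names and the slice model are illustrative assumptions."""
    # Local equirectangular projection: approximate metres per degree.
    dx = (pt_lng - cam_lng) * 111_320 * math.cos(math.radians(cam_lat))
    dy = (pt_lat - cam_lat) * 110_540
    dist = math.hypot(dx, dy)
    if dist > max_dist_m:
        return False
    # Compass bearing to the point: 0 deg = north, clockwise.
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between bearing and heading.
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

A surround query could then be served by running this test for every tagged frame near the query location and returning the videos whose scenes cover the point from multiple directions.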