Media Data Management

Overview

Recently, new forms of multimedia data (such as text, numbers, tags, signals, geo-tags, 3D/VR/AR content, and sensor data) have emerged in many applications on top of conventional multimedia data (image, video, audio). Multimedia has become the “biggest of big data” and the foundation of today’s data-driven discoveries. Moreover, such new multimedia data is increasingly involved in a growing list of science and engineering domains, such as driverless cars, drones, smart cities, biomedical instruments, and security surveillance. Multimedia data also carries embedded information that can be mined for various purposes. Thus, storing, indexing, searching, integrating, and recognizing these vast amounts of data create unprecedented challenges. IMSC is working on solutions for the acquisition, management, and analysis of large-scale multimedia data.

Selected Research in Media

Multimedia Information Processing and Retrieval

Large-scale spatial-visual search faces two major challenges: search performance, due to the large volume of the dataset, and inaccuracy of search results, due to imprecision in image matching. First, the large scale of geo-tagged image datasets and the demand for real-time response make it critical to develop efficient spatial-visual query processing mechanisms. Toward this end, we focus on designing index structures that expedite the evaluation of spatial-visual queries. Second, retrieving relevant images is challenging due to two types of inaccuracy: spatial (caused by mismatch between camera position and scene location) and visual (caused by dimensionality reduction). We propose a set of novel hybrid index structures based on the R*-tree and LSH, i.e., two-level index structures consisting of one primary index associated with a set of secondary structures. In particular, there are two variations in this class: using the R*-tree as the primary structure (termed Augmented Spatial First Index) or using LSH as the primary structure (termed Augmented Visual First Index). We experimentally showed that all hybrid structures greatly outperform the baselines, with a maximum speed-up factor of 46.
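
To make the two-level design concrete, here is a minimal Python sketch of the spatial-first variant. It is illustrative only: a uniform grid stands in for the R*-tree primary, the secondary structure is a simple random-hyperplane LSH table, and all class, method, and parameter names are invented for this example rather than taken from the papers.

    import numpy as np
    from collections import defaultdict

    class SpatialFirstHybridIndex:
        """Two-level hybrid index sketch: a coarse spatial grid (stand-in for
        the R*-tree primary) whose cells each hold a random-hyperplane LSH
        table over visual descriptors (the secondary structure)."""

        def __init__(self, cell_size=0.01, dim=128, n_bits=16, seed=0):
            rng = np.random.default_rng(seed)
            self.cell_size = cell_size
            self.planes = rng.standard_normal((n_bits, dim))      # LSH hyperplanes
            self.cells = defaultdict(lambda: defaultdict(list))   # cell -> bucket -> image ids
            self.features = {}                                    # image id -> descriptor

        def _cell(self, lat, lon):
            return (int(lat // self.cell_size), int(lon // self.cell_size))

        def _bucket(self, feat):
            return ((self.planes @ feat) > 0).tobytes()

        def insert(self, img_id, lat, lon, feat):
            self.features[img_id] = feat
            self.cells[self._cell(lat, lon)][self._bucket(feat)].append(img_id)

        def query(self, lat, lon, feat, ring=1, k=5):
            """Spatial filtering first (the query cell and its neighbors),
            then an LSH bucket lookup, then exact re-ranking of survivors."""
            bucket, (cx, cy) = self._bucket(feat), self._cell(lat, lon)
            candidates = []
            for dx in range(-ring, ring + 1):
                for dy in range(-ring, ring + 1):
                    candidates.extend(self.cells[(cx + dx, cy + dy)].get(bucket, []))
            candidates.sort(key=lambda i: np.linalg.norm(self.features[i] - feat))
            return candidates[:k]

The visual-first variant simply swaps the roles of the two levels: the LSH buckets are consulted first, and each bucket keeps a small spatial structure for the final filtering.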

Selected Publications in Multimedia Information Processing and Retrieval

  • Ying Lu, Cyrus Shahabi, Seon Ho Kim. Efficient Index and Retrieval of Large-scale Geo-tagged Video Databases. Geoinformatica, Vol. 20, Issue 4, pp. 829-857, Springer, Oct. 2016.
  • Abdullah Alfarrarjeh, Seon Ho Kim, Cyrus Shahabi. Hybrid Indexes for Spatial-Visual Search. ACM International Conference on Multimedia (ACM MM), Oct. 2017.

Geospatial Multimedia Sentiment Analysis

Even though many sentiment analysis techniques have been developed and are available, reliably applying sentiment analysis remains difficult because no single technique is dominantly accepted. Taking advantage of existing state-of-the-art sentiment classifiers, we propose a novel framework for geo-spatial sentiment analysis of disaster-related multimedia data objects. Our framework addresses three types of challenges: the inaccuracy of and discrepancy among various text and image sentiment classifiers, the geo-sentiment discrepancy among data objects in a local geographical area, and the diverse sentiments observed across multimedia data objects (i.e., text and images). To overcome these challenges, the framework is composed of three phases: sentiment analysis, spatial-temporal partitioning, and geo-sentiment modeling. To estimate the aggregated sentiment score for a set of objects in a local region, our geo-sentiment model considers the sentiment labels generated by multiple classifiers in addition to those of geo-neighbors. To obtain sentiment with high certainty, the model measures the disagreement among correlated sentiment labels using either an entropy or a variance metric. We used our framework to analyze Hurricane Sandy and the Napa Earthquake based on datasets collected from Twitter and Flickr. Our analysis results were consistent with FEMA and USGS reports.
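
As a rough illustration of the geo-sentiment modeling phase, the sketch below aggregates the sentiment labels collected for one spatial-temporal cell (from multiple classifiers plus geo-neighbors) and reports their disagreement as normalized entropy; the variance-based alternative would replace only the entropy computation. The function name and the three-class label set are assumptions for this example, not the framework's actual interface.

    import math
    from collections import Counter

    def region_sentiment(labels):
        """Aggregate the sentiment labels gathered for one spatial-temporal cell.
        Returns (majority_label, disagreement), where disagreement is the
        normalized Shannon entropy of the label distribution: 0 means full
        agreement, 1 means maximal disagreement (lowest certainty)."""
        counts = Counter(labels)
        total = sum(counts.values())
        majority = counts.most_common(1)[0][0]
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        return majority, entropy / math.log2(3)   # assumes 3 classes: negative/neutral/positive

    # e.g. labels from several text/image classifiers and nearby objects in one cell
    print(region_sentiment(["negative", "negative", "neutral", "negative"]))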

Selected Publications in Geospatial Multimedia Sentiment Analysis

  • Abdullah Alfarrarjeh, Sumeet Agrawal, Seon Ho Kim, Cyrus Shahabi. Geo-spatial Multimedia Sentiment Analysis in Disasters. IEEE International Conference on Data Science and Advanced Analytics (DSAA 2017), Oct. 2017.
  • Hien To, Sumeet Agrawal, Seon Ho Kim, Cyrus Shahabi. On Identifying Disaster-Related Tweets: Matching-based or Learning-based? IEEE International Conference on Multimedia Big Data (IEEE BigMM), April 2017.

Selected Research in Smart Cities

Big Data in Disasters

In a disaster, fast initial data collection is critical for first response. With the wide availability of smart mobile devices such as smartphones, dynamic and adaptive crowdsourcing during and after a disaster has been gaining attention for disaster situation awareness. We have been working on maximizing visual awareness in a disaster using smartphones, especially under constrained bandwidth resources. Specifically, we are currently conducting a federally funded international joint project (supported jointly by the US NSF and Japan's JST) on data collection in disasters using MediaQ.
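
As a rough sketch of the bandwidth-constrained side of this problem, the greedy selection below repeatedly picks the video whose viewable region adds the most not-yet-covered area per megabyte of upload, until the budget is exhausted. The data format, function name, and greedy heuristic are illustrative assumptions for this example, not the MediaQ implementation.

    def select_videos(videos, bandwidth_budget_mb):
        """Pick videos that maximize newly covered grid cells per megabyte.
        `videos` is a list of (video_id, size_mb, covered_cells) tuples."""
        covered, chosen, spent = set(), [], 0.0
        remaining = list(videos)
        while remaining:
            best = max(remaining, key=lambda v: len(set(v[2]) - covered) / v[1])
            remaining.remove(best)
            new_cells = set(best[2]) - covered
            if not new_cells:
                break                                  # no remaining video adds coverage
            if spent + best[1] > bandwidth_budget_mb:
                continue                               # too large for the leftover budget; try the next best
            chosen.append(best[0])
            covered |= new_cells
            spent += best[1]
        return chosen

    # e.g. three clips with different upload sizes and viewable-region coverage
    clips = [("v1", 10.0, {1, 2, 3}), ("v2", 4.0, {3, 4}), ("v3", 8.0, {5, 6, 7, 8})]
    print(select_videos(clips, bandwidth_budget_mb=15.0))   # ['v2', 'v3']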

Selected Publications in Big Data in Disasters

  • Sasan Tavakkol, Hien To, Seon Ho Kim, Patrick Lynett, Cyrus Shahabi. An Entropy-based Framework for Efficient Post-disaster Assessment based on Crowdsourced Data. ACM SIGSPATIAL Workshop on Emergency Management using GIS in conjunction with ACM GIS 2016, Oct. 31, 2016.
  • Hien To, Seon Ho Kim, Cyrus Shahabi. Effectively Crowdsourcing the Acquisition and Analysis of Visual Data for Disaster Response. IEEE International Conference on Big Data (IEEE Big Data 2015), pp. 697-706, Oct. 2015.

Selected Research in Transportation

Realtime Traffic Flow Data Extraction

Vision-based traffic flow analysis is attracting more attention due to its non-intrusive nature. However, real-time video processing techniques are CPU-intensive, so the accuracy of the traffic flow data they extract may be sacrificed in practice. Moreover, traffic measurements extracted from cameras have rarely been validated against real datasets due to the limited availability of real-world traffic data. This study provides a case study that demonstrates the performance enhancement of a vision-based traffic flow data extraction algorithm using a hardware device, the Intel video analytics coprocessor, and evaluates the accuracy of the extracted data by comparing it to real data from traffic loop detector sensors in Los Angeles County (from our transportation project).
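
The validation step can be pictured as a simple comparison of per-interval vehicle counts extracted from video against counts reported by co-located loop detectors. The interval length and the error metric (MAPE) in the sketch below are illustrative choices, not necessarily those used in the study.

    def mean_absolute_percentage_error(extracted, ground_truth):
        """Compare counts extracted from video with loop-detector counts
        over matching time intervals; lower is better."""
        errors = [abs(e - g) / g for e, g in zip(extracted, ground_truth) if g > 0]
        return 100.0 * sum(errors) / len(errors)

    # e.g. per-5-minute vehicle counts from video vs. the co-located loop detector
    video_counts    = [42, 55, 61, 48]
    detector_counts = [45, 53, 64, 50]
    print(f"MAPE: {mean_absolute_percentage_error(video_counts, detector_counts):.1f}%")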


Selected Publications in Traffic Flow Data Extraction

  • Wonhee Cho and Seon Ho Kim. Multimedia Sensor Dataset for the Analysis of Vehicle Movement. ACM International Conference on Multimedia Systems (ACM MMSys), June 2017.
  • Seon Ho Kim, Junyuan Shi, Abdullah Alfarrarjeh, Daru Xu, Yuwei Tan, and Cyrus Shahabi. Real-Time Traffic Video Analysis Using Intel Viewmont Coprocessor. Databases in Networked Information Systems (DNIS 2013), LNCS 7813, pp. 150–160. Springer, Mar. 2013.

Seon Ho Kim

Research Scientist
Integrated Media Systems Center (IMSC)
