MediaQ: Big Data for Social Media

MediaQ is a novel online media management framework that includes functions to collect, organize, share, search, and trade user-generated mobile images and videos using automatically tagged geo-spatial metadata.

MediaQ consists of a MediaQ server and a mobile app for smartphones and tablets running iOS or Android. User-generated videos (UGVs) can be uploaded to the MediaQ portal from users’ smartphones, where they are displayed accurately on a map interface according to their automatically collected geo-tags and other metadata, such as the recording time and the specific direction the camera was pointing. Media content can be collected casually or on demand: logged-in participants can post specific content requests, which automatically alert other participants near the desired content assignment location and prompt them to take out their phones and start recording.
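As a rough illustration of the geo-crowdsourcing alert just described, the following minimal Python sketch matches a posted content request against participants’ current positions using great-circle distance. All names and the 500 m alert radius are illustrative assumptions, not MediaQ’s actual server logic.

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) points."""
        dlat = radians(lat2 - lat1)
        dlon = radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * EARTH_RADIUS_M * asin(sqrt(a))

    def participants_to_alert(task_loc, participants, radius_m=500):
        """Return the participants within radius_m of a content request's location."""
        return [p for p in participants
                if haversine_m(task_loc[0], task_loc[1], p[0], p[1]) <= radius_m]

    # Hypothetical example: a content request near the USC campus.
    task_loc = (34.0205, -118.2854)
    users = [(34.0211, -118.2860), (34.0522, -118.2437)]  # one nearby, one far away
    print(participants_to_alert(task_loc, users))  # -> [(34.0211, -118.286)]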

The MediaQ system provides the following distinct features:

  • Individual frames of videos (or any meaningful video segments) are automatically annotated with objective metadata that capture four dimensions of the real world: the capture time (when), the camera location and viewing direction (where), keywords (what), and people (who). We term this data W4 metadata; it is obtained by utilizing geospatial and computer vision techniques. 
  • A new approach to collecting videos and images from the public has been implemented using geo-crowdsourcing, which allows user-generated media content to be collected in a coordinated manner for a specific purpose. 
  • Flexible video search functions are implemented on top of the W4 metadata, such as directional queries that select videos and images with a specific viewing direction (see the sketch after this list). 
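To make the W4 idea concrete, here is a minimal Python sketch of what a per-frame W4 record and a directional query filter could look like. The field names, the 30-degree tolerance, and the example values are illustrative assumptions, not MediaQ’s actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class W4Metadata:
        """One video frame's W4 record: when, where, what, who."""
        captured_at: datetime          # when
        lat: float                     # where: camera position
        lon: float
        heading_deg: float             # where: compass viewing direction, 0 = north
        keywords: List[str] = field(default_factory=list)  # what
        people: List[str] = field(default_factory=list)    # who

    def matches_direction(frame: W4Metadata, target_deg: float, tol_deg: float = 30.0) -> bool:
        """Directional query: is the frame's viewing direction within
        +/- tol_deg of the requested compass bearing (with 360-degree wraparound)?"""
        diff = abs(frame.heading_deg - target_deg) % 360
        return min(diff, 360 - diff) <= tol_deg

    frame = W4Metadata(datetime(2013, 1, 21, 11, 30), 38.8895, -77.0502,
                       heading_deg=95.0, keywords=["inauguration"])
    print(matches_direction(frame, target_deg=90))   # True: looking roughly east
    print(matches_direction(frame, target_deg=270))  # False

In practice such records would be indexed spatially (for example, with an R-tree over camera positions) rather than filtered frame by frame as above.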

Our implementation and extensive real-world experiments demonstrate that MediaQ is an effective and comprehensive solution for many video repository services. More information can be found in the following:

  • MediaQ web: http://mediaq.usc.edu
  • MediaQ Demo (2013)
  • PBS NewsHour Inauguration Coverage using our system (2013)
  • NATO Summit Coverage (2012)


MediaQ has been partly supported by NIJ and Google. Part of MediaQ is a result of research collaboration with Prof. Roger Zimmermann at the National University of Singapore. Publications about MediaQ can be found here.


Screenshot of MediaQ Web


MediaQ Architecture


Subprojects


Year 2012 - 2013

Constructing a Dynamic Ontology for Streaming Big Data in a Specific Domain

  • Led by Prof. Dennis McLeod (Computer Science)
  • Build ontologies for dynamically changing data, particularly big data streams. One essential characteristic of such streams is the unpredictability of their semantic relationships, which requires an ontology that can evolve automatically. By analyzing the data streams, the proposed research aims to capture the varying semantics over time so that the dynamic semantics can be used to update the ontology.

Year 2011 - 2012

Energy Literacy Platform for USC Campus

  • Led by Dr. Burcin Becerik-Gerber (Civil and Environmental Eng.)
  • Develop a communication module between the building management system (BMS) and occupants that provides ambient-factor feedback to participants to reduce energy consumption


Year 2010 - 2011

Targeted Trojan Alerts

  • Led by Dr. Daniel W. Goldberg (Spatial Sciences Institute)
  • Provide a customized alert system using users’ location information 

Sensing Occupancy and Location

  • Led by Dr. Bhaskar Krishnamachari (Electrical Eng.)
  • Develop an occupancy sensing system for buildings and rooms

AmbientFactors for iCampus

  • Led by Dr. Burcin Becerik-Gerber (Civil and Environmental Eng.)
  • Record, organize, and visualize spatiotemporal indoor ambient information voluntarily provided by USC campus users


Related Educational Activities


Year 2010 - 2011


Android Projects under CSCI587: Geospatial Information Management


External Collaboration


Year 2010 - 2012

GeoVid

  • Team: Dr. Roger Zimmermann (National University of Singapore), Dr. Seon Ho Kim (IMSC), Dr. Sakire Arslan Ay (National University of Singapore)
  • The GeoVid project explores the concept of sensor-rich video tagging. Specifically, recorded videos are tagged with a continuous stream of extended geographic properties that relate to the camera scenes. This metadata is then utilized for storing, indexing, and searching large collections of community-generated videos. By considering such video-related meta-information, more relevant and precisely delimited search results can be returned (see the sketch after this list).
  • Demo video
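As a sketch of how scene-related geographic properties can drive search, the Python snippet below models a camera's viewable scene as a circular sector (position, compass heading, viewable angle, visible distance) and tests whether a query point falls inside it. The pie-slice model follows the sensor-rich tagging idea described above; the function and parameter names are illustrative assumptions, not GeoVid's API.

    from math import radians, degrees, cos, atan2, sqrt

    def point_in_fov(cam_lat, cam_lon, heading_deg, view_angle_deg, max_dist_m,
                     pt_lat, pt_lon):
        """Is the point inside the camera's field of view, modeled as a sector?
        Uses a flat-earth (equirectangular) approximation, adequate at city scale."""
        # Local east/north offsets of the point relative to the camera, in meters.
        m_per_deg_lat = 111_320.0
        dy = (pt_lat - cam_lat) * m_per_deg_lat
        dx = (pt_lon - cam_lon) * m_per_deg_lat * cos(radians(cam_lat))
        if sqrt(dx * dx + dy * dy) > max_dist_m:
            return False
        # Compass bearing from camera to point (0 = north, clockwise).
        bearing = (degrees(atan2(dx, dy)) + 360) % 360
        diff = abs(bearing - heading_deg) % 360
        return min(diff, 360 - diff) <= view_angle_deg / 2

    # Hypothetical camera in Singapore looking north-east, 60-degree lens, 300 m range.
    print(point_in_fov(1.2834, 103.8607, 45, 60, 300, 1.2850, 103.8622))  # True

A query engine can use such a test to return only the videos whose frames actually cover a queried location, rather than all videos merely recorded nearby.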