Janus: Big Data for Security

With the Janus Project, we envision developing a more intelligent surveillance system for a variety of applications, such as security surveillance, public health surveillance, and traffic surveillance, using large amounts of video, text, and sensor data.

Main Objectives

  1. Data Collection: Collecting fine-grain and heterogeneous spatiotemporal surveillance data
  2. Data Analysis: Exploring/analyzing numerous correlated data streams (e.g., video feeds, GPS, traffic, text/tweet) and “connecting-the-dots” to identify events of interest efficiently
  3. Information Communication: Responding in real time by providing personalized information to the target community


Research Team

Our interdisciplinary team has complementary expertise spanning computer vision, computer graphics and visualization, data management and data mining, mobile computing and body area computing, safety and security, and health care. The current members of the team are (in alphabetical order): Murali Annavaram (EE Department), Farnoush Banaei-Kashani (IMSC), Dave Beeler (DPS), Carol Hayes (DPS), Brenda Jones (Keck School of Medicine), Seon Ho Kim (IMSC), Andy Lee (Keck School of Medicine), Gerard Medioni (Computer Science Department), Ulrich Neumann (Computer Science Department), Ram Nevatia (Computer Science Department), Pia Pannaraj (CHLA), and Cyrus Shahabi (IMSC). Our funding comes from various governmental organizations as well as local communities with an interest in intelligent surveillance (such as USC).


Focus Research Areas

We focus our research on two areas corresponding to two types of surveillance systems: 1) infrastructure-based surveillance systems, which leverage existing infrastructure for data collection and information communication (e.g., home security systems), and 2) infrastructure-less surveillance systems, which rely on mobile computing for data collection and information communication. Below, we briefly elaborate on two subprojects, dubbed iWatch Core and iWatch Mobile, which focus on research in the context of these two types of surveillance systems.


“iWatch Core”: Scaling up Human-in-the-Loop Surveillance in Infrastructure-based Surveillance Systems

In this area, our focus is on Objective 2 of the iWatch project (see above): developing techniques to scale up human-in-the-loop surveillance. Toward that end, our main design approach is to leave the tasks best done by humans (namely, directing and decision-making) to humans, and the tasks best done by machines (i.e., rapid processing of large amounts of data) to computers. In particular, we are developing novel data analytics for automated detection of isolated incidents of interest from numerous heterogeneous data streams (including video, text, and sensor data streams), as well as efficient spatiotemporal data indexing solutions that connect these incidents through the common fabric of time and space to derive more abstract events of interest (the big picture) in real time.
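The "connecting-the-dots" idea above can be illustrated with a minimal sketch of a spatiotemporal grid index: incidents from different streams are bucketed by a (latitude, longitude, time) cell, so incidents close in both space and time land in the same bucket and become candidates for being connected into a larger event. The class name, record fields, and cell sizes below are illustrative assumptions, not the actual iWatch Core design.

```python
from collections import defaultdict

# Illustrative cell sizes: ~0.01 degrees in space, 300 seconds in time.
LAT_LON_CELL = 0.01
TIME_CELL = 300

class SpatioTemporalIndex:
    """Toy grid index: buckets incidents by (lat, lon, time) cell so that
    incidents nearby in space and time fall into the same bucket."""

    def __init__(self):
        self.buckets = defaultdict(list)

    def _key(self, lat, lon, t):
        # Map a point in space-time to its discrete grid cell.
        return (int(lat / LAT_LON_CELL), int(lon / LAT_LON_CELL), int(t / TIME_CELL))

    def insert(self, incident_id, lat, lon, t):
        self.buckets[self._key(lat, lon, t)].append(incident_id)

    def co_located(self, lat, lon, t):
        """Incidents in the same space-time cell: candidates to 'connect'."""
        return list(self.buckets[self._key(lat, lon, t)])

# Two incidents from different streams, close in space and time:
idx = SpatioTemporalIndex()
idx.insert("traffic-anomaly-1", 34.0205, -118.2856, 1000)
idx.insert("tweet-report-7", 34.0212, -118.2851, 1100)
print(idx.co_located(34.0208, -118.2853, 1050))
```

A production system would query neighboring cells as well (to avoid boundary effects) and use a disk-resident structure, but the bucketing idea is the same.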

The following exemplary (visionary) scenario depicts a security surveillance use case in which a public safety officer could use such an intelligent surveillance system for regular area monitoring, for example, campus monitoring. Imagine the officer is watching the 3D campus map on a screen in the surveillance room. Real-time vehicle and pedestrian traffic (estimated on the fly from live traffic-sensor data and/or GPS data collected from travelers' mobile devices) is overlaid as a congestion heat map on top of the campus map. Aware of the typical traffic trends for the time of day, the traffic analytics module of the system automatically identifies and flags one specific location on the map as a potential problem area due to its "abnormal traffic". The officer uses system tools to quickly cross-reference the identified area with the relevant tweet data feed as well as the public safety event data stream (events reported in real time as text messages by public safety officers on duty), and applies the text analytics module of the system to determine whether an incident has recently been reported in the area. Finding none, the officer cross-references the area and time with CCTV cameras to view the live video feed of the area, and uses the video analytics module of the system to explore the archived and live video data for suspicious activities. As a result, he observes that even though there is no accident, two unidentified individuals have deliberately positioned a few large items on the road to slow down the traffic. Meanwhile, the officer cross-references the area with a campus location database and recognizes that there is a biochemical lab in a nearby building.
Given the sensitivity of the area, while dispatching patrol officers to investigate, the officer sends a personalized alert to the campus community that gives each individual customized evacuation directions to stay away from the trouble zone until it becomes safe.


“iWatch Mobile”: Crowdsourcing Surveillance in Infrastructure-less Surveillance Systems

The iWatch Mobile subsystem is a mobile computing framework with three distinct and complementary functionalities corresponding to the three objectives of the iWatch project (see above):

1. Participatory Surveillance: iWatch Mobile enables any member of the target community to collect multi-modal, geo-tagged, and time-tagged surveillance data (including video, text, audio, etc.) using their own mobile devices (mobile phones, smartphones, PDAs, etc.).

2. Integrated Event Analysis: iWatch Mobile also enables surveillance authorities to efficiently search, query, analyze, and visualize the collected participatory surveillance data (by extending and customizing iWatch Core capabilities to apply to participatory data).

3. Coordinated Response with Personalized Instructions: iWatch Mobile closes the loop by enabling surveillance authorities to stage a coordinated response/intervention through direct and personalized communication with the target community members.
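The participatory data described in functionality 1 can be pictured as a simple geo- and time-tagged record. The schema and field names below are a hypothetical sketch for illustration, not the actual iWatch Mobile format.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ParticipatoryReport:
    """Hypothetical record for one piece of participatory surveillance data."""
    user_id: str
    media_type: str          # "video", "text", "audio", ...
    payload_uri: str         # where the captured media is stored
    latitude: float          # geo-tag
    longitude: float
    timestamp: float = field(default_factory=time.time)  # time-tag

# A community member submits a text report from their phone:
report = ParticipatoryReport(
    user_id="user-42",
    media_type="text",
    payload_uri="reports/incident-0007.txt",
    latitude=34.0224,
    longitude=-118.2851,
)
print(report.media_type, report.latitude, report.longitude)
```

Because every record carries both a geo-tag and a time-tag, it plugs directly into the spatiotemporal analysis capabilities of iWatch Core (functionality 2).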

The figure above shows the architecture of the iWatch Mobile subsystem, which is designed as a client-server, event-driven system. Note that the functionalities/capabilities discussed above enable implementing a variety of applications; for example, Personalized Alert (where alerts are customized per individual based on his/her location), Geofence (which allows defining a virtual fence and monitoring all trespassing users), or simply a participatory data visualizer (for effective display of the collected participatory data).
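The Geofence application mentioned above reduces to a simple geometric test: is a user's reported location within a given radius of the fence center? A minimal sketch using the haversine great-circle distance follows; the fence center and radius are illustrative values, not part of the actual system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user_lat, user_lon, fence_lat, fence_lon, radius_m):
    """True if the user's location is within radius_m of the fence center."""
    return haversine_m(user_lat, user_lon, fence_lat, fence_lon) <= radius_m

# Illustrative fence: a 500 m radius around a campus location.
fence = (34.0205, -118.2856, 500.0)
print(inside_geofence(34.0210, -118.2850, *fence))  # point ~80 m away -> True
print(inside_geofence(34.1000, -118.2856, *fence))  # point ~8.8 km away -> False
```

In a deployed system, the server would re-run this check whenever a monitored user's device reports a new location, raising an alert on each fence crossing.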



Year 2013-2014

Facial Expression Recognition

  • Faculty lead: Prof. Gerard Medioni
  • Description: This project develops key algorithms for real-time facial expression recognition of consumers, as part of a customer feedback analysis framework for B2C (business-to-consumer) settings.

Year 2012-2013

Mining Events from Multi-Source Multi-Modal Data

  • Faculty lead: Prof. Yan Liu (Computer Science)
  • In a multi-INT/multi-source environment, integrating readings collected from multiple data sources/sensors (possibly of different modalities) allows the inherent deficiencies of each source to be compensated for by the strengths of the others. In particular, such multi-source integration enables more effective surveillance of activities of interest. In this project, we use novel data mining techniques to automatically detect events given multi-source, multi-modal data.

Year 2011-2012

Contact Network Analysis


Human Mobility-Pattern Analysis

Year 2010-2011

Rapid Forensic Analysis