Sandia wants to help security analysts see better
A research agreement with EyeTracking Inc. could benefit intelligence analysts working to identify security threats.
Sandia National Laboratories is working with a private company to develop eye-tracking tools for video feeds and other real-life security applications that use more than just static images.
The lab said Aug. 13 it has signed a cooperative research and development agreement with EyeTracking Inc., a San Diego small business that specializes in eye-tracking data collection and analysis.
The deal could benefit intelligence analysts working to identify security threats in war zones, airports or elsewhere.
In the course of their work, analysts often flip through multiple images to create a video-like effect, toggle between images at lightning speed, pan across images, zoom in and out, or view videos and other moving records.
Eye tracking measures the eyes’ activity by recording where a viewer is looking on a computer screen, what they ignore and when they blink. Sandia’s work with EyeTracking takes current eye-tracking capabilities beyond static images.

Current tools work well for analyzing static images, like the children’s picture book “Where’s Waldo?”, and for video where researchers can anticipate where content of interest will appear. But more complex tasks driven by cues an analyst might come across on screen, or “information foraging,” can’t be effectively addressed with current eye-tracking technology, according to Sandia.
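To make the measurement concrete, here is a minimal sketch, not Sandia's or EyeTracking's actual software, of how raw gaze samples might be grouped into fixations (where the viewer looked) and blinks. The sample format, thresholds and function name are all hypothetical, and the grouping follows a simplified dispersion-based approach.

```python
# Hypothetical sketch: group raw gaze samples into fixations and blinks.
# A sample is (timestamp_ms, x, y); (None, None) coordinates mean the
# tracker lost the pupil, which we treat here as a blink.

def classify_samples(samples, dispersion_px=25, min_fixation_ms=100):
    """Simplified dispersion-based fixation detection (invented thresholds)."""
    events = []
    window = []
    for t, x, y in samples:
        if x is None:  # eyes closed or pupil lost: record a blink
            window = []
            events.append(("blink", t))
            continue
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # If samples spread too far apart, the eye is moving (a saccade):
        # close out the current window as a fixation if it lasted long enough.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
            if len(window) > 1 and window[-2][0] - window[0][0] >= min_fixation_ms:
                events.append(("fixation", window[0][0], window[-2][0]))
            window = [window[-1]]
    if window and window[-1][0] - window[0][0] >= min_fixation_ms:
        events.append(("fixation", window[0][0], window[-1][0]))
    return events
```

Feeding in samples that dwell near one point, jump, and dwell again yields two fixation events with their start and end timestamps, which is the kind of record an analyst-facing tool would then map back to screen content.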
Under the agreement, researchers are working with EyeTracking to figure out how to capture, within tens of milliseconds, the content beneath the point on a screen where a viewer is looking. The goal is a better handle on what might trigger an analyst to look at other places in an image.
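The lookup described above might be sketched as follows. Everything here is assumed: the region map, the 50 ms budget standing in for "tens of milliseconds," and the function name; it only illustrates matching a timestamped gaze point against labelled screen regions while the sample is still fresh.

```python
# Hypothetical sketch: find which labelled screen region a gaze sample
# fell on, but only if the sample is recent enough to be useful.

LATENCY_BUDGET_MS = 50  # stand-in for the article's "tens of milliseconds"

def content_under_gaze(gaze, regions, now_ms):
    """gaze is (timestamp_ms, x, y); regions maps labels to (x0, y0, x1, y1)."""
    t, x, y = gaze
    if now_ms - t > LATENCY_BUDGET_MS:
        return None  # sample is too stale to attribute reliably
    for label, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return label
    return "background"
```

In practice the regions would come from whatever imagery or interface the analyst has on screen at that instant, which is what makes sub-50 ms capture hard for panning, zooming and video.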
Until now, Sandia said, eye-tracking research has shown how viewers react to stimuli on the screen. For example, a bare, black tree against a snow-covered scene will naturally attract attention. This type of bottom-up visual attention, where the viewer is reacting to stimuli, is well understood, according to the lab’s researchers.
They want to see how viewers look at a particular scene with a task in mind, like finding a golf ball in the snow. They might glance at the tree quickly, according to the lab, but then their gaze goes to the snow to search for the golf ball. This type of top-down visual cognition is not well understood, and Sandia said it hopes to develop models that predict where analysts will look.
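The bottom-up-versus-top-down distinction can be illustrated with a toy sketch, not a model Sandia has published: blend an invented bottom-up saliency map (the striking black tree) with an invented top-down task-relevance map (the snow where the golf ball could be) and pick the highest-scoring cell. The grids, weighting and function name are all made up for illustration.

```python
# Hypothetical sketch: predict the next gaze target by blending a
# bottom-up saliency map with a top-down task-relevance map.
# Both maps are small grids of scores in [0, 1], invented for this example.

def predict_gaze(saliency, relevance, task_weight=0.7):
    """Return the (row, col) cell with the highest blended score."""
    best, best_score = None, float("-inf")
    for r, (sal_row, rel_row) in enumerate(zip(saliency, relevance)):
        for c, (s, rel) in enumerate(zip(sal_row, rel_row)):
            score = (1 - task_weight) * s + task_weight * rel
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

With the task weight at zero the model reduces to pure bottom-up attention and picks the salient tree; with the task dominating, the prediction shifts to the task-relevant snow, mirroring the golf-ball example.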