By Zach Schepis
Image courtesy of Wikimedia Commons.
The car crawls to a stop at the US-Mexico border. Rather than the expected booth that houses an armed guard or officer, there sits a piece of machinery resembling an oversized ATM.
After rolling down his window, the driver finds himself staring face-to-face with a disembodied head floating across a screen.
“Please state your name and current address,” the digital image says.
The driver provides his information in a slow and steady voice.
The machine processes the response: Pupil dilation–normal. Voice modulation–nominal.
The talking head proceeds.
“Have you ever been convicted of a felony?”
This time the driver is a bit more uncomfortable. He stammers a moment before answering.
Before he can even begin second guessing his response, the computer is already at work. Pupils dilated. Voice agitated, raise in pitch at the end of statement. Body fidgeting with irregular movements.
Image courtesy of the University of Arizona.
Suspicious behavior detected.
“Please wait for an agent before proceeding,” the digital head directs the driver.
He sits in the desert heat, sweating and anxious, as a living, breathing guard approaches the vehicle for further interrogation.
What you just read isn’t a scene from a film or science fiction story. It’s what a future scenario might look like at our nation’s border.
The National Center for Border Security and Immigration (BORDERS) exists to explore scientific knowledge, create new technologies, and evaluate policies to tackle challenges of border security and immigration.
The organization has spent the past few years developing a new technology that could forever change the way we travel. Judee Burgoon, a professor at the University of Arizona, is one of the BORDERS project’s principal investigators, alongside its director, Jay Nunamaker.
Burgoon tells BTR that border-crossing scenarios present many situations in which authorities need to assess credibility accurately. However, because people deceive for assorted reasons, she notes that it can be difficult to translate what is known about deceptive communication into a reliable judgment of risk.
To make detection easier, BORDERS created a screening system called the Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR). The AVATAR is designed to accurately detect and assess suspicious behavior utilizing artificial intelligence and non-invasive sensor technologies.
In its current form, the kiosk uses sensors to analyze cues in body movement, vocalics, pupillometry, and eye tracking. There are between 400 and 500 psychophysiological and behavioral cues that can be used to assess deception. Many of these cues, such as eye movements, are completely involuntary.
Out of those hundreds of cues, the AVATAR is programmed to look for 15. Humans can control two or three of these cues simultaneously, but not all 15, so telltale cues inevitably leak out during interrogation.
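The actual cue set, weights, and fusion logic AVATAR uses are not public. Purely as illustration of the idea of combining many involuntary cues into a single screening decision, here is a minimal, hypothetical sketch in Python; every cue name, weight, and threshold below is invented:

```python
# Hypothetical sketch of multi-cue risk fusion. The real AVATAR cue
# set, weights, and thresholds are not public; these are made up.

def risk_score(cues, weights):
    """Combine normalized cue readings (0.0 = baseline, 1.0 = strongly
    anomalous) into a single weighted score in [0, 1]."""
    total_weight = sum(weights[name] for name in cues)
    return sum(weights[name] * value for name, value in cues.items()) / total_weight

def assessment(score, threshold=0.5):
    """Map a fused score to a screening decision: refer the traveler to
    a human agent rather than declare them deceptive."""
    return "refer to human agent" if score >= threshold else "proceed"

# Illustrative readings for a handful of the ~15 cues.
weights = {"pupil_dilation": 0.4, "vocal_pitch": 0.35, "fidgeting": 0.25}
calm = {"pupil_dilation": 0.1, "vocal_pitch": 0.1, "fidgeting": 0.2}
agitated = {"pupil_dilation": 0.9, "vocal_pitch": 0.8, "fidgeting": 0.7}

print(assessment(risk_score(calm, weights)))      # proceed
print(assessment(risk_score(agitated, weights)))  # refer to human agent
```

Note that, consistent with Burgoon’s framing, the output is a referral for further human inquiry, not a verdict of deception.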
Burgoon is quick to defend the system’s role as a defense mechanism, claiming it’s a “mistaken perception” that AVATAR can detect lies.
“Its purpose is to provide a risk assessment and possible areas for further inquiry by human agents or interviewers,” she tells BTR. “The objective is to reduce the amount of error that occurs with agents who must manage multiple tasks at once, must try not to be distracted by all that is going on around them, who are at risk of fatigue, and also of making erroneous judgments that inconvenience innocent travelers.”
The system has already been put to use in the field, with varying results. In the fall of 2009, Nunamaker traveled to Warsaw, Poland, with a prototype and ran an experiment with a participating group of European Union border guards. Half of the guards tried to pass through the AVATAR kiosk while carrying mock bombs in their suitcases.
The AVATAR succeeded in detecting all of the mock bombers, but it also produced some false positives.
“We know deceivers are often missed. The question is how many layers of protection can be provided for people to protect them without hassling them,” says Burgoon.
The system was tested again at an airport in Bucharest, Romania, last year, and results were slightly more mixed. Some passengers took the challenge as an opportunity to try and fool the system while others were simply confused. In all, Burgoon concludes that most users found it “favorable.”
While this impressive new technology is the culmination of some 42 years of research in deception detection conducted by Nunamaker and Burgoon, other experts remain skeptical of the project.
Charles H. Honts is a former Department of Defense polygrapher and currently a psychologist at Boise State University. Despite his rigorous training in administering polygraph examinations, Honts is the first to admit the limitations of this method of lie detection.
“First of all, these machines are very expensive in terms of the amount of time needed to administer the tests–you need an hour and a half minimum just for a simple one,” he tells BTR.
“You need to ask a limited number of things that the person being tested actually knows the answers to. The test does a really good job when you ask something like, ‘did you shoot John Doe,’ but the further you get away from specific acts the less accurate it becomes. For instance, questions dealing with intentions end up becoming limitations.”
Honts is one of a small group of individuals actually qualified to run the examination. He maintains that formal training is needed to properly administer the test, despite the fact that there are no requirements to do so, and that most administrators are unqualified.
The AVATAR operates of its own accord, independent of any administrator.
“Essentially, we’d like to augment human intelligence,” reasons Burgoon. “Let the devices do the drudgery.”
Honts, however, is not convinced by the new program, and tells BTR that there is an absence of peer-reviewed articles on the system.
“They’ve already put something out in the field and haven’t even published any accounts that show how it works. This is outrageous. You do trials in the field after building a database in the laboratory.”
The AVATAR is currently part of the Trusted Traveler Program, which is testing the new kiosk on travelers passing through the Arizona-Mexico border.
While the “non-invasive” sensor technologies are certainly cutting-edge, Honts remains wary of their accuracy in determining deception. Pupil diameter can be useful as one factor; body language, however, is not strong enough evidence on its own. Across the vast literature on these kinds of movements, the accuracy rates for reading them are barely better than half.
Honts also expresses sincere doubts about the voice analyzers the AVATAR uses, saying that so far only a couple of studies have been carried out on the subject.
“You use a technique called spectral analysis, which is when you take a recording of a voice and break it down into intervals of frequency,” says Honts. “You see how much power, for instance, there is between 300 Hz and 400 Hz. There is some indication that when people lie, their voice tends to shift to a higher pitch, but the studies say that you can’t hear it without a spectral analyzer. Without one of these, BORDERS’ efforts are essentially useless.”
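The spectral analysis Honts describes can be sketched in a few lines of Python with NumPy: decompose a recording into frequency components via a Fourier transform, then sum the power that falls in a given band (say, 300 to 400 Hz). The signal below is a synthetic tone, not a real voice sample, and the band edges are just the ones Honts mentions:

```python
# Sketch of spectral band-power measurement with NumPy's FFT.
import numpy as np

def band_power(signal, sample_rate, low_hz, high_hz):
    """Return the summed spectral power between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].sum()

# Synthetic "voice": a 350 Hz tone (inside the band of interest)
# plus a weaker 150 Hz tone, sampled for one second at 8 kHz.
rate = 8000
t = np.arange(rate) / rate
voice = np.sin(2 * np.pi * 350 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)

in_band = band_power(voice, rate, 300, 400)
out_band = band_power(voice, rate, 100, 200)
print(in_band > out_band)  # the 350 Hz component dominates its band
```

A pitch shift of the kind Honts describes would show up here as power migrating from a lower band into a higher one between two recordings of the same speaker.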
Despite doubts from critics, the AVATAR program is moving ahead at full speed. Testing will continue into this year, and more field experiments will be carried out to hone the precision of the new interface.
Burgoon forecasts additional applications for the kiosk beyond mere threat assessment, citing the example of how well consumers have received self-checkout lines at grocery stores. She also discusses some prospects for the medical world, in which patients could answer initial interview questions via kiosks.
But what if certain individuals become anxious interacting with a digital interviewer rather than another human being? Burgoon says that, contrary to this notion, people (such as those tested in Romania) actually prefer the AVATAR to an officer.
As for an appropriate AI personality, the digital image is currently made to resemble a male, as BORDERS’ studies concluded that subjects found an image of a woman friendlier but less credible. It has also proven more difficult to accurately program the vocal inflections of a female AVATAR, although Burgoon says they are working to put versions of both genders into the field for testing this year.
As these bizarre new technologies become more and more integrated into our daily lives, some members of the public may grow increasingly uncomfortable interacting with such a cold, calculating interface, especially if there remains room for error.
“The government wants to buy hardware, but they don’t want to spend money on basic research concerning how to actually execute what they are trying to do,” Honts tells BTR. “Someone needs to start spending money on basic science. It’s not supposed to be this way.”