Why Facial Recognition In Schools Seems To Be A Recipe For Disaster
At 2:19pm on Wednesday, February 14, 2018, Nikolas Cruz exited an Uber near the Marjory Stoneman Douglas High School in Parkland, Florida. He had told the driver on the thirteen-minute ride that he was heading to a music lesson, and she assumed the bag he carried was his guitar. But less than two minutes after leaving the car, Cruz entered Building 12 at the school, pulled an AR-15 from his bag, and started shooting. The incident lasted six minutes. Seventeen people were killed.
Fear in US schools is real and widespread: a Pew Research Center survey this year found that 57% of US teens and 63% of their parents worried about a school shooting. This fall, the Lockport school district in New York State became one of the first to deploy a grant-funded facial recognition system with such incidents in mind, a trend likely to produce many more such deployments across the US and beyond. Some states approved budgets for increased school security spending in this week’s elections.
According to the AP, the consultant who specified the Lockport system claimed it “would have identified [Nikolas Cruz] as not being able to be in that building,” and administrators referenced its potential to thwart a ‘Parkland’ attack. The company behind the system explains on its website: “In the US, there have been well-publicized shootings at schools which resulted in the deaths and injuries to hundreds of children. Many State Governments have set aside funding for technology for a safer school environment.”
The Fear Bandwagon
Few subjects are more emotive than the safety of our kids. And few technologies are more polarizing than facial recognition. The two are currently on a collision course, with vendors of facial recognition technology eyeing the $2.7 billion spent annually on school security in the US alone. Not only will this collision fuel the debate between privacy and safety, and the issue of biometric enrollment of minors, but it risks overstating the technology’s potential to thwart mass-casualty events. With armed guards only too aware of past incidents, an expectation that a system can alert to a threat in real-time could be a recipe for disaster.
The education sector has now attracted a raft of technology companies, some well established and some new to the surveillance space. There is no consistent or concerted approach. The technology is marketed as an analytic, with schools and universities expected to populate databases (read: watch lists) from which their systems will seek to match passersby or would-be visitors. Most of these systems will be simple opt-in access control solutions. But some will attempt to populate a database with known sex offenders, excluded pupils and former staff members.
There is quite clearly a case for identifying potential rogue elements on school grounds. Such people can be approached. Data on repeat visitors can be captured, with information followed up or shared with the authorities. But there are providers implying that such systems can go beyond this: “Help prevent mass shootings,” says one vendor’s website, which claims to “instantly alert school security when unauthorized individuals enter school grounds.”
Prevention Vs Containment: Lessons From Counter-Terrorism
If you want to study the art of the possible in prevention (ideally) or containment (less ideally), look to the front line in combating such attacks. Counter-terrorism, by definition, is charged with the detection, interception and prevention of mass-casualty events. On military operations and for the protection of the most heavily secured people and locations, facial recognition might be deployed against specific watch lists to direct the efforts of security operatives. Beyond that, the technology supports operations and investigations in the field, enabling the monitoring of more people than is possible with teams alone.
Where potential adversaries are known or inferred through their connections, their activities can be monitored, facilitating preemptive action. Facial recognition can be critical in general or directed surveillance, especially in tracking networks. But facial recognition is not deployed in public spaces to intercept and challenge attacks in real-time, particularly where targets are indiscriminately selected; the best that can be achieved is containment, and this is triggered by the event itself. Exactly how a video analytic could detect and intercept an incident moments before it begins, in a real-world situation, is wholly unclear. But that hasn’t stopped the suggestion being made.
With intensive security around an event or a highly sensitive location, the perimeter can be extended and facial recognition used to trigger alerts with time to verify and intercept. These are highly focused operations. In crowded public spaces, including schools, this approach becomes unrealistic. The threat level in any single location does not justify the level of co-ordinated security or heightened awareness required. Clearly, with enough time on school grounds before an attack, a would-be assailant might be intercepted, although this is more likely to come from a guard noticing unusual behavior than from biometric matching.
Let’s assume Nikolas Cruz was enrolled in a facial recognition system, and that the stored imagery of him was of good quality. Let’s assume the system picked him up quickly, and that operators did not dismiss the alert as a potential false positive, which also assumes the overall quality of system imagery to be good, as poor data results in false positives. Let’s assume those operators did not take time to look up details on Cruz or seek secondary verification, and that armed security guards were stationed in the vicinity. And let’s assume those guards were well-trained and reacted immediately, without hesitation.
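The false-positive problem mentioned above comes down to a threshold decision inside the matching engine. A minimal sketch, using toy numbers rather than any real face-recognition model, shows the trade-off: a strict match threshold misses a genuine enrollee, while a lax one flags innocent passersby as well.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(probe, watchlist, threshold):
    """Return watchlist names whose similarity to the probe clears the threshold."""
    return [name for name, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 3-dimensional embeddings; a real system would use vectors from a
# trained model, typically hundreds of dimensions.
watchlist = {"subject_a": [0.9, 0.1, 0.2], "subject_b": [0.1, 0.8, 0.3]}
probe = [0.5, 0.5, 0.2]   # poor-quality capture: similar-ish to both entries

print(match(probe, watchlist, threshold=0.95))  # strict: no alert at all
print(match(probe, watchlist, threshold=0.60))  # lax: both entries flagged
```

With poor imagery, no threshold setting is safe: the strict run raises no alert, and the lax run flags two people, at least one of them wrongly. An operator pausing to resolve that ambiguity is exactly the delay the scenario above cannot afford.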
They had 120 seconds from Cruz exiting the Uber to firing his first shot.
An Opportunity Missed
Facial recognition, used in tandem with other data analytics, can help build an early-warning intelligence picture with prevention in mind. Law enforcement agencies at federal, state and local levels have access to watch lists that are not available to schools. A segmented system, with stove-piped data, can monitor for multiple people from different sources simultaneously, alerting the right agency to a match. For example, a school system could monitor against a state law enforcement watchlist without the school having access to the data itself. For an early-warning system to work, or for a real-time system to reduce response times, the data is the key. Realistically, this also needs to include potential threats, individuals known to law enforcement but yet to break the law.
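The segmented, stove-piped arrangement described above is essentially an alert-routing problem: each watchlist segment has an owning agency, and a match routes only to that owner. The following is an illustrative sketch under assumed names; the segment labels, agencies and identifiers are hypothetical, not drawn from any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    owner: str      # the only party that receives alerts for this list
    entries: set    # enrolled identifiers; the imagery stays with the owner

# Hypothetical segments: the school sees only its own access list, while
# the restricted state list routes past the school entirely.
SEGMENTS = {
    "school_access":   Segment(owner="school_control_room",
                               entries={"excluded_pupil_17"}),
    "state_watchlist": Segment(owner="state_law_enforcement",
                               entries={"subject_042"}),
}

def route_alert(identity):
    """Return (recipient, segment) pairs for a matched identity."""
    return [(seg.owner, name)
            for name, seg in SEGMENTS.items()
            if identity in seg.entries]

print(route_alert("subject_042"))
# -> [('state_law_enforcement', 'state_watchlist')]
```

The design point is that a hit on the restricted segment never reaches the school’s control room, which is what allows law enforcement data to be monitored against without being shared.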
An alert against an individual on a restricted list would be received by law enforcement, not by the school’s control room. It is reported that Cruz was twice reported to the authorities before his attack. With this type of system, school grounds can be monitored for individuals judged to be a potential threat. If an assailant on a watchlist does not visit the school grounds until the time of the attack, no system in the world can do more than record events as they unfold. But if there is any rehearsal or reconnaissance, any detectable patterns, then it can be flagged to the authorities for possible follow-up.
Such data can be linked with behavioral monitoring and early warning tip-offs that are far more likely to prevent attacks. Again, the lesson from counter-terrorism is that prevention comes from pulling threads in linked data, from ‘threatprints’ of behaviors that might be precursors to an attack. This also means reaching out into the school community, to friends and families, to encourage the reporting of tell-tale signs and concerns. The US Secret Service’s National Threat Assessment Center’s report on enhancing school safety, released earlier this year, concentrates on prevention and intervention through just such monitoring of behavior and patterns of risk.
Other AI technologies have also been introduced into the school sector recently, particularly around video object classification. In essence, analytics are trained to detect the shapes of guns in video streams, raising the alarm in real-time. Again, the point at which an attack can be prevented is not the moment of its execution. That said, it is unarguably useful to reduce any delays in alerting law enforcement to an attack, with accurate and reliable intelligence around numbers of assailants, types of weapons and exact locations. The same offering, without the benefit of video footage, has been available for some time through the acoustic gunshot detection systems deployed across multiple cities.
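The structure of such an analytic can be sketched in a few lines. Everything here is illustrative: `detect_objects` stands in for a trained neural detector, and the frame and camera names are invented for the example; no real model or camera API is implied.

```python
def detect_objects(frame):
    # A real analytic would run a trained detector on the video frame here.
    # This stub simply returns pre-labelled detections for demonstration.
    return frame["detections"]

def scan_stream(frames, alert):
    """Scan frames and raise an alert carrying count, weapon types and location."""
    for frame in frames:
        weapons = [d for d in detect_objects(frame) if d["label"] == "firearm"]
        if weapons:
            alert({"camera": frame["camera"],
                   "count": len(weapons),
                   "types": [d["type"] for d in weapons]})

alerts = []
scan_stream(
    [{"camera": "building_12_entrance",
      "detections": [{"label": "firearm", "type": "rifle"}]}],
    alerts.append,
)
print(alerts)
```

Note what the sketch does and does not do: it packages accurate intelligence (location, count, weapon type) for responders, but only once a weapon is already visible, which is the article’s point about containment rather than prevention.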
Perception Vs Reality
Perception often trumps reality in security. And whether or not facial recognition vendors claim that their technologies can detect and contain an incident in real-time, the context is clear. As the Washington Post pointed out earlier this year: “an expanding web of largely unknown security contractors is marketing face recognition directly to school and community-center leaders, pitching the technology as an all-seeing shield against school shootings like those at Parkland and Santa Fe.”
Invaluable lessons can and must be learned from counter-terrorism and from the protection of public spaces, critical infrastructure and transportation networks. Here the focus is always on detection and interception as the first line of defense. And this means community outreach, targeted monitoring, vigilance, and a shared responsibility for addressing the challenge of individuals seeking harm to others at scale. There is also, inevitably, more experience here in the evaluation and acquisition of cutting-edge surveillance technologies, not an everyday purchase for a school district.
The marketing of such systems currently bears the hallmarks of the hype cycle. If a sector embarks on a security upgrade program built around overstated or misplaced expectations, no one benefits. Standoff biometrics is not CCTV. It is not a trivial video analytic to be downloaded and double-clicked. If a database is populated with the inference that an alert might be a matter of life and death, then you had better be very sure about the quality of the data, the performance of the system and the training of those charged with responding.