More than 117 million Americans, representing more than half of the US adult population, are currently in a police facial recognition system, raising questions about privacy and security, according to a report from the Georgetown Law Center on Privacy & Technology.
The report, released this week, is “the most comprehensive survey to date of law enforcement face recognition and the risks that it poses to privacy, civil liberties, and civil rights,” the authors stated. Researchers found that while the FBI has one facial recognition system, state and local police departments nationwide have built their own, often unregulated, software.
The FBI has used facial recognition in investigations since at least 2011, and the technology has helped detain violent criminals, the report stated. At least 26 states allow law enforcement to perform searches in their databases of driver’s license and ID photos, researchers found.
Comparatively, FBI fingerprint and DNA databases are primarily made up of information from criminal arrests or investigations. Allowing law enforcement agencies to run facial recognition searches using driver’s license photo databases means the FBI, for the first time in history, is tapping a biometric network composed of Americans who have never been in trouble with the law, the report stated.
“Innocent people don’t belong in criminal databases,” Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law and co-author of the report, said in a press release. “By using face recognition to scan the faces on 26 states’ driver’s license and ID photos, police and the FBI have basically enrolled half of all adults in a massive virtual line-up. This has never been done for fingerprints or DNA. It’s uncharted and frankly dangerous territory.”
How does it work? Facial recognition is “the automated process of comparing two images of faces to determine whether they represent the same individual,” the report stated. An algorithm detects the person’s face, and then “normalizes” it by scaling, rotating, and aligning the face so it fits in with the others in the database, to make comparisons easier. The algorithm identifies facial features such as eye position or skin texture. It can then compare two faces and produce a numerical score indicating how well they match.
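The scoring step can be sketched in a few lines of Python. This is a simplified illustration, not any vendor’s actual algorithm: it assumes the detection and normalization steps have already reduced each face to a fixed-length feature vector, and it compares two such vectors with cosine similarity.

```python
import math

def similarity_score(features_a, features_b):
    """Compare two face feature vectors and return a similarity
    score in [-1, 1]; higher means a closer match."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors (measures of eye position, skin
# texture, etc.) produced by the detection and normalization steps
# described above.
probe = [0.8, 0.1, 0.5, 0.3]
candidate = [0.79, 0.12, 0.48, 0.33]

print(round(similarity_score(probe, candidate), 3))
```

A real system would compute many more features per face and calibrate the score against a match threshold, but the shape of the comparison is the same.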
Lack of regulation
Researchers examined 90 law enforcement agencies. Of those, 52 reported using facial recognition currently or in the past. And of those 52, only one offered evidence of auditing officers’ searches for misuse. None required warrants, and many did not require an officer to suspect a person of committing a crime before using the system to identify that person.
No states have passed any in-depth laws regulating law enforcement’s use of facial recognition systems, the report found.
“Face recognition is a powerful technology that requires strict oversight,” said Clare Garvie, a Georgetown Law Center on Privacy & Technology associate and co-author of the report, in a press release. “But those controls by and large don’t exist today. With only a few exceptions, there are no laws governing police use of the technology, no standards ensuring its accuracy, and no systems checking for bias. It’s a wild west.”
Advanced forms of facial recognition allow the government to identify multiple people continuously, from afar, using public security cameras. The report calls this Remote Biometric Identification, which does not require the government to notify or get permission from the people it chooses to track.
Police departments in Chicago, Dallas, Los Angeles, and other metro areas reported experimenting with real-time face recognition using live surveillance cameras, or an interest in doing so, the report said.
Fully automated facial recognition algorithms were first developed in the early 1990s. Police now use the technology for two purposes: face verification (confirming a person’s identity) and face identification (identifying an unknown face). An officer might use the software if they encounter a person who refuses or is unable to identify him- or herself, after an arrest, during an investigation, or while searching for a suspect in real time.
In the case of the latter, if the police are looking for an individual or a small group of people, they can upload images of those people to a “hot list.” The facial recognition program can identify faces in live video feeds from security cameras and compare them in real-time to the faces of the people on the hot list, and alert a nearby officer, the report said.
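The hot-list check described above is a one-to-many comparison against a score threshold. The sketch below illustrates the idea; the names, feature vectors, and threshold are hypothetical, and a real system would run this over every face detected in each video frame.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face feature vectors; higher means closer."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_hot_list(face_features, hot_list, threshold=0.95):
    """Compare one detected face against every hot-list entry.
    Returns the name of the best match scoring at or above the
    threshold, or None if no entry matches."""
    best_name, best_score = None, threshold
    for name, listed_features in hot_list.items():
        score = cosine_similarity(face_features, listed_features)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical hot list: names of sought individuals mapped to
# stored face feature vectors.
hot_list = {
    "suspect_a": [0.9, 0.2, 0.4],
    "suspect_b": [0.1, 0.8, 0.6],
}

# A face detected in a live camera feed.
detected = [0.88, 0.21, 0.41]
match = check_hot_list(detected, hot_list)
if match:
    print(f"Alert nearby officer: possible match for {match}")
```

As the report notes, in practice video quality and camera angles degrade these scores, which is why a threshold too low floods officers with false alerts and one too high misses real matches.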
The report cites several researchers and industry experts who agreed that while real-time face recognition is technologically possible, “computational limitations, video quality, and poor camera angles constrain its effectiveness and sharply limit its accuracy.”
Police use of facial recognition technology will likely have its largest impact on African Americans, the report stated. “A 2012 study, co-authored by an FBI expert, found that face recognition is less accurate on African Americans, women and young people,” according to a press release. “African Americans are also likely overenrolled in mug shot-based systems as a result of racial disparities in arrest rates. Yet the report reveals that police advertise the technology as being blind to race—and that two major face recognition companies do not test for bias.”
The same day the report was released, the American Civil Liberties Union, the Leadership Conference for Civil and Human Rights, and other civil rights groups wrote a letter asking the US Department of Justice’s Civil Rights Division to investigate police use of facial recognition technology for potential racial bias.
The 3 big takeaways for TechRepublic readers
- More than half of all American adults are now in a police facial recognition system, as 26 states allow law enforcement to use the technology to search their databases of driver’s license and ID photos, according to a report from the Georgetown Law Center on Privacy & Technology.
- This is the first time that law enforcement agencies have created biometric databases with information primarily from law-abiding citizens, raising privacy concerns and questions of racial bias in the system.
- A number of civil rights groups have asked the US Department of Justice to investigate police use of this tech for potential racial bias.