In 2019, the Staten Island District Attorney’s Office quietly purchased software from Clearview AI, a controversial facial recognition company. Unlike traditional facial recognition tools, which draw on government databases of mugshots or driver’s licenses, Clearview’s expansive program allows users to search for potential face matches from over three billion photos scraped from sites such as Facebook, LinkedIn, Venmo, and YouTube. Users upload the image of a person of interest, and the program returns photos deemed to look similar with links to the corresponding websites.
According to Freedom of Information Law documents obtained by the Legal Aid Society, Staten Island prosecutors agreed to a one-year, $10,000 contract for Clearview AI’s facial recognition software in May 2019. The records also show that the DA’s office created protocols designed to ensure documentation and subsequent review of individual facial recognition searches.
Listen to George Joseph's report on WNYC:
In an email, Hoan Ton-That, CEO of Clearview AI, said his company is “honored” to provide “state-of-the-art” technology to the DA’s office.
“It is used by law enforcement agencies nationwide for after-the-crime investigations only. It is not a surveillance tool,” Ton-That said. “The specific usage in each case is determined by the highly trained law enforcement professionals who use it, including those at the Staten Island DA’s office.”
But Diane Akerman, a staff attorney with the Legal Aid Society’s Digital Forensics Unit, raised concerns about putting such powerful tools in the hands of law enforcement.
“Facial recognition technology, especially Clearview AI, poses a direct threat to New Yorkers’ basic privacy and civil rights,” Akerman said. “Use of the technology threatens to increase surveillance of historically overpoliced communities—communities of color, Black and Brown communities, and activists—who have long disproportionately shouldered the harmful effects of surveillance by law enforcement.”
Across the country, privacy advocates have criticized police departments’ covert use of Clearview AI, arguing that the software has dramatically expanded the surveillance net without oversight or independent evaluations of accuracy. But its use by prosecutors has drawn far less scrutiny.
Of its 2,400 law enforcement clients nationwide, Clearview AI claims more than one hundred are prosecutors’ offices. According to the Staten Island District Attorney documents, the program’s uses could include identifying perpetrators and victims, exonerating the innocent, and developing leads in cold cases.
In New York City, the DA’s acquisition of the technology appears to be an outlier. In response to Gothamist’s inquiries, representatives for the District Attorneys of the Bronx, Brooklyn, Manhattan and Queens confirmed that their offices do not have Clearview AI’s software.
Akerman, the public defender from Legal Aid, said that Staten Island residents should worry about being wrongfully charged because of Clearview AI’s program. “If their photos are being included in this database and if their own prosecutor’s office is using it, they can easily be dragged into a criminal prosecution based on a misidentification from this kind of technology that has absolutely no oversight,” Akerman said.
Akerman argued this danger is higher for Black residents, who are far more likely to be misidentified by facial recognition technology than white residents.
Mark Fonte, a defense attorney and former Staten Island prosecutor, said the technology could be abused but added that he is confident Staten Island District Attorney Michael McMahon would not rush to bring charges based on facial recognition matches alone.
“Mr. McMahon is extremely professional and respectful of people’s privacy,” Fonte said. But, he added, McMahon “won’t be in office forever.”
Brad Hoylman, a Manhattan state senator, said the Staten Island DA’s secretive acquisition shows why the legislature needs to come up with a regulatory framework for facial recognition. “We can’t have lone wolves going off and using potent tech without the approval or understanding of elected officials, experts in criminal justice, and New Yorkers,” he said.
Senator Hoylman has introduced a bill that would halt law enforcement’s use of facial recognition and other biometric technologies for several years until a task force could develop guidelines for their use. Other jurisdictions nationwide, including Portland, San Francisco, and Boston, have banned the use of facial recognition by law enforcement outright. In New Jersey, the state’s Attorney General has ordered all local police departments to stop using Clearview AI’s program specifically.
Ton-That, the Clearview AI CEO, said that having “non-biased technology” was important to him “as a person of mixed race” and that his company’s program could actually cut down on wrongful detentions.
“For example, it's much preferable to have law enforcement accurately identify someone, as opposed to looking for a general description, where wrongful detention, apprehension, and arrests are more likely,” he said in an email. “To date, we know of no instance where Clearview AI's technology has resulted in a wrongful arrest.”
Ton-That also referred to “an independent study” that “indicated that Clearview AI has no racial bias.” The study was commissioned by the company itself. The CEO said his company also plans to have its program tested by the National Institute of Standards and Technology, a federal laboratory that has tested the accuracy of other facial recognition programs.
In a statement, the Legal Aid Society called on McMahon to disclose how his office used Clearview’s program, and to cease any current use of facial surveillance technology.