Clearview AI, the controversial facial-recognition startup that scans people’s photos without permission, agrees to limits to settle lawsuit
Clearview AI is a controversial facial-recognition app used by more than 600 law enforcement agencies, including the FBI and the Department of Homeland Security, that collects photos from social media sites for inclusion in a massive facial-recognition database. Today, Clearview is a surveillance tool of choice for governments around the world.
In March, Ukraine began using the tool after CEO Hoan Ton-That offered it to the country. Ukraine's defense ministry confirmed reports that it is using Clearview AI's facial-recognition technology to identify Russian assailants, combat misinformation, and identify the dead.
Despite bans on Clearview technology in cities such as San Francisco, the company continues to scrape billions of publicly available photos of faces from social media and other websites to build out its biometric database. Clearview then sells law enforcement agencies and private companies access to that database through a proprietary search engine. The five-year-old company has since faced a series of lawsuits.
In a win for privacy advocacy groups, Clearview on Monday settled a lawsuit brought by the American Civil Liberties Union. As part of the settlement, the Peter Thiel-backed facial-recognition startup agreed to "limit its face database in the United States primarily to government agencies and not allow most American companies to have access to it," according to a report from The New York Times.
The startup also agreed to restrictions on how businesses can use its database of billions of facial images. As part of the settlement filed on Monday in an Illinois state court in Chicago, Clearview AI will stop granting paid or free access to its database to most private businesses and individuals.
The use of Clearview's technology has raised concerns among privacy advocacy groups, with critics saying it violates people's privacy. Clearview AI did not admit liability, negligence, or fault in agreeing to settle, and the settlement requires court approval.
In a statement, Nathan Freed Wessler, deputy director of the ACLU Speech, Privacy and Technology Project, said the settlement “demonstrates that strong privacy laws can provide real protections against abuse.”
Clearview first made headlines in January 2020, when The New York Times ran a story titled "The Secretive Company That Might End Privacy as We Know It."
Founded in 2017 by Hoan Ton-That, an Australian techie and onetime model, Clearview describes its product as a research tool used by law enforcement agencies to identify perpetrators and victims of crimes. The company says its technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists, and sex traffickers, and has also been used to help exonerate the innocent and identify victims of crimes including child sex abuse and financial fraud. With Clearview AI, the company claims, law enforcement can catch the most dangerous criminals, solve the toughest cold cases, and make communities safer, especially for the most vulnerable among us.
Ton-That, 33, grew up in Australia and moved to the U.S. at 19. He worked in app development and as a part-time model before founding Clearview AI in 2017. "There's a lot of crimes and cases that are being solved," Ton-That told The New York Times. "We really believe that this technology can make the world a lot safer."