This startup’s facial-recognition app lets strangers invade your privacy—and the FBI uses it
Think about this for a moment: what if a stranger could snap a quick picture of you in a public place (which is perfectly legal) and use an app to quickly find your name, address and other details? A small startup called Clearview AI has made that possible. To find a match, the Clearview app draws on a database of more than three billion images scraped from social media sites and even apps like Venmo. Like plenty of other well-intentioned technology, Clearview’s app is a privacy disaster waiting to happen.
Clearview was unknown to most people until The New York Times ran a story over the weekend titled “The Secretive Company That Might End Privacy as We Know It.” According to the report, more than 600 law enforcement agencies, including the FBI, are already using its facial recognition technology, despite bans on such technology in cities like San Francisco. The report has already raised alarm among privacy advocates. As the Times put it: “If a picture of you exists somewhere online, and you participate in a protest or a rally, then it’s plausible law enforcement could upload a picture of you at the rally, run it through the Clearview system and easily find out who you are.”
The Clearview app is so dangerous that even Google said it wouldn’t build something like it. Speaking at the All Things Digital conference in 2011, then-Google chairman Eric Schmidt said the company had decided not to implement facial recognition technology because of privacy concerns, calling it something that could be used in a “very bad way as well as a very good way.”
Other organizations are also voicing their opposition to the app. The digital-rights group Fight for the Future tweeted on January 18, 2020: “We can’t fix this with gimmicky jewelry or sunglasses we’re supposed to wear when we leave our homes. We can’t fix it with industry-friendly regulations. We need to meet surveillance capitalism head on. We need an outright ban on AI-powered surveillance.”
However, Clearview sees things differently. In a mission statement on its website, the company describes its app as “a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes.” The startup claims its “technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud. Using Clearview, law enforcement is able to catch the most dangerous criminals, solve the toughest cold cases and make communities safer, especially the most vulnerable among us.”
So who’s behind this secretive startup? Clearview was founded by Hoan Ton-That, a 31-year-old Australian techie and onetime model. Ton-That grew up in Australia and moved to the US at 19. He worked in app development and as a part-time model before founding Clearview AI four years ago. “There’s a lot of crimes and cases that are being solved,” Ton-That told The New York Times. “We really believe that this technology can make the world a lot safer.”
US law enforcement agencies are not the only users of Clearview’s technology; police departments in Australia are piloting it as well. “We have a few customers in Australia who are piloting the tool, especially around child exploitation cases,” Ton-That said.