The company has yet to divulge how the security breach occurred or the name of the person responsible. Clearview AI claims that its own servers and other internal networks remain secure, that it has fixed the vulnerability that caused the breach, and that the intruder was not able to obtain the search histories of law enforcement employees.
"Security is Clearview's top priority," said Tor Ekeland, an attorney working for the company, in a statement. "Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security."
David Forscey, managing director of Aspen Cybersecurity Group, is concerned about the breach. "If you're a law enforcement agent, it's a big deal," he said, "because you depend on Clearview as a service provider to have good security, and it seems like they don't." The Aspen Cybersecurity Group is an arm of the Cyber & Technology Program of the Aspen Institute, a nonprofit think tank.
Clearview AI's services are used by at least 600 law enforcement agencies nationwide. This includes the Chicago Police Department, the FBI and even the Department of Homeland Security.
According to The New York Times, Clearview AI scraped over three billion images from the internet, including from Facebook, Twitter and YouTube, violating the terms of service of many websites. However, the company has claimed that it has a "First Amendment right to collect public photos" and has compared its practices to Google's search engine.
These images form the basis of the facial recognition database that the company's clients use, and the tool has helped law enforcement agencies identify criminals. Federal and state law enforcement officers have stated that they've used Clearview to solve a variety of crimes, including credit card fraud, identity theft, shoplifting and even murder and child sexual abuse cases.
All a law enforcement officer has to do is use the Clearview AI app on their phone to upload a picture of someone they want to identify. Clearview's proprietary AI then scans the company's database of images to try to match the photo to a name.
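Under the hood, this kind of lookup is typically an embedding comparison: each indexed photo is converted into a numeric "faceprint," the uploaded photo is converted the same way, and the closest faceprint in the database is returned as the probable identity. The short Python sketch below illustrates that general technique using the open-source face_recognition library; it is not Clearview's code, and the gallery, file names and threshold shown are hypothetical placeholders.

```python
# Illustrative sketch of embedding-based face matching. This is a generic
# example using the open-source face_recognition library, not Clearview's
# proprietary system; the gallery and file names are hypothetical.
import face_recognition
import numpy as np

# Hypothetical "gallery" of previously indexed photos mapped to names.
gallery = {
    "alice.jpg": "Alice Example",
    "bob.jpg": "Bob Example",
}

# Encode each gallery photo into a 128-dimensional face embedding.
known_encodings, known_names = [], []
for path, name in gallery.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        known_encodings.append(encodings[0])
        known_names.append(name)

# Encode the uploaded query photo the same way (assumes one face is found).
query_image = face_recognition.load_image_file("uploaded_query.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Compare the query embedding against every gallery embedding and report
# the closest match if it falls under a distance threshold.
distances = face_recognition.face_distance(known_encodings, query_encoding)
best = int(np.argmin(distances))
if distances[best] < 0.6:  # 0.6 is the library's conventional cutoff
    print(f"Probable match: {known_names[best]} (distance {distances[best]:.2f})")
else:
    print("No match found in the gallery")
```

A production system operating on billions of images would not brute-force every comparison as this sketch does; it would use an approximate nearest-neighbor index to keep lookups fast, but the underlying idea of matching face embeddings is the same.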
A 2016 report from the Center on Privacy and Technology at Georgetown University Law Center found that more than half of adults in the United States, over 117 million people, have had their faces captured in a "virtual, perpetual lineup" of facial recognition databases. Even without Clearview AI's services, dozens upon dozens of law enforcement agencies already have "real-time face recognition technology" that allows them to scan the faces of people captured by city surveillance cameras.
Authorities must stop and have a proper conversation about the extent to which facial recognition companies like Clearview AI should be allowed to breach personal privacy in this manner. Earlier this year, New Jersey took a first step when the state's Attorney General, Gurbir S. Grewal, barred law enforcement in all 21 of the state's counties from using Clearview AI's services.
Grewal's announcement came after Clearview published a promotional video that featured the images of Grewal and two state troopers, as well as footage that, according to the Attorney General, may reveal too much about law enforcement investigative techniques. He also sent a cease-and-desist letter to Clearview, ordering the company to take down the video.
Even Twitter has sent the company a cease-and-desist letter. According to a Twitter spokesperson, the letter includes a demand for Clearview to "delete all data" and "return or destroy any Twitter material" that the facial recognition company has shared with its clients.
"I'm not categorically opposed to using any of these types of tools or technologies that make it easier for us to solve crimes, and to catch child predators or other dangerous criminals," said Grewal. "But we need to have a full understanding of what is happening here and ensure there are appropriate safeguards."
Sources include: