
Massive Data Acquisition: Clearview AI's Covert $750,000 Endeavor to Obtain Your Mugshot Images

Clearview AI's covert $750,000 strategy: acquiring 390 million mugshots to quietly expand its contentious facial recognition technology.

Artificial Intelligence Identification of Visages

Spillin' the Beans on Facial Recognition Firm Clearview AI's Shenanigans

Controversial tech company Clearview AI aimed to ramp up its monitoring game by trying to nab vast amounts of arrest records – including social security numbers, mugshots, and other personal data – according to a report by 404 Media. The company, infamous for amassing a staggering 50 billion facial images plundered from social media, penned a deal with Investigative Consultant, Inc. in mid-2019 to snag approximately 690 million arrest records and 390 million arrest photos across all 50 U.S. states.

"The contract shows they were gunning for social security numbers, email addresses, home addresses, and other personal info alongside the mugshots," said Jeramie Scott, Senior Counsel at the Electronic Privacy Information Center, or EPIC.

The data-grab deal ended up in a legal dust-up: Clearview dropped $750,000 for an initial data delivery but later called the data useless, sparking mutual breach-of-contract claims. An arbitrator ruled in Clearview's favor in December 2023, though the tech titan has yet to reap its investment rewards and now seeks a court order to enforce the arbitration award.

Surveillance Skirmishes & Privacy Perils

Privacy hawks cry foul over the expansion of facial recognition technology into criminal justice databases. Scott pointed out that linking individuals to mugshots and related information could breed bias among users of the technology. "This is a concerning development considering that Black and brown folks are overrepresented in the criminal justice system," Scott asserted.

Facial recognition tech isn't exactly known for its stellar performance when identifying individuals with darker skin tones, and the consequences can be dire. Numerous American cases have seen innocent people wrongfully arrested on the strength of flawed identifications produced by facial recognition systems.
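To make that failure mode concrete, here is a minimal, hypothetical sketch of how most facial recognition pipelines decide a "match": each face is reduced to a numeric embedding, and two faces are declared the same person when the similarity between embeddings crosses an operator-chosen threshold. The vectors and threshold below are invented for illustration and are not Clearview's algorithm; the point is that a single similarity score, not a human identification, is what typically triggers a lead.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings; real systems use hundreds of dimensions.
probe = np.array([0.12, 0.87, 0.33, 0.41])      # face cropped from surveillance footage
candidate = np.array([0.15, 0.82, 0.30, 0.47])  # face from a mugshot database

MATCH_THRESHOLD = 0.90  # an operator-chosen cutoff, not ground truth

score = cosine_similarity(probe, candidate)
print(f"similarity = {score:.3f}, match = {score >= MATCH_THRESHOLD}")
# A score above the threshold is only a statistical lead: two different people
# can still exceed it (a false positive), and error rates are not uniform across
# demographic groups -- which is how a "match" can point at an innocent person.
```

The threshold is a policy choice, not a fact about the world: lower it and you generate more leads and more false positives; raise it and you miss genuine matches. Nothing in the score itself says which kind of error you are looking at.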

As a digital sleuth, I've witnessed firsthand the fallibility of facial recognition tech during a criminal defense case. Authorities pinned a rental truck felony on the defendant, basing their entire case on a single facial recognition match from surveillance footage. My investigation unearthed incontrovertible evidence of innocence, with cell phone data placing the suspect miles away from the crime scene during the crucial timeframe. The technology allegedly responsible for setting the arrest in motion had spectacularly failed to identify the defendant correctly.

This wasn't merely a tech glitch but a life-altering ordeal for someone facing severe criminal charges thanks to dicey algorithms. Troublingly, investigators accepted the facial recognition result without even the slightest flirtation with basic corroborating evidence that would have immediately cleared the defendant.
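As one illustration of what basic corroboration can look like, the sketch below (all records, field names, and distances are hypothetical, invented for this example) cross-checks a facial recognition hit against independent location data and flags the kind of contradiction that, in the case above, would have cleared the defendant immediately.

```python
from datetime import datetime, timedelta

# Hypothetical records -- names, times, and distances are invented for illustration.
fr_hit = {"person": "defendant", "location": "crime_scene",
          "time": datetime(2023, 5, 1, 14, 5)}
phone_pings = [
    {"time": datetime(2023, 5, 1, 13, 50), "miles_from_scene": 24.0},
    {"time": datetime(2023, 5, 1, 14, 10), "miles_from_scene": 23.5},
]

def contradicted(hit, pings, window=timedelta(minutes=30), max_plausible_miles=5.0):
    """Flag the hit if independent location data puts the person far from the scene
    at roughly the same time the facial recognition system claims they were there."""
    return any(
        abs(p["time"] - hit["time"]) <= window
        and p["miles_from_scene"] > max_plausible_miles
        for p in pings
    )

print(contradicted(fr_hit, phone_pings))  # True: the phone was ~24 miles away at the time
```

The check is trivial, which is the point: a few minutes spent against records investigators already had access to would have undercut the algorithmic lead before charges were ever filed.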

Cases like this underscore the precarious reliance on surveillance technologies inside our justice system. As companies like Clearview seek to amass even more sensitive personal data, there's a real risk of amplifying these failings at a scale that could ensnare innocent people.

Chock-Full of Challenges

Clearview AI finds itself battling a barrage of legal obstacles worldwide. The company scored a win against a £7.5 million fine from the UK's Information Commissioner's Office (ICO) by arguing it fell outside UK jurisdiction. Still, this was merely one skirmish in a broader legal brawl.

Authorities worldwide have levied substantial fines against Clearview for privacy violations, and the company recently received final approval for a settlement obliging it to relinquish nearly a quarter of its ownership over alleged violations of biometric privacy laws.

Face Off: The Facial Recognition Landscape

Clearview AI's MO revolves around selling access to its facial recognition tech primarily to law enforcement agencies. The company boasts that its tech has helped crack a smorgasbord of cases, from homicides to sophisticated financial scams.

While competitors like NEC and Idemia have honed their market presence via traditional business development channels, Clearview stands out, inviting extra scrutiny thanks to its unorthodox method of gathering billions of images from social media platforms sans consent.

The revelation of Clearview's attempted acquisition of sensitive personal data surfaces as the facial recognition industry grapples with the weight of mounting pressure for regulation and transparency. As this potent technology worms its way into law enforcement and private security operations, contentious issues surrounding privacy, consent, and algorithmic bias continue to fuel public discourse.

Note: The case examples described are based on real events, but names, dates, locations, and specific details have been altered to protect client confidentiality while preserving the essential legal principles and investigative techniques. 404 Media reports that "ICI and Clearview did not respond to multiple requests for comment." I have also requested comment and will update this article if and when I receive a response.

Enrichment Data:

  • Legal Challenges
      • Privacy Violations: Clearview AI has been found to have violated privacy laws in multiple jurisdictions, including Canada, by collecting facial recognition data without proper consent. This has led to legal action and orders to halt operations in specific regions.
      • Regulatory Investigations: Investigations have been carried out by privacy commissioners and other regulatory bodies concerning Clearview AI's activities, focusing on issues such as consent and the appropriate use of personal data.
      • Litigation & Settlements: Clearview AI has been part of legal settlements, like in Illinois, where the company agreed to make changes to its data collection practices and remove certain data.
  • Regulatory Responses
      • Canadian Jurisdictions: Orders have been issued in British Columbia, Quebec, and Alberta requiring Clearview AI to cease collecting personal data from these regions and to delete improperly collected data.
      • International Scrutiny: Increasing international scrutiny focuses on the privacy and ethical implications of facial recognition technology, including concerns about consent, data privacy, and potential biases in facial recognition systems.
      • Legislative Considerations: Various regions are considering legislation to further regulate the use of facial recognition technology by law enforcement and private entities, reflecting broader concerns about privacy and surveillance.
  • Ethical & Privacy Concerns
      • Bias and Misidentification: Clearview AI's technology, like other facial recognition systems, faces criticism for potential biases and misidentification errors, particularly towards individuals with darker skin tones.
      • Surveillance Expansion: The expansion of surveillance technologies, including facial recognition, raises concerns about mission creep, where tools initially designed for one purpose could be inappropriately applied, negatively impacting civil liberties.

Clearview AI's attempt to acquire hundreds of millions of arrest records poses risks beyond sheer scale: linking individuals to mugshots and related information could further exacerbate bias among users, particularly given how overrepresented Black and brown people are in the criminal justice system. The company's unorthodox data collection methods, such as amassing billions of facial images from social media platforms, have already raised concerns about privacy violations.

The digital sleuth's investigation uncovered a case where a facial recognition false positive led to life-altering consequences for an innocent defendant. As Clearview battles legal obstacles worldwide, the technology's inconsistent performance, particularly for individuals with darker skin tones, continues to fuel public discourse surrounding privacy, consent, and algorithmic bias.

The expansion of facial recognition technology in criminal justice databases, as demonstrated by Clearview AI's deal with Investigative Consultant, Inc., poses significant risks to the privacy of millions. As companies like Clearview seek to amass even more sensitive personal data, there's a real danger of amplifying these failings, potentially ensnaring innocent people in the process.
