Google labels parents as sex offenders for photos of own naked toddlers – NYT
Google’s algorithm tags parents as child abusers for photos of their kids and can destroy lives, the New York Times found
Google’s stepped-up crusade against child sexual exploitation on its platforms risks destroying the lives of innocent parents and of the very children it purports to save, leaving their reputations, their freedom, and perhaps even the survival of their families in the hands of an overzealous algorithm, a New York Times investigation published on Sunday has revealed.
Two completely unconnected fathers of young children experienced nearly identical Kafkaesque nightmares, labeled as child molesters for nothing more sinister than trying to get their toddlers medical help. San Francisco dad Mark and Houston dad Cassio thought nothing of sending photos of their sons’ swollen genitals at the request of their pediatricians, a practice that has become entirely normal in the post-pandemic era.
Thinking nothing further of it as his son recovered, Mark was rudely awakened by his Google Fi phone informing him that “harmful content” – potentially illegal – had been discovered. His attempt to appeal the decision elicited a rejection and no further communication from Google. Mark lost access to his phone, email, contacts, and everything else handled by Google, one of the largest corporations in the world. All his data, including photos and videos, was locked up in the cloud.
Google found another video that set off its alarms while scanning these files. While the technology for detecting photos of abused and exploited children initially relied on a database of known images of sexual exploitation, Google’s relatively recent contribution to the crusade is an AI tool that claims to be able to recognize never-before-seen images of child exploitation based on their similarity to existing images.
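The Times report does not describe either detection method in technical detail, but the distinction it draws can be sketched roughly: the older approach compares a fingerprint (hash) of each uploaded file against a database of known abuse imagery, while the newer approach asks a trained classifier to score images it has never seen before. The Python sketch below is purely illustrative; the hash function, the constant classifier score, and the threshold are placeholders, not Google’s actual systems.

import hashlib

# Placeholder database of fingerprints of previously identified images.
# Real systems use perceptual hashes, not a cryptographic hash like SHA-256.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_database(image_bytes):
    # Older approach: flag only exact matches against already-vetted material.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def classifier_score(image_bytes):
    # Newer approach (stand-in): a trained model scores how similar an unseen
    # image looks to the material it was trained on. A constant stands in here.
    return 0.97

def is_flagged(image_bytes, threshold=0.9):
    # A file can be flagged by a database hit (known material) or by the
    # classifier alone crossing a confidence threshold (never-seen material).
    return matches_known_database(image_bytes) or classifier_score(image_bytes) >= threshold

photo = b"not a real image"   # e.g. a parent's innocuous medical photo
print(is_flagged(photo))      # True: the classifier alone is enough to trigger a flag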
Rather than bringing in a human moderator to verify that a photo is indeed abusive before moving forward, Google’s process once it thinks it has spotted such an image is to lock down the account, scan every other image stored in it, and report the user to the National Center for Missing and Exploited Children, which prioritizes what it believes are new victims. Meanwhile, the wrongly flagged images are added to the database of exploited children, so even more innocuous images of children like Mark’s and Cassio’s risk setting off red flags.
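That sequence of automated steps, and the feedback loop it creates, can be summarized in the same illustrative terms; every name below is a hypothetical stand-in for an internal process the article describes only in outline, not Google’s actual code.

import hashlib
from dataclasses import dataclass, field

@dataclass
class Account:
    owner: str
    photos: list
    locked: bool = False

@dataclass
class HashDatabase:
    # Fingerprints of material already judged (rightly or wrongly) to be abusive.
    known: set = field(default_factory=set)

def fingerprint(photo):
    return hashlib.sha256(photo).hexdigest()  # stand-in for a perceptual hash

def report_to_ncmec(owner, files):
    # Stand-in for the report to the National Center for Missing and Exploited Children.
    print(f"Report filed on {owner}: {len(files)} files forwarded.")

def handle_flag(account, flagged_photo, db):
    # Hypothetical outline of the automated response: no human review occurs first.
    account.locked = True                            # 1. lock the entire account
    report_to_ncmec(account.owner, account.photos)   # 2. rescan and report everything stored
    db.known.add(fingerprint(flagged_photo))         # 3. add the flagged image to the database,
                                                     #    so similar innocuous photos now match too

db = HashDatabase()
mark = Account(owner="Mark", photos=[b"medical photo", b"family photo"])
handle_flag(mark, b"medical photo", db)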
San Francisco police, who had opened an investigation after Google flagged Mark’s video, secured a copy of literally everything in his Google accounts, from his internet searches and location history to the messages and documents he had sent and received and the photos and videos stored in the cloud. Equipped with the same data Google had used to declare him a child molester, they instead determined that no crime had been committed and closed the case.
Google was not so understanding. Even when Mark asked the lead detective on his case to intercede on his behalf, the officer said there was nothing he could do. Mark still cannot access his account.
Cassio’s case unfolded in almost exactly the same way, with Houston police dropping their investigation once he produced the communications from the pediatrician. Google still will not give back his data.
Despite the police exonerating both parents, Google stood by its decision to flag them as child molesters and to withhold all their data. “Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms,” the company said in a statement, according to the New York Times. It is not known how many Child Protective Services cases have been opened on the basis of such “mistakes,” nor how many have risked ending with a child removed from the home or a parent arrested, as even those wrongfully accused of child abuse generally keep silent about it.