Google has refused to reinstate the account of a man whose medical photos of his son’s groin were falsely flagged as child sexual abuse material (CSAM), the New York Times first reported. Experts say the case is an inevitable pitfall of trying to impose a technological solution on a societal problem.
Experts have long warned of the limitations of automated child sexual abuse image detection systems, especially as companies face regulatory and public pressure to help address the presence of sexual exploitation material.
“These companies have access to an extremely invasive amount of data about people’s lives. They still don’t have the context for what people’s lives really are,” said Daniel Kahn Gillmor, a senior staff technologist at the ACLU. “There are all kinds of things where the reality of your life simply isn’t legible to these information giants.” He added that the use of these systems by tech companies acting as “proxies” for law enforcement puts people at risk of being “swept away” by “state power”.
The man, identified only as Mark by the New York Times, took pictures of his son’s groin to send to a doctor after noticing it was inflamed. The doctor used the images to diagnose Mark’s son and prescribe antibiotics. When the photos were automatically uploaded to the cloud, Google’s system flagged them as CSAM. Two days later, Mark’s Gmail and other Google accounts, including Google Fi, which provided his phone service, were disabled over “harmful content” that “severely violates company policies and may be illegal”, the Times reported, citing a message on his phone. He later learned that Google had flagged another video on his phone and that the San Francisco police department had opened an investigation into him.
Mark has been cleared of any crime, but Google has said it will stand by its decision.
“We comply with US law when defining what constitutes CSAM, and we use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms,” said Google spokesperson Christa Muldoon.
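To make the “hash matching” Muldoon mentions concrete, here is a minimal illustrative sketch in Python. It assumes a hypothetical set of known hashes (the `KNOWN_CSAM_HASHES` name and the placeholder entry are invented for illustration); real systems such as Google’s rely on perceptual hashing and machine-learning classifiers rather than the plain cryptographic hash used here, and their actual implementations are not public.

```python
import hashlib

# Hypothetical set of hashes of previously identified abuse imagery,
# of the kind maintained by child-safety organizations. The entry below
# is a placeholder, not a real hash. Real deployments typically use
# perceptual hashes that tolerate resizing and re-encoding, unlike
# the exact-match cryptographic hash shown here.
KNOWN_CSAM_HASHES = {
    "placeholder_hash_value",
}


def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's SHA-256 digest is in the known-hash set.

    An exact cryptographic hash only catches byte-for-byte copies of
    already-known images; novel images (like Mark's medical photos)
    must instead be judged by a classifier, which is where false
    positives can arise.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_CSAM_HASHES
```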
Muldoon added that Google staffers who review CSAM are trained by medical experts to look for rashes or other issues. The staffers themselves, however, are not medical experts, and medical experts are not consulted when reviewing each case.
According to Gillmor, that is just one way these systems can cause harm. To address the limitations algorithms have in distinguishing harmful sexual abuse images from medical images, for example, companies often keep a human in the loop. But those reviewers’ expertise is inherently limited, and getting the proper context for each case would require even greater access to user data. Gillmor said that would be a far more intrusive process and could still be an ineffective method of detecting CSAM.
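The human-in-the-loop arrangement Gillmor describes can be pictured as a simple triage step between an automated classifier and human reviewers. The sketch below is a hypothetical illustration only: the thresholds, field names, and routing labels are assumptions, not Google’s actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical score cutoffs; real systems' scores and thresholds are not public.
AUTO_ESCALATE_THRESHOLD = 0.99
HUMAN_REVIEW_THRESHOLD = 0.80


@dataclass
class Upload:
    user_id: str
    classifier_score: float  # output of an ML classifier, 0.0 to 1.0


def triage(upload: Upload) -> str:
    """Route an upload based on its classifier score.

    Anything falling between the two thresholds is sent to a human
    reviewer, who must make a call without the medical or personal
    context of the person who took the photo -- the limitation
    Gillmor points to.
    """
    if upload.classifier_score >= AUTO_ESCALATE_THRESHOLD:
        return "escalate"
    if upload.classifier_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"
```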
“These systems can cause real problems for people,” he said. “And it’s not just that I don’t think these systems can catch every case of child abuse; it’s that they have really dire consequences for people in terms of false positives. People’s lives can really be turned upside down by the machinery and by the humans in the loop simply making a bad decision because they have no reason to try to fix it.”
Gillmor argued that technology is not the solution to this problem. In fact, he said, it could introduce several new problems, including creating a robust surveillance system that could disproportionately harm those at the margins.
“There is a dream of a sort of techno-solutionism, [where people say], ‘Oh, you know, there’s an app for finding a cheap lunch, why can’t there be an app for finding a solution to a thorny social problem like child sexual abuse?’” he said. “Well, you know, those problems might not be solvable by the same kinds of technology or skill set.”