A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.
Kashmir Hill
Mark with his son this month.
Credit...Aaron Wojack for The New York Times
Google has an automated tool to detect abusive images of children. But the system can get it wrong, and the consequences are serious.
Aug. 21, 2022
Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.
It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.
Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.
With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.
Because technology companies routinely capture so much data, they have been pressured to act as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of sexual abuse imagery. But it can entail peering into private archives, such as digital photo albums — an intrusion users may not expect — that has cast innocent behavior in a sinister light in at least two cases The Times has unearthed.
Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases "canaries in this particular coal mine."
“There could be tens, hundreds, thousands more of these,” he said.
Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.
“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”
The police agreed. Google did not.
‘A Severe Violation’
After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.
Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”
Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.
In an unusual twist, Mark had worked as a software engineer on a large technology company’s automated tool for taking down video content flagged by users as problematic. He knew such systems often have a human in the loop to ensure that computers don’t make a mistake, and he assumed his case would be cleared up as soon as it reached that person.
Mark, a software engineer who is currently a stay-at-home dad, assumed he would get his account back once he explained what happened. He didn’t.
Credit...Aaron Wojack for The New York Times
He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life; his Google Fi account was also shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.
“The more eggs you have in one basket, the more likely the basket is to break,” he said.
In a statement, Google said, “Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms.”
A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.
Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.
How Google Flags Images
The day after Mark’s troubles started, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “intimal parts,” wrote his father in an online post that I stumbled upon while reporting out Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android to take photos, which were backed up automatically to Google Photos. He then sent them to his wife via Google’s chat service.
Cassio was in the middle of buying a house, and signing countless digital documents, when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.
“It was a headache,” Cassio said.
Images of children being exploited or sexually abused are flagged by technology giants millions of times each year. In 2021, Google alone filed over 600,000 reports of child abuse material and disabled the accounts of over 270,000 users as a result. Mark’s and Cassio’s experiences were drops in a big bucket.
The tech industry’s first tool to seriously disrupt the vast online exchange of so-called child pornography was PhotoDNA, a database of known images of abuse, converted into unique digital codes, or hashes; it could be used to quickly comb through large numbers of images to detect a match even if a photo had been altered in small ways. After Microsoft released PhotoDNA in 2009, Facebook and other tech companies used it to root out users circulating illegal and harmful imagery.
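PhotoDNA’s actual algorithm is proprietary, but the general idea of robust image hashing it relies on can be illustrated with a toy "average hash": an image is reduced to a bit string, and two images are treated as a match when their hashes differ in only a few bits, so small edits don’t defeat the comparison. The tiny "images," the hash function and the match threshold below are all invented for illustration and bear no relation to PhotoDNA’s real design.

```python
# Illustrative sketch only: PhotoDNA is proprietary. This toy "average
# hash" just shows why hash matching can survive small alterations.

def average_hash(pixels):
    """Hash a grayscale image (rows of 0-255 ints) into a list of bits.
    Each bit records whether a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count the bits that differ between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a slightly brightened copy of it.
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 35, 225],
            [11, 195, 28, 230]]
altered = [[min(255, p + 5) for p in row] for row in original]

h_known = average_hash(original)   # hash stored in a shared database
h_upload = average_hash(altered)   # hash of an uploaded, lightly edited copy

# A small Hamming distance counts as a match despite the alteration.
print(hamming(h_known, h_upload) <= 2)  # True: the edit barely moves the hash
```

Brightening every pixel raises the mean by the same amount, so each pixel’s brighter-than-average bit is unchanged; that insensitivity to uniform edits is the property a real perceptual hash generalizes.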
“It’s a terrific tool,” the president of the National Center for Missing and Exploited Children said at the time.
A bigger breakthrough came along almost a decade later, in 2018, when Google developed an artificially intelligent tool that could recognize never-before-seen exploitative images of children. That meant finding not just known images of abused children but images of unknown victims who could potentially be rescued by the authorities. Google made its technology available to other companies, including Facebook.
When Mark’s and Cassio’s photos were automatically uploaded from their phones to Google’s servers, this technology flagged them. Jon Callas of the E.F.F. called the scanning intrusive, saying a family photo album on someone’s personal device should be a “private sphere.” (A Google spokeswoman said the company scans only when an “affirmative action” is taken by a user; that includes when the user’s phone backs up photos to the company’s cloud.)
“This is precisely the nightmare that we are all concerned about,” Mr. Callas said. “They’re going to scan my family album, and then I’m going to get into trouble.”
A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by federal law, makes a report to the CyberTipline at the National Center for Missing and Exploited Children.
The nonprofit organization has become the clearinghouse for abuse material; it received 29.3 million reports last year, or about 80,000 reports a day. Fallon McNulty, who manages the CyberTipline, said most of these are previously reported images, which remain in steady circulation on the internet. Her staff of 40 analysts therefore focuses on potential new victims, so those cases can be prioritized for law enforcement.
“Generally, if NCMEC staff review a CyberTipline report and it includes exploitative material that hasn’t been seen before, they will escalate,” Ms. McNulty said. “That may be a child who hasn’t yet been identified or safeguarded and isn’t out of harm’s way.”
Ms. McNulty said Google’s astonishing ability to spot these images so her organization could report them to police for further investigation was “an example of the system working as it should.”
CyberTipline staff members add any new abusive images to the hashed database that is shared with technology companies for scanning purposes. When Mark’s wife learned this, she deleted the photos Mark had taken of their son from her iPhone, for fear Apple might flag her account. Apple announced plans last year to scan iCloud Photos for known sexually abusive depictions of children, but the rollout was delayed indefinitely after resistance from privacy groups.
In 2021, the CyberTipline reported that it had alerted authorities to “over 4,260 potential new child victims.” The sons of Mark and Cassio were counted among them.
‘No Crime Occurred’
A police investigator was unable to get in touch with Mark because his Google Fi phone number no longer worked.
Credit...Aaron Wojack for The New York Times
In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.
The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.
Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.
“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.
Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.
“You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.”
Mark appealed his case to Google again, providing the police report, but to no avail. After getting a notice two months ago that his account was being permanently deleted, Mark spoke with a lawyer about suing Google and how much it might cost.
“I decided it was probably not worth $7,000,” he said.
Kate Klonick, a law professor at St. John’s University who has written about online content moderation, said it can be challenging to “account for things that are invisible in a photo, like the behavior of the people sharing an image or the intentions of the person taking it.” False positives, where people are erroneously flagged, are inevitable given the billions of images being scanned. While most people would probably consider that trade-off worthwhile, given the benefit of identifying abused children, Ms. Klonick said companies need a “robust process” for clearing and reinstating innocent people who are mistakenly flagged.
“This would be problematic if it were just a case of content moderation and censorship,” Ms. Klonick said. “But this is doubly dangerous in that it also results in someone being reported to law enforcement.”
It could have been worse, she said, with a parent potentially losing custody of a child. “You could imagine how this might escalate,” Ms. Klonick said.
Cassio was also investigated by the police. A detective from the Houston Police Department called in the fall of 2021, asking him to come into the station.
After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google’s web services. He now uses a Hotmail address for email, which people mock him for, and makes multiple backups of his data.
You Don’t Necessarily Know It When You See It
Mark was frustrated at Google’s refusal to reinstate his account after he explained what had happened.
Credit...Aaron Wojack for The New York Times
Not all photos of naked children are pornographic, exploitative or abusive. Carissa Byrne Hessick, a law professor at the University of North Carolina who writes about child pornography crimes, said that legally defining what constitutes sexually abusive imagery can be complicated.
But Ms. Hessick said she agreed with the police that medical images did not qualify. “There’s no abuse of the child,” she said. “It’s taken for nonsexual reasons.”
In machine learning, a computer program is trained by being fed “right” and “wrong” information until it can distinguish between the two. To avoid flagging photos of babies in the bath or children running unclothed through sprinklers, Google’s A.I. for recognizing abuse was trained both with images of potentially illegal material found by Google in user accounts in the past and with images that were not indicative of abuse, to give it a more precise understanding of what to flag.
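The feed-it-"right"-and-"wrong"-examples training loop described above can be sketched with a toy linear classifier. Everything here is invented for illustration: the two-number feature vectors, the labels and the learning rate stand in for whatever signals Google’s actual (unpublished) model uses; this is the textbook perceptron, not Google’s system.

```python
# Toy stand-in for training a classifier from labeled examples.
# Features, data and labels are invented; Google's real model is not public.

def train(examples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear classifier (a simple perceptron)."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """1 = 'flag for human review', 0 = 'benign'."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented 2-D feature vectors: label 1 = should flag, 0 = benign.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)
print(predict(w, b, [0.85, 0.9]), predict(w, b, [0.15, 0.1]))  # 1 0
```

The "negative" examples matter as much as the positives: without benign photos in the training data, the model would have no way to learn that a baby in the bath should score low.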
I have seen the photos that Mark took of his son. The decision to flag them was understandable: They are explicit photos of a child’s genitalia. But the context matters: They were taken by a parent worried about a sick child.
“We do recognize that in an age of telemedicine and particularly Covid, it has been necessary for parents to take photos of their children in order to get a diagnosis,” said Claire Lilley, Google’s head of child safety operations. The company has consulted pediatricians, she said, so that its human reviewers understand possible conditions that might appear in photographs taken for medical reasons.
Dr. Suzanne Haney, chair of the American Academy of Pediatrics’ Council on Child Abuse and Neglect, advised parents against taking photos of their children’s genitals, even when directed by a doctor.
“The last thing you want is for a child to get comfortable with someone photographing their genitalia,” Dr. Haney said. “If you absolutely have to, avoid uploading to the cloud and delete them immediately.”
She said most physicians were probably unaware of the risks in asking parents to take such photos.
“I applaud Google for what they’re doing,” Dr. Haney said of the company’s efforts to combat abuse. “We do have a horrible problem. Unfortunately, it got tied up with parents trying to do right by their kids.”
Cassio was told by a customer support representative earlier this year that sending the pictures to his wife using Google Hangouts violated the chat service’s terms of service. “Do not use Hangouts in any way that exploits children,” the terms read. “Google has a zero-tolerance policy against this content.”
As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.
Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.
“I can imagine it. We woke up one morning. It was a beautiful day with my wife and son and I wanted to record the moment,” Mark said. “If only we slept with pajamas on, this all could have been avoided.”
A Google spokeswoman said the company stands by its decisions, even though law enforcement cleared the two men.
Guilty by Default
Ms. Hessick, the law professor, said the cooperation the technology companies provide to law enforcement to address and root out child sexual abuse is “incredibly important,” but she thought it should allow for corrections.
“From Google’s perspective, it’s easier to just deny these people the use of their services,” she speculated. Otherwise, the company would have to resolve more difficult questions about “what’s appropriate behavior with kids and then what’s appropriate to photograph or not.”
Mark still has hope that he can get his information back. The San Francisco police have the contents of his Google account preserved on a thumb drive. Mark is now trying to get a copy. A police spokesman said the department is eager to help him.
Nico Grant contributed reporting. Susan Beachy contributed research.