TikTok Failed to Prevent 90 Percent of Election Misinformation: Report


Social media giant TikTok approved most political advertisements containing false and misleading information about U.S. elections despite assurances from the company that it had a robust mechanism for detecting such content, a new report has found.

TikTok approved 90 percent of advertisements that featured misleading and outright false information about the upcoming 2022 U.S. midterm elections, according to a joint report by nonprofit Global Witness and the Cybersecurity for Democracy (C4D) team at New York University.

The report’s results were based on an experiment conducted to determine how well social media platforms live up to their promises to stop disinformation capable of destabilizing democratic processes.

The experiment posted 20 phony ads with misleading or false claims across multiple platforms in both English and Spanish, specifically targeting audiences in battleground states including Arizona, Colorado, and Georgia.

Although TikTok bans political ads, it nevertheless approved ads containing patently false claims, including that voting days would be extended, that votes cast in primaries would automatically be counted in the midterms, and that social media accounts could be used for voter verification.

TikTok also approved ads that dismissed the integrity of the election, suggested the results could be hacked or were otherwise already pre-decided, and discouraged voters from turning out.

“This is no longer a new problem,” said Global Witness Senior Advisor Jon Lloyd in an associated statement. “For years we have seen key democratic processes undermined by disinformation, lies, and hate being spread on social media platforms. The companies themselves even claim to recognize the problem.”

“It is high time they got their houses in order and started properly resourcing the detection and prevention of disinformation before it’s too late. Our democracy rests on their willingness to act.”

The release of the report follows the publication of two other analyses in recent weeks that found China-based elements were using social media to spread disinformation ahead of the midterm elections.

One report by intelligence firm Recorded Future found that China-based elements were attempting to interfere in the midterm elections by promoting extremism and sowing discord among American voters. This was done by posting misinformation about hot-button topics like abortion, gun rights, and fascism, which was then spread by a group of related accounts operated from China.

Another report by tech giant Meta Platforms detailed the company’s own efforts to dismantle a similar covert influence operation conducted from China. That effort also targeted users in the United States with political content in an apparent effort to polarize voters.

In both cases, it appeared that the China-based sources of the disinformation sought to increase polarization and sow discord by posting intentionally inflammatory or false information online.

The Global Witness-NYU report also targeted Meta-owned Facebook and Google-owned YouTube in its experiment. While YouTube successfully weeded out all of the bad ads and suspended the dummy account that posted them, Facebook let some 20 percent of the English-language ads and 50 percent of the Spanish-language ads pass.

The authors of the experiment said that such results could have had real consequences for democratic processes had the ads been allowed to spread.

“So much of the public conversation about elections happens now on Facebook, YouTube, and TikTok,” said Damon McCoy, co-director of NYU’s Cybersecurity for Democracy team. “Disinformation has a major impact on our elections, core to our democratic system.”

For its part, TikTok reaffirmed that the company’s policies do not allow political advertising and prohibit content including election misinformation. The company also claims that all advertising content passes through multiple levels of verification before receiving approval.

“TikTok is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform,” a TikTok spokesperson said in an email.

“We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”

This latest report is likely to come as a blow to TikTok, however. The company has come under fire in recent years over its ties to China-based parent company ByteDance and its admission that it previously censored accounts at the request of the Chinese Communist Party, raising questions about its oversight and its safety for U.S. citizens.

The Epoch Times has requested comment from Meta.



Andrew Thornebrooke is a reporter for The Epoch Times covering China-related issues with a focus on defense, military affairs, and national security. He holds a master's in military history from Norwich University.