NEW DELHI: Political ads containing misinformation, religious hate speech and inflammatory content were published on WhatsApp, Instagram and other social media platforms during the ongoing general elections, civil rights groups have reported, raising the risk of election manipulation by malicious actors.
Political ads across social media platforms are once again under the scanner: a report released Tuesday by US-based Eko and India Civil Watch International claims that Big Tech companies failed to detect misinformation in such ads, despite insisting that advertisers comply with their policies.
The organizations said they submitted 22 politically inflammatory ads to Meta’s advertising platform, and that 14 of these passed Meta’s quality filters. Eko said the ads in question were removed before they were published on Meta’s platforms.
“Meta is unable to detect and label AI-generated ads, despite promising to do so in new policies, and despite its stated commitment to eradicating hate speech and incitement to violence. This is totally negligent and in direct violation of its own policies…These (advertisements) exploited communal and religious conspiracy theories prevalent in India’s political landscape, called for violent insurrection targeting Muslim minorities, spread blatant disinformation, and incited violence through Hindu supremacist rhetoric. One of the approved ads also included messaging that mimicked a recently doctored video of Union Home Minister Amit Shah,” the report claims.
Meta is not alone
Meta isn’t the only platform under the scanner. On April 2, a report from human rights group Global Witness claimed that 48 ads depicting violence and encouraging voter suppression passed the election quality-check filters of YouTube, the world’s largest video platform.
Like Eko’s research, the report also highlights the use of AI-generated content, demonstrating how “this new technology can be quickly and easily deployed to amplify harmful content.”
Eko researcher Maen Hammad told Mint that the investigation “uncovered a vast network of bad actors using Meta’s ad library to spread hate speech and disinformation. Our question was about Meta’s ability to detect and label AI-generated images in its ad library.”
Hammad shared a copy of Meta’s response to Eko, dated May 13. In it, Meta stressed that it has taken multiple “actions” and “enforcement” measures against abusive ad content. “We have investigated the 38 ads listed in the report and found that their content does not violate our advertising standards,” the response read.
However, a Meta India spokesperson told Mint on Wednesday that the company had not received any details from Eko about its investigation. “As part of our ad review process, which includes both automated and human reviews, we apply several layers of analysis and detection both before and after an ad goes live. Since the ads in question were removed before publication, we cannot comment on this claim.”
YouTube responded to the Global Witness investigation, which had alleged “inadequate protections against election misinformation,” by saying in a statement that none of the questionable ads ever ran on its platform.
“Just because an ad passes our initial technical checks does not mean it won’t be blocked or removed by our enforcement systems if it violates our policies. In this case, the advertiser removed the offending ads before our routine enforcement review occurred,” a YouTube spokesperson said.
The company had not yet responded to Mint’s request for a statement as of press time.
Promoting third-party audits
Despite Big Tech’s defense, industry insiders and policy evangelists said there’s a clear need for third-party audits of Big Tech’s policy enforcement, especially during election season.
“There are gaps that are being exploited by many bad actors. Political content is present across social platforms, but many policy gaps are repeatedly exploited. Enforcement gaps in quality control are exposed when a range of ads that violate Big Tech policies end up being featured. Ad content in India is multilingual, and we are not sure how good Big Tech’s quality classification is across most of these languages,” said Prateek Waghre, executive director at public policy think tank Internet Freedom Foundation (IFF).
In its response to Eko quoted above, Meta claimed that its content moderation covers 20 Indian languages and that third-party human fact-checking is conducted in 16 languages.
Additionally, most Big Tech companies publish their own self-audited “transparency reports” to support their policy enforcement. For example, Meta’s latest India transparency report, published on April 30, claimed that the company had “taken action” against 5,900 cases of “organized hatred,” 43,300 cases of “hate speech,” and 106,000 cases of “violence or incitement.” However, the report did not define what these “actions” entailed or what steps were taken against the perpetrators.
Isha Suri, research director at think tank Centre for Internet and Society (CIS), said, “Europe’s Digital Services Act mandates transparency in policy implementation. In India, such systems are largely absent. We need external, third-party audits that go beyond Big Tech’s own filtering; such independent studies could help clarify which filters are working and which are not.”