Unsealed court filings in a Northern California federal lawsuit reveal a stark disparity in Meta’s content enforcement policies.
According to the plaintiffs, accounts involved in the human trafficking of minors faced far looser moderation on Instagram and Facebook than users accused of misgendering others.
The complaint alleges that Meta prioritized political correctness and platform growth over child safety, exposing young users to serious risks online.
Elon Musk highlighted the inconsistency on X, noting that while misgendering a user could trigger immediate account suspension, accounts linked to child sexual exploitation reportedly faced no action until a 17th violation.
Internal documentation referenced in the filings corroborates the existence of the so-called “17-strike” policy.
The Blaze reports that Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, testified that the threshold for accounts engaged in sexual exploitation was “very, very high” compared with industry norms.
Jayakumar explained that users could commit 16 violations for sexual solicitation or trafficking without facing suspension, with enforcement only triggered on the 17th strike.
She emphasized that this level of leniency sharply contrasted with Meta’s strict policies for other categories of behavior, including misgendering, reflecting a selective approach to content moderation.
Plaintiffs assert that Meta was aware of the serious dangers its platforms posed to minors, including the millions of adult strangers who attempted to contact children.
While some content depicting child sexual abuse was detected, it was reportedly rarely removed.
Meanwhile, minor violations related to gender identity or politically sensitive topics faced swift action, highlighting a troubling imbalance in enforcement priorities, according to The Post Millennial.
Previn Warren, an attorney representing the plaintiffs, compared Meta to tobacco companies, saying it knowingly markets addictive products to children.
“Meta has designed social media products that it knows are addictive to kids, and they understand that these addictions lead to serious mental health issues,” Warren said. “They pursued growth regardless, because higher usage translated into increased profits.”
The lawsuit involves more than 1,800 plaintiffs, including children, parents, schools, and state attorneys general.
The defendants also include TikTok, Snapchat and YouTube.
The complaint alleges that the companies relentlessly pursued growth at the expense of user safety, disregarding the mental and physical well-being of minors while maintaining strict enforcement for politically sensitive issues.
Court documents indicate that Jayakumar raised internal concerns in 2020 about the 17-strike policy, but her warnings were reportedly dismissed as too difficult to address.
Meanwhile, enforcement of other policy violations, such as spam, intellectual property infringement and the promotion of firearms, remained consistent, further underscoring the selective nature of Meta's moderation practices.
Meta has publicly denied the claims, asserting that the allegations rely on “cherry-picked quotes and misinformed opinions” and emphasizing that the company has implemented measures to protect teens, including Teen Accounts with built-in protections and parental controls.
Google similarly rejected claims regarding YouTube, stressing that the platform provides safety tools for young viewers and functions primarily as a streaming service, not a social network.
Legal analysts say the case could have wide-ranging implications for social media moderation and regulatory oversight.
A favorable ruling for the plaintiffs may establish new standards for balancing free expression, politically sensitive speech and the protection of minors online.
Experts predict the outcome could influence future regulations, potentially requiring platforms to adopt consistent enforcement policies that prioritize child safety over ideological preferences.
The case raises fundamental questions about the responsibilities of social media companies in protecting vulnerable users while managing free expression and political sensitivities.
Courts and regulators may soon be asked to clarify how these platforms can be held accountable when selective enforcement disproportionately endangers minors.
