Meta Cracks Down on Unoriginal Content & Fake Accounts on Facebook

Meta has announced new measures to curb the spread of “inauthentic” content on Facebook, targeting accounts that routinely repost others’ videos, photos, and text without significant changes or attribution.
In a statement released Monday, the social media giant said it had already removed nearly 10 million accounts this year that were impersonating popular content creators. Another 500,000 accounts have faced punitive actions for engaging in spam-like behavior or generating fake engagement.
“These accounts will face reduced content reach and will be blocked from accessing Facebook’s monetization programs,” Meta said, adding that repeat violators could lose distribution privileges altogether.
The announcement comes shortly after YouTube clarified its stance on recycled and AI-generated content, amid growing concern over the proliferation of low-effort, mass-produced videos on digital platforms. Known as “AI slop,” these videos often feature stitched images, clips, or computer-generated voiceovers, contributing to a flood of low-quality media.
Focus on intent, not interaction
Meta clarified that the policy does not target users who engage with content creatively, such as by making reaction videos, adding commentary, or joining online trends. Instead, it focuses on accounts that repost others’ work without meaningful contribution or originality.
To address this, Facebook will begin demoting duplicate videos in users’ feeds so that original creators receive proper credit and visibility. The company is also testing a system that attaches links to duplicate posts, directing viewers to the original content.
Push for authenticity amid AI proliferation
While the company’s latest announcement doesn’t explicitly mention artificial intelligence, it alludes to AI-generated content by urging creators to avoid stitching clips together or merely adding a watermark to someone else’s work.
Meta’s guidelines advise creators to focus on “authentic storytelling” and high-quality video captions, an apparent jab at the growing use of unedited AI-generated subtitles. The company also reiterated its long-standing rule discouraging the cross-posting of content from other platforms without adaptation.
Users express concerns over implementation
The move comes amid intense criticism of Meta’s content moderation, particularly on Instagram, where users claim that legitimate accounts are being removed due to algorithmic errors and a lack of human oversight. A petition calling for improvements to Meta’s enforcement system has gathered nearly 30,000 signatures, underscoring the frustration among small business owners and content creators.
While Meta has yet to publicly address these concerns, the company said that new post-level insights will help users understand whether and why their content is being demoted. Creators can now track their content’s performance and receive alerts about potential penalties through the professional dashboard.
Tackling fake accounts at scale
In its latest transparency data, Meta reported that 3% of Facebook’s global monthly active users are fake accounts. The company took action against a billion fake profiles from January to March 2025 alone.
The firm has also moved away from internal fact-checking and is instead piloting Community Notes in the United States, a crowdsourced system modeled on the feature of the same name on X (formerly Twitter). It allows users to flag potentially misleading posts and add context assessing their accuracy.
Meta says the rollout of these new content enforcement policies will be gradual, giving creators time to adjust.
Meta Limits Ad Targeting of Teens
Meta, the parent company of Facebook and Instagram, announced in 2023 that it would stop allowing advertisers to target teenagers based on gender, following growing criticism over the impact of its platforms on young users. Starting that February, advertisers could use only age and location to target ads to teens globally, a significant shift in Meta’s ad practices. The company also said that a teen’s past activity on its apps would no longer influence the ads shown to them.
The move came amid mounting legal and public pressure, including a €390 million fine from European regulators who rejected Meta’s justification for using personal data in targeted advertising. The decision reflected feedback from experts and parents, and aimed to align with new international regulations focused on youth protection. Meanwhile, Meta also faced a lawsuit from Seattle’s public school district, which accused tech companies of contributing to mental health issues among students.