Australia’s Under-16 Social Media Ban Faces Scrutiny Over Weak Platform Checks
The Chronify
Australia’s world-first social media age restriction is facing fresh scrutiny after age verification providers said enforcement problems are being driven by weak platform implementation, not by limits in the technology itself.
The law, in effect since December 10, 2025, requires age-restricted platforms to take reasonable steps to stop Australians under 16 from creating or keeping accounts. Platforms that fail to comply face court-imposed penalties of up to A$49.5 million.
The Age Verification Providers Association said early problems show a need for stronger enforcement and clearer expectations from regulators. Its executive director, Iain Corby, said the issue is not whether the tools work, but whether platforms apply them properly.
Australia’s eSafety Commissioner is investigating Facebook, Instagram, YouTube, TikTok and Snapchat over suspected breaches of the rules. Regulators say millions of suspected underage accounts have been removed since the law took effect, but gaps remain: missing age checks at account sign-up, users being allowed repeated verification attempts until they pass, and continued reliance on self-declared ages.
The industry body said age assurance products have shown they work at scale when deployed correctly. It warned that over-reliance on internal age-estimation models, limited checks on existing accounts, and inconsistent use of verification tools are weakening the impact of the ban.
The rollout has become a test case for governments worldwide as more countries consider tighter rules for children online. Supporters say the law puts responsibility on platforms to protect young users, while critics argue that enforcement remains difficult and that many children still find ways to stay online. A recent parent survey found around 31 percent of children still had at least one social media account after the ban, down from 49 percent before the law took effect.
For Australia, the next stage will depend on whether regulators move from warnings to court action. For global technology companies, the message is clear: removing under-16 users is no longer just a policy promise; it is now a legal test.