Australia Bans Social Media for Under-16s: What You Need to Know
Australia's December 2025 ban on social media for under-16s is now in effect, requiring platforms to implement age verification systems. The law imposes fines of up to $49.5 million AUD for non-compliance.

Australia became the first country to enforce a blanket social media ban for children under 16 on December 10, 2025. The law requires Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Twitch, and Kick to block roughly 1 million minors from using their platforms. Platforms that fail to comply face fines up to $49.5 million AUD (about $34.4 million USD).
The law itself, the Online Safety Amendment (Social Media Minimum Age) Bill 2024, made a key structural choice: it shifted responsibility for enforcement to the platforms themselves, not parents or users. Platforms must develop age verification systems within 12 months that go beyond simply asking a user to enter a birth date. The Australian eSafety Commissioner oversees whether platforms are actually complying.
How Platforms Are Responding
The major tech companies moved quickly. Google announced that anyone under 16 in Australia would be logged out of YouTube, losing access to personalized features like playlists. Meta did the same for Facebook, Instagram, and Threads, removing accounts it suspects belong to underage users.
The law does have exceptions. Messaging apps like WhatsApp, online games, educational tools like Google Classroom, and health services including Kids Helpline are still allowed. YouTube Kids remains available too. Each platform must report monthly on how many underage accounts it closes, though privacy safeguards apply to the personal data used in the verification process.
The Australian government has been realistic about one thing: accurate age checking takes time—sometimes weeks—so it does not expect instant enforcement across the board.
Legal Challenges Are Already Appearing
Reddit sued to block the ban in December 2025, and by March 2026, Australia's government was investigating Meta, TikTok, YouTube, and Snapchat for potential violations. Communications Minister Anika Wells said the government would defend the law in the country's highest court if needed.
The timing matters. Around the same time Australia's law took effect, a U.S. jury ordered Meta to pay $375 million for safety failures that allowed child exploitation on Facebook, Instagram, and WhatsApp. A separate U.S. court also found Meta and Google liable for designing platforms in ways that harmed young people. These cases reflect a broader shift in how courts and regulators are holding platforms accountable for child safety.
Similar Bans Are Spreading Globally
Australia's move has sparked a wave of similar proposals worldwide. Malaysia announced plans for an under-16 ban starting in 2026. Spain's prime minister announced comparable restrictions in early February 2026. Greece is planning an under-15 ban beginning January 2027, and Slovenia is drafting its own version. Denmark is also preparing restrictions for under-15s.
At the European level, France, Spain, and Greece pressed the EU in May 2025 to coordinate restrictions on child access to social media across member states. The UK is running a pilot program with 300 young people to test how age restrictions might work before deciding on a nationwide policy.
In Canada, Prime Minister Mark Carney noted in March 2026 that restricting social media access for minors deserved serious consideration, and the government reopened discussions with its children's online safety advisory group.
The speed and breadth of these efforts suggest something broader is shifting in how governments view platform accountability.
The Technical Challenge Ahead
Australia's law requires platforms to use multiple verification methods: document scanning, biometric checks, and third-party identity services are all options. Each method involves a trade-off: stronger checks improve accuracy, but they also collect more sensitive personal data, which raises privacy concerns.
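As a concrete illustration, here is a minimal sketch in TypeScript of how a layered age-assurance chain might work. Everything in it is a hypothetical assumption rather than any platform's actual API: the AgeCheck and AgeSignal types, the confidence threshold, and the selfDeclared stub are invented for illustration. The idea is that cheaper, less invasive signals run first, and a user who clears no check is treated as unverified.

```typescript
// Hypothetical sketch of a layered age-assurance chain. The types,
// threshold, and stub below are illustrative assumptions, not any
// platform's real API.

type AgeSignal = { over16: boolean; confidence: number };
type AgeCheck = (userId: string) => Promise<AgeSignal | null>;

// Run checks in order, cheapest and least invasive first, falling
// through to a stronger method whenever confidence is too low.
async function assureAge(
  userId: string,
  checks: AgeCheck[],
  threshold = 0.9,
): Promise<AgeSignal> {
  for (const check of checks) {
    const signal = await check(userId);
    if (signal !== null && signal.confidence >= threshold) {
      return signal;
    }
  }
  // No method produced a confident answer: treat the user as
  // unverified, which under the Australian model means blocking.
  return { over16: false, confidence: 0 };
}

// Illustrative stub: a self-declared birth date, the weak baseline
// the law says is not sufficient on its own.
const selfDeclared: AgeCheck = async () => ({ over16: true, confidence: 0.3 });

// assureAge("user-123", [selfDeclared]) resolves to the unverified
// default, because no check clears the confidence threshold.
```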
The 12-month window gives platforms time to build systems that verify age without storing excessive information about users. Regulatory oversight from the eSafety Commissioner and the Australian Information Commissioner is meant to ensure these systems actually protect privacy while screening out underage users.
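Here is what "verify age without storing excessive information" could look like in practice: a data-minimization sketch, again with invented types (DocumentScan, AgeRecord). The document is inspected once, an over-16 flag is derived, and only that flag plus a timestamp is persisted; the name, birth date, and image are never written to storage.

```typescript
// Hypothetical data-minimization sketch: derive an over-16 flag from
// a one-time document scan and persist only the flag. The DocumentScan
// and AgeRecord types are invented for illustration.

interface DocumentScan {
  name: string;
  birthDate: Date;
  rawImage: Uint8Array;
}

interface AgeRecord {
  userId: string;
  over16: boolean;
  verifiedAt: Date;
}

function recordVerification(
  userId: string,
  scan: DocumentScan,
  store: Map<string, AgeRecord>,
): void {
  // A user is over 16 if born on or before this date 16 years ago.
  const cutoff = new Date();
  cutoff.setFullYear(cutoff.getFullYear() - 16);

  store.set(userId, {
    userId,
    over16: scan.birthDate <= cutoff,
    verifiedAt: new Date(),
  });
  // The scan itself (name, birth date, image) goes out of scope here
  // and is never written to storage; only the derived flag survives.
}
```

The same pattern would apply to biometric or third-party checks: the sensitive input stays transient, and the stored record answers only the one question the law asks.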
The broader policy context supports this push. Australia also announced a Digital Duty of Care framework in November 2025, which imposes legal obligations on platforms to actively protect all their users—not just minors. That framework sits alongside the age ban as part of a comprehensive approach to holding platforms responsible for safety.
This is worth flagging: we have seen something similar before. When the European Union introduced GDPR (the General Data Protection Regulation) in 2018, other countries and regions eventually adopted comparable rules because platforms found it efficient to align globally rather than maintain multiple systems. Australia's age verification standards may follow the same pattern—other countries watching and adopting similar approaches rather than fragmenting into incompatible rules.
The Australian model places the burden squarely on platforms. They must verify age, they face large fines if they do not, and they have a clear deadline. This liability structure creates strong incentives for robust systems. As other jurisdictions advance their own age restrictions, the technical standards Australia is establishing will likely become a reference point for how platforms globally architect their age-assurance systems.
What emerges is a shift toward what we might call a "platform accountability era." For three decades, tech companies have largely policed themselves. The pattern now visible, with Australia moving first, other countries following with similar rules, and global platforms adapting to the strictest standard rather than maintaining separate systems, suggests that era of self-policing may be ending.


