Government threatens legal action as under-16 social media ban falls short on enforcement


The government says it will step up pressure on major tech platforms including Meta, TikTok and Google, warning it could take legal action over failures to enforce the country’s ban on social media use for under-16s.

The “world-first” laws, introduced in December, require platforms such as Instagram, Facebook, Snapchat, TikTok and YouTube to prevent children from creating or maintaining accounts.

But fresh findings from the eSafety Commissioner suggest a significant number of young users remain active online.

A survey of nearly 900 Australian parents found that 31% said their children still had at least one social media account after the ban, down from 49% before the legislation came into force. Among those who were already using platforms like Instagram, Snapchat and TikTok, around 70% were still able to access their accounts.

Before the ban, Australia's internet regulator estimated there were 150,000 Facebook users and 350,000 Instagram users in the 13-15 age bracket.

Authorities have now confirmed that several platforms are under investigation for potential non-compliance, with communications minister Anika Wells accusing companies of falling short in enforcing the rules.

“None of this is impossible. None of this is even difficult for big tech, who are innovative billion dollar companies,” Wells said.

“If these companies want to do business in Australia, they must obey Australian laws.”

The government says it will decide by mid-2026 whether to pursue penalties, with companies facing fines of up to A$50 million for systemic failures.

Regulators have pointed to weaknesses in current enforcement tools, particularly age verification systems.

The eSafety office claims some platforms allow repeated attempts at verification or fail to act even when users declare they are underage. It also raised concerns about facial age estimation technology, noting higher error rates for users close to the 16-year-old threshold.

According to the report, one of the most common reasons children retained access was simply that they had not yet been asked to verify their age, highlighting gaps in how consistently the rules are being applied.

Meta said it is working with regulators but argued that accurately determining age online remains a challenge, particularly around the 16-year cut-off, and suggested stronger controls at the app store or operating system level.

While the government has previously highlighted that millions of accounts were removed or restricted in the early days of the ban, anecdotal evidence and the latest data indicate enforcement remains uneven.

Officials and campaigners continue to defend the policy as necessary to protect young people from harmful content and addictive platform design, with the approach now drawing international attention as other countries consider their own bans.
