Australia recently passed a strict new law that aims to prevent under-16s from accessing certain social media platforms, citing health and safety concerns. The government requires the tech companies that own these sites to take “reasonable steps” to bar under-16s from gaining access to their platforms’ content. How will these companies verify their users’ ages, and what exactly does the law expect of them?
What Does the Social Media Law in Australia Dictate About Age Verification?
The law that was passed is called the Online Safety Amendment (Social Media Minimum Age) Act 2024, and it takes effect on 10 December 2025.
Right now, the law does not mandate any specific age verification methods that companies must use. Instead, it simply sets a goal that companies must meet: platforms must take “reasonable steps” to stop Australians under age 16 from having accounts.
The eSafety Commissioner has said that these age verification steps should be as “minimally invasive” as possible. But when platforms are expected to identify and remove thousands of users, it’s unclear how “minimal” the techniques can be while still being effective.
Learn more about how the law will impact kids, parents, and schools.

In practice, platforms are expected to combine different techniques and technologies to estimate or uncover users’ true ages and block their accounts when applicable.
The law leaves a lot of questions unanswered, but there are a few firm points we know for certain:
- Platforms designated as “age-restricted social media” must take “reasonable steps” to bar under-16s from having accounts on their sites beginning 10 December 2025. This means that existing accounts will need to be frozen or deleted, and no new accounts by under-16s can be made.
- If platforms do not adhere to the current and future guidance published by the eSafety Commissioner, they can be fined for non-compliance. The fine can be as high as AUD$49.5 million, and it’s not clear whether a platform can be fined more than once.
- The list of platforms that must adhere to these rules is dynamic; platforms can be added and removed as determined by the eSafety Commissioner.
- The law does not mandate any particular verification technology or procedure. Platforms are free to choose the methods they think will work best, as long as those methods count as “reasonable steps.”
- Under-16s who do access these platforms after 10 December will not face any penalty. Neither will any adults or institutions that allow under-16s to access platforms (knowingly or unknowingly). The law puts the onus on the platforms to keep underage users out.
Overall, the desired effect is clear: stop under-16s from accessing social media platforms. However, the methods and procedures companies will use, as well as what constitutes a “reasonable step,” are not clear.
What Do Social Media Companies Plan To Do About Age Verification in Australia?
Since there is no mandated process, each company is left to decide which digital ID and age-restriction methods it will use.
Platforms will likely use a multifactorial approach for better accuracy. There are three main methods that companies are expected to draw from when developing their age verification plans:
- Self-declared age: The least reliable method is still valuable. When users sign up for an account, they are asked to enter a birthdate or age. Under the new law, platforms will use this as a point of reference, checking other markers against it to see whether the account’s activity matches the statistically expected activity for that age group.
- Behavioural and signal-based inference: This is the part where platforms will analyse a user’s activity and compare it to what they expect from a certain age group. They’ll likely look at what content an account posts and engages with, their friend network, and other signals that can indicate if an account is being run by a teen. This type of analysis is typically run by AI.
- Identity checks for contested accounts: If an account is flagged, suspended, or frozen and its holder is actually over 16, platforms are expected to ask for proof of age before restoring access. Options can include a government ID (which cannot be the only option; users must have an alternative available), a third-party age-checking service, or a video selfie.
This multi-layered approach is meant to prevent a situation where every user is suspended and then needs to verify their age. Such a blanket demand is unreasonable for platforms and users alike, and has been deemed unreasonable by Australian regulators. The goal is to identify underage users precisely, not to apply age ID rules to every user indiscriminately.
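The layered flow described above can be sketched as a simple decision cascade: the cheapest, least invasive signal is checked first, and an identity check is requested only for accounts that remain in doubt. This is a hypothetical illustration, not any platform’s actual implementation; the function name, thresholds, and signals are assumptions made for clarity.

```python
def assess_account(declared_age: int, inferred_age: float, confidence: float) -> str:
    """Return an action for an account based on layered age signals.

    declared_age -- the birthdate-based age the user entered at sign-up
    inferred_age -- an age estimate from behavioural/signal analysis
    confidence   -- how confident the model is in that estimate (0 to 1)
    """
    # Layer 1: self-declared age. If the user says they are under 16,
    # no further analysis is needed.
    if declared_age < 16:
        return "restrict"

    # Layer 2: behavioural inference. Only escalate when the model is
    # both confident and contradicts the declared age.
    if confidence >= 0.8 and inferred_age < 16:
        # Layer 3: contested account. Ask for proof of age, remembering
        # that government ID must not be the only option under the law.
        return "request_verification"

    # Declared and inferred ages agree, or the signal is too weak to act on.
    return "allow"

print(assess_account(20, 19.0, 0.9))  # consistent adult account
print(assess_account(20, 14.5, 0.9))  # contested: declared adult, inferred teen
print(assess_account(14, 14.0, 0.5))  # self-declared under-16
```

The key design point is that the invasive step (identity proof) sits at the bottom of the cascade, so most users never encounter it.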

There are still a lot of unanswered questions about exactly how these platforms plan to handle the law in terms of identifying under-16s, verifying ages, and handling account information. Here is what we know so far:
Meta (Facebook, Instagram, Threads)
Although the law doesn’t take effect until 10 December, Meta has already started notifying users it believes are under age 16 that their accounts will be suspended. It will stop allowing under-16s to create new accounts beginning 4 December.
Meta plans to use a multi-layered approach to identify under-16 users and verify ages. It will likely use behavioural pattern analysis to flag underage accounts. For digital ID, Meta plans to work with Yoti, a third-party service that can analyse video selfies. Users will also be able to provide a government ID for verification.
Meta states that under-16 accounts will be frozen, but can be reclaimed by the user when they come of age.
Read more about Facebook’s digital ID policies on their website.
TikTok
TikTok has said it will comply, but indicates it will remain cautious about intrusive checks. The company already uses multiple layers of age analysis and has trialled AI-driven age estimation in other jurisdictions (for example, in the UK). The company says it will rely on layered checks and will use additional verification only when an account is flagged.
Read more about TikTok’s age verification information on their site.

Snapchat
Snap has publicly declared it will comply with the law and will disable accounts for Australian users under age 16. The company says it wants to complete the task with the least intrusive methods possible. According to Snapchat, many users will be asked to verify their age using ConnectID (verification through a bank account), or to provide a photo ID or video selfie through the third-party provider k-ID.
You can read more about Snapchat’s SMMA obligation plan on their website.
X (formerly Twitter)
X has been critical of the law and even requested a delay months ago, arguing the rules raise legal and human-rights questions. The bottom line, however, is that the company must comply with Australia’s rules as written or face hefty fines. As of this writing, X’s policy page states it will still allow under-16s to make accounts and access the platform, but with a “Protected” status until age 18 (an already-existing policy). Will this violate Australia’s law, or does the “Protected” status do enough to satisfy eSafety’s goals? Time will tell.
YouTube (Google)
YouTube also contests the law, on the grounds that it is not a social media company but a video-sharing company. Google has warned that it may seek legal action over the site being included in the ban. As of this writing, YouTube doesn’t have a page specifically addressing age verification in the context of the SMMA law. It will likely expand on techniques it started testing in the US earlier this year.
Read more about YouTube’s age verification strategies on their blog.

Reddit & Kick
Both Reddit and Kick are expected to follow the SMMA law, but neither currently includes explicit information about the obligation in its user agreements. As of this writing, it’s unknown what steps they plan to take to identify underage accounts and verify user ages.
They will likely rely heavily on account-level flags and user activity, perhaps implementing a third-party age-verification system before 10 December.
Read about Reddit’s policies on their website.
Read about Kick’s policies and safety recommendations on their website.
Twitch
Twitch was recently added to the list of age-restricted platforms. Though a statement isn’t yet available on their website, a Twitch spokesperson has told TechCrunch that no new users under age 16 will be able to make an account starting on 10 December, and that all accounts for users under 16 will be deactivated by 9 January. How they plan to determine users’ ages has not yet been disclosed.
You can read more about Twitch’s age policies and look for updates on their website.
Possible Futures of Age-Verification Technology for Online Platforms
In light of this law and similar moves around the world, demand for age-verification technology is at an all-time high. Right now, most of the Australian platforms subject to the SMMA obligation that have stated their plans are using previously tested methods. However, it’s reasonable to expect this legislative change to spur new tech and services in the coming months and years.
Here are some of the digital age verification technologies companies are developing and refining.
- AI-based behavioural inference: With custom AI, companies can utilise machine learning to identify patterns in posting, followers/followings, and other types of activity. These patterns can be used to suggest what age or age group an account owner belongs to. So far, accuracy varies depending on the platform and the age groups.
- Facial age estimation: Services like k-ID and Yoti analyse photo IDs and video selfies to determine a person’s age based on appearance. Technology is advancing, but accuracy varies, especially across different ethnic groups, raising concerns.
- Document verification and digital IDs: Matching a scan of an ID to a selfie can be accurate. However, in Australia, the SMMA forbids companies from making a government ID the only verification option. While ID matching can be an accurate and simple way to prove age, it also raises data privacy concerns.
- Telecommunications and payment signals: Mobile account data and payment history for other services can serve as proof of age. However, this raises severe privacy concerns, and it would exclude people without such a history (someone newly 16 or 18, or someone with a new phone, for example).
- Decentralised and reusable digital ID wallets: The idea of having an ID wallet specifically for online use can be an attractive system. They would allow users to have a credential that proves them to be part of certain groups, like an attribute that declares them “over 16”, for example. Of course, privacy concerns are still present.
In practice, it’s expected that these tools will be used in combination, with the least invasive being the first line of defence.
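The “over 16” attribute idea behind reusable digital ID wallets can be sketched with a signed token: a trusted issuer signs only the claim, not the birthdate, so the platform verifying it learns nothing else about the user. This is a simplified, hypothetical illustration using an HMAC shared secret; real wallet systems would use public-key signatures and standards such as verifiable credentials.

```python
import hashlib
import hmac
import json

# Assumed shared secret between issuer and verifier, for illustration only.
# A real system would use an asymmetric key pair so any platform can verify.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(user_id: str, over_16: bool) -> dict:
    """Issuer side: sign a minimal claim containing only the attribute."""
    claim = {"sub": user_id, "over_16": over_16}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Platform side: check the signature, then read only the attribute."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # Reject forged or tampered claims; otherwise trust only the attribute.
    return hmac.compare_digest(expected, cred["sig"]) and cred["claim"]["over_16"]

cred = issue_credential("user-123", over_16=True)
print(verify_credential(cred))  # True: attribute accepted, birthdate never shared
```

The privacy benefit is data minimisation: the platform sees a yes/no attribute rather than an ID document, though the privacy concerns noted above (who issues the credential, and what the issuer logs) still apply.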

Safety and Privacy Concerns About Digital ID and Age Verification
Age verification raises real privacy and safety risks. While it would be grand if companies really did just want to verify your age like a bouncer at a nightclub, the reality is that tech companies are notorious for squeezing every bit of data from every user and monetising it however they can. This raises concerns about what data is harvested and stored in platforms’ databases, what companies deliberately do with that data, and what happens when a data leak occurs.
Here are the main concerns you should know.
Accuracy and Bias
AI age estimators make mistakes. Facial and behavioural models can misidentify people, especially those from underrepresented groups, including people with facial differences. Accuracy also varies significantly across ethnic groups, leading to both misidentified people and misjudged ages.
AI frequently flags over-16s as being too young and estimates under-16s as older than they really are. Right now, it’s not a reliable way to protect children.
Data Privacy and Safety
Collecting IDs or biometrics creates new sensitive datasets. If platforms store too much data, they increase the risk of misuse or breach. The Australian regulator has explicitly asked platforms to avoid blanket, invasive checks and to limit data collection to what is strictly necessary.
Learn more about the anticipated pros and cons of the ban.
Scope Creep and Mission Drift
Tools developed for age verification could be repurposed for profiling or targeted advertising if safeguards fail. It’s imperative that legislation keeps pace by limiting what companies can do with any data collected. The relationship between tech companies and users has always been contentious: users want the services, and tech companies want to exploit whatever data they can get.

Exclusion
Not everyone has access to IDs, smartphones, or stable internet. Heavy reliance on ID checks could exclude marginalised young people from the mainstream internet or educational resources. Even adults who simply don’t want tech-heavy lives may be cut off. Regulators and platforms must plan alternative pathways so that those who cannot, or choose not to, maintain a heavily online presence can still access the information and services they are entitled to.
Increased Risk to Children
One of the unintended yet foreseeable consequences of the new law is that under-16s may migrate to smaller, less-regulated apps and sites, which could actually increase their exposure to harmful content, including online predators and personal data violations. Now more than ever, it is imperative to talk to children about online safety: what to do if they see something dangerous or disturbing, how to stay safe when interacting with strangers, and how to keep their identity private.
Australia passed this law intending to lead the charge for children’s online safety. Lawmakers know it’s not a perfect law and that it will likely need many changes after launch. Policies around the world on social media, the internet, and how they interact with children need to change. It will be a rough and confusing road ahead as society and lawmakers figure out how to make the future better.
References
- Yoti. (2025). Age verification tools for online customers and custom-built apps. https://www.yoti.com/business/age-verification/
- Ofcom. Roadmap to regulation: Illegal and harmful content. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/roadmap-to-regulation
- Ofcom. Age assurance: Illegal and harmful content. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/age-assurance
- Jose, R. (2025). Australia wants ‘minimally invasive’ age checks under teen social media ban. Reuters. https://www.reuters.com/business/media-telecom/australia-wants-minimally-invasive-age-checks-under-teen-social-media-ban-2025-09-16/
- McGuirk, R. (2025a). Australia warns social media platforms against age verification for all ahead of a ban on children. AP News. https://apnews.com/article/australia-chldren-banned-social-media-2bbc1f2921af4f008215c16d5e8b3506
- McGuirk, R. (2025b). Australia adds Reddit and Kick to social media platforms banning children under 16. AP News. https://apnews.com/article/australia-social-media-ban-reddit-kick-e6ae0be8c6b2571edd94d0318f47cb14
- ABC News. (2025). Australia is quietly rolling out age checks for search engines like Google. https://www.abc.net.au/news/2025-07-11/age-verification-search-engines/105516256
- Park, K. (2025). Australia adds Twitch to teen social media ban, Pinterest exempted. TechCrunch. https://techcrunch.com/2025/11/21/australia-adds-twitch-to-teen-social-media-ban-pinterest-exempted/
- Pernice, E. (2025). AI age verification: Big Tech’s risky fix for GDPR violations. TechGDPR. https://techgdpr.com/blog/ai-age-verification-big-techs-risky-fix-for-gdpr-violations/