Social media bans for children: do they make sense?
Governments are beginning to reach for big levers to deal with the perceived harms of social media. Australia’s decision to bar under 16s from major platforms is one of the boldest examples so far. The intention is clear enough: protect children from an environment that looks increasingly hostile to their attention, self esteem and mental health.*
This essay looks at three questions:
What the evidence actually says about social media and youth mental health.
How different philosophical traditions would think about this type of regulation.
Whether a blanket under 16 ban is a sensible policy instrument, even if one accepts the aims.
My conclusion is that the evidence for serious average harm is weak, that the harms are concentrated in subgroups, and that this specific policy is an extremely blunt instrument compared with the alternatives.
1. The evidence: how harmful is social media for adolescents?
The political story is simple. The data is not.
There are now several big reviews and meta analyses of the link between social media or “screen time” and adolescent mental health.
A widely cited review by Candice Odgers and Michaeline Jensen concludes that most studies find “a mix of often conflicting small positive, negative and null associations.” The most rigorous large preregistered studies show small associations between daily digital use and well being that are “unlikely to be of clinical or practical significance” and do not distinguish cause from effect. (PubMed)
Amy Orben and Andrew Przybylski use specification curve analysis on three large datasets of around 355,000 adolescents and find that digital technology use explains at most about 0.4 percent of the variance in well being. (PubMed) That is trivially small. You can detect it in huge samples, but no clinician would treat it as a major driver.
A broad 2022 review by Patti Valkenburg and colleagues sums up the field in almost comically cautious terms. They note that most reviews interpret the association between social media use and mental health as “weak” or “inconsistent”, a few call it “substantial” and “deleterious”, and they spend the rest of the paper explaining why the same underlying numbers are being described in such different ways. (ScienceDirect)
More recent meta analyses tighten the picture without really changing it:
A 2024 meta analysis by Ahmed et al finds weak but statistically significant associations between social media use and depression, anxiety and sleep problems in young people, and stronger associations for “problematic social media use” that looks more addictive. (PubMed)
A 2024 JAMA Pediatrics meta analysis by Fassi et al covers 143 studies and more than one million adolescents. In clinical samples they find a correlation between time spent on social media and internalising symptoms of about r = 0.08, and between engagement based metrics and symptoms of about r = 0.12. Community samples look very similar. (JAMA Network)
Translate that: the share of variance explained is the square of the correlation, so r = 0.08 means that about 0.6 percent of the variation in symptoms is associated with time spent. r = 0.12 gives around 1.4 percent for engagement. That is measurable, but tiny.
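The conversion in that paragraph is just squaring the correlation coefficient. A quick sketch, using the meta analytic r values quoted above:

```python
# Variance explained by a correlation is r squared (the coefficient of
# determination). The r values below are the meta analytic estimates
# quoted above for the clinical samples.
estimates = {
    "time spent vs internalising symptoms": 0.08,
    "engagement metrics vs internalising symptoms": 0.12,
}

for label, r in estimates.items():
    pct = r ** 2 * 100  # percent of symptom variance associated with the measure
    print(f"{label}: r = {r:.2f} -> {pct:.2f}% of variance")
```

Squaring 0.08 gives 0.64 percent and squaring 0.12 gives 1.44 percent, which is where the “about 0.6” and “around 1.4” figures come from.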
There is stronger evidence that teens with existing mental health conditions use social media more and experience it differently. A 2025 paper by the same group shows that adolescents with diagnosed conditions spend more time on social media and are less satisfied with their online friendships than peers without conditions. (Nature) That is compatible with a story where vulnerable teens are drawn into patterns of use that do not help them.
On top of the correlational work there are a handful of randomised experiments, mostly in young adults, where people are asked to deactivate or sharply cut back on social media for a few weeks. These tend to find small improvements in life satisfaction, loneliness or depressive symptoms, especially for heavy or distressed users, but the effects are modest, the time frames are short and the participants are volunteers. (sarahdomoff.com)
Across this literature, there are consistent methodological problems:
Almost everything is correlational, not causal. Depressed kids may be more likely to spend time online.
Measures of “time on social media” are mostly self report and inaccurate.
“Social media use” lumps together very different behaviours and content.
Many studies are low quality by basic survey standards and are not preregistered. (NC DOCKS)
The one place where the signal looks meaningfully stronger is for clearly “problematic” or “addictive” use. Systematic reviews of problematic social media use in adolescents and young adults find much larger and more consistent associations with depression, anxiety and stress than for simple time based measures. (mental.jmir.org)
So a fair reading of the evidence is something like:
For most adolescents, total time on social media has a small, probably trivial association with mental health.
For a minority of adolescents who use social media in compulsive, distress driven ways, the association with poor outcomes is larger and may be clinically relevant.
We have some experimental hints that cutting use can help, but not enough to support confident large scale causal claims.
By contrast, the evidence linking alcohol, drugs and gambling to harm in adolescents is overwhelming. There is no serious debate about whether heavy drinking, early drug use or adolescent gambling increase risk of bad outcomes. The argument is about the best mix of taxation, regulation and treatment, not about whether the harms exist.
If you are ranking risk factors purely on evidence, “time spent on social media” is not in the same league as alcohol, drugs or gambling.
2. Philosophical lenses on social media bans
Given that empirical backdrop, how should we think about something as heavy handed as an under 16 social media ban?
Different philosophical traditions put the emphasis in different places. Three questions keep coming back:
How bad is the harm, and for whom?
How much do we trust the state, compared with parents and platforms?
Do we want to train citizens for autonomy inside a flawed attention economy, or shield them and hope to fix the system before they enter it?
Classical liberalism.
On a Mill style harm principle view, the state should step in only to prevent serious harm to others, not to protect people from themselves. Children are a partial exception, since their autonomy is limited and the state already sets age limits for alcohol, sex, driving and so on. A classical liberal could accept targeted rules that deal with clear harms, but a blanket ban on whole platforms interferes with freedom of expression and association, and overrides parental judgment. Given that the evidence points to small average harms, it is hard to argue that the high bar for coercive state action has been met.
Child centred rights.
Taking the UN Convention on the Rights of the Child seriously means treating children as rights holders, not simply as passive objects of protection. They have rights to seek information, express views and associate with others, as well as rights to protection. A ban can be cast as a right to protection from manipulative platform design. It can also be criticised as sacrificing the rights of competent 14 or 15 year olds to participate in online culture and politics, and as cutting off lifeline communities for queer or otherwise marginalised teens. On this view, the state should be very reluctant to impose blanket restrictions that do not distinguish between different young people.
Strong paternalism.
A more paternalistic philosophy is comfortable with the idea that the state should sometimes override individual or parental preferences “for their own good”, particularly for vulnerable groups. If you see social media as the new tobacco, the temptation is obvious: just keep children away until they are 16, whatever clever workarounds they manage. This view puts less weight on fine grained evidence and more on a general sense that the environment is toxic and that firms will not self regulate. The cost is a long list of knock on restrictions, since once you admit broad paternalism it is hard to draw principled lines.
Consequentialism.
A consequentialist asks only one question: which policy optimises overall welfare? If you assume that the ban sharply cuts harmful use, reduces anxiety and self harm, shifts kids toward healthier activities and does not create serious new harms, then you support it. If you accept the small average effect sizes, the likely workarounds, the privacy costs of age verification, and the loss of positive online communities for some groups, the picture flips. On realistic assumptions, the net impact of this particular ban is at best unclear and quite plausibly negative compared with more targeted design regulation plus mental health support.
Virtue and autonomy.
A virtue ethics angle is interested in the kind of character we are helping to form. Constant algorithmic stimulation is not a neutral backdrop. It shapes attention, patience and self control. You can argue that a period of protection from the worst attention factories during adolescence gives young people room to develop better habits. You can also argue that a pure ban denies them practice in managing temptations and learning to live well with technology. Either way, if the policy simply delays exposure until 16 with no education or scaffolding, it is an odd approach to cultivating responsible digital citizens.
Across these perspectives, there is no single correct answer. But they sharpen what the real disputes are about. This is less a fight over whether children deserve protection, and more a fight over how much power the state should wield, how much respect we give to child and parental agency, and whether we think design level interventions are preferable to blunt prohibitions.
3. How well does an under 16 ban actually work?
Even if you think the harms are serious and the state should lean in, you still have to ask whether the instrument works.
There are several obvious problems.
It is easy to work around.
Any enforcement scheme relies on age verification. Platforms will use self declaration, age inference, selfies, third party age assurance services or some combination. Tech literate teenagers will route around this with VPNs, burner accounts, older friends’ details, or by moving to smaller, less regulated platforms. You end up in the familiar pattern where compliant families follow the law, non compliant ones exploit loopholes, and the most vulnerable teens are often the most determined to find side doors.
It has privacy costs for everyone.
To keep under 16s out, platforms have to get better at knowing who is under 16. That pushes the system toward more pervasive data collection and profiling. The state can prohibit mandatory use of government ID, but the pressure to build or contract age verification infrastructure is obvious. There is a real risk that a policy justified as “protecting children” accelerates a wider surveillance trend.
It undermines parental agency.
Parents already have direct control over phones, devices, app stores and home rules. They can deny smartphones entirely, delay social media for their own kids, or allow carefully supervised access. A ban removes the option where a thoughtful parent allows their mature 15 year old to use locked down social media for specific purposes. It also encourages a culture of quiet evasion, where parents and children collude in breaking rules they consider unreasonable.
It ignores the design problem.
The deeper issue is not simply that teenagers exist on social media. It is the recommender systems, infinite scroll, notifications and engagement incentives that shape what users see and how long they stay. The ban hardly touches the design of the system. It simply says “come back when you are 16.” That might delay exposure for some. It does nothing for 16 to 25 year olds, who are also vulnerable, and it does not reward platforms that redesign their products to be less predatory.
It has opportunity costs.
Political and regulatory bandwidth is finite. A big symbolic ban that dominates the headlines can crowd out quieter but more effective reforms: algorithmic transparency, ad restrictions for minors, design standards for feeds and notifications, funding for school based digital literacy, and expanded youth mental health services. Once the government has “done something big”, the temptation to declare victory is strong.
None of this means the policy has no upsides. It will probably reduce usage for a segment of younger teens whose parents are not particularly engaged. It sends a clear signal to platforms that governments are willing to take tough action. It may shift norms so that parents feel more confident delaying smartphones or social media and can say “the law is on my side.”
But if the underlying empirical signal is small and highly heterogeneous, and if the harms are concentrated in subgroups with problematic use, then a policy that is both crude and easily circumvented looks poorly matched to the problem.
4. Where does that leave us?
Start from the evidence. The association between social media use and adolescent mental health is real but small for most, more serious for a minority with addictive patterns of use, and entangled with a long list of other social and economic changes. It is nowhere near as clear or as strong as the evidence for harms from alcohol, drugs or gambling in adolescents.
Layer on the philosophy. If you care about child rights and liberal freedoms, you are naturally wary of blanket bans that override parental judgment and erase the difference between a competent 15 year old and an 11 year old. If you are strongly paternalist, you may be more comfortable, but you still need a policy that works in practice and does not generate worse side effects.
Finally, look at the instrument. A national under 16 ban on social media is easy to explain and politically attractive. It is also leaky, privacy heavy and tangential to the core design problems of the attention economy.
A more proportionate response would focus on:
Serious enforcement of under 13 rules that already exist.
Tiered protections for 13 to 17 year olds that change how platforms can target, recommend and notify, rather than simply whether young people may be present at all.
Binding design standards for addictive features when minors are present.
Investment in digital literacy and mental health support rather than only constraint.
Personal addendum
On first principles, I lean toward child centred rights and liberal freedoms. Children are not simply passive objects to be protected. They are emerging citizens with their own interests in information, expression and association. Parents are not simply obstacles to be bypassed by the state. They are usually the best placed to calibrate what their particular child can handle.
From that perspective, I am sceptical of any blanket ban that excludes all under 16 year olds from major online spaces, regardless of maturity, context or parental consent. When you add in the weakness of the average evidence, and the practical workarounds, this particular policy looks like a bad fit: rhetorically strong, empirically underpowered and structurally clumsy.
There is a serious problem to solve. But if we care about both children and freedom, we should spend less time congratulating ourselves for simple bans, and more time doing the harder work of reshaping the digital environment they are growing up in.
*Note. Drafted with input from GPT 5. I have not double checked the evidence review, but it aligns with what I have read. See also Peter Gray for a child rights centred view and a critique of the evidence.
