Teens, Technology Panics, Social Media Regulation, and Crime in the Digital Age
Teenagers are increasingly affected by the rise of smartphones and social media, with research linking these technologies to rising anxiety, depression and self‑harm.
Recent legal scrutiny, including Mark Zuckerberg’s 2026 court questioning, highlights concerns that platforms like Instagram are deliberately designed to hook young users.
Governments are responding: the UK’s Online Safety Act tightens platform responsibilities, while Australia now bans under‑16s from social media entirely.
At the same time, criminals exploit these platforms to target vulnerable youths, raising urgent questions about whether regulation can keep pace with digital risks.
By Steve Rick, CEO
Coming to terms with new communications technology has been a cornerstone of the “new normal” across the economy. While this has forced welcome modernisations in work and connectivity, it has also brought online harms into sharper focus. Youth have been placed at the centre of debates about addiction, mental health, crime, and digital wellbeing.
Jonathan Haidt’s 2024 book “The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness” brings this to the forefront of all our lives, whether as parents, grandparents, teachers, clinicians, politicians or policy makers.
The term "Anxious Generation" refers to the rising levels of anxiety and mental health issues among youth, largely attributed to the impact of smartphones, social media and overprotective parenting. Haidt’s research explores the decline in adolescent mental health since the early 2010’s. This decline is characterised by increased rates of anxiety, depression, self-harm and suicide among young people, particularly those of Generation Z (those born after 1995).
These questions have not remained academic. On Wednesday 18 February 2026, Meta CEO Mark Zuckerberg was cross-examined in Los Angeles Superior Court as part of a civil lawsuit alleging Instagram and other Meta platforms were knowingly designed to attract and “hook” young users and contribute to youth mental health harms. Zuckerberg acknowledged that children under the age of 13 are not permitted on Instagram but admitted enforcement of age limits has been “very difficult,” and internal documents suggested many underage users still access the platform. Testimony at trial also highlighted Meta’s internal focus on time-spent engagement metrics that may incentivise extended scrolling among youth.
This high-profile trial forms part of a broader global pushback against social media companies and intersects with regulatory reforms in the UK and Australia aimed at protecting young people online.
Regulating Social Media in the United Kingdom
In the UK, contentious debates over social media harms have translated into binding legislation. The Online Safety Act 2023 — passed as part of the government’s long-running attempt to regulate harmful online content — imposes a duty of care on large social media platforms.
Under the Act:
Platforms must use “highly effective age assurance” methods to prevent access by children to age-inappropriate content (e.g., pornography, self-harm, bullying) and ensure age limits are properly enforced in their terms of service and practices.
Services must carry out risk assessments focused on how their design and algorithms expose children to harm, and they must offer tools for users to control their experience.
Ofcom has broad powers to regulate compliance and can fine companies up to £18 million or 10% of global annual turnover, whichever is greater, for systemic failures.
Regulations also include provisions for quick removal of non-consensual intimate images and other harmful material where notified.
This framework reflects a shift from voluntary safety tools toward legally enforceable obligations requiring platforms to demonstrate they are protecting minors online rather than simply offering optional controls to users.
The focus on age assurance and child safety duties intersects with debates about how and whether platform design choices contribute to harm. While the Online Safety Act does not ban children from social media outright, it does require platforms to proactively mitigate harm and enforce age restrictions more robustly than in the past.
Ongoing UK Policy Discussion
Since enactment, policymakers have continued to consider whether additional protections, such as minimum age limits along the lines of Australia’s approach, might be introduced. Proposals have been floated to tighten age assurance requirements or otherwise mitigate younger teens’ exposure to algorithm-driven feeds and engagement mechanisms, reflecting political and public concern about adolescent online wellbeing.
Australia’s Minimum Age Social Media Law
Australia has taken a notably more interventionist approach by legally restricting social media access for younger adolescents.
In late 2024, the Online Safety Amendment (Social Media Minimum Age) Act 2024 was passed, amending the Online Safety Act 2021 to require “reasonable steps” by platforms to prevent users under 16 from creating or maintaining accounts on popular social media services accessed in Australia. These include Instagram, Facebook, TikTok, Snapchat, X (formerly Twitter), Reddit, Threads, Twitch, and YouTube — among others (Australian Parliament, 2024).
The law came into force on 10 December 2025, making Australia the first country to impose a statutory minimum age of 16 for social media account holders. Under the legislation:
Platforms must implement age assurance systems that reliably determine whether a user is under 16 and act to block accounts if necessary.
The responsibility lies with companies, not parents or teens, and non-compliance risks significant fines.
There are no criminal penalties for minors who happen to access platforms in violation of the age rule — enforcement targets the platforms themselves (eSafety Commissioner, 2025; Infrastructure Department, 2025).
Australian regulators have also published guidance on how age assurance and verification should be implemented and enforced, acknowledging practical challenges in balancing safety with privacy.
This law has generated debate over implications for digital rights and privacy, the effectiveness of age verification technologies, and whether prohibitions simply push youth toward less regulated corners of the internet (Parliamentary Education Office, 2026).
Adolescents, Crime, and Digital Regulation
Earlier debates around teens, technology, and crime often lacked clear causal evidence linking online activity to behavioural outcomes, and developmental psychologists remain divided on whether technology use itself reliably causes harm. Jonathan Haidt’s research, however, chimes with many, including legislators who have taken steps to mitigate the harmful effects of social media on the young.
Social media’s role in facilitating both coordination of lawful activity and the planning or glorification of criminal behaviour is well documented, from the 2011 London riots to more recent protest movements.
Forensic Analytics work closely with law enforcement, providing our CSAS software and expertise to help combat serious and organised crime. We see at close quarters the impact that county lines drug gangs have on individuals and communities. We are privileged to support the UK National County Lines Coordination Centre (NCLCC) and the Regional Organised Crime Units (ROCUs), who do tremendous work both in combating the gangs and, through partners, in supporting those exploited. Their “Lived Experience” training sessions, run in collaboration with the Ivison Trust and aimed at police officers, social workers, probation and the prison service, bring into sharp focus the real experiences of those who have been exploited and of their families.
Sima Kotecha’s recent BBC investigation sets out how vulnerable girls as young as 14 in London are being exploited and forced into sex by gangs, with some raped as “payment” for debts. Sadly, this is not restricted to London; it is endemic, happening throughout the UK and internationally.
As Kotecha sets out, survivors and police describe how gangs target girls from diverse backgrounds, grooming them for sex, drugs, and criminal activities, often exploiting their vulnerabilities.
Working closely with policing, we see that much of this grooming is conducted via social media. What regulators increasingly focus on is platform design and business incentives rather than mere access. By imposing structural duties, such as age assurance, swift content takedown and minimum age requirements, legislation attempts to align commercial incentives with public safety outcomes. But is it enough? How does society “put the genie back in the bottle”?
Conclusion: From Panic to Policy
It is easy to fall into familiar “technology panics” when confronted with social media’s cultural impact. What distinguishes the current moment is that many governments have moved beyond abstract fear to concrete legislative action.
The UK’s Online Safety Act enshrines duties of care, age assurance, and harm mitigation into law, while Australia’s minimum age law represents an unprecedented statutory restriction on youth access. These regulatory innovations reflect global concern about youth mental health, addiction, and online harms, and intersect directly with legal challenges companies like Meta now face in court.
As that litigation unfolds, and as policymakers continue to refine and expand legal frameworks, debates around social media, youth wellbeing and crime will increasingly be shaped by law and policy, not just academic theory or cultural alarm. In an age where criminals have weaponised technology and are early adopters, are we doing enough to protect the young and vulnerable?