More countries are tightening social media access for adolescents, and India is now testing the same idea. A trend set by Australia, where under-16 account restrictions took effect on 10 December 2025, is being picked up elsewhere, with Austria, the Czech Republic, Denmark, Greece, Indonesia, Malaysia and Norway among those considering restrictions.
India has begun discussions with social media intermediaries on a blanket, age-based restriction for children, reflecting a growing view that voluntary safeguards have failed. Across jurisdictions, the policy toolkit is converging: age verification, parental-consent frameworks, and sharper accountability for platforms that profit from attention but externalise harm.
Adolescent social media bans spreading fast
Australia’s nationwide restrictions signalled that digital childhood protections can be enforced if the state treats the issue as a compliance problem, not a moral appeal. Europe is tightening expectations under broader online safety regimes, including the United Kingdom’s approach to platform duties. Spain and Italy are also pushing for tougher compliance within their own regulatory contexts.
What is changing is not just the geography of the debate, but its direction. Malaysia’s participation matters because it undercuts the argument that child online safety is a Western cultural panic. Countries with very different political systems are arriving at a similar diagnosis: addictive product design, weak guardrails, and limited accountability have created an environment that is hard for adolescents to navigate without harm.
Age restriction needs credible implementation
India’s Digital Personal Data Protection framework already recognises differentiated norms for children: platforms must obtain verifiable parental consent to process personal data of those under 18, which is an indirect but real compliance lever. The current consultations suggest a shift from principle to enforcement. That shift will mean little without an implementation architecture that can stand up in court, in the market, and against routine circumvention.
Age verification is the first fault line. Overly intrusive identity checks create justified fears of surveillance and data misuse. Light-touch methods are easily bypassed, producing compliance theatre that burdens platforms but protects no one. India’s challenge is to design a system that verifies age credibly while minimising data collection, retention and misuse.
This is where the state must use the real chokepoints, not just issue advisories. If India moves to an under-16 restriction, enforceability will depend less on promises from platforms and more on app-store and operating-system controls that can prevent account creation and installs by age cohort, backed by a clear statutory rule, audit rights, and a measurable penalty regime. Otherwise, age gates will be bypassed while smaller Indian intermediaries absorb disproportionate compliance costs. Australia’s model is explicit about placing the burden on platforms to take “reasonable steps” to block under-16 accounts, not on families or children.
Privacy-preserving age assurance is the hard part
The European Union’s evolving model offers one direction: third-party age assurance providers that verify eligibility without revealing full identity data to platforms. In principle, that approach aligns with data-minimisation: confirm “over/under” a threshold, not “who” the person is.
India could adapt this logic to its own digital identity ecosystem, but the design has to be tight. Any age-assurance layer must be interoperable across platforms, auditable, resistant to fraud, and subject to clear limits on data use. Without those guardrails, the cure becomes a parallel pipeline for profiling and surveillance, which will erode public legitimacy and invite litigation.
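To make the data-minimisation point concrete, here is a minimal sketch of the “over/under, not who” flow, written in Python and assuming a hypothetical assurance provider and the open-source cryptography package; it is an illustration of the division of knowledge, not a description of any actual EU or Indian scheme. The provider signs a claim that says only whether the holder is over 16, and the platform checks the signature and the boolean without ever seeing identity data. A real deployment would add nonces, audience binding, revocation, and fraud controls.

```python
# Hypothetical sketch: a third-party age-assurance provider signs an "over 16"
# claim with no name, ID number, or birth date; the platform verifies the
# provider's signature and the boolean, nothing else.
# Requires the third-party "cryptography" package (pip install cryptography).
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_age_token(provider_key: Ed25519PrivateKey, over_16: bool) -> dict:
    """Provider side: sign a claim that carries no identity data."""
    claim = {"over_16": over_16, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": provider_key.sign(payload).hex()}


def platform_accepts(token: dict, provider_public_key) -> bool:
    """Platform side: check the provider's signature and the age flag only."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        provider_public_key.verify(bytes.fromhex(token["signature"]), payload)
    except InvalidSignature:
        return False
    # Reject stale tokens so one attestation cannot circulate indefinitely.
    if time.time() - token["claim"]["issued_at"] > 24 * 3600:
        return False
    return token["claim"]["over_16"]


# Illustrative flow: age is checked out of band by the provider; the platform
# only ever handles the signed token.
provider_key = Ed25519PrivateKey.generate()
token = issue_age_token(provider_key, over_16=True)
print(platform_accepts(token, provider_key.public_key()))  # True
```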
Platform accountability beyond access restrictions
Even a well-designed age gate does not address the incentives driving harmful experiences. The deeper problem is that platforms optimise for engagement, and adolescents are especially vulnerable to recommendation systems built to maximise time spent.
So regulation must move from “who can access” to “what is being served” and “how it is being served”. That implies design obligations: reducing algorithmic amplification of harmful content, disabling targeted advertising for minors, and default time-use controls that are hard to switch off and easy for parents to manage.
The United Kingdom’s online safety approach has pushed the idea of safety by design, requiring companies to anticipate risk rather than plead ignorance after harm occurs. India can borrow that discipline. If young users do go online, the environment they enter should not be engineered to keep them there.
Harmful content takedowns need speed and proof
Union minister for electronics and information technology Ashwini Vaishnaw has also flagged deepfakes and the need for global platforms to function within India’s constitutional framework and cultural context. His remarks followed a tightening of India’s content rules, including a requirement to remove unlawful content within three hours of notification.
The instinct is correct: in modern distribution systems, delay is damage. Minors are particularly vulnerable to reputational harm and psychological distress caused by manipulated content. Many jurisdictions are now considering or mandating watermarking standards, disclosures for AI-generated media, and rapid takedown mechanisms for impersonation content.
But speed mandates also raise operational questions. What qualifies as “harmful” versus merely offensive? Who verifies the notice is valid? What is the appeal route? If a three-hour rule is enforced without procedural clarity, it invites over-removal, selective enforcement, and arbitrary outcomes. If it is enforced weakly, it becomes another headline policy that platforms treat as a manageable cost.
India will need a system that can prove compliance, measure response times, and penalise repeated failure. Without metrics and consequences, deadlines are just slogans.
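As a rough illustration of what measurable compliance could look like, here is a minimal Python sketch with entirely hypothetical timestamps: an auditor computes the share of valid notices actioned within the three-hour window and the worst-case delay from logged notice and removal times, rather than relying on a platform’s own attestation. The data model and deadline are assumptions for the example, not a statement of how India’s rules will be audited.

```python
# Hypothetical compliance check: given (notice received, content removed)
# timestamps from platform logs, measure how often the three-hour deadline
# was met and how long the slowest case took.
from datetime import datetime, timedelta

DEADLINE = timedelta(hours=3)

# Illustrative records only; a real audit would draw these from verifiable logs.
takedowns = [
    (datetime(2025, 12, 1, 9, 0), datetime(2025, 12, 1, 10, 40)),
    (datetime(2025, 12, 1, 11, 0), datetime(2025, 12, 1, 15, 30)),
    (datetime(2025, 12, 2, 8, 15), datetime(2025, 12, 2, 9, 5)),
]

delays = sorted(removed - noticed for noticed, removed in takedowns)
within_deadline = sum(d <= DEADLINE for d in delays)

print(f"Compliance rate: {within_deadline / len(delays):.0%}")
print(f"Worst case: {delays[-1]}")
```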
AI investment and online safety must move together
India is tightening regulation while also making a large investment push in artificial intelligence infrastructure. The projection that AI investments could cross $200 billion over the next two years underlines the political economy of the moment: India wants to be a builder of digital ecosystems, not only a user of global technologies.
Doing both at once is sensible. Countries that pursued growth alone often found themselves reacting to social consequences too late. Countries that overregulated early sometimes struggled to build competitive technology sectors. India is attempting the harder path: building capacity while also trying to set behavioural and market boundaries.
That will require coherence across ministries, regulators, and enforcement agencies. An age-based ban cannot sit in isolation from rules on advertising, recommender systems, content moderation, data protection, and grievance redressal. Fragmented enforcement will be gamed by platforms and punished by courts.
Digital literacy is the missing pillar
Regulation alone cannot carry the load. India will also need school-level digital literacy campaigns that teach adolescents how algorithms shape feeds, how misinformation spreads, and how manipulation works. Countries that invested in structured awareness programmes reduced dependence on enforcement and built behavioural resilience among young users.
This is not soft policy. It changes outcomes. A teenager who understands how a feed is engineered is less likely to treat it as neutral reality, and more likely to disengage from harmful loops. Literacy also strengthens families’ ability to use parental controls and recognise risky patterns early.
The debate on children’s access to social media is no longer abstract. As governments try to draw boundaries around algorithmic influence, India’s choices will be watched for governance precedent across the Global South.
If India chooses a ban without enforceable design duties, it risks symbolic policy and predictable bypass. If it chooses surveillance-heavy verification, it risks undermining trust and creating a larger civil-liberties problem. The durable route is narrower and harder: privacy-preserving age assurance, measurable platform obligations, fast but accountable takedown systems, and literacy that reduces dependence on policing alone.

