PETALING JAYA: Governments worldwide are taking measures to protect children in the digital age, introducing regulations aimed at reducing exposure to harmful content and empowering parents with tools to monitor online activities.
From China to the European Union, these measures represent a growing recognition of the risks posed to minors in the rapidly evolving digital landscape.
In China, effective from Jan 1 this year, the Regulations on the Protection of Minors in Cyberspace require online platforms to prioritise child safety.
The regulations introduced a mandatory “Juvenile Mode”, which provides age-appropriate content and features for young users.
Platforms must also impose strict time limits for gaming, restricting minors to 90 minutes of play on weekdays and three hours on weekends, with access entirely banned between 10pm and 8am.
They are also required to actively monitor and block access to violent, sexual or otherwise harmful material.
The regulations also mandate real-name registration systems, ensuring that all users are accurately identified and thus preventing minors from bypassing age restrictions.
To support parents, service providers must offer user-friendly tools that enable activity monitoring and screen time management.
In the European Union, the Digital Services Act, which came into force in February this year, sets out comprehensive obligations for online platforms to protect users, particularly minors, across all 27 member states.
The Act requires platforms to implement default privacy settings tailored for children, restrict data collection and provide tools that allow parents to limit screen time and block harmful content.
These measures are bolstered by the Better Internet for Kids (BIK+) Strategy, aimed at fostering a safer and more empowering digital environment for children.
The EU’s regulations also mandate enhanced content moderation, requiring platforms to filter out material related to violence, self-harm and inappropriate interactions.
In Spain, new legislation passed in June raised the minimum age for creating social media accounts from 14 to 16.
The law also requires manufacturers to include built-in parental control tools on devices, ensuring that parents have immediate access to monitoring features during the initial device setup.
Outside of Europe, Australia has taken a more aggressive stance with its legislation.
In November, the government introduced a law banning individuals under the age of 16 from accessing major social media platforms, including Facebook, Instagram and TikTok.
Companies that fail to comply face fines of up to A$49.5mil (RM139.47mil).
This follows the launch of an age verification pilot scheme earlier in the year, aimed at preventing minors from accessing explicit content online.