
Introduction
The rapid advancement and proliferation of Artificial Intelligence (AI) have ushered in unprecedented opportunity and vulnerability, particularly for children. As AI-driven platforms increasingly shape the online experiences of young users, concerns about privacy, safety, and psychological well-being continue to mount. Against this backdrop, the September edition of the Hive Pulse Point convened a timely discussion on "Maintaining Child Online Safety in the Age of Artificial Intelligence". The session explored how AI technologies influence children’s online behaviour, the ethical and regulatory implications of these technologies, and the shared responsibility of parents, educators, policymakers, and technology companies in ensuring safer digital spaces.
AI’s Influence on the Digital Experiences of Children
AI now underpins many of the platforms children use daily, from social media feeds and gaming apps to educational software and virtual assistants. Through data collection and predictive algorithms, AI systems curate content based on user activity, such as the time spent on certain posts, comments, reactions, or emotional cues. While this level of personalisation can foster positive engagement and tailored learning, it also poses significant risks. When a child lingers over harmful or age-inappropriate material, algorithms often interpret this as preference and continue to recommend similar content, deepening exposure to unsuitable or damaging material.
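To make this feedback loop concrete, the toy sketch below (in Python) shows how a recommender that treats dwell time as a preference signal can turn a single long pause over harmful material into repeated exposure. It is a hypothetical illustration of the general mechanism, not any platform’s actual ranking code; the function names and signals are assumptions.

# A minimal, hypothetical sketch of engagement-based ranking.
from collections import defaultdict

def update_preferences(profile, topic, dwell_seconds):
    # Dwell time is read as interest, regardless of whether the
    # content is appropriate for the viewer.
    profile[topic] += dwell_seconds

def rank_feed(profile, candidate_posts):
    # Topics the user lingered over are ranked first, so one
    # accidental pause can snowball into repeated recommendations.
    return sorted(candidate_posts,
                  key=lambda post: profile[post["topic"]],
                  reverse=True)

profile = defaultdict(float)
update_preferences(profile, "harmful_challenge", dwell_seconds=45)  # one long pause
update_preferences(profile, "science", dwell_seconds=5)

feed = rank_feed(profile, [{"topic": "science"}, {"topic": "harmful_challenge"}])
print(feed)  # the lingered-over topic now leads the feed

Nothing in this loop asks whether the content should be shown to a child at all, which is precisely the gap that the safety-by-design measures discussed later in this piece aim to close.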
Features such as “auto-play” and “infinite scroll” are designed to maximise user engagement but can lead to addictive behaviours, social isolation, and diminished offline interaction. Over time, these mechanisms encourage a cycle of passive consumption, where children continuously absorb algorithmically curated content, leading to emotional dependency that undermines critical thinking and interpersonal communication. The growing prevalence of AI-powered “companions” has further blurred the boundaries between human and machine relationships, sometimes leading to emotional reliance on virtual entities. Such patterns of attachment can distort a child’s perception of reality and human connection, reinforcing the need for ethical design and parental awareness.
Exploiting Curiosity and the Risks of Online Grooming
Children are naturally curious, and AI-powered systems can exploit this trait by targeting them with suggestive content or by enabling inappropriate interactions. For instance, online gaming environments often feature embedded chat functions and AI chatbots that connect players across the world. These can become entry points for malicious actors who use psychological manipulation to build trust and solicit personal information. Even when social media access is restricted, children may still encounter threats through gaming platforms or educational tools that allow unsupervised communication.
The notion of “online grooming” has evolved with technology. It no longer requires direct messaging between predator and victim; rather, it can occur indirectly through algorithms that serve harmful communities or content that normalises risky behaviour. This dynamic underlines the inadequacy of conventional parental monitoring and highlights the need for broader systemic safeguards.
The Subtle Harms of AI and the Question of Autonomy
While overt risks such as exploitation and exposure to explicit content are widely discussed, the more subtle harms of AI-driven environments often go unnoticed. Constant exposure to algorithmically tailored information can erode children’s autonomy by shaping their choices and perceptions. Moreover, heavy reliance on parental control technologies, though well-intentioned, may foster distrust if used without explanation. Children who do not understand why restrictions are in place often seek to circumvent them, further exposing themselves to online threats.
Effective child safety strategies must therefore prioritise empowerment over restriction. Instead of shielding children completely, parents and educators should engage them in open discussions about online risks, guiding them towards responsible digital behaviour. Teaching children the reasons behind certain safety measures not only builds trust but also helps them develop the critical judgement necessary for independent navigation of the internet.
Striking the Balance Between Protection and Education
Ensuring children’s safety online requires a delicate balance between leveraging technology for educational growth and mitigating its potential harms. AI tools and digital platforms can offer immense educational value, from interactive learning to creative expression, but excessive or unregulated use can have serious consequences. Establishing this balance depends on collective responsibility. Governments, civil society organisations, educators, and parents all have distinct roles in shaping safer digital environments.
Awareness remains the most powerful preventive tool. Parents must keep pace with technological developments, understanding the platforms their children use and the risks associated with each. Schools should incorporate digital literacy and cybersecurity education into their curricula to reinforce responsible online conduct. Simultaneously, public institutions and private companies must collaborate to ensure that AI tools used by children adhere to strict ethical standards and transparency requirements.
Designing AI Systems with Child Safety in Mind
Creating AI systems that prioritise child welfare requires more than compliance. It demands proactive and child-centred design. This involves embedding privacy by design and privacy by default principles into all stages of development. Developers should recognise that virtually every digital platform can be accessed by minors, whether or not it is explicitly marketed to them. Age-appropriate design codes can help establish necessary boundaries by tailoring platform features, advertisements, and permissions to specific developmental stages.
Child online protection must also be treated as an ongoing process rather than a one-time technical fix. Continuous research and risk assessment are vital, as the challenges evolve alongside technology. Developers and policymakers should design safeguards that anticipate how children of different age groups interact with technology and where vulnerabilities might arise. Moreover, platforms should consider implementing systems that restrict targeted advertising, limit data collection, and provide clearer user controls for younger audiences.
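As a purely hypothetical sketch of what such age-tiered defaults could look like in code (the age bands and settings below are illustrative assumptions, not drawn from any specific design code or platform):

# Hypothetical age-tiered safety defaults, inspired by the idea of
# age-appropriate design codes; the bands and values are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyDefaults:
    targeted_ads: bool    # behavioural advertising on/off
    autoplay: bool        # auto-play of recommended media
    stranger_dms: bool    # unsolicited direct messages allowed
    data_minimised: bool  # collect only what a feature strictly needs

def defaults_for_age(age: int) -> SafetyDefaults:
    # The strictest protections apply by default to the youngest users
    # and are loosened only gradually, never silently.
    if age < 13:
        return SafetyDefaults(targeted_ads=False, autoplay=False,
                              stranger_dms=False, data_minimised=True)
    if age < 16:
        return SafetyDefaults(targeted_ads=False, autoplay=True,
                              stranger_dms=False, data_minimised=True)
    return SafetyDefaults(targeted_ads=True, autoplay=True,
                          stranger_dms=True, data_minimised=False)

print(defaults_for_age(12))  # all protections on for a 12-year-old

The design choice worth noting is that protection is the default state: a younger user receives the safest configuration without having to find and enable it.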
Ethical Boundaries and Regulatory Gaps in Africa
Across Africa, conversations about AI governance and data protection are gaining momentum, yet many countries remain in the foundational stages of implementation. Although countries such as Nigeria, Ghana, and Kenya have made commendable progress in establishing data protection and cybersecurity frameworks, AI regulation across the continent remains fragmented. Consequently, experts argue that instead of rushing to adopt AI-specific laws, governments should first focus on strengthening the enforcement of existing frameworks. Strengthening institutional capacity, investing in digital literacy, and enhancing cross-border collaboration are critical prerequisites for sustainable AI governance.
A phased, context-specific approach is essential. Rather than replicating Western models, African governments should localise their frameworks, taking into account cultural values, literacy levels, and infrastructural realities. Collaboration with international bodies, civil society organisations, academia, and big tech companies can facilitate the sharing of best practices, but local adaptation remains crucial for meaningful impact. Importantly, efforts to shape digital and AI policies must also include the voices of children themselves. Establishing structured platforms such as a "Children’s Assembly" or digital safety councils would allow young people to articulate their experiences, needs, and concerns directly, ensuring that regulations are grounded in lived realities rather than adult assumptions of what is best for them. Public education campaigns should complement these participatory and legal efforts, bridging the knowledge gap between policymakers, parents, and digital users, and fostering a culture of shared responsibility in child online protection.
Building Digital Resilience: The Role of Parents and Educators
Parents and educators play a frontline role in cultivating digital resilience. Online safety should be taught as a component of everyday life, on a par with lessons on physical safety and social responsibility. By fostering a culture of open communication, adults can encourage children to report uncomfortable or confusing online experiences without fear of punishment. Schools and communities can also organise workshops that teach both parents and children about cyber hygiene, privacy, and responsible sharing.
Children must learn that the internet, while vast and resourceful, is also a public space where every action leaves a digital footprint. They should understand the risks of sharing personal details, images, or locations, even in seemingly private conversations. Similarly, parents need to be mindful of “sharenting”, the act of publicly sharing their children’s or wards’ personal information. Innocent posts about birthdays, school uniforms, or family routines can unintentionally reveal sensitive details that compromise a child’s privacy or safety.
Toward Ethical and Inclusive AI
The pursuit of ethical AI extends beyond safety; it includes fairness, inclusivity, and diversity in design. AI moderation systems, trained on limited or biased datasets, often fail to account for regional, cultural, or socio-economic differences, thereby perpetuating inequality. In the context of child protection, this means that children from minority or marginalised backgrounds may either be over-policed or under-protected by algorithmic systems.
Developing fair AI systems requires diverse datasets that reflect the experiences of children across various contexts. It also demands that children, parents, and educators be involved in the design and testing of child-oriented technologies. Engaging users in co-creation ensures that digital tools align with children’s real-world experiences and promote inclusion rather than exclusion. As noted earlier, establishing structured children’s assemblies or consultative platforms can further ensure that children’s perspectives are meaningfully integrated into policy and design processes. Ethical frameworks, such as child impact assessments, can help developers anticipate risks and put mitigation measures in place.
Rethinking Regulation and the Future of AI Safety
Debates around age restrictions and social media bans continue to divide global opinion. Several countries have begun experimenting with formal prohibitions for younger users, citing growing evidence of online harms among adolescents. Australia has taken the most decisive step, enacting the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which prohibits access to major platforms such as TikTok, Instagram, and Snapchat for children under sixteen. France has also passed legislation requiring parental consent for users under fifteen, while Denmark recently announced plans to restrict social media access for children below fifteen as part of its broader national digital safety strategy. In Asia, Indonesia is developing a similar framework inspired by Australia’s approach, and within Europe, countries such as Belgium and the Netherlands have adopted softer regulatory models, introducing parental-consent mechanisms or issuing non-binding age guidelines for online participation.
However, universal bans may be counterproductive, as they can infringe upon rights to access information and limit digital learning opportunities. Rather than imposing blanket restrictions, policymakers should focus on building adaptive, evidence-based regulations that emphasise education, awareness, and parental empowerment.
The future of AI governance for child safety lies in striking a balance between innovation and regulation. Governments must engage continuously with technology companies, establishing transparency and accountability mechanisms without stifling creativity. Regulatory frameworks should encourage responsible data use, clear reporting structures for online abuse, and strict penalties for violations, ensuring that digital environments evolve safely and ethically.
Conclusion
The challenge of maintaining child online safety in the age of AI transcends technological innovation. It is a moral, educational, and societal imperative. Protecting children requires collaboration across sectors and generations. Parents must become digitally literate; educators must integrate online safety into learning; and policymakers must design laws that protect without paralysing innovation. In alignment with this vision, we have developed a suite of practical tools designed to support parents, educators, and institutions in promoting safer digital engagement for children. Child safety cannot be left to chance or automation; it must be intentionally designed into every layer of the digital ecosystem. The goal is not to isolate children from technology but to empower them to navigate it safely, confidently, and with the full protection of the systems created to serve them.
This article is based on the Hive Pulse Point Series session moderated by Wisdom Agbonyehemen, with Adedolapo Adegoroye, Omolara Esther Hamzat, Emmanuella Aston, and Mosadi Moloi as panellists. We thank the guests for their time and input. You can catch up with the session recording here.