Statement on AI and Children
Roundtable of G7 Data Protection and Privacy Authorities
11 October 2024
- We, the G7 data protection and privacy authorities (DPAs),[1] are mindful of current regulatory and technological issues and developments relating to artificial intelligence (AI), in which the dignity and autonomy of children and young people, and the need to protect their fundamental rights and freedoms, should play a prominent role.
- In this fast-paced socio-technological landscape, which raises unprecedented challenges for privacy, data protection and other fundamental rights and freedoms, we welcome the acknowledgement of the “evolving and complex challenges that digital technologies, including AI, pose with respect to protecting human rights, including privacy, and of the risks to personal data protection” made explicit in the Ministerial Declaration of the G7 Industry, Technology and Digital Ministerial Meeting.[2]
- We acknowledge that AI has brought innovative changes to our society and is poised to play a transformational role in how we work, live, communicate, and experience the digital world. We agree that AI, when used responsibly and in a manner that respects fundamental rights, can be a beneficial tool for the people affected by its use. In fact, AI has already shown promise in helping to address some of today’s most pressing issues, including challenges in the fields of medical research and diagnosis, transportation and logistics, workplace productivity, education and skills development, and energy production and distribution.
- The current generation of children – “Generation Alpha” – will be the first to be raised in a world strongly influenced by AI. Whether through direct interactions with AI-driven products or through decisions made by automated systems, AI will have a meaningful impact on young people. Given these implications, it is important that particular attention be paid to child-centric concerns around the use and development of AI, an approach that all AI stakeholders should adopt.
- While we acknowledge the significant opportunities that digital technologies such as AI create for children and young people, we believe that the extent of their possible impact on data protection, privacy rights and freedoms must be closely monitored, given children’s and young people’s vulnerability with respect to these technologies. That vulnerability stems in particular from a limited understanding of digital privacy, children’s stage of development, and their limited life experience.
- Given the current environment of AI development in relation to children, DPAs play an important role in identifying potential privacy harms and promoting awareness of risks to privacy rights. In their work, DPAs pay specific attention to children’s and young people’s privacy and data protection rights by enforcing data protection laws, producing age-appropriate design codes of practice, and issuing guidelines that provide practical guidance on the protection and privacy of children and young people.
- Although children’s rights, including the rights to privacy and data protection, are clearly upheld in international conventions, our recent experience as supervisory authorities leaves us concerned about potential violations of privacy and data protection linked to the use of AI systems, which could have serious implications for children and young people. Keeping in mind current and potential future risks, we identify the following areas of concern as particularly relevant:
- AI-based decision making: the complexity of AI systems, together with insufficient transparency about the data processing that supports decision-making, may create significant risks, in particular for the rights and freedoms of children. Children and their caretakers may not be provided with sufficient information to know how to challenge decisions that may have a significant impact on them. Such opacity can also allow unintended discriminatory biases in AI-based decision making to go undetected and unchallenged.
- Manipulation and deception: AI tools could be used to nudge individuals into taking actions they might not otherwise take, including actions against their interests. The availability of AI-aided tools able to generate content that is manipulative, deceptive or capable of jeopardizing users’ emotional states and decision-making exposes children to serious risks. Because children are less able to recognize or question such manipulation when it occurs, they may be particularly vulnerable. Examples of such tools and applications include:
- AI in toys/AI companions: children and young people may be more likely to form bonds with AI-integrated toys or online companions, which could lead them to disclose sensitive personal information or to be otherwise manipulated;
- Deepfakes: children and young people may be particularly vulnerable to the impacts of deepfake content generated from their personal data, including, but not limited to, sexual or otherwise inappropriate images purporting to depict them (for example, through image “nudification”).
- Training of AI models: the training of models and algorithms plays a vital role in the effectiveness and reliability of AI tools, thus affecting their overall lifecycle. The collection and use of children’s personal data to train AI models, including data scraped from publicly available sources or captured from connected devices, can violate children’s privacy rights, and may lead to harmful consequences.
- In accordance with the concept of the best interests of the child, which is widely shared beyond our jurisdictions and, given the global reach of AI systems, provides solid ground for a collective global discussion, we emphasize the need to “promote the development and usage of emerging technologies in ways that reinforce trust and respect privacy”, in line with our Data Protection and Privacy Authorities’ Action Plan.[3] We encourage the development of a path towards trustworthy, child-appropriate AI that allows children and young people to safely leverage, on a global scale, the many opportunities of this revolutionary technology.
- Considering the aforementioned concerns:
- We welcome ongoing initiatives at a global level that bring together multidisciplinary perspectives and directly involve children and young people, as appropriate, in order to strengthen international cooperation in this area and take steps to address the concerns raised above. Such initiatives include the development and regular updating of codes of conduct to ensure a safe digital world for children, notably where AI is deployed.
- We recognize the need for developers and users of AI systems to implement specific age-appropriate measures that allow children and young people to use AI-enabled technologies safely. Such measures, in compliance with privacy and data protection laws and principles, should:
- ensure that AI-enabled technologies, notably those directly targeted at children or likely to be accessed by them, are informed by the privacy-by-design principle, keeping in mind especially (but not only) the concerns highlighted above;
- ensure that AI systems appropriately mitigate risks of online addiction, manipulation, or discrimination;
- protect children from harmful commercial exploitation, including the targeting or profiling of children for commercial purposes;
- ensure that AI models likely to impact children are designed in a way that supports their best interests. This includes implementing and documenting constraints both on the collection and use of training data relating to children and on inappropriate outputs from the model;
- incorporate a documented analysis of possible risks, through a privacy impact assessment, in order to enable the development of AI-based technologies that are safe for children “by design and by default”. In this context, service differentiation should be considered as a possible strategy, with the aim of deploying AI tools specifically designed for children;
- firmly respect the principle of transparency in the processing of personal data and in research and development activities for AI models, with a view to upholding the concept of explainable AI. The adoption of transparent models – which aim for traceability and explainability of how an algorithm produces specific results – helps young people and their caretakers make meaningful decisions about the sharing and use of their personal data. It also allows for clearer evidence of embedded safeguards, which can enhance trust and encourage wider and more effective use of AI for children.
- We emphasize that risks and considerations specific to children, including those highlighted above, should be taken into account when devising relevant national and international standards concerning the design, deployment, and marketing of AI applications.
- We encourage the promotion of “digital literacy” within digital innovation policy frameworks. Doing so can help to increase awareness of both the opportunities that AI systems offer children and young people and the risks to which they are exposed. This can also help to ensure that the use of AI in relation to children, and within education systems, is based not only on appropriate knowledge and skills, but also on a meaningful understanding of, and respect for, data protection rights and responsibilities. This applies equally to the training of staff with respect to the use of AI in educational contexts.
- We highlight that DPAs must play a leading role in identifying potential risks pertaining to the use of AI in relation to children. This includes continuing supervisory and enforcement activities, as well as a responsibility to raise awareness and make recommendations to stakeholders. These activities should be informed, where appropriate, by meaningful engagement between regulators, policymakers, educators, and other stakeholders.
Footnotes