Statement on the Role of Data Protection Authorities in Fostering Trustworthy AI
Roundtable of G7 Data Protection and Privacy Authorities
11 October 2024
- Following the adoption of the G7 Data Protection and Privacy Authorities’ Action Plan in Tokyo in June 2023, and the commitment “to promote the development and usage of emerging technologies in ways that reinforce trust and respect privacy”Footnote 1 in a rapidly evolving technological landscape, we, the G7 Data Protection and Privacy Authorities (G7 DPAs), met to discuss the role of DPAs in fostering trustworthy AI.
- We recognize the increasing use of AI technologies across all sectors of society, creating opportunities at an extraordinary pace. At the same time, we note that AI, both in the public and in the private sectors, poses unprecedented challenges, in particular to privacy, data protection and other fundamental rights and freedoms.
- Therefore, we welcome the recognition of privacy and data protection in the Ministerial Declaration of the G7 Industry, Technology and Digital Ministerial Meeting (Verona and Trento, 14-15 March 2024)Footnote 2 which, in line with the Ministerial Declaration of the G7 Digital and Tech Ministers’ Meeting (Takasaki, 29-30 April 2023)Footnote 3, remarks that G7 countries:
- “are aware of the evolving and complex challenges that digital technologies, including AI, pose with respect to protecting human rights, including privacy, and of the risks to personal data protection […]” (point 7);
- acknowledge, in particular in the public sector, that “it is more apparent than ever that the development, deployment, and use of AI systems should respect the rule of law, due process, democracy, human rights, including privacy, and protect personal data […]” (point 48 and Annex 2);
- “are committed to prioritising secure and inclusive approaches that respect human rights and protect personal data, [and] privacy […]” (point 55) when developing, deploying, and governing digital government services, including digital public infrastructure.
- We highlight that the position enshrined in the G7 Ministerial Declaration is not only longstanding, having found initial recognition in the Ethics guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence set up by the European Commission (8 April 2019)Footnote 4 and in the OECD Recommendation on Artificial Intelligence (22 May 2019)Footnote 5, but has found widespread support in other international fora, including:
- the G20 AI Principles, at the Osaka Summit (June 2019)Footnote 6, recently reiterated in the G20 New Delhi Leaders’ Declaration (September 2023)Footnote 7;
- the Recommendation on the Ethics of Artificial Intelligence by the United Nations Educational, Scientific and Cultural Organization (UNESCO, 23 November 2021)Footnote 8;
- the Bletchley Declaration, adopted by the countries attending the “AI Safety Summit” (Bletchley Park, UK, 1-2 November 2023)Footnote 9, now echoed by the “Seoul Declaration for safe, innovative and inclusive AI”Footnote 10.
- We note that, with particular reference to respect for privacy and data protection, the key policy messages mentioned above are being translated into legal instruments. These include the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law of the Council of EuropeFootnote 11, the EU AI ActFootnote 12, and the US President’s Executive Order on Safe, Secure, and Trustworthy AIFootnote 13.
- We emphasize that many AI technologies, including generative AI, rely on the processing of personal data, which can subject natural persons to unfair stereotyping, bias and discrimination, even when their personal data are not directly processed. This, in turn, may influence broader societal processes through deepfakes and disinformation. Consequently, data protection and the protection of the right to privacy are more critical than ever.
- We reiterate, as noted in the G7 DPAs’ “Statement on Generative AI” (June 21, 2023), that current privacy and data protection laws apply to the development and use of generative AI products, even as different jurisdictions continue to develop AI-specific laws and policiesFootnote 14.
- We note that DPAs have significant experience in examining and resolving AI issues through the development of recommendations, guidelines and policy documents, as well as through their enforcement actions. This experience is evidenced by the work of various international fora, including:
- Resolutions and Declarations adopted by the Global Privacy Assembly, in particular on Big DataFootnote 15, ethics and data protection in AIFootnote 16, facial recognition technologyFootnote 17, generative AI systems and AI and employmentFootnote 18;
- Working Papers on Privacy and Artificial IntelligenceFootnote 19, “Smart Cities”Footnote 20 and Large Language Models adopted by the International Working Group on Data Protection in Technology;
- Résolution sur l’accompagnement du développement de l’intelligence artificielle, adopted by the Association francophone des autorités de protection des données personnelles (AFAPDP)Footnote 21;
- Recomendaciones Generales para el Tratamiento de Datos en la Inteligencia Artificial, adopted by the Red Iberoamericana de Protección de DatosFootnote 22;
- ongoing discussions reported among the Asia Pacific Privacy Authorities (APPA)Footnote 23.
- We acknowledge that the complexity of AI technologies, which often involve extensive collection of personal data and sophisticated algorithmic systems, has led DPAs to emerge as key actors in the AI governance landscape, leveraging their expertise in data protection to uphold privacy and ethical standards. Their role is crucial in fostering truly “trustworthy” AI technologies, ensuring that they are developed and used responsibly. By drawing on their extensive experience and working collaboratively, DPAs can help navigate the complexities of AI, promoting the lawful development and deployment of these technologies whilst respecting human rightsFootnote 24.
- More precisely, just as the fundamental principles of data protection must, by design, be embedded in AI technologies, DPAs too, by design, must be included in the governance that is being built in relation to AI technologies – as highlighted in the Communiqué of the UK Roundtable of G7 Data Protection and Privacy Authorities (7-8 September 2021)Footnote 25.
- In this regard, we note that the G7 DPAs, by applying the data protection principles in place in their jurisdictions and, when appropriate, operating in a coordinated manner at a regional level, have already gained significant experience and expertise in assessing the impact of AI technologies on individuals’ rights from many different perspectives, including:
- supervising the processing of personal data in AI technologies, for example in the use of manipulative and deceptive AI tools targeting children, AI-assisted facial recognition, measures tackling tax evasion, the handling of workplace and employee personal data, education, and generative AI;
- monitoring technological developments through engaging with stakeholders and preparing reports, discussion papers and blog posts;
- drafting opinions on the various legislative initiatives that have led to the introduction of general or sectoral regulations on AI technologies;
- providing information of general interest and publishing guidelines that can inform public and private sector entities intending to use AI technologies;
- operating, usually in cooperation with other regulators, within regulatory sandboxes established at national or regional level;
- conducting compliance investigations into providers of AI technologies to determine how personal information is used at the various phases of AI model development and deployment.
- Moreover, we underline that DPAs act in full independence, which is crucial for ensuring a responsible and efficient governance of the development of AI technologies. This independence guarantees that decisions are impartial, focused on protecting fundamental rights, and free from external influences, thus promoting the ethical and transparent development and use of AI technologies.
- We strongly agree that education is key to helping equip individuals and organisations with the knowledge needed to navigate the evolving AI landscape responsibly. Within this framework, we emphasize that DPAs play a prominent role in promoting public awareness and understanding of AI technologies by way of their ongoing engagement with public and private stakeholders.
- We acknowledge that, as AI technologies continue to evolve, the cooperation between DPAs will be vital in creating a trustworthy AI ecosystem that benefits society as a whole. We recognize, therefore, that the global dimension of AI necessitates a stronger collaboration of DPAs across different jurisdictions.
- As recalled in the 2023 G7 DPAs’ Communiqué “Working toward operationalizing Data Free Flow with Trust and intensifying regulatory cooperation”Footnote 26, we confirm our commitment to work together to continue fostering innovation and advancing the safe, secure and reliable development, deployment and use of trustworthy AI, while protecting and ensuring a high level of data protection and privacy. We affirm our shared and inextricably linked fundamental values and principles, such as freedom, democracy, human rights, and the rule of law, and ensure that these values remain at the core of our cooperation.
- Furthermore, without prejudice to DPAs’ independence, we acknowledge the importance of collaboration and cooperation among various regulatory bodies, including those focused on competition, electronic communications, consumer protection and other relevant competent authorities, in order to address the multifaceted challenges posed by AI technologiesFootnote 27. We believe that a cooperative approach, in which DPAs are at the forefront in working closely with other authorities and competent bodies, ensures a holistic governance framework that can effectively manage the risks and harness the benefits of trustworthy AI technologies whilst safeguarding fundamental rights.
- Therefore, we call on policymakers and regulators to make available adequate human and financial resources to DPAs, to enable our societies to adequately tackle the new, highly demanding challenges posed by developing trustworthy AI as outlined in this Statement.
- In order to achieve the objective of trustworthy AI in light of all the above considerations, we call for approaches that recognize that:
- DPAs are human-centric by default: DPAs already place human rights and the protection of individuals at the centre of their approach. Data protection seeks to safeguard people’s rights and freedoms in relation to the processing of their data, including when that processing takes place within AI technologies;
- Many data protection overarching principles can be transposed into broader AI governance frameworks: Fairness, accountability, transparency, and security are not only principles discussed in terms of AI governance but are principles that many DPAs already operationalize;
- DPAs supervise a core component of AI: Personal data sit at the core of AI development, including within applications such as generative AI. The DPAs’ role in ensuring AI technologies are developed and deployed responsibly is critical, and many DPAs are experienced in this role;
- DPAs can help address problems at their source: DPAs can help identify and address AI issues before they grow into systemic problems by examining AI in both upstream (how it is developed) and downstream (how it is deployed) contexts. This means DPAs already have the capability to address issues at their source, before the technology is deployed at scale;
- DPAs have experience: DPAs already have vast experience with data-driven processing and can address AI thoughtfully and efficiently given this experience.