Remarks at the Privacy and Generative AI Symposium 2023

December 7, 2023

Ottawa, Ontario

Address by Philippe Dufresne
Privacy Commissioner of Canada

(Check against delivery)


Good morning, and welcome to our Privacy and Generative AI Symposium. My name is Philippe Dufresne, and I am the Privacy Commissioner of Canada. I want to start off by thanking all of you for being here, and by thanking the International Working Group on Data Protection in Technology and my OPC team for their work in planning this important event.

This is a distinguished gathering and I know that you are very much in demand, so I am incredibly grateful for your presence here, particularly given that many of you have travelled great distances to take part in this event. I am delighted to welcome data protection colleagues from Europe, the Philippines, North Africa, North America, and Bermuda (recent hosts of the Global Privacy Assembly), as well as fellow commissioners from across Canada, from Yukon, Nunavut, and British Columbia all the way to Atlantic Canada.

It is also my pleasure to welcome our distinguished guests, as well as an impressive group of experts and academics in technology, privacy, public policy, governance, and AI, including our keynote speaker, Dr. Gary Marcus.

My team and I are happy to welcome you here this morning, in a venue that is surrounded by some of Ottawa’s most famous landmarks, such as the Parliament buildings, symbols of our democratic values, the historic Chateau Laurier, which opened in 1912, and the former central Ottawa train station, which also dates back to 1912 and currently houses the Senate of Canada during the restoration of Parliament’s Centre Block.

This venue we are in, the National Arts Centre, is Canada’s home for the performing arts, promoting the next generation of audiences and artists from across Canada. Its stunning renewal showcases modern design and creativity while, as historian Sarah Jennings describes, it “demonstrates the ageless importance of the live performance and that ‘there is no substitute for the real thing’”. An interesting quote for a discussion on generative AI.

All of this makes this venue an appropriate place for today’s important discussions on generative AI in the privacy context.

When I was appointed as Privacy Commissioner of Canada 18 months ago, I laid out three pillars of my vision for privacy. They are:

  1. Privacy is a fundamental right;
  2. Privacy supports the public interest and Canada’s innovation and competitiveness; and
  3. Privacy accelerates the trust that Canadians have in their institutions and in their participation as digital citizens.

These three pillars reflect the reality that Canadians want to be active and informed digital citizens, able to fully participate in the digital world without having to choose between this participation and their fundamental privacy rights.

These pillars continue to shape how I view my mandate, and are in my opinion, particularly relevant to any consideration of privacy and generative AI.

Here in Canada, and around the world, we have witnessed the fast-paced development of this innovative technology, with its many potential benefits for productivity and for tackling some of our most pressing challenges, such as health care delivery and climate change. But we have also been warned of the significant potential risks if the technology is not properly regulated. Those of us who are privacy regulators have been working closely together to identify ways to promote and protect the fundamental privacy rights of our citizens while at the same time allowing innovation to support the public interest and a strong economy. We have done so as a privacy community, but also in collaboration with regulators in other fields such as competition and broadcasting.

Beyond cross-regulatory cooperation, we have also engaged in productive exchanges and discussions with key stakeholders such as industry, legislators, academia, civil service and civil society – each sector having an essential perspective and contribution to bring to the challenge.

That is also our goal today with this symposium: to gather the policy makers, decision makers, and experts in the field who are creating and deploying generative AI, to examine the many unique scenarios, challenges, and opportunities that accompany this technology, and to determine how to derive the benefits promised by generative AI while mitigating as many of the potential risks as possible.

My goal is that the expertise shared at this morning’s symposium will inform the discussions among the members of our International Working Group on Data Protection in Technology that follow.

We might compare where we are with generative AI to the dawn of commercial aviation. There was much excitement about flying in the early 20th century; people were alive with the possibilities that it opened up for the economy and for travelling quickly across the country and around the world. But the idea of flight also raised obvious risks.

After a safety incident in 1935, Boeing introduced a mandatory preflight checklist, and other airlines followed suit. Knowing that before every flight the crew will systematically check off a list of required safety tasks is a tremendous factor in passengers’ ability to trust that the plane will operate safely. This, like good brakes in a fast car, should not be seen as an obstacle to innovation, but rather as something that enables it. It is not a zero-sum game.

Now, at the dawn of generative AI, it is up to us – the people in this room, and other thought leaders around the globe – to discuss how to fashion similar safeguards for generative AI.

Brad Smith, president of Microsoft, said last week that it is important to get the risks posed by the technology under control now, before they become problems. He has advocated for “safety brakes” that would act in a manner similar to the emergency mechanisms in elevators, school buses and high-speed trains.

Here in Canada – and elsewhere – some important measures are already being taken.

At the federal level, the government has introduced Bill C-27, which includes a modernization of Canada’s private sector privacy legislation as well as a proposed new Artificial Intelligence and Data Act, which would require proactive measures to identify and prevent non-privacy harms and biased outputs of AI. I have recommended ways to improve this bill, among them a recommendation that organizations be legally required to conduct Privacy Impact Assessments to ensure that privacy risks are identified and mitigated for high-risk activities. We know that privacy harms are one of the top three risks of AI, according to an OECD report surveying the G7 Digital Ministers.

I have also called for stronger transparency and explainability provisions for automated decision-making. These measures would mitigate privacy risks and would foster Canadians’ trust in these technologies, which is particularly important in these early days where our citizens are still discovering this new technology.

In June, in Tokyo, Japan, my G7 data protection authority colleagues and I issued a statement on generative AI, where we called on developers and providers to embed privacy in new technologies at the ground level, in the design and conception of these new products and services.

In September, Innovation, Science and Economic Development Canada launched a voluntary code of conduct on the responsible development and management of advanced generative AI systems. I was pleased to see that the Code made express reference to our G7 Statement. More than a dozen companies and organizations have signed on, including a group representing more than 100 start-up companies across Canada.

Most recently, Canada joined 28 countries and the European Union in signing the Bletchley Declaration on safe and responsible development of the technology.

In October, the Global Privacy Assembly adopted a resolution on AI in which we jointly urged developers and providers of AI to recognize data protection as a fundamental right, and called for the creation of responsible and trustworthy generative AI technologies.

Earlier this year, I announced that I had launched a joint investigation with three provincial counterparts into OpenAI, the company behind ChatGPT, to determine whether its practices comply with Canadian privacy laws. While our privacy laws need to be modernized, they do currently apply in this space, and we are committed to their implementation.

That investigation is ongoing, and we are continuing to monitor these and other new technologies so that we can anticipate how they may impact privacy, recommend best practices to ensure compliance with privacy laws and promote the use of privacy-enhancing technologies.

Another issue related to technology and privacy is the growing use of biometrics, such as facial recognition and genetic information. In October, my Office released two draft biometrics guidance documents for consultation, one for the private sector and one for the public sector. We are seeking input to ensure that organizations use these technologies in a privacy-protective way.

And today, my provincial and territorial counterparts and I are announcing the launch of a set of principles for the responsible and trustworthy use of generative AI. The statement outlines key principles and best practices that organizations should consider when developing, providing or using AI models, tools, products and services.

This includes being open and transparent about what they are doing and making their tools explainable; collecting only the information they need for a particular purpose; and keeping that information only for as long as they need it for that purpose. We also highlight the unique impact that AI tools can have on vulnerable populations.

I would like to thank all of my fellow Canadian regulators – some of whom are with us here this morning – for taking part in the effort that has led to the development of these principles.

I am excited to be able to participate in this symposium, and to listen and learn as some of the leading voices on AI share their ideas on how we need to think about generative AI.

We certainly have an ambitious agenda this morning.

The experts who have joined us for our panel discussions will look at generative AI from three different angles: the opportunities and risks that the technology offers; the regulatory environment; and AI’s impact on innovation and human rights.

Without further ado, let us move on with our program.
