
Real Results Vol. 4

October 2023

Protecting privacy rights through innovative research

Q&A: AI and the future of healthcare

The race is on to deploy artificial intelligence (AI) applications in healthcare. But because these applications depend on massive quantities of patient health information, many are being developed by privately owned companies through public-private partnerships and data sharing arrangements. How will this impact patient privacy? To learn more, Real Results interviewed Tim Caulfield, the lead researcher of “Privacy and Artificial Intelligence: Protecting Health Information in a New Era”, a project funded by the Office of the Privacy Commissioner of Canada (OPC).

Why study the use of AI in healthcare?

Without a doubt, AI is going to be part of the future of healthcare. We're already seeing it in the research setting, and increasingly there's hope it will be used in the clinical setting – from diagnostics and interpreting images such as MRI scans and X-rays to making decisions about clinical care and clinical pathways. This includes everything from organ donation to pharmaceutical use. Admittedly, research continues to unfold on exactly how it's going to be used, which was one reason our team was very interested in this topic. And because it's big business.

How is it big business, especially since it involves healthcare?

We're looking at major tech companies being involved in this area because, understandably, huge patient data sets are required. For AI to be effective in the context of clinical care, you must have access to a huge body of personal patient information. Basically, you're feeding patient information into these ‘black boxes’ in order to develop algorithms for patient care. And, of course, that raises very interesting questions about consent and privacy.

What kinds of questions?

Let's start with consent. We need to make sure that comprehensive consent is being obtained for a patient’s data to be used in the context of AI, especially if the data is not anonymous and truly de-identified – which raises its own kind of paradoxical challenges.

But even if consent is obtained, how will the data ultimately be used? When you have big private companies involved, we have to make sure that they obtain re-consent if the data ends up being used in a way not anticipated in the original consent. This should be part of the very bedrock of privacy law and privacy principles.

What are the “paradoxical challenges” of making sure a patient’s data truly is de-identified?

First of all, traditional data breaches can happen anytime you have large data sets that contain a lot of patient information, which is a concern, full stop. But the reality is that it's becoming more and more difficult to truly de-identify data and make it truly anonymous. And that's a fascinating issue, because one of the common policy tools that is deployed in the context of privacy is to make the data anonymous. But for the data to be useful to AI researchers, it has to contain meaningful information.

That does seem paradoxical…


The idea that you can truly de-identify data is increasingly becoming an illusion and is therefore a less effective policy tool to deal with the privacy challenges that emerge in areas like AI.

So you can only de-identify data to a certain extent. The reason this is a paradoxical challenge in the context of AI is that emerging AI technology is making it easier to re-identify individuals. Some interesting studies on big biobanks, for example, have demonstrated that you don't need a lot of patient data to re-identify someone.

Are there other risks involved in having privately owned companies or public-private partnerships involved in AI-powered healthcare applications?

Whether talking about Google or other tech companies, it’s public-private in the sense that private companies are accessing patient information that is part of the public system. Companies may also be partnering with researchers who are being funded by public bodies, like the CIHR or public universities. These public-private collaborations can create regulatory challenges where you have a body of laws and regulations that might apply to one area, but not as much to another area, and vice versa. Our project calls for clarity on how the regulatory regime would play out in this context.

You mentioned the potential for privacy breaches, but what other types of inappropriate use or disclosure of personal health information could come up?

An individual should have ongoing control over their identifiable information, but you can have a hypothetical situation where you gave consent to have your data used for AI and you're envisioning that it's being used for the greater good, such as health research or perhaps even developing a powerful and helpful clinical tool. But then it ends up being used for something more frivolous that's just for profit.

The bedrock of legal, ethical, and research ethics principles is to ensure the ongoing right to withdraw your consent for information that continues to be linked to you. That becomes more difficult when you're talking about your information being aggregated by Google, for example, as part of a massive AI initiative. How can these principles be operationalized? That’s a really interesting question, which we deal with in the report.

How did the research unfold for this project?

It started with gaining a greater appreciation for the role of AI. We wanted to get a good sense of how AI was playing out right now and how it is likely to be applied in the future. We also wanted to make sure our work was very much grounded in the science. We didn’t want any of our policy recommendations to be based on science hype and speculation, which we’ve seen in the past with issues such as human cloning.

We investigated the Canadian legal and policy framework, focusing on two issues: The potential for inappropriate treatment, use, or disclosure of personal health information by private AI companies, and the potential for privacy breaches that use newly developed AI methods to re-identify patient health information.

We also analyzed Canadian legislation, focusing on the federal Personal Information Protection and Electronic Documents Act, as well as applicable common law relating to torts and fiduciary obligation, and key Canadian research ethics policy, namely the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. The goal was to paint a comprehensive picture of the relevant issues.

What were some of the key findings and recommendations?

Our key recommendations are around the need for appropriate consent and the need to ensure the development of comprehensive regulatory frameworks that capture the public and the private actors involved. We also developed recommendations for how to operationalize the right to withdraw consent in the context of AI, and some that cover non-identifiable information – to deal with how the concept of consent is going to be used in the future as it slowly starts to evaporate when it doesn't have that tangible meaning anymore.

Any next steps for this research?

Our goal is to translate our work in a range of ways. One of the members of the research team, Blake Murdoch, is emerging as a national voice on this topic. We have also created shareable content for social media, including infographics, and I anticipate we'll create more of those in the future. It’s important to think of unique ways we can engage a wide range of communities with our recommendations.


How the public is responding to AI

According to current research, the public remains skeptical about the use of AI in healthcare decisions and prefers human involvement. It is much like driverless car technology: despite data suggesting potentially safer outcomes than human-driven cars, the public still wants humans to be involved in driving – and the same holds true for how the public views healthcare.

Legal issues also arise: If something goes wrong with a healthcare decision, who is liable? Two recent studies tackle these questions. In one study, “Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients”, researchers found that “... patients insist that a physician supervises the artificial intelligence and keeps ultimate responsibility for diagnosis and therapy.” A fundamental issue is trust. In a recent survey, “Do People Favor Artificial Intelligence Over Physicians? A Survey Among the General Population and Their View on Artificial Intelligence in Medicine”, researchers concluded that “the general population is more distrustful of AI in medicine unlike the overall optimistic views posed in the media”.

Disclaimer: The OPC’s Contributions Program funds independent privacy research and knowledge translation projects. The opinions expressed by the experts featured in this publication, as well as the projects they discuss, do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.
