Contributions Program projects underway

On August 30, 2024, the Office of the Privacy Commissioner of Canada (OPC) announced funding for a new round of independent research and knowledge translation projects under its Contributions Program. These projects will be completed by March 31, 2025. Once the projects are completed and reviewed, the OPC will post a summary of each one, along with links to its outcomes.


2024-25 Contributions Program funding recipients

Organization: Concordia University
Project title: Privacy concerns in social login ecosystems
Amount requested: $50,000
Project leaders: Mohammad Mannan and Amr Youssef

Project summary:

Social login, which is a form of single sign-on, has become a ubiquitous feature on websites and mobile applications. It allows users to log in or sign up to these platforms using their existing social media credentials, such as Facebook, Google, LinkedIn, X/Twitter and Apple. While social login has advantages, such as simplifying login and reducing password fatigue, it also raises privacy and security concerns.

The aim of the project is to investigate, through a comprehensive and systematic technical/experimental measurement study, the ecosystem of social login on websites and in Android apps. The researchers plan to design and implement a privacy and security analysis framework to find and analyze these websites and apps. Using this framework, they will compare the data-sharing practices of real-world social login implementations to gain insights into the privacy implications for users. Based on these findings, the researchers will produce a public report that summarizes the investigation, presents recommendations for improving the security and privacy of social login solutions, and includes easy-to-follow guidelines for Canadians who use social logins on websites and in apps. They will also produce a technical paper detailing the full methodology, results and technical recommendations.


Organization: Vancouver Island University
Project title: Safeguarding tomorrow’s data landscape: Young digital citizens’ perspectives on privacy within AI systems
Amount requested: $86,601.90
Project leader: Ajay Shrestha
Project team: Ankur Barthwal, Molly Campbell, Austin Shouli, Saad Syed

Project summary:

In the ever-expanding digital landscape where artificial intelligence (AI) plays a central role, it is crucial to address the privacy impacts of these emerging technologies. This research project aims to explore the complexities of AI’s privacy impacts with a focus on understanding the concerns of young digital users and protecting children’s privacy rights.

Surveys, interviews and focus groups will be used to gather insights from young users, educators, parents, AI developers and researchers to explore their perspectives on data control and factors influencing perceptions of privacy in AI applications.

By understanding how young users perceive and expect privacy in AI applications, the project strives to contribute to the responsible integration of AI technologies into the lives of young users, championing ethical AI use and ensuring privacy protection in the digital age. The research will also examine digital literacy levels and prior interactions with AI technologies. This will help develop guidelines aimed at addressing young users’ specific concerns.

The project will engage young digital citizens through workshops and participatory activities, which aim to empower young users and give them a voice in shaping the narrative around privacy in AI systems.


Organization: Internet of Things Privacy Forum
Project title: The machine-readable child: Governance of emotional AI used with Canadian children
Amount requested: $81,464.10
Project leader: Gilad Rosner
Project team: Andrew McStay

Project summary:

The research project will evaluate PIPEDA for its fitness to govern the use of emotional AI with children, highlighting gaps and offering suggestions where appropriate. The research will delve into the privacy challenges posed by these technologies. The project will also yield practical assistance for the makers, sellers and assessors of child-focused emotional AI technologies by developing modules for privacy impact assessments and creating Canada-focused guidelines for the commercial development, deployment and use of these products and services. The project will also develop best practices for fairness, accountability and transparency of emotional AI systems that collect the data of Canadian children.

Emotional artificial intelligence (AI) is a subset of AI that measures, understands, simulates and reacts to human emotions; such a system purports to determine an individual’s emotional state by analyzing a facial image or other characteristics. Emotion sensing and emotional AI refer to technologies that use affective computing, AI and machine learning techniques to sense, learn about and interact with human emotional life, and they are increasingly used to understand and respond to psycho-physiological emotional reactions. However, these technologies are relatively new and often not well understood by parents, children, school administrators, regulators and legislators. While there will certainly be benefits to these technologies, when used with children, emotion and mood sensing technologies become deeply problematic ethically and may not be in the best interests of the child.


Organization: Université du Québec à Montréal
Project title: Dangerous games: Protecting the privacy of children under 13 in mobile games
Amount requested: $89,906.00
Project leader: Maude Bonenfant
Project team: Sara Grimes, Thomas Burelli, Hafedh Mili, Alexandra Dumont, Cédric Duchaineau

Project summary:

Mobile gaming is on the rise among young Canadians, even among toddlers. At the same time, the global mobile gaming industry is growing exponentially. In this mobile industry, several business models exist, but one of the most profitable is the collection of personal data for targeted advertising.

While data collection in mobile games is governed by general terms and conditions of use, these documents are long, tedious to read and complex to understand, if not impossible to find when third parties are involved. As a result, the terms are difficult for young people and parents to grasp, even though they must accept them in order to gain access to the game. This means that the youngest children are not as well protected as they should be.

This research will focus on analyzing mobile game applications and comparing them with the compliance criteria of the Children’s Online Privacy Protection Act (COPPA), in order to identify good and bad practices in protecting children’s privacy in the world of gaming.


Organization: University of Ottawa
Project title: Benchmarking large language models and privacy protection
Amount requested: $83,680.00
Project leader: Rafal Kulik

Project summary:

In the current digital age, the accelerated growth of data generated by individuals has fuelled advances in artificial intelligence (AI), particularly the development and deployment of large language models (LLMs). These sophisticated AI systems, capable of understanding, generating and interacting with human language in ways that mimic human thought processes, are becoming integral to applications ranging from personalized content creation to drug discovery. As these models become more deeply embedded in the daily functions of society, the need to protect individual privacy within these systems is crucial.

The rapid development of LLMs and the pace at which these tools are evolving present a significant challenge in defining current and practical guidelines that can effectively address the use and deployment of these systems. Given the unique capabilities and risks associated with LLMs, there is a growing need to establish robust privacy standards specifically tailored to these technologies.

This project will provide a practical introduction to LLMs and will explore privacy challenges for legal and policy experts and the role of privacy-enhancing technologies. Researchers will survey legal, policy and technical experts, as well as civil society groups to explore the benefits and opportunities of these technologies. They will also provide recommendations and public education materials.


Organization: University of Waterloo
Project title: Mitigating privacy harms from deceptive design in virtual reality
Amount requested: $58,708.00
Project leaders: Leah Zhang-Kennedy and Lennart Nacke
Project team: Hilda Hadan

Project summary:

Deceptive design in virtual reality (VR) is a rapidly evolving privacy concern. This research will explore the effects of deceptive design on user information privacy in commercial VR applications. By identifying and classifying deceptive design patterns in VR that undermine users’ privacy, the researchers seek to develop countermeasures and guidelines to counteract their negative impact. They also seek to raise awareness and provide design and policy guidelines and recommendations to VR developers, policymakers and government.

Researchers plan to evaluate VR application design to identify manipulative strategies and conduct a large-scale analysis of user perceptions and experiences with respect to privacy and deceptive design. They also plan to systematically document different deceptive practices and patterns in VR, note consequences for privacy and suggest mitigation strategies.

The researchers anticipate that their project will lead to opportunities to improve the design of VR applications and will lead to recommendations for privacy regulations to better protect Canadians. They plan to create a public repository, design guidelines and educational resources for the public, among other deliverables.


Organization: Toronto Metropolitan University
Project title: Generative AI, Privacy Policy and Young Canadians
Amount requested: $49,640
Project leader: Karim Bardeesy
Project team: Sam Andrey, André Côté, Tiffany Kwok, Christelle Tessono

Project summary:

In this project, researchers seek to understand the privacy implications of generative AI technologies in order to inform the application of current and proposed Canadian privacy legislation and privacy-preserving administrative policies and practices, with an emphasis on impacts on minors.

The project will consist of three core components. First, researchers will conduct interviews with privacy and artificial intelligence experts to help shape an understanding of the privacy consequences of AI in general, and for minors specifically, and how best to mitigate them. Second, researchers will undertake legal and policy analysis to evaluate both current and proposed privacy laws with respect to their capacity to effectively address the specific risks posed by generative AI. Third, researchers will conduct a comparative analysis of privacy and data protection laws and technical interventions (e.g., age gating, youth data collection bans, school board policies and bans) in other jurisdictions. The comparative approach will allow the researchers to draw insights from other jurisdictions’ efforts to manage and mitigate AI-specific risks to privacy.


2023-24 Contributions Program funding recipients

Organization: York University (Ontario)
Project title: Vertical “Just-in-Time” Privacy Policy Videos for Social Media Mobile Apps
Amount awarded: $50,000
Project leader: Jonathan Obar

Project summary:

Few people seem interested in accessing or reading text-based privacy policies. New strategies are needed to ensure that notice and consent remain fundamental to information protections. This project will deliver four mobile-app-friendly videos to supplement a text-based social media privacy policy. The project will audience-test the videos through a qualitative focus group analysis and deliver a project report with a preliminary set of best practices for creating videos to supplement text-based privacy notices. The videos created for this project will serve as prototypes to support future notice efforts by digital service practitioners.


Organization: Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic – CIPPIC (Ontario)
Project title: Making Privacy More than a Virtual Reality: The Challenges of Extending Canadian Privacy Law to Extended Reality
Amount awarded: $49,450
Project leader: Vivek Krishnamurthy

Project summary:

Extended reality (“XR”) technologies – which include augmented, mixed, and virtual reality – promise to revolutionize many aspects of our lives. Yet the nature of these technologies’ data collection has profound new implications for personal privacy. This project will begin with a survey of current XR technologies and anticipated developments to contextualize the privacy concerns surrounding XR hardware. Following the survey, the researchers will undertake a comprehensive study evaluating whether PIPEDA and the proposed CPPA are up to the task of protecting Canadians’ privacy in XR environments. The researchers will explore whether alternative approaches may be required to protect the privacy of Canadians in immersive environments, offering suggestions for how Canadian privacy laws could address the challenges posed by XR technologies.


Organization: University of Ottawa (Ontario)
Project title: Benchmarking Differential Privacy and Existing Anonymization or Deidentification Guidance
Amount awarded: $47,370
Project leader: Rafal Kulik

Project summary:

Government and private industry, including official statistics organizations and health institutions, collect information from individuals and publish aggregate data to serve the public interest. Organizations have long collected information under a promise of confidentiality, on the understanding that the information provided will be used for statistical purposes only and that any release or sharing of the data will prevent it from being traced back to a specific individual. Differential privacy provides a means of limiting the information that is released so that an individual’s contribution remains hidden from a statistical release of a single query (or a small number of queries).
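The project itself has not published code, but the idea described above, adding calibrated noise so that one individual's contribution stays hidden in an aggregate release, can be illustrated with the standard Laplace mechanism for a counting query. The dataset and function names below are hypothetical, chosen only for illustration:

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Return an epsilon-differentially private count of matching records."""
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon suffices for epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical toy dataset: a private count of ages over 40 (true count is 4).
ages = [23, 45, 31, 52, 67, 29, 41]
noisy_count = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

A smaller epsilon injects more noise and hides individuals more strongly, at the cost of accuracy; repeated queries consume a cumulative "privacy budget," which is one of the practical challenges the project addresses.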

Recently there has been a significant push to establish differential privacy as a standard in emerging AI technologies. Though the technique is increasingly used by tech companies and government agencies, challenges must be overcome before it can be fully adopted for deidentification and anonymization. This project will aid in developing the framework necessary to implement differential privacy in practice, and will help form a decision-making protocol for choosing between it, other privacy technologies and current guidance.


Organization: Queen’s University (Ontario)
Project title: Large Language Models and the Disappearing Private Sphere
Amount awarded: $50,000
Project leader: Catherine Stinson

Project summary:

This project will examine possible futures for large language models (LLMs) and privacy in the private sector in the age of immersive and embeddable technologies. The project will produce a report, guidelines for institutional review boards (IRBs), and a public website to increase knowledge and understanding about the actual and potential future implications of LLMs and the collection of data used to train them. The project will also examine the differential effects of LLMs on the privacy of marginalized Canadians and members of minority language groups.

The researchers will focus on five questions: 1. What is the de facto status of web scraping in Canada, according to IRBs? 2. How much data about individuals can be retrieved from LLMs? 3. Are marginalized groups and minority language groups more susceptible to privacy leakage from LLMs? 4. Are Canada’s privacy laws and regulations capable of dealing with these models and their privacy implications? 5. What changes to law and regulation might be needed?


Organization: University of Western Ontario (Ontario)
Project title: Identifying and Responding to Privacy Dark Patterns
Amount awarded: $49,717
Project leader: Jacquelyn Burkell

Project summary:

The aim of this project is to help minimize the impact of privacy dark patterns on Canadian youth by informing the development of effective regulatory frameworks and educational materials that will assist users to resist these tactics. Privacy dark patterns are interface design strategies intended to “nudge” users to reveal personal information, either directly, or by enabling (or failing to disable) privacy-invasive platform or profile settings. Teens are especially vulnerable to the effects of dark patterns on privacy choices, both as avid users of the Internet and social media and because of their limited awareness of commercial surveillance online.

The researchers will conduct focus groups with teen users of social media sites to determine whether they are able to identify privacy dark patterns and how they respond to these strategies. The researchers will also review current regulatory responses to privacy dark patterns, identifying both the variety of approaches and challenges to effectiveness. The results of this research will inform the development of educational materials in collaboration with MediaSmarts that teach teens how to resist privacy dark patterns on social media.


Organization: Option consommateurs (Quebec)
Project title: In the Matrix – Consumer Privacy in the Metaverse
Amount awarded: $49,758
Project leader: Alexandre Plourde

Project summary:

The advent of the metaverse—a three-dimensional virtual reality where users can interact with others—poses privacy risks for consumers. The metaverse’s immersive capabilities allow for the unprecedented collection of personal information. It is conceivable that the analysis of data generated in the metaverse could infer thoughts, emotions or other sensitive information about consumers. As a result, the scope of the metaverse’s data-collection abilities raises multiple issues related to the legal framework for personal information protection.

In this research project, Option consommateurs will outline the various metaverse models Canadians have access to, as well as those that could emerge in the coming years. Option consommateurs will also analyze the privacy policies, user agreements and informational content of a representative sample of three types of companies in the metaverse environment: metaverse developers, companies that are doing business in the metaverse and companies that create the devices used to access the metaverse. Lastly, Option consommateurs will look at what legislation applies to personal information protection in these new environments—in Canada and abroad.


Organization: The Centre for International Governance Innovation – CIGI (Ontario)
Project title: Hacking the Human Mind: Lessons for Canada’s Democracy
Amount awarded: $30,000
Project leader: Aaron Shull

Project summary:

Companies are now deploying technological, psychological and sociological methods to get inside the minds of users, collecting the data of millions of people, many of whom may not be aware of it. This approach embodies a form of behavioural manipulation that threatens the right to freedom of thought and opinion and invades one’s mental privacy. Given this new reality, there is an urgent need to implement strategies to protect Canadians’ autonomy.

This research project will draw on diverse subject matter perspectives to prepare an explainer video and policy brief for the Canadian public, exploring questions such as: How does privacy protect our inner freedom? In our data-driven world, where do we draw the line between legitimate influence and unlawful manipulation of thought? How are challenges to the freedom of thought present in the Canadian information ecosystem context? What are the risks to privacy and cognitive freedom posed by advances in neurotechnology? How can immersive and embeddable technologies enable individuals to thrive while protecting their privacy? How can we chart a path to effective protection? What should privacy legislation and policies address in response to the risks and challenges that arise from technologies?


Organization: Law Commission of Ontario (Ontario)
Project title: Privacy and Human Rights Impact Assessment in Canadian AI Systems
Amount awarded: $39,600
Project leader: Nye Thomas

Project summary:

Notwithstanding AI’s potential, private sector use of AI is often very controversial. There are many examples of private-sector AI systems that have violated privacy protections or proven to be biased or discriminatory. Privacy and human rights compliance are the foundations of trustworthy AI: Canadian AI systems must comply with both to ensure Canadians can “trust” AI systems and to unlock the extraordinary economic potential of this technology. To date, however, it appears that initiatives to promote privacy and human rights in Canadian AI systems have developed on distinct tracks.

This project will produce a comprehensive report discussing the relationship between privacy and human rights in AI governance and regulation, and identifying law and policy reform recommendations. The project will also consider whether an integrated “Trustworthy AI” impact assessment tool addressing both privacy and human rights is desirable, achievable and practical.


Organization: University of Waterloo (Ontario)
Project title: A Pan-Canadian Data Governance Framework for Health Synthetic Data
Amount awarded: $48,875
Project leader: Anindya Sen

Project summary:

Health data, especially electronic medical records, are often stored in disparate systems and formats, rendering integration and standardization difficult. Researchers and developers often depend on de-identified or aggregated data to test theories, data models, algorithms, or prototype innovations, but it takes a substantial amount of time and resources to retrieve, aggregate, and deidentify relevant data before it can be used. One approach proposed to solve these challenges is the creation of realistic, high-quality, synthetic health datasets that capture as many of the complexities of the original datasets as possible, but do not include any real patient data.

However, compared with other countries, Canada’s regulations treat health synthetic data only implicitly, without explicit provisions. In addition, there is no universal framework to govern health synthetic data and assess their impacts. This project aims to develop a data governance framework for health synthetic data, ethical guidelines for research involving such data, recommended policy changes, and a cost impacts framework.


Organization: University of Calgary (Alberta)
Project title: Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology
Amount requested: $49,772
Project leader: Gideon Christian

Project summary:

The increasing use of Artificial Intelligence Facial Recognition Technology (AI-FRT) in the private and public sectors has been plagued by issues of privacy as well as racial and gender bias, as the technology regularly misidentifies or fails to identify individuals of a particular gender or race.

This research project seeks to examine and identify the race, gender and privacy issues related mainly to the development of AI-FRT by the private sector in Canada, and its use by both the private and public sectors in Canada. Other objectives of the research include developing a framework and guidelines to address race, gender and privacy impacts arising from the development and deployment of AI-FRT by private sector developers in Canada; identifying possible reforms of the Personal Information Protection and Electronic Documents Act (PIPEDA) to legislatively address those impacts; collaborating with the Alberta Civil Liberties Research Centre (ACLRC) to increase public understanding and awareness of the race, gender and privacy impacts of AI-FRT through webinars, workshops and publication of research papers; and strengthening research capacity in academia by training graduate students to conduct research on race, gender and privacy issues related to AI-FRT.


Organization: Concordia University (Quebec)
Project title: Privacy Analysis of Virtual Reality/Augmented Reality Online Shopping Applications
Amount awarded: $35,454.50
Project leaders: Mohammad Mannan (principal investigator) and Amr Youssef

Project summary:

This project will investigate the ecosystem of virtual reality/augmented reality (VR/AR) e-commerce, retail apps and websites, including virtual try-on, virtual makeup and beauty apps or websites, and the technological tools that are used by developers of these systems. The researchers will design and implement a privacy and security analysis framework to find and analyze these apps and websites as well as a selected set of their software development tools and libraries.

The researchers will produce a public report summarizing the findings of their investigation and presenting recommendations for improving the security and privacy of VR/AR systems. The report will also include some easy-to-follow guidelines for Canadian shoppers who use these apps. The findings of this report will be provided online for free. The researchers will also produce a technical paper detailing their full methodology and results, as well as their technical recommendations.

