Contributions Program projects underway
On August 30, 2024, the Office of the Privacy Commissioner of Canada (OPC) announced funding for a new round of independent research and knowledge translation projects under its Contributions Program. These projects will be completed by March 31, 2025. Once the projects are completed and reviewed, the OPC will post a summary of each, along with links to its outcomes.
2024-25 Contributions Program funding recipients
Organization: Concordia University
Project title: Privacy concerns in social login ecosystems
Amount requested: $50,000
Project leader: Mohammad Mannan and Amr Youssef
Project summary:
Social login, which is a form of single sign-on, has become a ubiquitous feature on websites and mobile applications. It allows users to log in or sign up to these platforms using their existing social media credentials, such as Facebook, Google, LinkedIn, X/Twitter and Apple. While social login has advantages, such as simplifying login and reducing password fatigue, it also raises privacy and security concerns.
The aim of the project is to investigate, through a comprehensive and systematic technical/experimental measurement study, the ecosystem of social login on websites and in Android apps. Researchers plan to design and implement a privacy and security analysis framework to find and analyze these websites and apps. Using this framework, the researchers will compare the data sharing practices of real-world social login implementations to gain insights into the privacy implications for users. Based on the above, the researchers will produce a public report summarizing the findings of their investigation. The report will present recommendations for improving the security and privacy of these social login solutions. The report will include easy-to-follow guidelines for Canadians who use social logins on websites and in apps. The researchers will also produce a technical paper detailing the full methodology, results and technical recommendations.
Organization: Vancouver Island University
Project title: Safeguarding tomorrow’s data landscape: Young digital citizens’ perspectives on privacy within AI systems
Amount requested: $86,601.90
Project leader: Ajay Shrestha
Project team: Ankur Barthwal, Molly Campbell, Austin Shouli, Saad Syed
Project summary:
In the ever-expanding digital landscape where artificial intelligence (AI) plays a central role, it is crucial to address the privacy impacts of these emerging technologies. This research project aims to explore the complexities of AI’s privacy impacts with a focus on understanding the concerns of young digital users and protecting children’s privacy rights.
Surveys, interviews and focus groups will be used to gather insights from young users, educators, parents, AI developers and researchers to explore their perspectives on data control and factors influencing perceptions of privacy in AI applications.
By understanding how young users perceive and expect privacy in AI applications, the project strives to contribute to the responsible integration of AI technologies into the lives of young users, championing ethical AI use and ensuring privacy protection in the digital age. The research will also examine digital literacy levels and prior interactions with AI technologies. This will help develop guidelines aimed at addressing young users’ specific concerns.
The project will engage young digital citizens through workshops and participatory activities, which aim to empower young users and give them a voice in shaping the narrative around privacy in AI systems.
Organization: Internet of Things Privacy Forum
Project title: The machine-readable child: Governance of emotional AI used with Canadian children
Amount requested: $81,464.10
Project leader: Gilad Rosner
Project team: Andrew McStay
Project summary:
The research project will evaluate PIPEDA for its fitness to govern the use of emotional AI with children, highlighting gaps and offering suggestions where appropriate. The research will delve into the privacy challenges posed by using these technologies. The research project will also yield practical assistance for the makers, sellers, and assessors of child-focused emotional AI technologies. It will do so by developing modules for privacy impact assessments and creating Canada-focused guidelines for the commercial development, deployment and use of these products and services. The project will also develop best practices for fairness, accountability, and transparency of emotional AI systems that collect the data of Canadian children.
Emotional artificial intelligence (AI) is a subset of AI that measures, understands, simulates, and reacts to human emotions. Such systems purport to determine an individual's emotional state by analyzing a facial image or other characteristics and information. Emotional AI is increasingly used to understand and respond to psycho-physiological emotional reactions. Emotion sensing and emotional AI refer to technologies that use affective computing, AI and machine learning techniques to sense, learn about and interact with human emotional life. However, these technologies are relatively new and often not well understood by parents, children, school administrators, regulators and legislators. While these technologies will certainly bring benefits, emotion and mood sensing technologies become deeply problematic ethically when used with children and may not be in the best interests of the child.
Organization: Université du Québec à Montréal
Project title: Dangerous games: Protecting the privacy of children under 13 in mobile games
Amount requested: $89,906.00
Project leader: Maude Bonenfant
Project team: Sara Grimes, Thomas Burelli, Hafedh Mili, Alexandra Dumont, Cédric Duchaineau
Project summary:
Mobile gaming is on the rise among young Canadians, even among toddlers. At the same time, the global mobile gaming industry is growing exponentially. In this mobile industry, several business models exist, but one of the most profitable is the collection of personal data for targeted advertising.
While data collection in mobile games is governed by general terms and conditions of use, these documents are long, tedious to read and complex to understand, if not impossible to find when third parties are involved. As a result, the terms are difficult for young people and parents to understand, yet they must accept them in order to gain access to the game. This means that the youngest children are not as well protected as they should be.
This research will focus on analyzing mobile game applications and comparing them with the compliance criteria of the Children’s Online Privacy Protection Act (COPPA), in order to identify good and bad practices in protecting children’s privacy in the world of gaming.
Organization: University of Ottawa
Project title: Benchmarking large language models and privacy protection
Amount requested: $83,680.00
Project leader: Rafal Kulik
Project summary:
In the current digital age, the accelerated growth of data generated by individuals has fuelled advances in artificial intelligence (AI), particularly the development and deployment of large language models (LLMs). These sophisticated AI systems, capable of understanding, generating and interacting with human language in ways that mimic human thought processes, are becoming integral to applications ranging from personalized content creation to drug discovery. As these models become more deeply embedded in the daily functions of society, the need to protect individual privacy within these systems is crucial.
The rapid development of LLMs and the pace at which these tools are evolving present a significant challenge in defining current and practical guidelines that can effectively address the use and deployment of these systems. Given the unique capabilities and risks associated with LLMs, there is a growing need to establish robust privacy standards specifically tailored to these technologies.
This project will provide a practical introduction to LLMs and will explore privacy challenges for legal and policy experts and the role of privacy-enhancing technologies. Researchers will survey legal, policy and technical experts, as well as civil society groups to explore the benefits and opportunities of these technologies. They will also provide recommendations and public education materials.
Organization: University of Waterloo
Project title: Mitigating privacy harms from deceptive design in virtual reality
Amount requested: $58,708.00
Project leader: Leah Zhang-Kennedy and Lennart Nacke
Project team: Hilda Hadan
Project summary:
Deceptive design in virtual reality (VR) is a rapidly evolving privacy concern. This research will explore the effects of deceptive design on user information privacy in commercial VR applications. By identifying and classifying deceptive design patterns in VR that undermine users' privacy, the researchers seek to develop countermeasures and guidelines to mitigate their negative impact. The researchers also seek to increase awareness and provide design and policy guidance and recommendations to VR developers, policymakers and government.
Researchers plan to evaluate VR application design to identify manipulative strategies and conduct a large-scale analysis of user perceptions and experiences with respect to privacy and deceptive design. They also plan to systematically document different deceptive practices and patterns in VR, note consequences for privacy and suggest mitigation strategies.
The researchers anticipate that their project will lead to opportunities to improve the design of VR applications and will lead to recommendations for privacy regulations to better protect Canadians. They plan to create a public repository, design guidelines and educational resources for the public, among other deliverables.
Organization: Toronto Metropolitan University
Project title: Generative AI, Privacy Policy and Young Canadians
Amount requested: $49,640
Project leader: Karim Bardeesy
Project team: Sam Andrey, André Côté, Tiffany Kwok, Christelle Tessono
Project summary:
In this project, researchers seek to understand the privacy implications of generative AI technologies in order to inform the application of current and proposed Canadian privacy legislation and privacy-preserving administrative policies and practices, with an emphasis on impacts on minors.
The project will consist of three core components. First, researchers will conduct interviews with privacy and artificial intelligence experts to help shape an understanding of the privacy consequences of AI in general and for minors specifically, and how best to mitigate them. Second, researchers will undertake legal and policy analysis to evaluate both current and proposed privacy laws with respect to their capacity to effectively address the specific risks posed by generative AI. Third, researchers will conduct a comparative analysis of privacy and data protection laws in other jurisdictions and of technical interventions (e.g., age gating, youth data collection bans, school board policies and bans). The comparative approach will allow the researchers to draw insights from other jurisdictions' efforts to manage and mitigate AI-specific risks to privacy.