Consultation on age assurance – What We Heard

About the process

From June to September 2024, the OPC ran an exploratory consultation on age assurance. In that consultation, we set out our preliminary positions on, and understanding of, age assurance technologies and the privacy implications of their use. We invited interested parties to provide both feedback on what we had put forward and any additional contextual information that could inform our future approaches to the topic.

The significant public interest in, and importance of, a well-considered policy position on age assurance is reflected in the 40 responses we received. These responses came from a wide variety of stakeholder groups, including industry (both age assurance service providers and organizations that may deploy age assurance systems), civil society, academia, technology policy think tanks, interested individuals, as well as overseas data protection authorities.

Responses varied, with some submissions supporting the OPC’s initial positions and others challenging them. A number of submissions highlighted issues that we may have overlooked or not included in our initial document. We appreciate the quality and thoughtfulness of responses received and have learned a great deal from them.

Global developments in age assurance have, of course, not stopped in the time since we issued our consultation. For example, in September 2024, the International Age Assurance Working Group (of which the OPC is a part) issued a Joint Statement on a Common International Approach to Age Assurance, which sets out a series of key shared principles on age assurance practices as they relate to data protection and privacy. October saw the French Regulatory Authority for Audiovisual and Digital Communication (Arcom) issue a technical reference document setting out requirements for age assurance under the Loi Visant à Sécuriser et à Réguler l’Espace Numérique (SREN). In November, Australia passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024 and launched an Age Assurance Technology Trial. Suffice it to say, this continues to be a rapidly developing space. That said, as noted by a respondent, given the potential impacts of age assurance on the privacy (and other) rights of individuals, there is need for speed but not haste in moving forward on this topic.

With this document, we intend to summarize what we heard and outline our intended next steps on age assurance. We also want to take this opportunity to thank all of the individuals and organizations who took the time to contribute to our exploratory consultation.

What we heard

Among the wide variety of viewpoints expressed in responses, we identified six key themes that are set out below, along with a summary and OPC response for each.

Theme one: Differentiate between forms and uses of age assurance.

Summary: The exploratory consultation describes multiple forms of age assurance, and recognizes multiple potential purposes for which it could be used. This is appropriate, and taking the differences between these various forms and purposes into account when developing future guidance or positions would make them more effective and nuanced.

OPC response: We agree that privacy considerations may vary depending on the type of assurance used and the purpose for its use. In our future work, we will make these differentiations where appropriate.

Multiple responses highlighted that the OPC’s position and future guidance would be improved by maintaining distinctions between forms and uses of age assurance. For example, respondents noted that:

  • Age assurance is not monolithic: ‘Age assurance’ is correctly identified as an umbrella term encompassing many different solutions. However, some of the positions put forward appeared to treat ‘age assurance’ as a singular process, not recognizing variations amongst approaches. For instance, asserting that one is over 18 (age declaration) will almost certainly have less privacy impact than undergoing facial age estimation. As such, the OPC should be cautious about establishing policies that treat broad ranges of technologies as equivalent.
  • Age assurance is not single-purpose: While age assurance is frequently associated with the online safety of youth, it may also be used in situations where youth are not involved – such as being a component of anti-fraud measures undertaken by financial institutions. As well, the appropriateness of certain age assurance practices may vary depending on intended purpose. As such, the OPC should take care to clarify which intended purpose(s) of age assurance are covered by future policy and guidance work, and consider whether expectations should differ between purposes.
  • Age assurance does not always take the form of an access gate: The forms of age assurance described in the exploratory consultation created an impression for some that age assurance will generally take the form of an access gate – an individual is asked to prove their age when attempting to access restricted content, register an account, etc. However, responses from online service providers identified multiple ways in which they act in the background to detect the presence of users who have, for example, self-identified as being in an older (or younger) age demographic.

Overall, respondents suggested that future OPC work would benefit from additional clarity – both on the points above, and on key terminology (such as ‘children’ vs. ‘young people’ or ‘restricted content’).

Theme two: The impacts associated with the use or misuse of age assurance should not be underestimated.

Summary: Neither the harms that age assurance may be able to address nor the harms that it may cause if implemented inappropriately (or without appropriate privacy protections) should be underestimated.

OPC response: We appreciate the additional context provided with respect to both the potential benefits and harms of age assurance systems. We will maintain our priority focus on this area, with next steps as set out at the end of this report.

Perhaps the clearest, and most consistent, message put forward by respondents was that the potential impacts – positive and negative – of the use or misuse of age assurance technologies should not be underestimated. These impacts can be broadly divided into three categories:

  • The harms age assurance is intended to mitigate are significant: As noted by one respondent, while age assurance is often positioned as a way to mitigate the risk of harms to youth online, the nature and extent of those harms are not always articulated. To elaborate on this, some responses provided information about the extent of youths’ exposure to sexually explicit material online, the frequency with which this material is of an aggressive or violent nature, and the potential harms to body image or mental health it may cause. Others emphasized that, per one submission, “Canadian children have also been the targets of ever-increasing harmful interactions with adults, including grooming, sextortion, and other online sexual violence” and that effective age assurance can help to safeguard children from such preventable harms. As one respondent stated, “we cannot overstress the urgency of implementing meaningful age assurance.”
  • The harms caused by loss of access are also significant: On the other hand, many responses spoke to the harms associated with limiting young people’s access to online content or forums. Per one response, “the Internet provides ample avenues for community-building, civic engagement, and education, especially for members of marginalized groups who might not have access to the same opportunities offline.” As noted in another response, “research shows that LGBTQ+ youth without access to affirming spaces or content are more likely to consider or attempt to commit suicide … requiring age verification could hinder LGBTQ+ children from accessing information that could be vital to their self-discovery and quite literally save lives.” Particular concern was raised about access to information about gender expression, sexual orientation, or reproductive rights being restricted because a government deems it age inappropriate. As noted in one response, “Objectively determining what content is harmful to youth or other groups is extremely difficult and subject to political agendas of the time.”
  • Harms associated with data breaches are enhanced: Finally, multiple responses emphasized the importance of considering the potential harm caused by any breach or misuse of personal information associated with an age assurance system. For instance, certain kinds of online content subject to age assurance may be legal but deeply stigmatized within a community, and individuals who access or create that content could be subject to significant psychological or physical harms should their online activities be publicly exposed. Other responses emphasized that the potential that an individual’s identity may be linked to their online activity “discourages people from operating freely in the digital environment and … [may] exclude users from encountering important information about employment, healthcare, housing, and other essential services based on their ethnicity, religion, or sexual orientation.”

Industry groups – particularly those representing small or medium-sized enterprises – also argued that the burden on businesses and the potential impacts on innovation of overly broad or prescriptive age assurance requirements should be taken into consideration. This is particularly the case where collection of personal information about a youth may be incidental or minimally harmful. It is also important, they suggested, that any requirements be harmonized or aligned between federal, provincial and territorial governments, as well as internationally.

The scope of potential benefits and harms of age assurance led some respondents to highlight the need for a clear and strong regulatory regime. It was argued that the potential harms associated with age assurance suggest that voluntary industry standards or legislation without enforcement powers may not be sufficient. Responses proposed that age assurance systems (and the organizations deploying them) should be auditable or demonstrably compliant with clearly articulated privacy requirements, and breach (in particular, intentional misuse) of associated personal information should be subject to clear penalties. Other responses pointed to the need for a legislative framework to govern age assurance or argued that the appropriate form(s) of age assurance for given situations should be assessed and approved by regulators (rather than being left to organizations).

Theme three: Age assurance is not the goal; it is a way to achieve the goal of a safer online experience for young people.

Summary: Knowing the age of individuals online should not be the goal of age assurance; rather, age assurance should be positioned as one of many potential measures that support the goal of creating a safer online experience for youth.

OPC response: We agree with this position, and will ensure that our work on age assurance is appropriately contextualized moving forward.

Respondents highlighted the importance of properly contextualizing the use of age assurance. As summarized in one response, “knowing the age of users is not the purpose or objective in itself; the purpose of age assurance should be child protection.” By clarifying and focusing on the intended outcome, a more nuanced approach to evaluating age assurance emerges. For example:

  • Clarifying the meaning of “effective”: One respondent noted that “effectiveness is … a precondition for processing data legally.” It is then important (per that same respondent) to clarify what “effectiveness” means in the context of age assurance. As noted by another respondent, a given age assurance technology may be effective at determining the age of a user, but the underlying mandate for age assurance may not be effective at reducing harms if it cannot be broadly enforced.
  • Age assurance does not need to be perfect to reduce harms: Respondents – particularly those who spoke to the harms faced by youth online – highlighted that if the purpose of an age assurance system is harm reduction, that purpose is advanced even if harm is reduced for the large majority of (but not all) young people. For instance, even if workarounds (such as VPNs) exist, an age assurance method should not necessarily be considered ineffective (as unintentional exposure to harmful content would still be reduced). One respondent also noted that even baseline age assurance measures, such as age declaration, could serve an important educational role by providing an indication of whether a content host considers material to be appropriate for youth.
  • Age assurance is one means among many: When positioned as a tool to promote safer online experiences for youth, age assurance’s place amongst a suite of other tools becomes clearer. Respondents characterized age assurance as “not a panacea”, “not a silver bullet”, and as “one tool among many … [though] an essential tool.” Complementary measures mentioned included legislation, transparency, family or parental controls, and youth-centric design, among others. Even among supporters of the technology, age assurance was generally characterized as a necessary but not sufficient measure to protect youth online.

Respondents argued that positioning age assurance as a means and not an end would help bring clarity to future guidance, by allowing the OPC to focus on the use of age assurance for particular purposes. This would allow for greater opportunity to develop contextual recommendations based on the type of age assurance, the situation in which it is to be used, and the ultimate purpose it is intended to serve.

Theme four: Consider who should be responsible for age assurance.

Summary: In circumstances where age assurance is appropriate, responses differed with respect to by whom it should be implemented. Options put forward included: (i) at the individual device level; (ii) at the individual website or online service level; and, (iii) at the app store level.

OPC response: Overall, we agree that a multi-faceted approach to youth online safety is most effective. However, we also agree that individual-level controls may not themselves be sufficient for this purpose, and that organizations must play a central role. In future guidance, we will further explore the potential accountabilities of various players in the online ecosystem.

Another area of focus for respondents was the role(s) that various parties should play with respect to age assurance. Broadly, responses could be divided into three groups:

  • Individuals, including parents, should be empowered to make appropriate choices: Some argued that the most appropriate focus for the OPC’s age assurance work (particularly with respect to content blocking) would be to increase the empowerment of individuals to make appropriate choices for their respective situations through a combination of education and promoting the development and use of options such as on-device controls. This approach would allow for individuals to make determinations about what content is appropriate for themselves or their family members rather than having this determined by government policy or private-sector actors. This would also allow for more recognition of the varied maturity levels of young people (even of the same age). It would also recognize that, as one response put it, “culture and demographics influence interpretations and experiences of age appropriateness across populations.” This was also identified as the most privacy-protective option, as it would not require service providers to collect personal information about users.
  • Responsibility should centre on websites / online services: On the other hand, some respondents felt that a focus on individual-level controls is misplaced. Per one respondent, “there has historically been, and continues to be, a constant downloading of responsibility onto parents, child end-users and civil society institutions to act within their limited resources in the face of predictable harms ….” As argued by another respondent, society does not hold parents responsible for preventing access to other legal-but-restricted substances, such as cigarettes and alcohol. Some also noted that while individual or household-level restrictions (such as end-user internet filtering or parental control tools) can play a role as complementary tools, they should not be assumed to be an effective and accessible option for all. Another response challenged whether parental controls alone would be sufficient, noting a study highlighting the conflict parents may feel between enforcing age restrictions and allowing their child to fit in socially. These responses argued that responsibility should be located at, or closer to, the actual source of harm, meaning that accountability for ensuring that age restrictions are enforced should rest with the website or online service that is using practices or hosting content that may be harmful to young people.
  • Service providers have a responsibility but should be supported in a collaborative process: Finally, some respondents acknowledged that online service providers play a central role with respect to age assurance but felt that a collective or collaborative approach would be more effective. Per one submission, “preventing children from accessing adult content, having harmful interactions with adult users, or accessing age-inappropriate services will require a layered approach that includes several stakeholders.” Another submission from an industry stakeholder spoke to an ecosystem-based approach in which “each player in the chain must assume its responsibility by implementing the necessary tools and mechanisms to enable age assessments at every step”, arguing that this “overcomes the inherent limitations of trying to rely solely on one step of the age assurance process to be wholly effective.” This could include, for instance, online service providers’ security checks being supplemented by an age check mechanism at the app-store level intended to limit distribution of an app. Per one online service provider, “An app store/OS-level solution does not exempt apps from implementing their own age assurance tools, but it can complement and support these efforts.”

To be clear, none of these positions was without detractors. For instance, we heard that “applying age-assurance systems deeper in a technology stack—such as at the internet service provider, device, or app store level—is overbroad, intrusive, and inappropriate.” Another response argued that app store or operating system-level age assurance does not allow for service providers to utilize their design experience and knowledge of both their users and the risks associated with their site to design appropriately nuanced protections. On the other hand, one response argued that mandating age assurance at site level would be ineffective, as government would necessarily have to focus compliance efforts at a small minority of larger service providers – driving traffic to non-compliant sites which may have riskier practices.

Theme five: Age estimation deserves special caution – or could be preferable to age verification.

Summary: Responses differed with respect to the appropriateness of age estimation, particularly where it is based on measurement of an individual’s physical characteristics (such as their facial features). Some argued that the sensitivity of the information involved suggested the process should be restricted; others argued that it is a more privacy-protective option than (for example) verifying age with a government-issued ID.

OPC response: We continue to believe that age estimation can be implemented in a privacy-protective and sufficiently accurate manner. However, we also recognize the sensitivities associated with its use, particularly when based on a person’s physical characteristics. In future guidance, we will examine whether its use should be limited to certain situations (such as those that pose a higher risk to youth) and what privacy protections should or must be built into such systems.

Another issue for which there was a distinct split among responses was the use of age estimation, particularly when it relies on facial age estimation or similar measures of a person’s physical characteristics.

Some responses highlighted the potential involvement of biometric data, arguing for instance that

“special provisions for processing biometric data during age assurance processes should be introduced because biometric data are unique, unalterable, and comprehensive”

or that

“the OPC has an important role to play in educating legislators, organizations and the business community about the dangers of implementing [AI-based age estimation] systems that can process highly sensitive personal information ….”

It was also noted that, from an individual’s perspective, there is likely to be little difference in experience between, for example, an age estimation process in which data is only used for a single purpose and immediately deleted and a process in which biometric characteristics are processed and used for additional purposes – again pointing to the need for effective oversight of such systems.

However, others argued that age verification posed a higher risk, referring to age estimation as a “less onerous” approach (as compared to data-intensive age verification) or arguing that

“As age verification collects sensitive user information, it carries the highest privacy risk and is thus most suitable for highly age restrictive services where potential for underage user harm is highest. … Estimation falls between age verification and declaration for privacy risks and accuracy level, and may be appropriate for moderate-risk situations.”

We also heard from one academic who stated, “while conducting my research on age assurance … I observed a range of opinions among users. Many, from young individuals to seniors, expressed a preference for biometric solutions for age assurance.” Other responses also highlighted the importance of user choice, with one age assurance service opining “we don’t believe there is a single age assurance method that is suitable for everyone. What might be one person’s preference might not be possible for another individual, nor may they feel comfortable providing the required information.”

Finally, responses from age assurance service providers noted that age estimation systems do not necessarily rely on measurements of a person’s physical characteristics. For instance, one response described a system in which age was estimated by examining the other services with which a person had registered their email address.

Theme six: The use of age assurance should be subject to a risk-based assessment.

Summary: While some responses supported our preliminary position of restricting the use of age assurance to situations that pose a high risk to the best interests of youth, many others suggested that this did not recognize the breadth of both forms and potential benefits of age assurance. Instead, the focus should be on ensuring that age assurance is proportionate to the risk of harm being addressed.

OPC response: We intend to nuance our initial position given it may not capture all potentially appropriate uses of age assurance. Our future work will take into account a risk-based approach to the issue, noting that this does not rule out the possibility of finding that a given form of age assurance poses a high enough risk to privacy that its use would only be warranted in a limited number of higher-risk situations.

In our exploratory consultation, we took the preliminary position that the use of age-assurance systems should be restricted to situations that pose a high risk to the best interests of young people. Many responses expressed support for this position – or, at least, support for the concept that any legal obligation to use age assurance should be restricted. For instance, one think tank stated that in the absence of such an obligation, age assurance should only be used when strictly necessary and only to prevent particular harms.

However, others disagreed. Some responses from child-safety advocates argued that the preliminary position was too restrictive. For instance, one respondent made the point that “we do not tolerate unmitigated ‘medium’ risks to children in other contexts.” Another noted that limiting the use of age assurance to positions that pose a high risk to the best interests of young people could create loopholes, and argued that “revising the position towards a mandate for proportionate age assurance ensures companies cannot evade responsibility.” A third said that “given the extent of online harm to children, we believe restricting age assurance to situations that pose a ‘high risk’ to the best interest of children is unworkable.”

This sentiment could also be seen from some in the business community, with one respondent arguing that “while we understand the important goal of safeguarding children’s interests, we are concerned that [a restriction to high-risk] does not account for the need to balance various interests based on specific circumstances.”

Many respondents proposed that the OPC should focus on ensuring that any use of age assurance is proportionate to the risk being addressed. One organization, for example, suggested that “allowing for a contextual and proportionate approach would enable platforms to provide age-appropriate experiences that respect young people’s right to autonomy and self-exploration on the Internet.”

Next steps

The OPC’s Strategic Priorities for 2024-27 include both “addressing the privacy impacts of the fast-moving pace of technological advancements” and “ensuring that children’s privacy is protected and that young people are able to exercise their privacy rights.” The issue of age assurance – which intersects with both of these priorities – will remain a key issue for the office.

Based on what we heard from responses to our exploratory consultation, we believe that the most beneficial next step will be to draft guidance on the following two topics:

  • Assessing when Age Assurance Should be Used – As stated in our revised preliminary position, the OPC believes that there is a critical role for risk-based assessment in determining when and how various age assurance techniques should be deployed. We will look to clarify our expectations on this point, including a consideration of what forms of age assurance are (or are not) appropriate for certain circumstances or purposes based on their potential privacy impacts.
  • Designing Age Assurance – The appropriateness of an age assurance technique in a given circumstance may well depend on the extent to which privacy protections have been designed into it. As such, we will consider both (i) what design features or privacy considerations should be addressed in any age assurance technique, and (ii) what features or privacy considerations might be unique to a given technique.

As stated in the exploratory consultation, we will consult further on any guidance that we create.

Feedback provided in the context of our exploratory consultation will be considered in the development of our new guidance. We will also seek opportunities to directly engage with youth to better understand their perspectives on online safety, and age assurance in particular. We will also continue to monitor Canadian and international developments related to age assurance, and welcome continued conversations with stakeholders.

As highlighted by respondents, age assurance can be highly impactful on individuals – whether that impact is ultimately positive or negative will depend on its use and design. The OPC intends to take a leadership role in ensuring that age assurance is used appropriately and in a privacy-protective manner.
