Briefing note: Proposed OPC Position and Actions on Age Assurance

Purpose:

For decision.

Issue:

This note proposes general positions and activities for the OPC in relation to age assurance. It is a broad examination of the topic; as such, it does not set out specific positions on related matters such as S-210 (Protecting Young Persons from Exposure to Pornography Act), C-63 (Online Harms Act), or the OPC’s ongoing investigation of TikTok.

Background:

Types of Age Assurance

There are three primary classes of age assurance techniques, illustrated in the sketch following this list:Footnote 1

  • Declaration: The individual (or a person that knows them) declares that they are above a defined age, with no further effort to confirm that declaration. This may include:
    • Self-declaration: e.g. the individual clicks a button that reads: “I am over 18 years old.”Footnote 2
    • Account holder confirmation: e.g. the primary account holder (known to be over 18) creates ‘linked’ accounts, indicating for each whether the user is above or below 18.
    • Vouching: e.g. the individual requests that one or more other account holders (each known to be over 18) vouch that the individual is also over 18.Footnote 3
  • Verification: The individual provides (directly or indirectly) proof of age. This may include:
    • Provision of ID document: e.g. the individual provides the service an image of a government-issued ID containing their date of birth (this may be accompanied by a check that the user is actually the person in the ID; for instance, the live capture of a photo of the user’s face).
    • Digital credential: e.g. an individual holds a digital credential (issued by a government or other trusted party) which can be provided, in whole or in part, to the service.
    • 3rd-Party Age Verification: e.g. an individual verifies their age with a third-party, which signals to the service the individual is attempting to access that they are over 18. This can be a dedicated age verification service, or a trusted institution that has authenticated the individual’s age. This is similar in concept to being able to access the CRA website by logging into an online banking account (i.e. the service the individual is attempting to access trusts the authenticity of the information provided by a third-party).
    • Inference: e.g. an individual provides proof that they hold a credential only accessible to people over a defined age (for instance, credit card information in their own name in a jurisdiction that only issues credit cards to individuals over 18).
  • Estimation: The individual’s age is estimated based on their features or behaviours, generally by an artificial intelligence system. This can include:
    • Biometric analysis: e.g. an image of an individual’s face, or a sample of their voice, is captured and analysed to determine an estimated age or age range.
    • Behavioural analysis: e.g. an individual’s interactions with the site or some other form of online activity is analysed to determine an estimated age or age range.
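As an illustrative aside, the taxonomy above can be expressed as a simple data model. The following is a minimal sketch in Python (the class and field names are our own invention, not drawn from any standard or OPC system); it shows how a relying service might record which class and sub-type of assurance was applied, while retaining only the fact it actually needs: whether the user is over the threshold.

```python
from dataclasses import dataclass
from enum import Enum


class AssuranceClass(Enum):
    """The three primary classes of age assurance techniques."""
    DECLARATION = "declaration"    # self-declaration, account holder confirmation, vouching
    VERIFICATION = "verification"  # ID document, digital credential, third party, inference
    ESTIMATION = "estimation"      # biometric or behavioural analysis


@dataclass
class AgeAssuranceResult:
    """Outcome of a single age check, as a relying service might record it."""
    method_class: AssuranceClass
    method_detail: str                  # e.g. "third-party age verification"
    threshold_age: int                  # e.g. 18
    over_threshold: bool                # the only fact the relying service needs
    estimated_age: float | None = None  # populated only by estimation methods


# Example: a third party signals only "over 18" with no identity details,
# mirroring the CRA/online-banking analogy above.
result = AgeAssuranceResult(
    method_class=AssuranceClass.VERIFICATION,
    method_detail="third-party age verification",
    threshold_age=18,
    over_threshold=True,
)
```

Keeping `estimated_age` optional, and omitting identity attributes entirely, reflects the data minimisation theme that recurs in the guidance summarized in Annex 2.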

Known or Proposed Uses of Age Assurance

The primary currently known or proposed uses for age assurance are to differentiate between users for the purpose of:

  • Permitting or restricting access to content: An online platform decides to limit children’s access to content considered harmful to them, or is required by legislation, regulations or guidance to limit children’s access to certain content.
  • Tailoring online experiences: Some services will logically be adapted based on the user’s age – for instance, an educational app might provide different lessons based on grade level, or a streaming video service might recommend youth-centric content to younger viewers.
  • Determining appropriate data practices: Certain online practices (such as profiling) may be considered less acceptable, or entirely unacceptable, when applied to users under a certain age. Individuals determined to be adults may be asked to consent (or otherwise be subject) to more invasive practices, while minors would be subject to more limited practices.

Legislation to restrict access to harmful content (often, in practice, sexual content) exists in countries including Spain, France, and the United Kingdom, as well as in multiple US states; (redacted) such laws vary in the level of detail provided with respect to requirements for age verification or assurance systems. In jurisdictions where a regulator or government has issued detailed guidance respecting appropriate data practices for youth (such as the UK ICO’s Children’s Code and associated guidance, or the California Age-Appropriate Design Code), this guidance will generally include specific requirements about whether (and what form of) age assurance is necessary to determine whether users are minors.

It is worth noting that in August 2023, the Australian government opted not to mandate age assurance for online pornographic material. This was due to a finding that available age assurance technologies are not sufficiently mature to work reliably, be comprehensively applied, and balance privacy and security. Instead, Australia has chosen to develop a series of industry codes and standards to address matters such as ensuring the availability of filtered internet services and other technological solutions that would limit access to pornographic material, as well as providing information to parents about how to supervise and control access to such content. This option was explicitly included in Australia’s Online Safety Act.Footnote 5

Key considerations

While not strictly privacy considerations, the following points about age assurance systems should be understood when making a privacy-related analysis.

  • Age estimation is not precise: Age estimation systems are best operated with a buffer. For instance, where a requirement exists to restrict access for any user under 18, an age estimation service will estimate whether that person is over 21 (a 3-year buffer), rejecting access or requiring additional verification steps for anyone deemed to be under 21. A December 2023 white paperFootnote 6 by Yoti (a leading facial age estimation company) finds that a buffer of 3-5 years is appropriate for highly regulated sectors (such as adult content). As such, by design an age estimation system will reject (or refer to an alternative assurance method) a number of individuals who are above the threshold age for the restriction, but below the buffer age (see the sketch following this list).
  • Age assurance can pose equity issues: In general, age verification methods will rely on the individual having access to an authenticated identity (a government-issued ID; an account with a trusted party, such as a bank; etc.). This may pose challenges for a number of groups who do not have ready access to such identifiers, including younger teenagers (in situations where those over 13 are permitted access to content or a service), unhoused or unbanked individuals, and non-citizens. Age estimation techniques (such as facial analysis) have also faced issues with respect to performance across skin tones and genders.
  • Age assurance techniques can exacerbate the digital divide: Some age assurance technologies may require individuals to have specific technologies (such as a web camera or a non-shared cell phone) or to undertake a complex sign-up and authentication process.
  • Age assurance is not the only available option: Tools such as parental controls on devices or browsers, public education and awareness campaigns, and other non-technical approaches can work either in place of, or in concert with, age assurance methods to increase the protection of youth online.
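To make the buffer arithmetic above concrete, here is a minimal sketch (our own illustration; the function name and values are hypothetical, with the 3-year buffer taken from the Yoti example):

```python
def route_user(estimated_age: float, threshold: int = 18, buffer_years: int = 3) -> str:
    """Route a user based on an estimated age, applying a safety buffer.

    With threshold=18 and buffer_years=3, only users estimated to be 21 or
    older pass on estimation alone; everyone estimated below 21 is referred
    to an alternative method (e.g. document verification), even though many
    of them are in fact 18-20 and above the actual restriction age.
    """
    if estimated_age >= threshold + buffer_years:
        return "grant access"
    return "refer to alternative age assurance method"


assert route_user(24.5) == "grant access"
assert route_user(19.0) == "refer to alternative age assurance method"  # an adult, still referred
```

The second case illustrates the design trade-off: adults aged 18-20 will always be routed to a fallback verification method, so the accessibility of that fallback matters (see the equity issues above).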

Strategic considerations:

Role for the OPC

Age assurance is a cross-jurisdictional issue with potential to impact multiple rights. To maximise effectiveness, the OPC should focus its comments and activities on the areas most closely connected to its mandate and expertise. This means the following:

  • Refraining from discussion of content restrictions: The issue of “what content should be available to youth” is a societal one and does not fall within the OPC’s mandate. As such, while we can acknowledge that some content may be harmful to youth, we should generally avoid commenting on whether access to specific kinds of material is harmful.
  • Opining on harmful data practices: On the other hand, harmful data practices are squarely within the OPC’s mandate. As such, we can provide analysis and commentary on what practices may be harmful to youth (and why), as well as on appropriate mitigations (including whether such mitigations should involve improving practices and/or restricting youth exposure to those practices through age assurance).
  • Highlighting privacy risks: The OPC is well-placed to highlight the potential or known privacy impacts of age assurance systems, both with respect to the impact of any collection, use, and disclosure of personal information associated with the age assurance system itself and to the broader potential impacts on anonymous use of the Internet. This can involve making comparisons with alternative approaches, including individualized measures such as parental controls (in the case of age assurance used to restrict access to content) or improving privacy practices for all users (in the case of age assurance used to determine appropriate data practices).
  • Supporting strong design: The OPC is well-positioned to comment on appropriate privacy-design practices for online services. This should include how websites or other online services can or should accommodate the interests of all users (minors and adults).
  • Encouraging the development of privacy-protective age assurance techniques or services: By drafting guidelines, developing or reviewing Codes of Practice, supporting standards development, or other measures, the OPC can play a key role in ensuring that privacy-protective age assurance techniques are available and expectations or requirements for such systems are known.

We note that these measures can be accomplished through both policy and compliance activities.

Privacy Impacts and Mitigations

The privacy impacts of age assurance will differ significantly based on form, making it challenging to speak to them broadly. However, the most likely high-level impacts of age verification and age estimation include:

  • Identifiability / linkability: Similar to a digital identity system, age assurance can increase the identifiability of an individual or the linkability of their actions across time or across websites/services. It is of critical importance to address this given the sensitivity of the information that may be revealed based on an individual’s access to restricted material.Footnote 7 Addressing this problem also mitigates the risks associated with data breaches.
  • Residual information / metadata: Beyond directly identifying information, it is possible that metadata generated or retained by an age verification system could create privacy impacts. For example, it is undesirable (from a user design perspective) for a user to be required to undergo an age assurance process each time they access a given service; as such, it is likely that some form of persistent token will be used to indicate that the individual has already undergone an age check. Improperly designed, this token could serve as a unique identifier and/or identify the user as a minor during online interactions that do not require age assurance (see the sketch following this list).
  • Normalization of identification: The 2022 FPT Resolution on the Digital Identity Ecosystem emphasized:

“Digital identification should not be used for information or services that could be offered to individuals on an anonymous basis and systems should support anonymous and pseudonymous transactions wherever appropriate.”

This general principle – that, wherever possible, individuals should be able to use online services without establishing identity characteristics – may be violated if access to general-use services (such as search engines or social media) becomes subject to age assurance (particularly if this involves age verification or estimation). Should age assurance become a common practice, individuals may also become accustomed to providing biometric samples or identity information upon request, exacerbating the privacy and cybersecurity risks discussed below.
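To illustrate the token-design point raised under “Residual information / metadata” above, the sketch below shows a deliberately minimal age token. This is our own toy example using only Python’s standard library; a production system would use asymmetric signatures (so that relying services cannot mint tokens themselves) and a vetted protocol.

```python
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # held by the age assurance provider


def issue_age_token(threshold: int = 18, ttl_seconds: int = 3600) -> dict:
    """Issue a short-lived token asserting only 'over threshold'.

    The claims deliberately contain no user identifier, and the random
    nonce makes two tokens issued to the same person unlinkable. Adding
    a stable user ID here is exactly the flaw described above: the token
    would become a unique identifier trackable across services.
    """
    claims = {
        "over_threshold": True,
        "threshold": threshold,
        "exp": int(time.time()) + ttl_seconds,  # short expiry limits replay and profiling
        "nonce": secrets.token_hex(8),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}


def verify_age_token(token: dict) -> bool:
    """Check integrity and expiry; learn nothing about who the user is."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claims"]["exp"] > time.time()
```

Note also that a token asserting only “over 18” avoids the inverse problem flagged above: it never marks a user as a minor during interactions that do not require age assurance.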

Many of the above impacts can be mitigated through the use of privacy-enhancing technologies or well-designed policy measures. Regulators and standards organizations have released, or are developing, guidelines, requirements, or other measures that would reduce many of these privacy impacts; this work is summarized in Annexes 2 and 3, respectively. Privacy-protective age assurance methods are also being examined; see, for instance, the proofs of concept from CNIL and AEPD discussed in Annex 4.

However, it is important to acknowledge that some (real or perceived) privacy impacts are unlikely to be mitigated.

  • Phishing, hacks, etc.: Individuals using any age verification or age estimation system will inevitably be subject to phishing attempts (e.g. false requests to provide information – be that a biometric, account information, ID information, etc.). Untrustworthy sites may create false age assurance systems, and even trustworthy age assurance systems can be subject to hacks. In short, the common adoption of age assurance will increase the likelihood that an individual will provide personal information to a malicious actor.

This is a significant difference between online and offline age assurance – in the offline situation, it would be exceedingly rare for an individual to receive a fraudulent request to verify age.

  • Trust deficit: The modern technology ecosystem suffers from a warranted trust deficit, in which individuals understandably do not trust that their information will be collected and used appropriately. With age assurance, individuals will be asked to ‘trust’ that websites and services that may be unknown (third-party age assurance services) or which have frequently proven themselves untrustworthy (social media sites; pornography sites; etc.) will not retain or re-use information they receive during the age assurance process.

Even if privacy-enhancing technologies are implemented and age assurance services are subject to regular independent auditing, it is very likely that a percentage of individuals will forgo accessing content or information in order to protect their privacy, or will seek the content in riskier ways (such as via the dark web).

Recommended action:

(redacted)

Attachments:

Consultations:

Compliance (M. Maguire); Legal (N. Sayid); Technology Analysis (M. Townsend)

Approval:

Prepared by

Vance Lockton
Senior Technology Policy Advisor

Date: 27-03-2024

Revisions

Approved by Director and/or Executive Director

Lara Ives
Executive Director, Policy, Research and Parliamentary Affairs Directorate (PRPA)

Date: 28-03-2024

Approved by Deputy Commissioner

Gregory Smolynec
Deputy Commissioner, Policy and Promotion

Date

Approved by Privacy Commissioner

Philippe Dufresne
Privacy Commissioner

Date

Distribution: Commissioner; Deputy Commissioner, Compliance; Deputy Commissioner, Policy & Promotion; Deputy Commissioner, Legal Services Directorate; PRPA.

Annex 1 – Sample Q/As

Q: Age assurance is a common practice in the physical world; what makes the online world different?

A: The difference lies in the potential privacy impacts and risks.

To be clear, offline age assurance is not without privacy risks — the nature of most ID cards means that more information than just age is disclosed, and that information could be retained by the person examining the card. However, this is generally not an accepted practice.Footnote 8 Moreover, in most situations the individual need not be concerned with issues such as whether the request to prove age is authentic, and overall privacy impacts can be lessened by only reviewing ID cards when there is a question about whether the individual is of the appropriate age.

A well-designed online age verification system could address the first challenge, providing the relying service with only an assurance that the person is of the necessary age. However, without careful design, it may be possible to track an individual across the Internet, or to prevent anonymous access to information or services (leading to potential breaches of information that would cause a real risk of harm).

The other named challenges – such as phishing and the need to restrict the application of age assurance processes – may simply not be solvable through design or the application of privacy enhancing technologies.

We do not reject the possibility that online age verification may be appropriate in some situations. However, it must be acknowledged that the potential privacy impacts of the practice are quite different from those of offline age verification, a fact which should be taken into account in designing policy or regulation.

Q: Does the OPC support/oppose age assurance restrictions for <X or Y> content?

A: The OPC does not claim to be the arbiter of what content or services a child should be able to access; in some cases that will be up to parents and guardians, and in others it is up to Parliament. 

Our role is to ensure that those parties — but particularly Parliament — are aware that there are privacy impacts associated with age assurance. (We do not discount the impacts on other rights, such as freedom to access information, but they are not our focus.)

Where Parliament opts to require age assurance — or where a parent opts to restrict content — our role is also to ensure that age assurance methods are privacy protective. 

Annex 2 – Summary of Key International Guidance

Below, we summarize key guidance documents on age assurance from regulators, with a focus on DPAs. Much of this work is emerging from Europe, including the UK, France and Spain, following the passage of laws mandating the use of age assurance.

Various other stakeholders – particularly industry groups such as the Digital Trust and Safety PartnershipFootnote 9 – have also created principles and best practices documents. We do not consider them further here, though we would do so in creating future guidance. That said, they do not tend to differ significantly from the themes raised by regulators.

UK ICO — Age assurance for the Children’s code

In January 2024, the UK ICO published its updated opinion on age assuranceFootnote 10 for the Children’s Code (an enforceable Code of Practice established by the UK ICO). It sets out “how age assurance can form part of an appropriate and proportionate approach to reducing or eliminating personal information risks children face online and facilitate conformance with the Children’s Code.”

The three main themes of this opinion are:

  • Manage risk: If personal information processing activities are likely to present a high risk to children’s rights and freedoms, you should either apply all relevant code standards to all users to ensure risks to children are mitigated, or introduce age assurance methods. Where high risks to children exist, a DPIA must be completed.
  • Apply the data protection principles: In summary, organizations must:
    • Make sure [processing] is fair.
    • Establish a lawful basis to process the information.
    • Be transparent about how you use information.
    • Not use information collected for the purpose of age assurance for any other incompatible purpose.
    • Collect the minimum information required for the process.
    • Make sure the method is accurate.
    • Not retain any information collected by the method for longer than is needed.
    • Make sure the method is secure.
    • Be accountable for your compliance with the law (e.g. by adopting relevant policies and procedures).
  • Consider the implications of using AI-driven age assurance methods: This includes considering whether biometrics are processed (which would be subject to additional protections); balancing any risk of profiling against the benefits of establishing age; addressing bias and avoiding discriminatory outcomes; and making sure methods are statistically accurate.

UK Office of Communications (Ofcom) - Guidance on age assurance and other Part 5 duties for service providers publishing pornographic content on online services

Where the UK ICO oversees the Children’s Code, the UK’s communications regulator (Ofcom) oversees the Online Safety Act, which imposes a duty on certain internet service providers to use “age verification or age estimation (or both)” to prevent children from accessing pornographic content.

Ofcom’s draft guidanceFootnote 11 to service providers (currently released for consultation) states that the age assurance methods they implement should fulfil the following criteria:

  • Technical Accuracy: the degree to which an age assurance method can correctly determine the age of a user under test lab conditions.
  • Robustness: the degree to which an age assurance method can correctly determine the age of a user in unexpected or real-world conditions.
  • Reliability: the degree to which the age output from an age assurance method is reproducible and derived from trustworthy evidence.
  • Fairness: the extent to which an age assurance method avoids bias or discriminatory outcomes.

Service providers should also consider:

  • Accessibility: the principle that age assurance should be easy to use and work for all users, regardless of their characteristics or whether they are members of a certain group.
  • Interoperability: the ability for technological systems to communicate with each other using common and standardized formats.

Ofcom does not specifically focus on privacy-related measures, though they do provide guidance on the requirement to keep a written record of how privacy was considered in choosing an age verification or estimation method per s. 81(4)(b) of the UK Online Safety Act, referring to the UK ICO Opinion (above) for further detail.
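As a rough illustration of how “Technical Accuracy” and “Robustness” describe the same measurement under different conditions, consider the sketch below. It is our own toy example (the function and the numbers are hypothetical, not drawn from Ofcom’s guidance): the same over/under-threshold accuracy metric is computed once on lab samples and once on degraded real-world samples.

```python
def over_threshold_accuracy(samples: list[tuple[float, int]], threshold: int = 18) -> float:
    """Share of samples whose estimated age falls on the correct side of the threshold.

    Each sample is (estimated_age, true_age). Running this on lab-captured
    samples measures what Ofcom calls technical accuracy; running the same
    metric on real-world captures (poor lighting, low-quality webcams, etc.)
    measures robustness.
    """
    correct = sum(
        1 for estimated, true_age in samples
        if (estimated >= threshold) == (true_age >= threshold)
    )
    return correct / len(samples)


# Hypothetical numbers for illustration only.
lab = [(24.1, 25), (16.9, 16), (20.2, 19), (14.5, 15)]
real_world = [(24.1, 25), (19.2, 16), (16.8, 19), (14.5, 15)]
print(over_threshold_accuracy(lab))         # 1.0 under lab conditions
print(over_threshold_accuracy(real_world))  # 0.5 when conditions degrade
```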

CNIL – Online age verification: balancing privacy and the protection of minors

In August 2021, the French CNIL published 8 recommendations to enhance the protection of children online. Recommendation 7Footnote 12 was to “check the age of the child and parental consent while respecting the child’s privacy.”

In explaining this recommendation, the CNIL set out six rules that any age or parental verification system should respect:

  • Proportionality: When choosing an age verification system, online service providers should consider the proposed purposes of the processing, the target audiences, the data processed, the technologies available and the level of risk associated with the processing. A mechanism using facial recognition would therefore be disproportionate.
  • Minimisation: Any system should be designed to limit the collection of personal data to what is strictly necessary for the verification, and not retain the data once the verification has been completed. The data should not be used for other purposes, including commercial uses.
  • Robustness: Age verification mechanisms must be robust when they are for practices or processing that involves a risk (e.g. targeted advertising for children). For these cases the use of self-declaration methods alone should be avoided.
  • Simplicity: The use of simple and easy-to-use solutions that combine verification of both age and parental consent could be encouraged.
  • Standardisation: "Industry standards" and a certification programme could be encouraged to ensure compliance with these rules and to promote verification systems suitable for a wide range of websites and apps.
  • Third-party intervention: Age verification systems based on the intervention of a trusted third party who can check a data subject's identity and status (attribution of parental authority) could be investigated in order to meet the requirements as described above.

The CNIL has since expandedFootnote 13 on this guidance, flagging in particular the benefits of ‘third-party intervention’ (above) as well as the need for such third parties to be subject to an evaluation or certification process.

AEPD — Decalogue of Principles — Age verification and protection of minors from inappropriate content

In response to a combination of the EU Digital Services Act, the EU Better Internet for Kids strategy, requirements under the Spanish General Audiovisual Communication Law, and other considerations, in December 2023 the Spanish AEPD released a detailed set of principlesFootnote 14 for age verification systems that protect minors from inappropriate content.

  • Principle 1: The system for protecting minors from inappropriate content must guarantee that the identification, tracking or location of minors over the Internet is impossible.
  • Principle 2: Age verification should be aimed at ensuring that users of the appropriate age prove their condition of person “authorized to access” and not at verifying the status of “minor”.
  • Principle 3: Accreditation for access to inappropriate content must be anonymous for Internet service providers and third parties.
  • Principle 4: The obligation to prove the condition of the person “authorized to access” will be limited to only inappropriate content.
  • Principle 5: Age verification must be carried out accurately, and the age categorized as “authorized to access.”
  • Principle 6: The system for protecting minors from inappropriate content must ensure that users cannot be profiled based on their browsing.
  • Principle 7: The system must guarantee the non-linking of a user’s activity across different services.
  • Principle 8: The system must guarantee the exercise of parental authority by parents.
  • Principle 9: Any system for protecting minors from inappropriate content must guarantee all people’s fundamental rights in their Internet access.
  • Principle 10: Any system for protecting minors from inappropriate content must have a defined governance framework.

International Age Assurance Working Group — Joint Statement

In February 2024, the OPC received for review a draft of a ‘joint statement on a common international approach to age assurance’ from the International Age Assurance Working Group (of which we are a member).

In summary, the proposed principles include:

  • Any age assurance methods should be in the best interests of the child.
  • Age assurance methods should be fair and non-discriminatory.
  • Age assurance methods should be undertaken in compliance with data protection requirements in a risk-based and proportionate way to reduce the risk of harm.
  • Services are accountable for their approach to age assurance and for demonstrating that it is lawful, effective and proportionate.
  • Services should assess and document the potential data protection risks to users.
  • Services should balance the risks posed by the age assurance method against the benefit in helping establish the age of users.
  • Services should be aware of the state of the art in age assurance technology in order to ensure they implement methods that are technically feasible and effective, while also protecting users’ rights and freedoms.
  • Services should be aware that where there is a high risk to users, relying on self-declaration alone is unlikely to be appropriate.

Annex 3 – Summary of Key Standards and Certification Schemes

Standards

BSI PAS 1296:2018 – Online Age Checking – Code of Practice

The British Standards Institution (BSI) published Publicly Available Specification (PAS) 1296 in March 2018. PAS 1296 provides recommendations (for both age check services and parties relying on such services) “on the due diligence businesses can exercise to ensure that age check services deliver the kind of solution that will meet a business’s specific regulatory compliance needs,” as well as “processes that can be applied when providing and using age check services in order to protect consumers and the online merchant.”

The introduction to PAS 1296 states that while consumers are not the primary audience for the specification, “the privacy, security and consumer protection mechanisms that ought to [be] put in place by both age check services and the businesses they serve are referred to throughout.” The ICO makes reference to this document in their Age Appropriate Design CodeFootnote 15 (see ’Further Reading’ section).

Should the OPC wish to review this document in more detail we would need to purchase access (£108).

(In progress) ISO/IEC WD 27566 – Age assurance systems

ISO Subcommittee 27Footnote 16 – which focuses on standards related to information security, cybersecurity and privacy protection – is currently developing a standard (ISO/IEC WD 27566) related to security and privacy in age assurance systems. This project was approved (and moved into development phase) in October 2023.

There are currently three parts under development or envisaged for this standard:

  • Part 1: FrameworkFootnote 17
    • This is the central component of the standard, in which the OPC would have the greatest interest.
  • Part 2: Benchmarks for benchmarking analysisFootnote 18
    • This document establishes benchmarks for specifying, differentiating and comparing characteristics of age assurance methods and components.
  • Part 3: Interoperability, technical architecture, and guidelines for use
    • This work has been proposed, but not yet agreed to by the Working Group.

The draft of this standard is currently confidential and, given ISO procedures, the standard would likely not be finalized until 2025 at the earliest, with 2026 being a more likely timeline. However, the OPC is closely monitoring this work given the standard’s likely eventual impact.

(In progress) IEEE 2089.1 – Draft Standard for Online Age Verification

In 2021, the IEEE Standards Association released IEEE 2089-2021Footnote 19, a design framework for age-appropriate digital services. Building on this work, the IEEE is currently developing IEEE 2089.1Footnote 20, which “establishes a framework for the design, specification, evaluation and deployment of age verification systems”. It sets out:

  • Key terms and definitions, as well as roles and responsibilities of key actors in the age assurance process;
  • requirements for establishing different levels of confidence associated with the types of age assurance systems; and,
  • requirements for privacy protection, data security and information systems management specific to the age assurance process.

This work is ongoing. The OPC has not reviewed a draft of this standard, which is currently available for purchase for $80USD.

(In progress) Digital Governance Standards Institute – Technical Committee 18 – Age Verification

Canada’s Digital Governance Standards InstituteFootnote 21 has recently established a Technical Committee (TC18) to develop a standard on Age Verification, on which the OPC is represented by Vance Lockton and Malcolm Townsend. The first meeting of TC18 took place on February 21, 2024.

Though the work is in early stages, the DGSI intends to complete it by November 2024.

(In progress) EU Standard

The EU’s 2022 updated “Better Internet for Kids”Footnote 22 strategy included the commitment that the European Commission will “issue a standardisation request for a European standard on online age assurance / age verification in the context of the eID proposal, from 2023”, and “support the development of an EU-wide recognised digital proof of age based on the date of birth within the framework of the eID proposal, from 2024.”

To date, we are unaware of any further developments with respect to these commitments, though we are aware of a ‘Task Force on Age Verification’Footnote 23 established under the Digital Services Act. The EU Commission also provided initial funding to the ongoing euCONSENTFootnote 24 project, which (in its pilot phase) was intended to demonstrate a way for a single age check to be used across multiple websites with no additional user action. This project is now being continued by an NGO.

Certification and Evaluation

UK – Age Check Certification Scheme (2021)

Developed in the UK, the Age Check Certification SchemeFootnote 25 (ACCS) is a program to audit the operations of online and offline age- and identity-verification systems, based in part on PAS 1296:2018. The scheme includes a section (ACCS 2:2021Footnote 26) that sets out technical requirements for data protection and privacy, which were reviewed and approved by the UK ICO in July 2021. To date, 46 servicesFootnote 27 have been certified under the ACCS.

ACCS 2:2021 sets out extensive requirements related to the overall accountability structure for the service (Section 4: “Data Protection Management Systems”) and how it must meet obligations related to privacy (Section 5: “Requirements for Data Processing”) and security (Section 6: “Technical Evaluation”).

The ACCS also includes technical requirements for age estimation, appropriate age design, and other considerations.

US NIST – Face Analysis Technology Evaluation

NIST’s ‘Face Analysis Technology Evaluation’ (FATE) program is a standardized testing program for various subcategories of face analysis, including age estimation and verification.Footnote 28 Formerly called the Face Recognition Vendor Tests (FRVT), in August 2023 the program was split into two distinct components: FATE and Face Recognition Technology Evaluation (FRTE). This program focuses on the accuracy of a face analysis algorithm, as opposed to considering any other privacy-related aspects.

The results of all evaluations under the FATE program are made public; however, to date there have been no reports from the FATE-Age Estimation and Verification sub-program.

Annex 4 – Current Proofs of Concept

As noted in Annex 2, the European Commission has been particularly emphatic about the need for privacy-protective age assurance mechanisms. This has included providing €1.4M in funding for the pilot version of euCONSENTFootnote 29, a mechanism for interoperable browser-based age assurance and parental consent.

European regulators – including the French CNILFootnote 30 and Spanish AEPDFootnote 31 – have also worked with researchers to develop and evaluate potential mechanisms for age verification. The Technical Analysis Directorate has undertaken an analysisFootnote 32 of the privacy protective age verification proofs of concept described by the AEPD and CNIL, finding that both of them are technically feasible. This is not surprising, as age verification shares many of the same technical challenges (and thus solutions) as digital identity, which is a well-advanced area of research and development.

However, to be clear, the AEPD and CNIL programs do not:

  • Involve AI-based age estimation;
  • Necessarily involve market-ready (or commonly-used) technologies; or,
  • Solve for the ‘difficult to mitigate’ privacy impacts set out in this note.

As such, in speaking of these solutions it is valid to say that they are examples of technological innovation that will make age assurance more privacy protective. However, it would not be correct to state that they demonstrate that modern age assurance methods are privacy protective, or not privacy-impactful.
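For readers who want a concrete sense of the ‘trusted third party’ pattern underlying these proofs of concept, the following sketch shows just the attestation step, using the pyca/cryptography library. It is our own simplification, not the AEPD or CNIL design: the verifier signs a bare “over 18” attribute, and the relying site checks the signature without learning the user’s identity.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The trusted third party (e.g. a bank or dedicated verifier) holds the
# private key; relying sites hold only the corresponding public key.
verifier_key = Ed25519PrivateKey.generate()
relying_site_key = verifier_key.public_key()

# After authenticating the user by its own means, the third party signs a
# bare age attribute: no name, no date of birth, no account identifier.
attestation = b"over:18"
signature = verifier_key.sign(attestation)

# The relying site checks authenticity without learning who the user is.
try:
    relying_site_key.verify(signature, attestation)
    print("age attestation accepted")
except InvalidSignature:
    print("age attestation rejected")
```

As written, this sketch does not address replay or linkability (the same signature could be presented repeatedly or correlated across sites); the mechanisms piloted by the CNIL and AEPD add properties such as freshness and unlinkability on top of this basic step, which is part of why they remain proofs of concept rather than demonstrations that deployed methods are privacy protective.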
