Bill S-203 Issue Sheets

Online Age-Verification vs. Identity Verification

Key Messages

  • Age verification and identity verification serve different purposes. It is not always necessary to verify the identity of an individual when performing an age check, but in the process of verifying identity, it is likely you will also confirm a person’s age.
  • Online identity verification:
    • May be required by organizations to confirm that you are the person you say you are when undertaking activities such as opening a bank account, filing taxes online or signing up for services.
    • When undertaken with a high level of assurance, can help protect privacy and reduce the risk of identity fraud.
  • Age verification can allow you to prove your age online with minimal disclosure of personal information to the service you want to access.
  • For the purposes of S-203, age verification would be more appropriate than identity verification because it requires the collection and sharing of less personal information to achieve the objective.

Background

  • To ensure that a claimed identity matches a real identity (as part of an identity verification process), additional checks and information may be needed (e.g. follow-up questions, verifying biometrics).
  • This could lead to a greater privacy impact in the event of a privacy breach if the collected data is not adequately secured.

Prepared by: Technology Analysis Directorate (TAD)


Age Verification: Technology and Safeguards

Key Messages

  • Current digital age verification systems use diverse technologies, analytical methods, and safeguards. No two systems are identical in their design, implementation, or potential for privacy and security vulnerabilities.
    • With so many types of systems available, there is no “perfect” technology or method to use in all circumstances.
    • Each system design has unique privacy and security risks based on how, where, and for how long user data is stored.
  • Important safeguards that should form part of any age-verification system include:
    • Encryption: strong encryption methods should be in place for data in use, in transit, and at rest.
    • Data minimization: collecting, using, and disclosing only the minimum amount of information needed can reduce the risk of data linkage.
    • Access controls: restrict who has access to user data.
    • Tokenization: tokens replace sensitive information with a random string of characters that has no exploitable value on its own (a minimal sketch follows this list).
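
To illustrate the tokenization safeguard above, a minimal sketch in Python follows. It is illustrative only and not drawn from any particular age-verification product: the in-memory vault and the function names are hypothetical, and a real deployment would use a hardened, access-controlled token vault.

    import secrets

    _vault = {}  # hypothetical token vault; real systems use hardened, audited storage

    def tokenize(sensitive_value: str) -> str:
        # Replace a sensitive value with a random string that has no
        # meaning or value on its own if intercepted or breached.
        token = secrets.token_urlsafe(16)
        _vault[token] = sensitive_value
        return token

    def detokenize(token: str) -> str:
        # Only the vault holder can map a token back to the original value.
        return _vault[token]

    # A date of birth stays in the vault; downstream services see only the token.
    dob_token = tokenize("1990-04-12")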

Background

  • Digital age verification systems can rely on users doing one or more of the following:
    • upload copies of government-issued identity documents;
    • agree to “liveness detection” of biometrics (including via facial and voice recognition);
    • answer knowledge-based authentication questions;
    • enter their date of birth at the point of access (illustrated in the sketch following this list);
    • provide existing trusted identity credentials.
  • Digital age verification systems are constantly evolving; both the private and public sectors have developed novel technologies in this space, including but not limited to attribute exchange ecosystems and state-led e-ID schemes.
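
As an illustration of the simplest method above, entering a date of birth at the point of access, a minimal Python sketch follows. The function name is hypothetical; the point is that a self-declared date of birth is trivially computed against a threshold, and just as trivially falsified.

    from datetime import date
    from typing import Optional

    def is_of_age(dob: date, minimum_age: int = 18,
                  today: Optional[date] = None) -> bool:
        # Self-declared date-of-birth check: the weakest method listed
        # above, since it relies entirely on the honesty of the user.
        today = today or date.today()
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= minimum_age

    print(is_of_age(date(2005, 1, 1), today=date(2021, 6, 1)))  # False: 16 years old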

Prepared by: TAD


Third Party Intermediaries

Key Messages

  • To the extent a third party intermediary is an organization that collects, uses or discloses personal information in the course of a commercial activity, PIPEDA will generally apply.
  • PIPEDA does not specify particular security safeguards that must be used, but requires that companies protect personal information in a manner appropriate to its sensitivity.
  • The companies we are aware of offer diverse services, each raising unique privacy concerns depending on the technology used.
  • We have not examined any specific method in detail, so cannot offer definitive views on whether one technology is more privacy friendly than another.

Background

  • Third party providers, such as Yoti, AgeID, and the Age Check Certification Scheme, verify age in a number of ways:
    • Age identity providers: verify age attributes and issue a reusable identity card, digital token, voucher code, or app.
    • Age check providers: verify attributes on a transaction-by-transaction basis.
    • Age verification providers: determine the age of an individual based on consumer information databases, public data sets, or other data elements.
    • Attribute exchange providers: offer an online gateway to access verified age attributes.
    • Age estimation providers: estimate the age of an individual based on static images or live detection of biometrics.
    • Age certification schemes: offer independent audits, assessments and validation to age verification services.
  • Certain providers only indicate whether a user is over a certain age (e.g. 18); they do not share any additional personal information with age-restricted services or providers (a minimal sketch of this pattern follows).
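
The over-18-only pattern in the last bullet can be sketched as a signed assertion carrying a single attribute. This is a simplified, hypothetical illustration using an HMAC over a shared secret; real providers typically rely on public-key signatures (e.g. signed tokens), and the key and field names here are invented.

    import hashlib
    import hmac
    import json

    SECRET = b"provider-signing-key"  # hypothetical; real systems use managed keys

    def issue_over18_assertion() -> dict:
        # The provider attests only that the user is over 18: no name, date
        # of birth or other personal information is shared with the service.
        claim = {"over_18": True}
        payload = json.dumps(claim, sort_keys=True).encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return {"claim": claim, "sig": sig}

    def verify_assertion(assertion: dict) -> bool:
        payload = json.dumps(assertion["claim"], sort_keys=True).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, assertion["sig"])

    assert verify_assertion(issue_over18_assertion())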

Prepared by: TAD


Alternatives and Challenges

Key Messages

  • No digital age verification approach is reliable or appropriate in every circumstance. Analysis should be undertaken to ensure that a specific technical solution is appropriate in the circumstances.
  • Regardless of the solution chosen, measures (as outlined in the “Technology and Safeguards” sheet) should be taken to prevent breaches and protect privacy.
  • Digital age verification systems face a number of vulnerabilities:
    • Online documentation (videos, forums, online tutorials, etc.) explaining how minors can evade restrictions.
    • Use of VPNs that allow users to appear in other jurisdictions.
    • Rapidly evolving attacks from cybersecurity threat actors.
  • Alternatives to digital age verification exist, including:
    • Purchasing passes from vendors (government service desks, convenience stores, gas stations, etc.) who verify government-issued identity documents in person.
    • ISP-level filtering that restricts access to certain websites based on blacklisted IP addresses, content, or URLs, with access granted only to accredited adults.
    • Installing device-level filtering on laptops, smart TVs, tablets, etc., which an adult can enable and protect with a password (a simplified sketch of blocklist matching follows this list).
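
A minimal sketch of the blocklist matching behind ISP- or device-level filters appears below. The blocked domain is a hypothetical placeholder; real filters combine domain, URL and content matching, with a password-protected override for an adult.

    from urllib.parse import urlparse

    BLOCKLIST = {"adult-site.example"}  # hypothetical blocked domains

    def is_blocked(url: str) -> bool:
        # Block the listed domains and any of their subdomains.
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

    print(is_blocked("https://www.adult-site.example/page"))  # True
    print(is_blocked("https://example.org"))                  # False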

Background

  • Digital age verification is one potential technical solution but not the only one: tools like parental filters on devices and other non-technical approaches can play a role.
  • Many jurisdictions, including Australia and the United Kingdom, advocate a multi-method approach to restricting young persons’ online access to pornography (e.g. combining technical solutions such as age-verification and device-level filters with educational programs for youth), which is meant to be more comprehensive and inclusive (see the UK ICO’s Age Appropriate Design Code and Australia’s Online Safety Bill 2021).
    • However, adoption of multiple methods could increase data privacy and security risks by requiring users to share information across more services and with more service providers.

Prepared by: TAD


Facial Recognition (FR) for Age Verification

Key Messages

  • Facial age estimation is an emerging technology that has not yet reached levels of accuracy seen in other FR applications.
  • The best algorithms today exhibit relatively high margins of error (2–3 years on average), especially for minors and children (see the illustrative buffer check following this list).
  • All FR technologies, including facial age estimation, have the potential for bias.
    • For example, the rate at which a face “ages” is influenced by many factors (e.g. genetics, environmental conditions, diet, stress, health habits, use of narcotics), which could affect error rates or create disproportionate impacts on individuals.
  • The use of FR is also not subject to a clear and comprehensive set of rules in Canada. This creates room for uncertainty concerning what uses of FR may be acceptable, and under what circumstances.
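
Given the 2-3 year average error noted above, one common design choice (an assumption here, not something prescribed by Bill S-203 or this sheet) is to require the estimate to clear the age threshold by a buffer at least as large as the error margin, routing borderline cases to a stronger verification method. A minimal sketch:

    def permits_access(estimated_age: float, threshold: int = 18,
                       error_margin: float = 3.0) -> bool:
        # Grant access only when the estimate clears the threshold even
        # after subtracting the model's average error; borderline users
        # would fall back to another verification method instead.
        return estimated_age - error_margin >= threshold

    print(permits_access(25.0))  # True: comfortably above 18 + 3
    print(permits_access(19.5))  # False: too close to the threshold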

Background

Bias and Accuracy

  • FR bias concerns have focused on disparate error rates based on demographics such as race and gender. Age estimation introduces additional sources of bias.
  • There is a lack of research into the effectiveness of these algorithms in real-world settings; it is unknown how well they perform in uncontrolled environments with challenges such as suboptimal lighting, spoofing attempts, and the use of makeup.
  • The lack of available training data to improve accuracy and reduce potential biases is exacerbated by the sensitivity associated with collecting personal information from minors and children.

Regulation of FR in Canada

  • FR in Canada is regulated by a patchwork of laws, except in Quebec, which has a specific biometrics regime. Since FR uses personal information, its use by private-sector organizations may be subject to PIPEDA, and its use by federal institutions to the Privacy Act.
  • We need stronger protections in our privacy laws to effectively regulate emerging technologies like FR.

Prepared by: TAD / Policy, Research and Parliamentary Affairs (PRPA)


United Nations Material on Children

Key Messages

  • Canada is a signatory to the UN Convention on the Rights of the Child, which protects the privacy rights of children.
  • The UN Committee on the Rights of the Child recently emphasized that the rights of every child must be respected, protected and fulfilled in the digital environment, and this includes the right to privacy.
  • The OPC supports efforts to incorporate special considerations to protect children’s rights in the digital environment.
  • We have proposed that a preamble be added to our private-sector privacy law recognizing our responsibilities to uphold the UN Declaration on the Rights of the Child, stating that “the processing of personal data be designed to serve humankind and must respect the best interests of children.”

Background

  • The UN Convention on the Rights of the Child:
    • Protects the right to privacy for children, stating that a child shall not be “subjected to arbitrary or unlawful interference with his or her privacy… or correspondence” and children have “the right to the protection of the law against such interference…”
    • Recognizes the important function performed by the mass media and requires states to ensure that children have access to information and material from a diversity of national and international sources.
    • Requires states to encourage the development of appropriate guidelines for the protection of the child from information and material injurious to his or her well-being, bearing in mind their right to free expression.
  • The UN Committee on the Rights of the Child recently published a General comment on children’s rights in relation to the digital environment stating, “Privacy is vital to children’s agency, dignity and safety and for the exercise of their rights.”
  • The UN Committee’s General comment also states that any interference with a child’s right to privacy should be “provided for by law, intended to serve a legitimate purpose, uphold the principle of data minimization, be proportionate and designed to observe the best interests of the child…”

Prepared by: PRPA


Bill C-11 and Children’s Privacy

Key Messages

  • Bill C-11, tabled in November 2020, would replace the existing private-sector privacy law, PIPEDA. Many of the recommendations made in our recently published submission would benefit children.
  • For example, we propose a preamble be added that recognizes our responsibilities to uphold the UN Declaration on the Rights of the Child.
    • Specifically, we propose it recognizes that “the processing of personal data be designed to serve humankind and must respect the best interests of children.”
  • We also recommend C-11 include an explicit right to de-indexing and/or removal of personal information from search results and other online sources.
    • Although this right would apply to all individuals, it is particularly important for children, who should be provided with a means of reinventing themselves as they mature and enter adulthood.
  • We also recommend that the law ensure there are limits on the collection of publicly available personal information where an individual would have a reasonable expectation of privacy.
    • Given the extensive online presence of children, and the posting of information about children by others (peers, parents, etc.), this would help protect their interests.

Background

  • Regarding consent, we recommend that privacy law incorporate the requirement that individuals understand what they are consenting to, and that information be presented in an intelligible and easily accessible format, using clear and plain language. This would assist children in making informed decisions about their online activities.
  • Our 2016 consultation on Online Reputation found that maintaining one’s online reputation poses a particular challenge for youth, given that they often have little or no option but to engage online (e.g. social pressures, school obligations) and are also at a stage of experimentation in which boundaries are tested.

Prepared by: PRPA


Bill C-10 (An Act to amend the Broadcasting Act) and Potential ‘Online Harms’ Bill

Key Messages

  • Our Office has not taken an official position on either Bill C-10 or the potential ‘Online Harms’ Bill, but we are following their developments and any potential privacy issues.
  • Bill C-10, Bill S-203 and the proposed ‘Online Harms’ Bill share a common feature of regulating online content to some degree, which may create some overlap for regulators, including the OPC.
  • From a privacy perspective, addressing online harms is in part about ensuring that individuals have the ability to assert appropriate control over the information available about them online, in order to protect their online reputation.
  • Our recommendations for Bill C-11 include clear and explicit rights for individuals with respect to the de-indexing and/or removal of personal information from search results and other online sources.

Background

  • Online reputation is an important issue from a privacy perspective given the significant impacts that information posted online can have on individuals and the close connection of online reputation to other rights, such as freedom of expression.
  • Bill C-10 amends the Broadcasting Act to regulate online broadcasters, which could bring services such as Netflix, Spotify, etc. under the Act.
    • Recent media coverage of the debates surrounding Bill C-10 raised free speech concerns relating to the removal of s. 4.1 of the Bill, which had exempted user-generated content posted by Canadians to social media platforms like YouTube from CRTC regulation. Following public criticism, a revised Charter statement was issued, concluding that the Bill is Charter compliant.
  • Based on media reports, the ‘Online Harms’ proposal from the Minister of Heritage, which has not yet been tabled, would establish new regulations for social media platforms, including a requirement that all platforms remove illegal content, including hate speech, within 24 hours. Other online harms in scope include radicalization, incitement to violence, the exploitation of children and the creation or distribution of terrorist propaganda. It would also create a new Regulator.
    • We have yet to see the text of this Bill, and will comment further when it becomes available.

Prepared by: Legal / PRPA


Offence Provisions

Key Messages

  • We do not see clear overlap between the offence and AMP provisions under Bill S-203 and those in PIPEDA or proposed under Bill C-11.
  • However, Bill S-203, PIPEDA and Bill C-11 all seek to address concerns that arise in the course of commercial activity in a modern digital economy.
  • While PIPEDA and Bill C-11 are focused on privacy interests, Bill S-203 seeks to deter anyone who makes sexually explicit material available on the Internet for commercial purposes from allowing young persons to access that material.
  • There could be some overlap in the application of Bill S-203 and PIPEDA or Bill C-11; for example, where sexually explicit material being made available on the Internet to young people also involves a disclosure of an individual’s personal information in the course of a commercial activity.

Background

  • Bill S-203 makes it an offence to make sexually explicit material available to young persons on the Internet. Fines can be issued to individuals or corporations that violate the law. It also creates liability for a director, officer or agent of a corporation.
  • Section 10 of Bill S-203 provides that any ISP that fails to comply with any notice requirement commits a violation and is liable to an administrative monetary penalty (AMP) (amount to be determined in accordance with Regulations).
  • Under PIPEDA, it is an offence for an organization to knowingly contravene certain retention requirements; to retaliate against an employee in certain circumstances; to knowingly contravene breach reporting, notification and record-keeping requirements; or to obstruct the Commissioner in the investigation of a complaint.
  • Bill C-11 includes the existing offences under PIPEDA and introduces new offences for knowingly contravening s. 75 (use of de-identified information) or a compliance order under ss. 92(2). It also increases the fines associated with those offences.
  • Bill C-11 also introduces a provision allowing for the imposition of AMPs in certain circumstances.

Prepared by: Legal


Pornhub and MindGeek

Key Messages

  • Our Office has been following ETHI’s study on the “Protection of Privacy and Reputation on Platforms such as Pornhub”. We are aware of the many serious concerns that have been raised, including those pertaining to privacy, given the highly sensitive personal information that is implicated.
  • The OPC has received a complaint related to consent for collection, use and disclosure of intimate images on the MindGeek websites and has begun an investigation.
  • We cannot comment on the details of ongoing investigations.
  • That said, it is of the utmost importance that websites collecting, using or disclosing intimate images comply with the law, minimize privacy harms and respect Canadians’ fundamental right to privacy.

Background

  • ETHI began its study on February 1, 2021, and has held over five meetings on the topic with over 20 witnesses. Issues discussed include age and identity verification; non-consensual distribution of intimate images; and manual reviews combined with AI software to remove underage and non-consensual content.
  • The Toronto Star published an article on the complaint in question in January 2021.
  • As noted in the Ashley Madison investigation (2016) into adult dating websites, it is crucial for organizations that hold personal information electronically to adopt clear and appropriate processes, procedures and systems to handle information security risks, supported by adequate expertise (internal or external).
  • This is especially important where the personal information held includes information of a sensitive nature that, if compromised, could cause significant reputational or other harms to the individuals affected.

Prepared by: PRPA In Consultation With: PIPEDA Compliance


International Examples

Key Messages

  • Age-verification tools and regulations are still a developing area, and we do not have a great deal of direct experience with them.
  • We note that the UK Information Commissioner’s Office has developed an Age appropriate design code of practice for online services, which came into force in September 2020 and outlines a number of key privacy-by-design elements for establishing the age of users online, such as:
    • making sure to only collect the minimum amount of personal data needed to give an appropriate level of certainty about the age of individual users.
    • prohibiting re-purposing of that information for other purposes, such as targeting children with advertising.

Background

  • The law in France, and proposals in the UK and Australia, have not stipulated any specific technology that must be used for age verification.
  • The ICO’s Age appropriate design code of practice is also not prescriptive about which methods should be used to establish age, but guides organizations to consider the risks to children that would arise from the processing of their personal data.
    • Under the current wording of Bill C-11, the OPC would not have the authority to produce codes on our own. As an alternative, we could produce non-binding guidance on protecting children online.
  • UK: Mandatory age-verification for access to online pornography was initially set out in Part 3 of the UK Digital Economy Act 2017, but those provisions were never brought into force. The UK Government then outlined new plans in an Online Harms White Paper, which re-introduced the issues of age-verification, ISP blocking and the responsibilities of the regulator. However, the Online Safety Bill tabled in the UK Parliament on May 12, 2021 did not include provisions requiring age-verification.
  • France’s age-verification system for pornography was included as an amendment to a broader law on domestic violence (adopted July 2020). The French audiovisual regulator enforces the law, and can sanction companies that do not comply, or block access to the websites in France with a court order.
  • Australia’s proposed Online Safety Bill 2021 (Feb. 2021) includes provisions to set up “restricted access systems” for certain (18+) materials, and a complaints regime.

Prepared by: PRPA

