Study of the Use and Impact of Facial Recognition Technology Issue Sheets

How Facial Recognition Technology Works

Key Messages

  • Facial Recognition (FR) technology uses image processing techniques to detect and analyze the biometric features of an individual’s face in order to identify that individual or verify their identity.
  • The use of FR involves the collection and processing of highly sensitive personal information.
  • FR is an especially powerful technology because it enables identification and authentication rapidly, at scale, for a relatively low cost, using existing sources of images and videos.
  • Both the ease of deployment and potentially grave impacts on privacy speak to the need for clear regulation to mitigate personal and societal harms.

Background

  • FR works in the following general way:
    • FR takes as input an image of an individual whose identity it attempts to discover or verify; this image is known as a “probe” image.
    • The face in the image is converted into a biometric template or “faceprint”.
    • The system then compares the faceprint against an existing database of images that have already been converted into faceprints, and calculates the probability of a match.
    • When used for identification, FR can be configured to return a list of labelled images that exceed a given threshold of similarity to the probe image (a minimal code sketch of this matching step follows this list).
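
The matching step above can be illustrated with a minimal, hypothetical sketch (Python). It assumes a faceprint is simply a numeric vector produced by some face-embedding model, represents the enrolled database as a dictionary of labelled faceprints, and returns the labels whose similarity to the probe exceeds a threshold. The function names, similarity measure and threshold value are illustrative assumptions, not a description of any particular FR product.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two faceprint vectors; 1.0 means the same direction.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe_faceprint: np.ndarray,
                 gallery: dict[str, np.ndarray],
                 threshold: float = 0.8) -> list[tuple[str, float]]:
        # Compare the probe faceprint against every enrolled faceprint and return
        # the labels whose similarity exceeds the threshold, most similar first.
        scores = [(label, cosine_similarity(probe_faceprint, fp))
                  for label, fp in gallery.items()]
        matches = [s for s in scores if s[1] >= threshold]
        return sorted(matches, key=lambda pair: pair[1], reverse=True)

In a real deployment the threshold is tuned to trade off false matches against false non-matches, which is where the accuracy and bias considerations discussed later in these sheets arise.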

Prepared by: Technology Analysis Directorate

Consulted: Policy, Research and Parliamentary Affairs Directorate


Public Sector Uses

Key Messages

  • We have engaged with departments on FR initiatives for immigration and border security, national security, and law enforcement.
  • Some of our recommendations included:
    • Advising institutions to consider whether use of FR is necessary, proportionate and effective in the context.
    • Regarding accuracy, we commented on potential bias and possible disproportionate impacts on certain individuals in relation to the Canada Border Services Agency’s (CBSA) Primary Inspection Kiosks.
    • We made recommendations to the CBSA on NEXUS regarding transparency and consent, including ensuring that travellers are aware of all disclosures of their personal information, and of the change from iris scanning to facial recognition.
    • Recommending that institutions assess privacy risks and mitigate concerns before an FR program goes live, given the sensitivity of biometric personal information.

Background

  • Since 2010, the Government Advisory Directorate has received five (5) Privacy Impact Assessments and engaged in nine (9) advisory consultations on initiatives explicitly involving facial recognition technology. We have also established a relationship with the CBSA’s Office of Biometrics and Identity Management, which will play a significant role in the Agency’s (and potentially the government’s) use of FR technology moving forward.
  • We are only aware of FR initiatives that have been reported to us by the CBSA, the Royal Canadian Mounted Police, the Department of National Defence, the Canadian Security Intelligence Service, the Canadian Air Transport Security Authority and Immigration, Refugees and Citizenship Canada.

Prepared by: Government Advisory Directorate


Private Sector Uses of Facial Recognition

Key Messages

  • In contrast to our experience with federal departments, private companies have not engaged my Office to seek proactive advice on initiatives involving FR.
  • We have conducted two previous investigations, into Cadillac Fairview and Clearview AI, in which we found that both companies were non-compliant with PIPEDA and had covertly collected millions of Canadians’ facial biometrics.
  • That said, we are aware of reports that the private sector is using FR for a variety of purposes, such as monitoring employee attendance, facilitating payments, combatting shoplifting, and for digital IDs.
  • The ability of FR to covertly identify people, reveal secondary information about them, and potentially be used as a universal identifier to track them across different contexts, raises significant privacy concerns.

Background

  • We have not received breach notifications involving facial biometrics, nor have we had advisory engagements with businesses.
  • In an effort to provide privacy advice on this issue, we are currently developing guidance for the public and private sectors on biometrics, which includes FR.
  • The use of FR in facilitating payments: Apple’s FaceID processes biometric faceprints on-device to authenticate identity and does not disclose that information to other parties. This is in contrast to some merchants that are looking to deploy their own FR systems for payment, where in-store cameras perform the authentication.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Use and Security Considerations

Key Messages

  • The justification to employ FR must be compelling. However, how it is used and configured as well as what safeguards are in place also matter greatly.
  • FR can be used for one of two primary purposes: identification or verification (also known as “authentication”). Verification is generally considered less privacy invasive than identification.
  • In identification, FR compares the probe image against all other images in a database of pre-enrolled faces in an attempt to learn the individual’s identity.
  • In verification, FR compares the probe image to a single image in the system corresponding to the identity claim made by the individual, so as to verify the individual’s identity.
  • Identification is used primarily in investigative contexts, whereas verification has become an increasingly common measure for security and digital identification.
  • It is important to consider safeguards such as local storage and processing, secure enclaves and encryption.

Background

  • Identification is also referred to as “one-to-many” or “1:N” matching, whereas verification is referred to as “one-to-one” or “1:1” matching.
  • Because of its reliance on a face database, identification is typically implemented in a “centralized” model, where one entity stores and processes all the data. This increases risks in terms of data security and purpose limitation.
  • In contrast, verification is typically implemented in a “decentralized” manner, where each user stores and processes the data locally, for example, on their phone. While not immune to security and repurposing risks, decentralized architectures are generally considered less risky than centralized models. A minimal sketch contrasting the two matching modes follows this list.
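
As a hypothetical illustration of the distinction, the sketch below (Python, with faceprints again assumed to be numeric vectors) contrasts 1:1 verification, which checks the probe against the single template enrolled for the claimed identity, with 1:N identification, which searches an entire gallery of enrolled templates. The similarity function and threshold are illustrative assumptions.

    import numpy as np

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe: np.ndarray, enrolled_template: np.ndarray,
               threshold: float = 0.8) -> bool:
        # 1:1 matching: does the probe match the single template enrolled for the
        # identity the person claims? The template can remain on the user's device.
        return similarity(probe, enrolled_template) >= threshold

    def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
                 threshold: float = 0.8) -> list[str]:
        # 1:N matching: search every template in a central database and return
        # all labels whose similarity to the probe exceeds the threshold.
        return [label for label, template in gallery.items()
                if similarity(probe, template) >= threshold]

The contrast also shows why verification lends itself to a decentralized model: only the single enrolled template is needed and it never has to leave the device, whereas identification presupposes a pooled face database.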

Prepared by: Technology Analysis Directorate


Consent for Facial Recognition

Key Messages

  • FR use involves the collection and processing of highly sensitive biometric information, and under PIPEDA would generally require express consent for its use.
  • But for consent to be meaningful, organizations must be fully transparent and clear about how biometric information is collected, used and disclosed.
  • In today’s complex information economy, privacy protection cannot hinge on consent alone. It must be buttressed with a rights-based foundation, including necessity and proportionality requirements, and clear no-go zones.

Background

  • ETHI testimony on consent: Ms. Khoo (Citizen Lab) said that clear, informed consent warrants attention in the commercial vendor context. Ms. Piovesan (INQ Law) said that PIPEDA and Quebec law require companies to seek consent, but that what is lacking is a clear understanding of safeguards and a focused law on FRT; that the law must be mindful of beneficial FR uses, though such uses must be contained; and that consent matters to individual autonomy and could be modelled on the GDPR and Quebec regime.
  • ETHI testimony on no-go zones: witnesses said high-risk FR uses should be banned at the border (Ms. Molnar, Refugee Law Lab), in the workplace and public spaces (Ms. Watkins, Princeton), and for mass surveillance (Tim McSorley, civil society).
  • While the GDPR broadly prohibits processing special categories of personal data (including biometrics), it recognizes certain bases to justify its processing (e.g. explicit consent of data subject) (Article 9).
  • Amendments to Quebec’s public and private sector privacy laws define sensitive information to generally include biometrics. Quebec’s Act to establish a legal framework for information technology does not permit public or private entities to use biometric characteristics and measurements to confirm or verify a person’s identity without that person’s express consent (section 44).
  • To help AI develop in situations where meaningful consent may not be practicable, our AI proposal recommends PIPEDA incorporate a consent exception when collecting or processing PI for the public good, with de-identification as a condition.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Canada - Current Laws on Facial Recognition Use

Key Messages

  • FR in Canada is regulated by a patchwork of laws including PIPEDA, the Privacy Act, and the Canadian Charter of Rights and Freedoms, as well as provincial legislation, such as Quebec’s biometrics regime.
  • This patchwork of existing laws creates legal ambiguity and gaps.
  • The laws that do exist generally use high-level principles. This approach is useful for adapting to evolving technology. However, for privacy-invasive technologies like FR, it leaves considerable discretion in implementation and therefore does not confer adequate certainty that individuals’ rights will be respected.
  • A lack of specific legislation could mean years of fragmented, confidential approaches to FR use until jurisprudence develops.
  • The lack of specific legislation for police FR use stands in contrast with legislation such as the DNA Identification Act and the Identification of Criminals Act, which regulate the National DNA Databank, fingerprints, and mugshots.

Background

  • In Quebec, the Act to establish a legal framework for information technology governs biometrics. Recent amendments will require organizations to notify the Commission d’accès à l’information at least 60 days before a biometric database is brought into service.
  • Section 8 of the Charter provides that, where an individual has a reasonable expectation of privacy, any police search must be conducted under a warrant or authorized by a reasonable law. Anonymity is part of the right to privacy (R v. Spencer, 2014 SCC 43), and being in a public space does not automatically negate an individual’s expectation of privacy (R v. Jarvis, 2019 SCC 10).
  • The DNA Identification Act and Identification of Criminals Act set out thresholds for the collection of their specified biometrics, as well as conditions for collection (including compelled production and judicial warrants). The DNA Identification Act uses highly specific rules to limit how the National DNA Databank may be used.

Prepared by: Legal Services Directorate


Laws in other Jurisdictions

Key Messages

  • Approaches to regulating FR vary widely between jurisdictions. Some have called for full or partial moratoria (as in some jurisdictions in the United States) or enacted provisional laws until FR risks are better understood and can be sufficiently mitigated (as in Massachusetts).
  • Some jurisdictions have implemented legislation specific to public sector FR use, such as in Washington and Utah.

Background

  • Under the GDPR, biometrics (including facial images as defined under Article 4) are considered a special category of data which, under Article 9, are subject to prohibitions on processing, with certain exceptions. A Data Protection Impact Assessment is required for any processing likely to be high risk, which could include processing biometric data.
  • Elsewhere, laws that govern biometrics generally entail certain privacy obligations and enhanced rights, such as requiring public organizations to notify the privacy regulator before implementing a biometrics database, as in Quebec, or the right to sue a company for collection of biometric data without consent, as in Illinois.
  • Washington and Utah regulate government FR use, including law enforcement, by specifying allowable uses. Vermont and Virginia have enacted full bans on FR use by local law enforcement until such use is explicitly authorized by law. Massachusetts has a provisional law regulating government use until it can study and better regulate FR risks. US federal laws have also been proposed that would impact FR use:
    • The Fourth Amendment Is Not for Sale Act, introduced in April 2021 and now with the Judiciary Committee, seeks to stop data brokers from selling personal information to law enforcement agencies without court oversight. The Bill would also ban public agencies from using illegally obtained data.
    • The Algorithmic Accountability Act, introduced in February 2022 and referred to subcommittee, would require new transparency and accountability for automated decision systems. The Bill would require private organizations to conduct assessments for algorithmic bias, effectiveness and other factors.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


European Union Artificial Intelligence Act

Key Messages

  • Europe’s proposed AI Act, if adopted, would clearly outlaw harmful AI applications that are incompatible with human rights. It would prohibit real-time remote biometric identification in public spaces by law enforcement (subject to exceptions). It also takes a risk-based approach, and would designate all other remote biometric identification and categorization (real-time and post) as “high-risk”, subject to stringent requirements.
  • While we do not believe there should be a complete ban on the use of FR, we do believe specific no-go zones should be considered. Even in the absence of no-go zones, we believe independent oversight is necessary to enforce rules such as necessity and proportionality to ensure the appropriate use of FR.

Background

  • Exceptions to prohibition on remote biometric identification include: (1) for search of victims, (2) prevention of terrorism or specific, substantial, and imminent threat to life or safety, and (3) identification of suspects of criminal offences that carry a minimum 3 year sentence.
  • Other proposed prohibitions include: (1) AI that deploys “subliminal techniques” to manipulate behaviour, and (2) use by public authorities for social scoring.
  • The European Data Protection Board and the European Data Protection Supervisor favour stronger prohibitions, including expanding the scope of the prohibition beyond “real-time” to include non-live FR, and banning biometric categorization and emotion detection.
  • All other remote biometric identification, including by commercial organizations, is designated as “high-risk” and is subject to a number of requirements such as: risk and quality assessments, logging and record-keeping for traceability, general human oversight, accurate and representative data for AI training, ex-ante conformity assessments, and demonstrable accountability.
  • Though there are commonalities with our AI proposals (risk assessments, traceability, demonstrable accountability, and enhanced enforcement through penalties), our AI recommendations also included: a balancing test to assess the purpose, necessity, and proportionality of the measure and consideration of the interests and fundamental rights of the individual; use of de-identification where possible; actionable rights to meaningful explanation and to contest.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Facial Recognition Moratoria

Key Messages

  • FR use likely should not be banned outright, though the idea of no-go zones for certain specific practices should be considered.
  • Used responsibly and in the right circumstances, FR can provide benefits to security and public safety.
  • FR requires a strong legal framework to ensure responsible use and effective oversight. Necessity and proportionality, and measures such as transparency, accountability and independent oversight must be clearly set out in law.
  • Ultimately, the decision rests with Parliament to consider a moratorium until such time as any specific legislative changes are recommended and adopted.

Background

  • Regarding police use of FR, we recommended that the law impose limits on authorized uses to prevent deployments that could have indefensible effects on privacy and other human rights, including prohibiting any use that can result in mass surveillance.
  • The EU has proposed legislation on AI that prohibits biometric identification in publicly accessible places for the purpose of law enforcement, with exceptions.
  • Additionally, other jurisdictions have proposed or implemented moratoria on FR use. Among them:
    • In December 2021, the Italian Parliament introduced a moratorium on FR in public places that would only allow police to use the technology subject to case-by-case approval by the Italian Data Protection Authority.
    • In the United States, legislation was proposed in June 2021 with the goal of preventing government use of biometric technology, including facial recognition tools.
    • Several US cities and states have banned law enforcement use of FR including Boston, San Francisco, Oakland, Portland, Virginia, and Vermont. Montreal has also passed a motion placing restrictions on the use of facial recognition by city police.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


PIPEDA Reform: Need for Rights Based Framework

Key Messages

  • Our investigation into Clearview AI highlighted the shortcomings of the current law, many of which were left unaddressed by the former Digital Charter Implementation Act (C-11).
  • The question of where acceptable FR use begins and ends is in part a question of the expectations we set now for the future protection of privacy.
  • Our laws must better regulate FR use and other technologies, and ensure that where there is a conflict between commercial objectives and privacy protection, privacy rights prevail.
  • We have recommended that a reformed PIPEDA include a rights-based framework, which would help to ensure that novel technologies such as FR are used in a manner that upholds rights.
  • Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.

Background

  • In our 2018-19 Annual Report we recommended a revised PIPEDA include a preamble and purpose clause that would provide guidance as to the values, principles and objectives that shape the interpretation and application of the law.
  • In response to Bill C-11 we recommended the inclusion of a proposed preamble, and edits to sections 5 (purpose clause); 12 (appropriate purposes); and 13 (limiting collection).
  • Some stakeholders have argued that a rights-based approach is not possible under Canadian federal law, as the protection of civil rights falls within provincial jurisdiction under the Constitution. To the contrary, the OPC has received expert legal advice which holds that our proposals do not increase the risk that the courts would find the law unconstitutional and that, in fact, some of our proposed amendments would make the new federal law even more viable from a constitutional perspective.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


PIPEDA Reform: Enforcement Powers

Key Messages

  • Our law needs to be enhanced to more effectively regulate new technologies such as facial recognition. This must include strong enforcement mechanisms, such as order making and Administrative Monetary Penalties (AMPs), to provide quick and effective corrective action for individuals when their rights are violated.
  • Our Clearview AI investigation demonstrates the need for such powers, given Clearview AI disputed our findings and refused to follow any of our recommendations.
  • Order-making powers and fines would change the dynamic of our discussions with companies during investigations. Currently organizations found in contravention of the law can simply ignore our recommendations and “wait it out” until the courts have come to the same conclusion as my Office.
  • Privacy commissioners at both the provincial and international levels who are empowered to make orders and impose fines report that these enforcement tools have led to much more cooperation from companies.

Background

  • Witnesses before ETHI (such as the Canadian Civil Liberties Association and International Civil Liberties Monitoring Group) raised the need for the OPC to have stronger enforcement powers to more effectively address risks posed by FR.
  • AMP regimes in other jurisdictions – such as the United Kingdom, Australia, Singapore, Quebec (Bill 64), Ontario (PHIPA), and California apply to a broad range of violations and are not limited in scope as would have been the case under the former C-11.
  • Privacy laws (such as those in Alberta, British Columbia, Quebec, the United Kingdom, or New Zealand) grant order-making powers to their DPAs that are not, for instance, constrained by a “reasonably necessary to ensure compliance” threshold (as was provided for under the former C-11).

Prepared by: Policy, Research and Parliamentary Affairs Directorate and Legal Services Directorate


Privacy Act Reform

Key Messages

  • A number of DOJ’s proposals on Privacy Act reform would bring positive changes to the law to help deal with the privacy risks posed by emerging technologies such as FR.
    • For example, the proposal to define and more clearly regulate the collection and use of publicly available personal information by having all of the Act’s rules apply to such information.
  • Obligations for “privacy by design”, PIAs, and oversight (including proactive audit powers) are also necessary measures to promote and enforce privacy compliance for FR initiatives.
  • We made a number of recommendations to DOJ that will be important for ensuring effective regulation of FR, including identifying key elements for regulating AI, strengthening the collection threshold, and refining the proposed definition for publicly available personal information.

Background

  • Artificial Intelligence: We recommended a definition for automated decision-making, a right to meaningful explanation and human intervention, a standard for the level of explanation required, and obligations for traceability.
  • Collection Threshold: We noted that a “reasonably required” standard, as proposed, is workable if the aim is to add clarity to the law while yielding results similar to a necessity and proportionality standard. We recommended amendments to the framework for assessing what is reasonably required (limiting purposes; considering cost and proportionality).
  • “Publicly Available” Personal Information: We recommended that the definition be explicit that publicly available personal information does not include information in respect of which an individual has a reasonable expectation of privacy.
  • Regarding police use of FR, we have noted that the Act should continue to govern where it provides sufficient protection but that additional changes are needed to make certain provisions more specific to risks that can arise. These changes could be in privacy laws or a separate law, given impacts beyond privacy (such as equality and non-discrimination).

Prepared by: Policy, Research and Parliamentary Affairs Directorate


OPC Artificial Intelligence Proposals

Key Messages

  • FR systems rely on AI to detect, analyze, and compare facial biometrics in images. AI regulation can address some risks of FR.
  • In our view, an appropriate legal framework for AI would:
    • Allow personal information to be used for public and legitimate business interests, including for the training of AI; but only if privacy is entrenched as a human right;
    • Create provisions specific to automated decision-making to ensure transparency and fairness (explanation and contest);
    • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures.
  • Both ISED and Justice have made proposals with specific requirements for AI in reformed privacy laws, with which we agree in principle; that said, we have offered detailed recommendations in submissions pertaining to both reform initiatives.

Background

  • Our PA reform proposals: Define automated decision-making, include rights to meaningful explanation and human intervention, denote a specific standard for explanations, and create obligations to log and trace personal information used in automated decision-making. We support the DOJ proposal to include inferences as PI.
  • Our PIPEDA reform proposals: Include inferences as PI, and add new consent exceptions for: (1) internal research and statistical purposes (reflected in the former C-11 (2020)), (2) compatible purposes, and (3) legitimate commercial interests.
    • With safeguards: Privacy Impact Assessments (PIAs), a balancing test against the fundamental rights of the individual, and de-identification (where possible, for the legitimate interests exception).
  • We also proposed minimum criteria for the right to explanation, a right to contest decisions before a human intervener, privacy by design (possibly through PIAs), algorithmic traceability and record-keeping, and binding orders and penalties.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Regulating Public and Private Sector Facial Recognition Uses

Key Messages

  • My office’s PIPEDA and PA reform recommendations would help address privacy risks posed by FR use, such as mandating Privacy Impact Assessments (PIA), designing for privacy, clarifying that publicly available personal information may still retain a privacy interest, defining sensitive information to include biometrics, and strengthening oversight powers (via proactive audits, issuing orders and AMPs).
  • We also recommended federal privacy laws include a rights-based framework that in the context of FR use by companies would ensure privacy rights prevail over commercial interests.
  • Our proposals for regulating AI would also apply to FR insofar as its use involves automated decision-making (e.g. mandating traceability, as well as explanation and contestation rights).
  • Our work on FR use by police revealed the limitations of a principle-based and tech-neutral law in sufficiently addressing risks posed by such potentially invasive technologies. In this context, we called for a new legal framework that sets out appropriate limits on police use of FR. Outside this context, FR’s unique attributes and risks may need supporting regulations or guidelines, in addition to our existing reform proposals.
  • Finally, we are open to other oversight models to address FR risk, as in Quebec where there is a more defined role for the regulator.

Background

  • Our reform proposals largely align with testimony heard at ETHI on rules needed for regulating public and private sector FR use: a rights-based foundation; specific rules for AI (human review, explainability, etc.); necessity and proportionality; PIAs and AIAs; designing for privacy; improving accountability/transparency of third parties; enhanced powers for OPC (ability to issue orders/fines). Additional recommendations included banning high risk uses (public spaces and for mass surveillance) and specific rules for biometrics (prior review, opt-out of facial verification, consent requirements).

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Accuracy and Bias

Key Messages

  • Privacy laws require that personal information collected and used as part of a FR initiative be accurate and up to date. Inaccuracies in FR systems pose serious risks to individuals’ privacy rights in the event that decisions are made based on inaccurate or biased information.
  • Current FR algorithms vary widely in quality. Although overall accuracy has improved over time, many still exhibit differences in error rates across demographic groups (e.g., race and gender). That said, some of the highest-performing algorithms have shown “undetectable” differences across demographics in test settings.
  • To ensure accuracy obligations are met, the FR system as a whole must be considered: the training data used, the FR algorithm employed, the face database against which images are compared, and the nature of human review.
  • If the training data used to generate a FR algorithm lacks diversity, the algorithm may encode biases by default.

Background

  • FR technology has undergone significant improvements in recent years. Studies have shown that the majority of developers’ algorithms tested in 2018 outperform the most accurate algorithm from 2013, and that the most accurate algorithm from 2020 is substantially more accurate than any from 2018. Yet disparities across demographic groups remain an industry challenge (a minimal sketch of measuring per-group error rates follows this list).
  • Studies have indicated that a lack of diverse training data is the main culprit of bias: algorithms developed in China tended to perform better on East Asian faces than on Eastern European faces, reversing the general trend.
  • Biases in FR accuracy could lead to higher rates of misidentification for individuals from certain racialized groups, which could lead to unfair arrest and detention, and undue suspicion, surveillance, and investigation of members from those groups.
  • Over-representation of a group in a face database may produce a “feedback loop” where disproportionate searches against that group lead to further surveillance of group members, their associates or their community.
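
One way to make the demographic-disparity point concrete is to measure error rates separately for each group rather than only in aggregate. The sketch below (Python) is a hypothetical illustration: it computes a false match rate (impostor pairs wrongly accepted) per demographic group from labelled trial outcomes and flags a large gap between groups. The data layout, group labels and disparity ratio are assumptions made for illustration, not part of any standard test protocol.

    from collections import defaultdict

    def false_match_rate_by_group(trials):
        # trials: iterable of (group, is_impostor_pair, system_said_match) tuples.
        # Returns, per group, the share of impostor pairs the system wrongly accepted.
        impostor_counts = defaultdict(int)
        false_matches = defaultdict(int)
        for group, is_impostor, said_match in trials:
            if is_impostor:
                impostor_counts[group] += 1
                if said_match:
                    false_matches[group] += 1
        return {g: false_matches[g] / n for g, n in impostor_counts.items() if n}

    # Illustrative (made-up) trial outcomes for two groups.
    trials = [
        ("group_a", True, True), ("group_a", True, False), ("group_a", True, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", True, False),
    ]
    rates = false_match_rate_by_group(trials)
    print("False match rate by group:", rates)
    if rates and max(rates.values()) > 2 * min(rates.values()):
        print("Warning: error rates differ substantially across groups.")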

Prepared by: Technology Analysis Directorate and Policy, Research and Parliamentary Affairs Directorate


Impact of Facial Recognition Technologies on Democratic Rights

Key Messages

  • The use of FR in public spaces, including virtual spaces, raises concerns about protection of democratic rights and freedoms.
  • Privacy is vital to dignity, autonomy, and personal growth, and a basic prerequisite to free, open participation by individuals in a democracy.
  • For democracy to flourish, individuals must be able to navigate public and private spaces without being tracked or monitored, or having their identities revealed at every step.
  • As such, it is imperative that our privacy frameworks be reformed to recognize and protect privacy as a human right.

Background

  • International Data Protection Agencies (DPA) response: In 2020, the OPC co-sponsored an international resolution specific to use of FR that expressed concern that the widespread use of facial recognition can entail discriminatory effects and impact the ability to exercise certain other fundamental rights.
  • Global concerns: the United Nations High Commissioner for Human Rights has stated that there should be a moratorium on the use of FR technology in the context of peaceful protests, until States meet certain conditions including transparency, oversight and human rights due diligence before deploying it.
  • Recent example: the civil society group Human Rights Watch has expressed in recent reports a “heightened concern” over certain FR systems armed with AI technology that can scan the faces of protestors and alert authorities to those on a wanted list.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Investigation into the Royal Canadian Mounted Police’s use of Clearview AI

Key Messages

  • We reported our investigation results to Parliament in June 2021.
  • We found the RCMP contravened the Privacy Act by collecting information from Clearview AI, since an institution cannot collect personal information from a third party agent if that third party agent collected the information unlawfully.
  • We remain concerned that the RCMP did not agree with our conclusion that it contravened the Privacy Act.
  • We encourage Parliament to amend the Privacy Act to clarify that the RCMP has an obligation to ensure that third party agents it collects personal information from have acted lawfully.
  • We are monitoring the RCMP’s progress in implementing, by June 2022, recommendations we made to address serious systemic gaps in the RCMP’s policies and systems to track, assess and control personal information collections using new technologies.

Background

  • OPC’s investigation report also notes that the common law clearly sets limits on the RCMP’s collection powers as a police body. In our view, the use of FR technology by the RCMP to search through massive repositories of Canadians who are innocent of any suspicion of crime presents a significant violation of privacy and clearly warrants careful consideration against these constraints.
  • OPC made recommendations to address systemic failures by the RCMP to assess legal constraints before using Clearview, as well as serious gaps in tracking (it initially erroneously told OPC it wasn’t using Clearview), and control (it could not account for the purpose of the majority of its searches via Clearview).
  • While the RCMP is making progress in its implementation of our recommendations, we are concerned that it has not dedicated sufficient resources to meet the June 2022 deadline.

Prepared by: Compliance Sector


Clearview AI Investigation

Key Messages

  • We published our report on February 2, 2021, finding that:
    1. Clearview failed to obtain consent for the collection, use and disclosure of millions of images of Canadians it scraped from various websites, or of the biometric profiles it generated.
    2. Its practices amounted to continual mass surveillance, and were for an inappropriate purpose.
  • Clearview disputed our findings and refused to follow any of our recommendations. Clearview’s database has grown from 3 billion images at the time of our report, to 20 billion now.
  • In December 2021, the Provincial privacy regulators of Alberta, British Columbia and Quebec ordered Clearview to comply with the recommendations arising from our report, specific to their Provinces. Clearview has challenged all three orders in the relevant Provincial courts.

Background

  • Provincial cooperation: This was a joint investigation with Alberta, BC and Quebec.
  • Consent: Clearview did not seek consent for the use of individuals’ personal information, claiming that the websites it scraped were publicly accessible, such that the information was “publicly available”. We found that the information was not publicly available, as defined in the Regulations.
  • Purposes: Clearview indiscriminately collected, used and disclosed personal information in order to allow third-party organizations who subscribed to its service to identify individuals by uploading photos in search of a match.
  • Outcomes: Mid-way through the investigation, Clearview agreed to exit the Canadian market. However, Clearview refused to follow the recommendations, in our final report, that it: (i) commit to not re-entering the Canadian market; (ii) cease collection, use and disclosure of images and biometric profiles of Canadians; and (iii) delete the images and biometric arrays in its possession.

Prepared by: Compliance Sector


Clearview AI Provincial Orders and Judicial Review

Key Messages

  • Our provincial counterparts (British Columbia, Alberta, and Quebec) issued orders against Clearview AI following a joint investigation with the OPC.
  • The orders required that Clearview AI stop offering its services, stop collecting and using images, and delete all information collected without consent.
  • These orders are now being challenged in provincial courts.
  • The OPC does not have order-making powers and is currently not participating in those provincial litigation matters.

Background

  • In February 2021, the OPC and its provincial counterparts in BC, Alberta, and Quebec issued a joint report of findings outlining Clearview AI’s non-compliance with Canadian private sector privacy laws.
  • Although the OPC does not currently have order-making powers, our provincial counterparts who participated in this joint investigation do have such powers.
  • Orders against Clearview AI were issued by BC, Alberta, and Quebec requiring it to do the following in their respective provinces:
    1. Stop offering its facial recognition services;
    2. Stop collecting, using and disclosing images of people; and
    3. Delete images and biometric facial arrays collected without consent.
  • Clearview AI is challenging those provincial orders in court (via judicial review) arguing, amongst other things, that:
    1. provincial privacy laws don’t apply to it;
    2. the personal information in question was publicly available and collected, used, and disclosed reasonably;
    3. certain sections of the provincial private sector privacy acts violate section 2(b) of the Canadian Charter of Rights and Freedoms; and
    4. provincial orders as worded cannot be complied with (i.e., unreasonable and unenforceable).

Prepared by: Legal Services Directorate


Cadillac Fairview Investigation

Key Messages

  • Our investigation into Cadillac Fairview found that the company used inconspicuous cameras embedded in digital information kiosks at 12 shopping malls to collect customers’ images, and used FR technology to guess their age and gender.
  • Shoppers had no reason to expect that their sensitive biometric information would be collected and used in this way, and did not consent to this collection or use.
  • While the images were deleted, the sensitive biometric information of 5 million shoppers was sent to a third-party service provider, and was stored in a centralized database for no discernible purpose.
  • We remain concerned that Cadillac Fairview refused to commit to ensuring express, meaningful consent is obtained from shoppers should it choose to redeploy the technology in the future.

Background

  • Provincial cooperation: This was a joint investigation with Alberta and British Columbia, and involved information sharing with Quebec. Our findings were published in October 2020.
  • Collection purposes: Personal information was collected in order to track foot traffic patterns and predict demographic information about mall visitors (e.g. age and gender). Unknown to Cadillac Fairview, a biometric database consisting of 5 million numerical representations of faces was also created and maintained by a third-party processor.
  • Outcomes: Cadillac Fairview has ceased use of this technology and has advised that they have no current plans to resume its use. We are concerned that Cadillac Fairview could simply recommence this practice, or one similar, requiring us to either go to court or start a new investigation.
  • Law reform: Cadillac Fairview’s refusal to commit to obtaining express and meaningful consent for future use of this technology demonstrates our need for stronger enforcement powers, including order making and Administrative Monetary Penalties to better protect Canadians’ privacy.

Prepared by: Compliance Sector


Facial Recognition Guidance

Key Messages

  • The current legislative context for police use of FR is insufficient. Absent a comprehensive legal framework, there remains uncertainty about when FR use by police is lawful.
  • My office collaborated with our provincial and territorial counterparts to publish guidance to clarify police agencies’ privacy obligations relating to the use of FR, to help ensure any use of the technology complies with the law, minimizes privacy risks, and respects privacy rights.
  • A draft of the guidance was published in June 2021, at which time we ran a public consultation to solicit feedback on the guidance.
  • We made a number of amendments to the guidance in response to stakeholder feedback and released a final version today.

Background

  • We received 29 stakeholder submissions from the civil society, academia, government, police, legal and industry sectors, as well as from individuals. Throughout the fall of 2021, our Office met with law enforcement agencies, civil society groups, organizations representing marginalized communities and equity-seeking groups, as well as FPT human rights commissioners to seek their feedback.
  • Stakeholders mostly found the guidance to be relevant, timely and beneficial, but recommended we:
    • clarify recommendations regarding accuracy, retention, and third party involvement in FR initiatives, as well as explanations of legal principles and concepts;
    • further explain what should be contained in police agencies’ public notices and how they should be published to meet transparency obligations;
    • include a commitment for police to develop a formal FR policy given the high risks involved in using FR;
    • tailor the guidance to specific use cases of FR to be more instructive; and,
    • add further procurement and testing requirements for FR initiatives and systems.
  • We made a number of amendments to address this feedback, and intend to advise police agencies on specific use cases as they are developed.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Facial Recognition Consultation

Key Messages

  • In June 2021 my provincial and territorial colleagues and I launched a public consultation, seeking feedback on our draft guidance and on a future legal and policy framework to govern police use of FR.
  • We received 29 submissions from civil society, academia, government, police, legal and industry groups, and individuals. We also met with police agencies, civil society groups, organizations representing marginalized communities, and human rights commissioners.
  • In response to stakeholder feedback, we have made important amendments to the guidance to address issues such as accuracy, retention, procurement and testing.
  • We are also calling for a new legal framework that would set appropriate limits on police use of FR. Stakeholders were clear that guidance alone is not sufficient to address the challenges posed by police use of FR. There was general agreement on the need for a legal framework to regulate police use of FR.

Background

  • Guidance: Stakeholders recommended clarifications concerning accuracy, retention, third parties, transparency, procurement, testing, and use cases.
  • Current Privacy Laws: Stakeholders noted gaps in privacy laws concerning third party accountability, lack of clarity on publicly available personal information, and exceptions that exempt law enforcement from certain accuracy obligations and give them greater latitude for disclosure to specified bodies.
  • Future Legal Framework: Stakeholders recommended independent oversight, setting limitations on FR uses, establishing no-go zones, clarifying instances requiring prior authorization, restricting data-sharing, and mandating assessments.

Prepared by: Policy, Research and Parliamentary Affairs Directorate


Future Legal Framework for Regulating Police use of Facial Recognition

Key Messages

  • Unlike other forms of biometrics (e.g. DNA, fingerprints), police use of FR is not governed by specific legislation.
  • A lack of clarity and gaps in privacy protection leave space for harms to privacy and other rights.
  • Stakeholders generally agreed on the need for a legal framework specifically governing FR use by police.
  • In our view, the law should include: clear authorization for use, with no-go zones; strict necessity and proportionality requirements; independent, external oversight; and updated privacy rights and protections.

Background

  • The DNA Identification Act and Identification of Criminals Act regulate how police agencies can collect, use, disclose and destroy DNA samples, fingerprints and mugshots.
  • Authorization for investigative and preventive uses of FR by police should be restricted to compelling purposes (e.g. serious crime). No-go zones should prohibit mass surveillance uses, monitoring of peaceful protestors, and the indiscriminate collection of images for comparison databases.
  • In the European Union, where subject to the Law Enforcement Directive, police can process biometric data for identification purposes only when “strictly necessary”.
  • Oversight should include: proactive engagement (e.g. privacy impact assessments), pre-authorization at the program level (or advance notice of proposed use), and enforcement powers (e.g. audits, proactive inspections, and order-making).
  • Changes are needed to make certain principles clearer and more specific to FR risks. This includes accuracy (need to address accuracy of matching procedures, not just database information); retention (different retention periods for different components of FR systems – e.g. probe vs comparison images); and transparency (details of use should be public).

Prepared by: Policy, Research and Parliamentary Affairs Directorate

